The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: Gonzo – A Go-based TUI for log analysis (OpenTelemetry/OTLP support)

We built Gonzo to make log analysis faster and friendlier in the terminal. Think of it like k9s for logs — a TUI that can ingest JSON, text, or OpenTelemetry (OTLP) logs, highlight and boil up patterns, and even run AI models locally or via API to summarize logs. We’re still iterating, so ideas and contributions are welcome!
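The post doesn't spell out how Gonzo "boils up" patterns, but the usual idea behind log-template mining can be sketched in a few lines: normalize the variable parts of each message (numbers, hex IDs) into placeholders, then count how often each resulting template occurs. This is an illustrative guess at the technique, not Gonzo's actual implementation; the `msg` field name is assumed.

```python
import json
import re
from collections import Counter

def template_of(message: str) -> str:
    """Collapse variable parts (hex ids, numbers) into placeholders."""
    t = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)
    t = re.sub(r"\d+", "<NUM>", t)
    return t

def boil(lines):
    """Count how often each log template occurs across JSON log lines."""
    counts = Counter()
    for line in lines:
        record = json.loads(line)
        counts[template_of(record["msg"])] += 1
    return counts.most_common()

logs = [
    '{"msg": "connected to 10.0.0.1 in 23 ms"}',
    '{"msg": "connected to 10.0.0.2 in 7 ms"}',
    '{"msg": "worker 4 crashed"}',
]
print(boil(logs))
```

Grouping by template like this is what lets a TUI show "this line shape occurred 40,000 times" instead of 40,000 raw lines.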

Show HN: Smart email filters to unfuck your email

Show HN: I integrated my from-scratch TCP/IP stack into the xv6-riscv OS

Hi HN,

To truly understand how operating systems and network protocols work, I decided to combine two classic learning tools: the xv6 teaching OS and a from-scratch TCP/IP stack.

I'm excited to share the result: my own from-scratch TCP/IP networking stack running directly inside the xv6-riscv kernel (https://github.com/pandax381/xv6-riscv-net).

The project uses a modern virtio-net driver, allowing it to run seamlessly in QEMU and communicate with the host machine.

Key features:

- From-scratch stack: the core is powered by microps (https://github.com/pandax381/microps), a TCP/IP stack I originally wrote to run in user space as a personal project to learn the low-level details of networking.
- Kernel integration: this project ports microps from user space into the xv6-riscv kernel.
- Socket API: implements standard system calls (socket, bind, accept, etc.) to enable network application development.
- User-level tools: comes with a simple ifconfig command, plus tcpecho and udpecho servers to demonstrate its capabilities.

This has been a fantastic learning experience. My goal was to demystify the magic behind network-aware operating systems by building the components myself.

I'd love to hear your feedback and answer any questions!
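The xv6 port implements the socket/bind/listen/accept sequence in C inside the kernel; the same call sequence can be exercised against any host stack. As a sketch of what a tcpecho-style server boils down to (using Python's socket module rather than the xv6 system calls themselves):

```python
import socket
import threading

def tcpecho_once(server: socket.socket) -> None:
    """Accept one connection and echo whatever it sends (tcpecho-style)."""
    conn, _addr = server.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# socket -> bind -> listen -> accept: the same system-call sequence
# the xv6 port exposes. Port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=tcpecho_once, args=(server,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, xv6")
    reply = client.recv(1024)
t.join()
server.close()
print(reply)  # -> b'hello, xv6'
```

The point of the project is that once those system calls exist in the kernel, ordinary user programs like this one become possible on xv6 too.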

Show HN: Turn Markdown into React/Svelte/Vue UI at runtime, zero build step

Show HN: A zoomable, searchable archive of BYTE magazine

A while ago I was looking for information on an obscure, short-lived British computer.

I found an article[1] in the archives of BYTE magazine[2], and was immediately captivated by the tech adverts of bygone eras.

This led to a long side project to make all 100k pages of BYTE viewable in a single searchable place.

[1]: https://byte.tsundoku.io/#198502-381
[2]: https://news.ycombinator.com/item?id=17683184

Show HN: Async – Claude code and Linear and GitHub PRs in one opinionated tool

Hi, I’m Mikkel and I’m building Async, an open-source developer tool that combines AI coding with task management and code review.

What Async does:

- Automatically researches coding tasks, asks clarifying questions, then executes code changes in the cloud
- Breaks work into reviewable subtasks with stacked diffs for easier code review
- Handles the full workflow from issue to merged PR without leaving the app

Demo here: https://youtu.be/98k42b8GF4s?si=Azf3FIWAbpsXxk3_

I’ve been working as a developer for over a decade now. I’ve tried all sorts of AI tools out there, including Cline, Cursor, Claude Code, Kiro and more. All are pretty amazing for bootstrapping new projects. But most of my work is iterating on existing codebases where I can't break things, and that's where the magic breaks down. None of these tools work well on mature codebases.

The problems I kept running into:

- I'm lazy. My Claude Code workflow became: throw a vague prompt like "turn issues into tasks in GitHub webhook," let it run something wrong, then iterate until I realize I could've just coded it myself. Claude Code's docs say to plan first, but it's not enforced and I can't force myself to do it.
- Context switching hell. I started using Claude Code asynchronously: give it edit permissions, let it run, alt-tab to work on something else, then come back later to review. But when I return, I need to reconcile what the task was about, context switch back, and iterate. The mental overhead kills any productivity gains.
- Tracking sucks. I use Apple Notes with bullet points to track tasks, but it's messy. Like many other developers, I hate PM tools but need some way to stay organized without the bloat.
- Review bottleneck. I've never shipped Claude Code output without fixes, at minimum stylistic changes (why does it always add comments even when I tell it not to?). The review/test cycle caps me at maybe 3 concurrent tasks.

So I built Async:

- Forces upfront planning: always asks clarifying questions and requires confirmation before executing
- Simple task tracking that imports GitHub issues automatically (other integrations coming soon!)
- Executes in the cloud, breaks work into subtasks, creates commits, opens PRs
- Built-in code review with stacked diffs: comment and iterate without leaving the app
- Works on desktop and mobile

It works by using a lightweight research agent to scope out tasks and come up with requirements and clarifying questions as needed (e.g., "fix the truncation issue" prompts "Would you like a tooltip on hover?"). After you confirm requirements, it executes the task by breaking it down into subtasks and then working commit by commit. It uses a mix of Gemini and Claude Code internally and runs all changes in the background in the cloud.

You've probably seen tools that do pieces of this, but I think it makes sense as one integrated workflow.

This isn't for vibe coders. I'm building a tool that I can use in my day-to-day work. Async is for experienced developers who know their codebases and products deeply. The goal is to make Async the last tool developers need to build something great. Still early and I'm iterating quickly. Would love to know what you think.

P.S. My cofounder loves light mode; I only use dark mode. I won the argument, so our tool only supports dark mode. Thumbs up if you agree with me.
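The enforced pipeline the post describes (research, clarify, confirm, execute, review, merge) can be sketched as a small state machine. This is an illustrative model of the described workflow, not Async's actual code; the stage names are my own.

```python
from enum import Enum, auto

class Stage(Enum):
    RESEARCH = auto()
    CLARIFY = auto()   # agent asks clarifying questions
    CONFIRM = auto()   # user must approve requirements
    EXECUTE = auto()   # subtasks run commit by commit
    REVIEW = auto()    # stacked-diff review
    MERGED = auto()

# Planning is enforced: a task cannot reach EXECUTE
# without first passing through CLARIFY and CONFIRM.
TRANSITIONS = {
    Stage.RESEARCH: {Stage.CLARIFY},
    Stage.CLARIFY: {Stage.CONFIRM},
    Stage.CONFIRM: {Stage.EXECUTE},
    Stage.EXECUTE: {Stage.REVIEW},
    Stage.REVIEW: {Stage.EXECUTE, Stage.MERGED},  # iterate or merge
}

def advance(current: Stage, target: Stage) -> Stage:
    """Move a task forward, rejecting any shortcut around planning."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot go from {current.name} to {target.name}")
    return target
```

Encoding the workflow as explicit transitions is what makes "forces upfront planning" more than a docs suggestion: the shortcut from RESEARCH straight to EXECUTE simply doesn't exist.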

Show HN: Timep – A next-gen profiler and flamegraph-generator for bash code

Note: this is an update to this "Show HN" post: https://news.ycombinator.com/item?id=44568529

timep is a state-of-the-art debug-trap-based bash profiler that is efficient and extremely accurate. Unlike other profilers, timep records:

1. per-command wall-clock time
2. per-command CPU time, and
3. the hierarchy of parent function calls/subshells for each command

The wall-clock + CPU time combination allows you to determine whether a particular command is CPU-bound or IO-bound, and the hierarchical logging gives you a map of how the code actually executed.

The standout feature of timep is that it will take these records and automatically generate a bash-native flamegraph (one that shows bash commands, not syscalls).

------------------------------------------------

USAGE

timep is extremely easy to use: just source the `timep.bash` file from the repo and add "timep" in front of whatever you want to profile. For example:

    . /path/to/timep.bash
    timep ./some_script
    echo "stdin" | timep some_function

ZERO changes need to be made to the code being profiled!

------------------------------------------------

EXAMPLES

- test code that will be profiled: https://github.com/jkool702/timep/blob/main/TESTS/timep.tests.bash
- output profile for that test code: https://github.com/jkool702/timep/blob/main/TESTS/OUTPUT/out.profile
- flamegraph for that test code: https://github.com/jkool702/timep/blob/main/TESTS/OUTPUT/flamegraph.ALL.svg
- flamegraph from a "real world" test of "forkrun", a parallelization engine written in bash: https://github.com/jkool702/timep/blob/main/TESTS/FORKRUN/flamegraph.ALL.svg

In the "forkrun" test, 13 different checksums were computed for ~670k small files on a ramdisk using 28 parallel workers. This was repeated twice. In total, this test ran around 67,000 individual bash commands. Its `perf stat` (without timep) is here: https://github.com/jkool702/timep/blob/main/TESTS/FORKRUN/perf.compare.txt

------------------------------------------------

EFFICIENCY AND ACCURACY

The forkrun test (see EXAMPLES above) was basically as demanding a workload as one can have in bash: it fully utilized 24.5 cores on a 14c/28t i9-7940x CPU, racking up >840 seconds of CPU time in ~34.5 seconds of wall-clock time. When profiling this group of 67,000 commands with timep:

1. the time it took for the code to run with the debug-trap instrumentation was ~38 seconds, an increase of just slightly over 10%. CPU time had a similar increase.
2. the time profile was ready at +2 minutes (1 minute + 15 seconds after the profiling run finished)
3. the flamegraphs were ready at +5 minutes (4 minutes + 15 seconds after the profiling run finished)

Note that timep records both "start" and "stop" timestamps for every command, and the debug-trap instrumentation runs between one command's "stop" timestamp and the next command's "start" timestamp, meaning the error in the profile's timings is far less than the 10% overhead. Comparing the total (sys+user) CPU time that perf stat gave (without using timep) against the CPU time timep gives (from summing the CPU time of all 67,000-ish commands), the difference is virtually always less than 0.5%, and often less than 0.2%. I've seen it as low as 0.04%, which is a third of a second on a run that took ~850 seconds of CPU time.

------------------------------------------------

MAJOR CHANGES SINCE THE LAST "SHOW HN" POST

1. CPU time is now recorded too (instead of just wall-clock time). This is done via a loadable builtin that calls `getrusage` and (if available) `clock_gettime` to efficiently and accurately determine the CPU time of the process and all its descendants.

2. The .so file required to use the loadable builtin mentioned in #1 is built directly into the script as an embedded compressed base64 sequence. I also developed the bash-native compression scheme that it uses. The .so files for x86_64, aarch64, ppc64le and i686 are all included; I'm hoping to add arm7 soon as well. The flamegraph-generator perl script is also embedded, making the script 100% self-contained. NOTE: these embedded base64 strings include both sha256 and md5 checksums of the resulting .so file, which are verified on extraction.

3. Flamegraph generation has been completely overhauled. The flamegraphs now a) are colored based on runtime (hot colors = longer runtime), b) desaturate colors for commands where CPU time << wall-clock time (e.g., blocking reads, sleep, wait, ...), and c) use a runtime-weighted CDF color mapping that ensures, regardless of the distribution of the underlying data, that the resulting flamegraph has a roughly equal amount of each color in the colorspace (where "equal" means "the same number of pixels are showing each color"). timep also combines multiple flamegraphs (wall-clock vs CPU time, and full vs folded trace sets) by vertically stacking them into a single SVG image, giving "dual stack" and "quad stack" flamegraphs.

4. The post-processing workflow has been basically completely rewritten, making it more robust, easier to understand/maintain, and much faster. The "forkrun" test linked above (which ran 67,000 commands) previously took ~20 minutes; with the new version, you can get a profile in 2 minutes or a profile + flamegraph in 5 minutes, a 4x to 10x speedup!
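The runtime-weighted CDF color mapping described in item 3 can be sketched as follows: sort frames by runtime, accumulate runtime into a CDF, and use each frame's CDF position to pick a palette index, so each color ends up covering roughly the same total runtime. This is my reading of the description, not timep's actual perl implementation.

```python
def cdf_color_indices(runtimes, n_colors=256):
    """Map each runtime to a palette index via a runtime-weighted CDF,
    so every color covers roughly the same total runtime (~ pixel area)."""
    total = sum(runtimes)
    order = sorted(range(len(runtimes)), key=lambda i: runtimes[i])
    indices = [0] * len(runtimes)
    cum = 0.0
    for i in order:
        cum += runtimes[i]
        # position of this frame in the runtime-weighted CDF, scaled to palette
        indices[i] = min(n_colors - 1, int(cum / total * n_colors))
    return indices

# One frame takes 80% of the runtime: it lands at the hot end of the
# palette, while the small frames stay near the cool end.
idx = cdf_color_indices([1.0, 1.0, 8.0], n_colors=10)
print(idx)  # -> [1, 2, 9]
```

Because the mapping follows the data's own CDF rather than a fixed linear scale, even heavily skewed runtime distributions spread across the full colorspace.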

Show HN: Stagewise – frontend coding agent for real codebases

Hey HN, we're Glenn and Julian, and we're building stagewise (https://stagewise.io), a frontend coding agent that runs inside your app's dev mode and makes changes in your local codebase.

We're compatible with any framework and any component library. Think of it like a v0 or Lovable that works locally and with any existing codebase.

You can spawn the agent into locally running web apps in dev mode with `npx stagewise` from the project root. The agent then lets you click on HTML elements in your app, enter prompts like "increase the height here", and it will implement the changes in your source code.

Before stagewise, we were building a vertical SaaS for logistics from scratch and loved using prototyping tools like v0 or Lovable to get to the first version. But when switching from v0/Lovable to Cursor for local development, we felt like the frontend magic was gone. So we decided to build stagewise to bring that same magic to local development.

The first version of stagewise just forwarded a prompt with browser context to existing IDEs and agents (Cursor, Cline, ...) and went viral on X after we open-sourced it. However, the APIs of existing coding agents were very limiting, so we figured that building our own agent would unlock the full potential of stagewise.

Since our last Show HN (https://news.ycombinator.com/item?id=44798553), we launched a few very important features and changes: you now have a proprietary chat history with the agent, an undo button to revert changes, and we increased the amount of free credits AND reduced the pricing by 50%. We made a video about all these changes, showing how stagewise works: https://x.com/goetzejulian/status/1959835222712955140/video/1

So far, we've seen great adoption from non-technical users who wanted to continue building their Lovable prototype locally. We personally use the agent almost daily to make changes to our landing page and to build the UI of new features on our console (https://console.stagewise.io).

If you have an app running in dev mode, simply `cd` into the app directory and run `npx stagewise`, and the agent should appear, ready to play with.

We're very excited to hear your feedback!

Show HN: I Built an XSLT Blog Framework

A few weeks ago a friend sent me grug-brain XSLT (1), which inspired me to redo my personal blog in XSLT.

Rather than just build my own blog on it, I wrote it up for others to use and published it on GitHub (2): https://github.com/vgr-land/vgr-xslt-blog-framework

Since others have XSLT on the mind, now seems as good a time as any to share it with the world. Evidlo@ did a fine job explaining how XSLT works (3).

The short version of how to publish using this framework:

1. Create a new post in HTML, wrapped in the XML headers and footers the framework expects.

2. Tag the post so that it's unique and the framework can find it on build.

3. Add the post to the posts.xml file.

And that's it. No build system to update menus, no RSS file to update (posts.xml is the RSS file). As a reusable framework, there are likely bugs lurking in the CSS, but otherwise I'm finding it perfectly usable for my needs.

Finally, it'd be a shame if XSLT were removed from the HTML spec (4); I've found it quite elegant in its simplicity.

(1) https://news.ycombinator.com/item?id=44393817

(2) https://github.com/vgr-land/vgr-xslt-blog-framework

(3) https://news.ycombinator.com/item?id=44988271

(4) https://news.ycombinator.com/item?id=44952185

(Aside: first-time caller, long-time listener to HN. Thanks!)
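The "posts.xml is the RSS file" trick works because a standard RSS 2.0 document is itself just XML that an XSLT stylesheet can read to build menus, while feed readers consume it unchanged. A minimal sketch of such a dual-purpose file, built with Python's standard library (element names here are plain RSS; the framework's exact schema, and the example.com URLs, are assumptions):

```python
import xml.etree.ElementTree as ET

# A minimal posts.xml that is simultaneously a valid RSS 2.0 feed:
# the blog's XSLT can build its post list from the same <item> elements
# that feed readers consume.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "My XSLT Blog"
ET.SubElement(channel, "link").text = "https://example.com/"

def add_post(title: str, url: str, date: str) -> None:
    """Append one post entry; this is step 3 of the publishing flow."""
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = url
    ET.SubElement(item, "pubDate").text = date

add_post("Hello, XSLT", "https://example.com/posts/hello.xml",
         "Mon, 01 Sep 2025 00:00:00 GMT")

xml_text = ET.tostring(rss, encoding="unicode")
print(xml_text)
```

One file serving both roles is what removes the usual static-site build step: publishing really is just appending an `<item>`.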
