The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser)
Hi HN, I've been building Mystral Native — a lightweight native runtime that lets you write games in JavaScript/TypeScript using standard Web APIs (WebGPU, Canvas 2D, Web Audio, fetch) and run them as standalone desktop apps. Think "Electron for games" but without Chromium. Or a JS runtime like Node, Deno, or Bun, but optimized for WebGPU (and bundling a window/event system using SDL3).

Why: I originally started a new game engine in WebGPU, and I loved the iteration loop of writing TypeScript and instantly seeing the changes in the browser with hot reloading. After getting something working and shipping a demo, I realized that shipping a whole browser doesn't really work if I also want the same codebase to work on mobile. Sure, I could use a webview, but that's not always a good or consistent experience for users: Safari on iOS supports WebGPU, but not the same features that Chrome does on desktop. What I really wanted was a WebGPU runtime that is consistent and works on any platform. I was inspired by Deno's --unsafe-webgpu flag, but I realized that Deno probably wouldn't be a good fit long term: it doesn't support iOS or Android and doesn't bundle a window/event system (they have "bring your own window", but that means writing a lot of custom code for events, dealing with windowing, not to mention more specific things like implementing a Web Audio shim). So that got me down the path of building a native runtime specifically for games, and that's Mystral Native.

So now with Mystral Native, I can keep the same developer experience (write JS, use shaders in WGSL, call requestAnimationFrame) but get a real native binary I can ship to players on any platform without requiring a webview or a browser. No 200 MB Chromium runtime, no CEF overhead, just the game code and a ~25 MB runtime.

What it does:
- Full WebGPU via Dawn (Chrome's implementation) or wgpu-native (Rust)
- Native window & events via SDL3
- Canvas 2D support (Skia), Web Audio (SDL3), fetch (file/http/https)
- V8 for JS (same engine as Chrome/Node), also supports QuickJS and JSC
- ES modules, TypeScript via SWC
- Compile to single binary (think "pkg"): `mystral compile game.js --include assets -o my-game`
- macOS .app bundles with code signing, Linux/Windows standalone executables
- Embedding API for iOS and Android (JSC/QuickJS + wgpu-native)

It's early alpha — the core rendering path works well. I've tested on macOS, Linux (Ubuntu 24.04), and Windows 11, plus some custom builds for iOS and Android to validate that they can work, but there's plenty to improve. Would love to get some feedback and see where it can go!

MIT licensed.

Repo: https://github.com/mystralengine/mystralnative

Docs: https://mystralengine.github.io/mystralnative/
Show HN: Cicada – A scripting language that integrates with C
I wrote a lightweight scripting language that runs together with C. Specifically, it's a C library: you run it through a C function call, and it can call back into your own C functions. Compiles to ~250 kB. No dependencies beyond the C standard library.

Key language features:
* Uses aliases not pointers, so it's memory-safe
* Arrays are N-dimensional and resizable
* Runs scripts or its own 'shell'
* Error trapping
* Methods, inheritance, etc.
* Customizable syntax
Show HN: I built an AI conversation partner to practice speaking languages
Hi,

I built TalkBits because most language apps focus on vocabulary or exercises, not actual conversation. The hard part of learning a language is speaking naturally under pressure.

TalkBits lets you have real-time spoken conversations with an AI that acts like a native speaker. You can choose different scenarios (travel, daily life, work, etc.), speak naturally, and the AI responds with natural speech.

The goal is to make it feel like talking to a real person rather than doing lessons.

Tech-wise, it uses real-time speech input, transcription, LLM responses, and TTS streaming to keep latency low so the conversation feels fluid.

I'm especially interested in feedback on:
– Does it feel natural?
– Where does the conversation break immersion?
– What would make you use this regularly?

Happy to answer technical questions too.

Thanks
Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents
WASM sandbox for running LLM-generated code safely.

Agents get a bash-like shell and can only call tools you provide, with constraints you define.

No Docker, no subprocess, no SaaS — just pip install amla-sandbox.
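The core idea — the agent can only invoke tools you explicitly register — can be sketched in plain Python. This is an illustration of the pattern only, not the amla-sandbox API (the class and method names here are invented):

```python
# Sketch of a tool-gated dispatcher: the "shell" can only run
# commands that map to explicitly registered tools.
class SandboxError(Exception):
    pass

class ToolSandbox:
    def __init__(self):
        self._tools = {}

    def register(self, name, fn):
        self._tools[name] = fn

    def run(self, command_line):
        # Parse a bash-like "cmd arg1 arg2" line.
        name, *args = command_line.split()
        if name not in self._tools:
            raise SandboxError(f"tool not allowed: {name}")
        return self._tools[name](*args)

sandbox = ToolSandbox()
sandbox.register("echo", lambda *args: " ".join(args))
print(sandbox.run("echo hello world"))  # hello world
```

Anything not registered (including arbitrary shell commands) is rejected, which is the constraint boundary the post describes.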
Show HN: Build Web Automations via Demonstration
Hey HN,

We’ve been building browser agents for a while. In production, we kept converging on the same pattern: deterministic scripts for the happy path, agents only for edge cases. So we built Demonstrate Mode.

The idea is simple: you perform your workflow once in a remote browser. Notte records the interactions and generates deterministic automation code.

How it works:
- Record clicks, inputs, navigations in a cloud browser
- Compile them into deterministic code (no LLM at runtime)
- Run and deploy on managed browser infrastructure

Closest analog is Playwright codegen, but:
- Infrastructure is handled (remote browsers, proxies, auth state)
- Code runs in a deployable runtime with logs, retries, and optional agent fallback

Agents are great for prototyping and dynamic steps, but for production we usually want versioned code and predictable cost/behavior. Happy to dive into implementation details in the comments.

Demo: https://www.loom.com/share/f83cb83ecd5e48188dd9741724cde49a

--
Andrea & Lucas,
Notte Founders
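The record-then-compile pattern described above can be sketched in a few lines of Python. The action schema and output style are invented for illustration; Notte's actual recording format isn't shown in the post:

```python
# Sketch: compile a recorded interaction trace into deterministic
# replay code (no LLM at runtime). The action schema is hypothetical.
RECORDING = [
    {"action": "goto",  "url": "https://example.com/login"},
    {"action": "fill",  "selector": "#email", "value": "a@b.com"},
    {"action": "click", "selector": "button[type=submit]"},
]

def compile_to_code(recording):
    """Turn a trace into Playwright-style statements."""
    lines = []
    for step in recording:
        if step["action"] == "goto":
            lines.append(f'page.goto("{step["url"]}")')
        elif step["action"] == "fill":
            lines.append(f'page.fill("{step["selector"]}", "{step["value"]}")')
        elif step["action"] == "click":
            lines.append(f'page.click("{step["selector"]}")')
    return "\n".join(lines)

print(compile_to_code(RECORDING))
```

Because the output is plain code rather than an agent prompt, each run is reproducible and cheap, with the agent reserved for steps the recording can't cover.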
Show HN: Kolibri, a DIY music club in Sweden
We’re Maria and Jonatan, and we run a small DIY music club in Norrköping, Sweden, called Kolibri.

We run it through a small Swedish company. We pay artists, handle logistics, and take operations seriously. But it has still behaved like a tiny cultural startup in the most relevant way: you have to build trust, form a recognisable identity, pace yourself, avoid burnout, and make something people genuinely return to, without big budgets or growth hacks.
We run it on the last Friday of every month in a small restaurant venue, typically 50–70 paying guests.

What we built isn’t an app. It’s a repeatable local format: a standing night where strangers become regulars, centred on music rather than networking.

We put up a simple anchor site with schedule + photos/video: https://kolibrinkpg.com/

What you can “try” on the site:

* Photos and short videos from nights (atmosphere + scale)
* A sense of programming/curation (what we book, how we sequence a night)
* Enough context to copy parts of the format if you’re building something similar locally
How it started: almost accidentally. I was doing one of many remote music sessions with a friend from London, passing Ableton projects back and forth while talking over FaceTime. One evening I ran out of beer and wandered into a nearby restaurant (Mitropa). A few conversations later we had a date on the calendar.

That restaurant is still the venue. It’s owned by a local family: one runs the kitchen, another manages the space. Over time they’ve become close to us, so I’ll put it plainly: if they called and needed help, we’d drop everything.

Maria was quickly dubbed klubbvärdinnan (hostess), partly as a joke. In Sweden in the 1970s, posh nightclubs sometimes had a klubbvärdinna, a kind of social anchor. She later adopted it as her DJ alias, and the role became real: greeting people, recognising newcomers who look uncertain, and quietly setting the tone for how people treat one another.

The novelty (if there is any) is that we treat the night like a designed social system:

* Curation is governance. If the music is coherent and emotionally “true”, people relax. If it’s generic, people perform.
* The room needs a host layer. Someone has to make it socially safe to arrive alone.
* Regulars are made, not acquired. People return when they feel recognised and when the night has a consistent identity.
* DIY constraints create legitimacy. Turning a corner restaurant into a club on a shoestring sounds amateurish, but it reads as real.
* Behavioural boundaries are practical. If newcomers can’t trust the room, the whole thing stops working.
On marketing: we learned quickly that “posting harder” isn’t the same as building a local thing. What worked best was analogue outreach: we walked around town, visited local businesses we genuinely like, bought something, introduced ourselves, and asked if we could leave a flyer. It’s boring, but it builds trust because it’s human, not algorithmic.

A concrete example: early on we needed Instagram content that could show music visually without filming crowds in a club. We started filming headphone-walk clips: one person, headphones on, walking through town to a track we chose. It looked good, stylised, cinematic, and that mattered more than we expected. People didn’t just tolerate being filmed; many wanted to be in the videos. Then we’d invite them for a couple of free drinks afterwards as a thank-you and a chance to actually talk. That was a reliable early trust-building mechanism.

At one point we were offered a larger venue with a proper budget. It was tempting. But we’d just hosted our first live gig at Mitropa and felt something click. We realised the format works because it’s small and grounded. Scale would change the social physics.
Show HN: Shelvy Books
Hey HN! I built a little side project I wanted to share.

Shelvy is a free, visual bookshelf app where you can organize books you're reading, want to read, or have finished. Sign in to save your own collection.

Not monetized, no ads, no tracking beyond basic auth. Just a fun weekend project that grew a bit.

Live: https://shelvybooks.com

Would love any feedback on the UX or feature ideas!
Show HN: SHDL – A minimal hardware description language built from logic gates
Hi, everyone!

I built SHDL (Simple Hardware Description Language) as an experiment in stripping hardware description down to its absolute fundamentals.

In SHDL, there are no arithmetic operators, no implicit bit widths, and no high-level constructs. You build everything explicitly from logic gates and wires, then compose larger components hierarchically. The goal is not synthesis or performance, but understanding: what digital systems actually look like when the abstractions are removed.

SHDL is accompanied by PySHDL, a Python interface that lets you load circuits, poke inputs, step the simulation, and observe outputs. Under the hood, SHDL compiles circuits to C for fast execution, but the language itself remains intentionally small and transparent.

This is not meant to replace Verilog or VHDL. It’s aimed at:

- learning digital logic from first principles
- experimenting with HDL and language design
- teaching or visualizing how complex hardware emerges from simple gates

I would especially appreciate feedback on:

- the language design choices
- what feels unnecessarily restrictive vs. educationally valuable
- whether this kind of “anti-abstraction” HDL is useful to you

Repo: https://github.com/rafa-rrayes/SHDL

Python package: PySHDL on PyPI

To make this concrete, here are a few small working examples written in SHDL:

1. Full Adder

    component FullAdder(A, B, Cin) -> (Sum, Cout) {
        x1: XOR; a1: AND;
        x2: XOR; a2: AND;
        o1: OR;
        connect {
            A -> x1.A; B -> x1.B;
            A -> a1.A; B -> a1.B;
            x1.O -> x2.A; Cin -> x2.B;
            x1.O -> a2.A; Cin -> a2.B;
            a1.O -> o1.A; a2.O -> o1.B;
            x2.O -> Sum; o1.O -> Cout;
        }
    }

2. 16-bit register

    # clk must be high for two cycles to store a value
    component Register16(In[16], clk) -> (Out[16]) {
        >i[16]{
            a1{i}: AND;
            a2{i}: AND;
            not1{i}: NOT;
            nor1{i}: NOR;
            nor2{i}: NOR;
        }
        connect {
            >i[16]{
                # Capture on clk
                In[{i}] -> a1{i}.A;
                In[{i}] -> not1{i}.A;
                not1{i}.O -> a2{i}.A;
                clk -> a1{i}.B;
                clk -> a2{i}.B;
                a1{i}.O -> nor1{i}.A;
                a2{i}.O -> nor2{i}.A;
                nor1{i}.O -> nor2{i}.B;
                nor2{i}.O -> nor1{i}.B;
                nor2{i}.O -> Out[{i}];
            }
        }
    }

3. 16-bit Ripple-Carry Adder

    use fullAdder::{FullAdder};

    component Adder16(A[16], B[16], Cin) -> (Sum[16], Cout) {
        >i[16]{ fa{i}: FullAdder; }
        connect {
            A[1] -> fa1.A;
            B[1] -> fa1.B;
            Cin -> fa1.Cin;
            fa1.Sum -> Sum[1];
            >i[2,16]{
                A[{i}] -> fa{i}.A;
                B[{i}] -> fa{i}.B;
                fa{i-1}.Cout -> fa{i}.Cin;
                fa{i}.Sum -> Sum[{i}];
            }
            fa16.Cout -> Cout;
        }
    }
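As a cross-check of the FullAdder wiring, the same gate network can be simulated directly in plain Python (this does not use PySHDL, whose loading API isn't shown here; it just mirrors the gates and wires one-to-one):

```python
# Gate-level simulation of the FullAdder network:
# Sum = (A XOR B) XOR Cin, Cout = (A AND B) OR ((A XOR B) AND Cin).
def full_adder(a, b, cin):
    x1 = a ^ b          # x1: XOR
    a1 = a & b          # a1: AND
    sum_ = x1 ^ cin     # x2: XOR
    a2 = x1 & cin       # a2: AND
    cout = a1 | a2      # o1: OR
    return sum_, cout

# Exhaustive truth-table check against integer addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, c = full_adder(a, b, cin)
            assert 2 * c + s == a + b + cin
print("full adder OK")
```

Chaining sixteen of these, carry-out to carry-in, is exactly what the Adder16 component expresses in SHDL.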
Show HN: Cursor for Userscripts
I’ve been experimenting with embedding a Claude Code/Cursor-style coding agent directly into the browser.

At a high level, the agent generates and maintains userscripts and CSS that are re-applied on page load. Rather than just editing the DOM via JS in the console, the agent treats the page and the DOM as a file.

The models are often trained in RL sandboxes with full access to the filesystem and bash, so they are really good at using them. To make the agent behave well, I've simulated this environment.

The whole state of a page and its scripts is implemented as a virtual filesystem hacked on top of browser.local storage. URLs are mapped to directories, and the agent starts inside this directory. It has tools to read/edit files, grep around, and a fake bash command that is just used for running scripts and executing JS code.

I've tested only with Opus 4.5 so far, and it works pretty reliably.
The state of the filesystem can be synced to the real filesystem, although because Firefox doesn't support the Filesystem API, you need to manually import the fs contents first.

This agent is *really* useful for extracting things to CSV, but it can also be used for fun.

Demo: https://x.com/ichebykin/status/2015686974439608607
Show HN: ShapedQL – A SQL engine for multi-stage ranking and RAG
Hi HN,

I’m Tullie, founder of Shaped. Previously, I was a researcher at Meta AI, worked on ranking for Instagram Reels, and was a contributor to PyTorch Lightning.

We built ShapedQL because we noticed that while retrieval (finding 1,000 items) has been commoditized by vector DBs, ranking (finding the best 10 items) is still an infrastructure problem.

To build a decent "for you" feed or a RAG system with long-term memory, you usually have to put together a vector DB (Pinecone/Milvus), a feature store (Redis), an inference service, and thousands of lines of Python to handle business logic and reranking.

We built an engine that consolidates this into a single SQL dialect. It compiles declarative queries into high-performance, multi-stage ranking pipelines.

HOW IT WORKS:

Instead of just SELECT *, ShapedQL operates in four stages native to recommendation systems:

RETRIEVE: Fetch candidates via Hybrid Search (Keywords + Vectors) or Collaborative Filtering.
FILTER: Apply hard constraints (e.g., "inventory > 0").
SCORE: Rank results using real-time models (e.g., p(click) or p(relevance)).
REORDER: Apply diversity logic so your Agent/User doesn’t see 10 nearly identical results.

THE SYNTAX: Here is what a RAG query looks like. This replaces about 500 lines of standard Python/LangChain code:

    SELECT item_id, description, price
    FROM
        -- Retrieval: Hybrid search across multiple indexes
        search_flights("$param.user_prompt", "$param.context"),
        search_hotels("$param.user_prompt", "$param.context")
    WHERE
        -- Filtering: Hard business constraints
        price <= "$param.budget" AND is_available("$param.dates")
    ORDER BY
        -- Scoring: Real-time reranking (Personalization + Relevance)
        0.5 * preference_score(user, item) +
        0.3 * relevance_score(item, "$param.user_prompt")
    LIMIT 20

If you don’t like SQL, you can also use our Python and TypeScript SDKs. I’d love to know what you think of the syntax and the abstraction layer!
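The four stages map cleanly onto ordinary code. Here is a toy Python version of the filter → score → reorder tail of the pipeline, mirroring the WHERE/ORDER BY clauses above (illustrative only; this is not the ShapedQL engine or its SDK):

```python
# Toy multi-stage ranking: filter -> score -> reorder over
# already-retrieved candidates (dicts with precomputed signals).
def rank(candidates, budget, top_k=3):
    # FILTER: hard business constraints.
    items = [c for c in candidates if c["price"] <= budget]
    # SCORE: weighted blend, as in the ORDER BY clause.
    for c in items:
        c["score"] = 0.5 * c["preference"] + 0.3 * c["relevance"]
    items.sort(key=lambda c: c["score"], reverse=True)
    # REORDER: simple diversity pass, at most one item per category.
    seen, result = set(), []
    for c in items:
        if c["category"] not in seen:
            seen.add(c["category"])
            result.append(c)
    return result[:top_k]
```

The point of the SQL dialect is that this logic (plus retrieval, feature lookup, and model inference) is compiled and served for you rather than hand-maintained in application code.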
Show HN: ShapedQL – A SQL engine for multi-stage ranking and RAG
Hi HN,<p>I’m Tullie, founder of Shaped. Previously, I was a researcher at Meta AI, worked on ranking for Instagram Reels, and was a contributor to PyTorch Lightning.<p>We built ShapedQL because we noticed that while retrieval (finding 1,000 items) has been commoditized by vector DBs, ranking (finding the best 10 items) is still an infrastructure problem.<p>To build a decent for you feed or a RAG system with long-term memory, you usually have to put together a vector DB (Pinecone/Milvus), a feature store (Redis), an inference service, and thousands of lines of Python to handle business logic and reranking.<p>We built an engine that consolidates this into a single SQL dialect. It compiles declarative queries into high-performance, multi-stage ranking pipelines.<p>HOW IT WORKS:<p>Instead of just SELECT <i>, ShapedQL operates in four stages native to recommendation systems:<p>RETRIEVE: Fetch candidates via Hybrid Search (Keywords + Vectors) or Collaborative Filtering.
FILTER: Apply hard constraints (e.g., "inventory > 0").
SCORE: Rank results using real-time models (e.g., p(click) or p(relevance)).
REORDER: Apply diversity logic so your Agent/User doesn’t see 10 nearly identical results.<p>THE SYNTAX: Here is what a RAG query looks like. This replaces about 500 lines of standard Python/LangChain code:<p>SELECT item_id, description, price<p>FROM<p><pre><code> -- Retrieval: Hybrid search across multiple indexes
search_flights("$param.user_prompt", "$param.context"),
search_hotels("$param.user_prompt", "$param.context")
</code></pre>
WHERE<p><pre><code> -- Filtering: Hard business constraints
price <= "$param.budget" AND is_available("$param.dates")
</code></pre>
ORDER BY<p><pre><code> -- Scoring: Real-time reranking (Personalization + Relevance)
0.5 * preference_score(user, item) +
0.3 * relevance_score(item, "$param.user_prompt")
</code></pre>
LIMIT 20<p>If you don’t like SQL, you can also use our Python and Typescript SDKs. I’d love to know what you think of the syntax and the abstraction layer!</i>
Show HN: Dwm.tmux – a dwm-inspired window manager for tmux
Hey, HN! With all the recent agentic workflows being primarily terminal- and tmux-based, I wanted to share a little project I created about a decade ago.

I've continued to use this as my primary terminal "window manager" and wanted to share in case others might find it useful.

I would love to hear about others' terminal-based workflows and any other tools you may use with similar functionality.
Show HN: I built a small browser engine from scratch in C++
Hi HN! Korean high school senior here, about to start CS in college.

I built a browser engine from scratch in C++ to understand how browsers work. First time using C++, 8 weeks of development, lots of debugging — but it works!

Features:

- HTML parsing with error correction
- CSS cascade and inheritance
- Block/inline layout engine
- Async image loading + caching
- Link navigation + history

Hardest parts:

- String parsing (HTML, CSS)
- Rendering
- Image caching & layout reflowing

What I learned (beyond code):

- Systematic debugging is crucial
- Ship with known bugs rather than chase perfection
- The power of "Why?"

~3,000 lines of C++17/Qt6. Would love feedback on code architecture and C++ best practices!

GitHub: https://github.com/beginner-jhj/mini_browser
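For readers curious what "CSS cascade" means concretely: when several rules match the same element, the cascade picks a winner partly by selector specificity, counted as an (ids, classes, types) triple. A minimal Python sketch of that counting (not code from mini_browser, and simplified to #id, .class, and type selectors only):

```python
import re

# Compute CSS specificity as an (ids, classes, types) triple.
# Simplified: no attribute selectors, pseudo-classes, or pseudo-elements.
def specificity(selector):
    ids = len(re.findall(r"#[\w-]+", selector))
    classes = len(re.findall(r"\.[\w-]+", selector))
    # Strip id/class tokens, then count remaining type names.
    stripped = re.sub(r"[#.][\w-]+", " ", selector)
    types = len(re.findall(r"[A-Za-z][\w-]*", stripped))
    return (ids, classes, types)

# The higher tuple wins the cascade (source order breaks ties).
rules = ["p", ".note p", "#main .note p"]
print(max(rules, key=specificity))  # #main .note p
```

A layout engine like this one has to run a comparison of this kind for every matched declaration before inheritance and layout can proceed.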
Show HN: A MitM proxy to see what your LLM tools are sending
I built this out of curiosity about what Claude Code was actually sending to the API. Turns out, watching your tokens tick up in real time is oddly satisfying.

Sherlock sits between your LLM tools and the API, showing you every request with a live dashboard, and auto-saving copies of every prompt as Markdown and JSON.
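The core interception idea can be sketched in plain Python: save each request body to disk, then forward it to the real API. This is an illustration of the pattern only, not Sherlock's actual implementation (the function names and file layout are invented):

```python
import json
import pathlib
import time

# Sketch of a request interceptor: log each outgoing LLM API
# request as JSON before forwarding it upstream.
LOG_DIR = pathlib.Path("sherlock_logs")

def intercept(request_body, forward):
    """Save the request, then pass it to the real API call."""
    LOG_DIR.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    path = LOG_DIR / f"request-{stamp}.json"
    path.write_text(json.dumps(request_body, indent=2))
    # Rough size estimate for a live "tokens" counter.
    chars = sum(len(m["content"]) for m in request_body["messages"])
    print(f"logged {path.name}, ~{chars // 4} tokens")
    return forward(request_body)

# Usage with a stubbed upstream call:
reply = intercept(
    {"model": "claude", "messages": [{"role": "user", "content": "hi"}]},
    forward=lambda body: {"ok": True},
)
```

A real MitM proxy does the same thing at the HTTP layer (terminating TLS and re-issuing the request), but the log-then-forward shape is the whole trick.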