The best Hacker News stories from Show HN from the past week
Latest posts:
Show HN: Wikipedia as a doomscrollable social media feed
Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation
I’ve been running Clawdbot for the last couple of weeks and have genuinely found it useful, but running it scares the crap out of me.

OpenClaw has 52+ modules and runs agents with near-unlimited permissions in a single Node process. NanoClaw is ~500 lines of core code, and agents run in actual Apple containers with filesystem isolation. Each chat gets its own sandboxed context.

This is not a Swiss Army knife. It’s built to match my exact needs. Fork it and make it yours.
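For readers curious what per-chat isolation can look like in practice, here is a minimal TypeScript sketch of the general pattern: one sandbox process per chat, with messages piped over stdin/stdout. The `container run` invocation, the image name, and the per-chat workspace path are assumptions for illustration, not NanoClaw's actual code.

    import { spawn, type ChildProcess } from "node:child_process";

    const sandboxes = new Map<string, ChildProcess>();

    // One sandbox per chat: the first message for a chat id lazily starts its container.
    function getSandbox(chatId: string): ChildProcess {
      let proc = sandboxes.get(chatId);
      if (!proc) {
        // Assumed invocation: an Apple `container run`-style command (docker-run-like);
        // "agent-image" and the workspace path are placeholders.
        proc = spawn("container", ["run", "-i", "agent-image"], {
          cwd: `/tmp/chats/${chatId}`,
          stdio: ["pipe", "pipe", "inherit"],
        });
        sandboxes.set(chatId, proc);
      }
      return proc;
    }

    // Forward a chat message into that chat's isolated agent; replies come back on stdout.
    function sendToAgent(chatId: string, message: string): void {
      getSandbox(chatId).stdin?.write(message + "\n");
    }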
Show HN: Moltbook – A social network for moltbots (clawdbots) to hang out
Hey everyone!

Just made this over the past few days.

Moltbots can sign up and interact via CLI, no direct human interactions.

Just for fun to see what they all talk about :)
Show HN: I trained a 9M speech model to fix my Mandarin tones
Built this because tones are killing my spoken Mandarin and I can't reliably hear my own mistakes.

It's a 9M-parameter Conformer-CTC model trained on ~300h of audio (AISHELL + Primewords), quantized to INT8 (11 MB), and it runs 100% in-browser via ONNX Runtime Web.

It grades per-syllable pronunciation and tones with Viterbi forced alignment.

Try it here: https://simedw.com/projects/ear/
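As a hedged sketch of what running a model like this in the browser looks like, here is the basic onnxruntime-web flow in TypeScript. The model filename, input name, and feature shape are assumptions; the project's actual preprocessing and CTC/Viterbi alignment step are not shown.

    import * as ort from "onnxruntime-web";

    // Load a quantized ONNX model and run one inference pass entirely in the browser.
    async function scoreUtterance(featureData: Float32Array, numFrames: number) {
      // "tone-model.int8.onnx" is a placeholder filename.
      const session = await ort.InferenceSession.create("tone-model.int8.onnx");

      // Assumed input: log-mel features with shape [batch, frames, mels].
      const feats = new ort.Tensor("float32", featureData, [1, numFrames, 80]);

      // Assumed input name; the real model's input/output names may differ.
      const outputs = await session.run({ features: feats });

      // The resulting logits would then feed CTC decoding / Viterbi forced alignment (not shown).
      return outputs;
    }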
Show HN: A MitM proxy to see what your LLM tools are sending
I built this out of curiosity about what Claude Code was actually sending to the API. Turns out, watching your tokens tick up in real time is oddly satisfying.

Sherlock sits between your LLM tools and the API, showing you every request in a live dashboard and auto-saving a copy of every prompt as Markdown and JSON.
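To make the idea concrete, here is a minimal sketch (not Sherlock itself) of a logging pass-through proxy in Node/TypeScript: the tool points its base URL at the local server, which logs each request body before relaying it upstream. The upstream host is an assumption.

    import http from "node:http";
    import https from "node:https";

    const UPSTREAM = "api.anthropic.com"; // assumption: whatever host the tool normally calls

    http
      .createServer((req, res) => {
        const chunks: Buffer[] = [];
        req.on("data", (c) => chunks.push(c));
        req.on("end", () => {
          const body = Buffer.concat(chunks);
          // This is where a tool like this could pretty-print the prompt or save it to disk.
          console.log(`${req.method} ${req.url} (${body.length} bytes)`);

          // Relay the request to the real API and stream the response back unchanged.
          const upstream = https.request(
            {
              host: UPSTREAM,
              path: req.url,
              method: req.method,
              headers: { ...req.headers, host: UPSTREAM },
            },
            (upRes) => {
              res.writeHead(upRes.statusCode ?? 502, upRes.headers);
              upRes.pipe(res);
            }
          );
          upstream.on("error", () => res.writeHead(502).end());
          upstream.end(body);
        });
      })
      .listen(8080);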
Show HN: The HN Arcade
I love seeing all the small games that people build and post to this site.

I don't want to forget any of them, so I've built a directory/arcade for the games here, which I maintain.

Feel free to check it out, add your game if it's missing, and let me know what you think. Thanks!
Show HN: LemonSlice – Upgrade your voice agents to real-time video
Hey HN, we're the co-founders of LemonSlice (try our HN playground here: https://lemonslice.com/hn). We train interactive avatar video models. Our API lets you upload a photo and immediately jump into a FaceTime-style call with that character. Here's a demo: https://www.loom.com/share/941577113141418e80d2834c83a5a0a9

Chatbots are everywhere and voice AI has taken off, but we believe video avatars will be the most common form factor for conversational AI. Most people would rather watch something than read it. The problem is that generating video in real time is hard, and overcoming the uncanny valley is even harder.

We haven't broken the uncanny valley yet. Nobody has. But we're getting close, and our photorealistic avatars are currently best-in-class (judge for yourself: https://lemonslice.com/try/taylor). Plus, we're the only avatar model that can do animals and heavily stylized cartoons. Try it: https://lemonslice.com/try/alien. Warning! Talking to this little guy may improve your mood.

Today we're releasing our new model* - Lemon Slice 2, a 20B-parameter diffusion transformer that generates infinite-length video at 20 fps on a single GPU - and opening up our API.

How did we get a video diffusion model to run in real time? There was no single trick, just a lot of them stacked together. The first big change was making our model causal. Standard video diffusion models are bidirectional (they look at frames both before and after the current one), which means you can't stream.

From there it was about fitting everything on one GPU. We switched from full to sliding-window attention, which killed our memory bottleneck. We distilled from 40 denoising steps down to just a few - quality degraded less than we feared, especially after using GAN-based distillation (though tuning that adversarial loss to avoid mode collapse was its own adventure).

And the rest was inference work: modifying RoPE from complex to real (this one was cool!), precision tuning, fusing kernels, a special rolling KV cache, lots of other caching, and more. We kept shaving off milliseconds wherever we could and eventually got to real time.

We set up a guest playground for HN so you can create and talk to characters without logging in: https://lemonslice.com/hn. For those who want to build with our API (we have a new LiveKit integration that we're pumped about!), grab a coupon code in the HN playground for your first Pro month free ($100 value). See the docs: https://lemonslice.com/docs. Pricing is usage-based at $0.12-0.20/min for video generation.

Looking forward to your feedback!

EDIT: Tell us what characters you want to see in the comments and we can make them for you to talk to (e.g. Max Headroom)

*We did a Show HN last year for our V1 model: https://news.ycombinator.com/item?id=43785044. It was technically impressive, but so bad compared to what we have today.
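The "rolling KV cache" mentioned above pairs naturally with sliding-window attention: keep only the last N frames of keys/values so memory stays constant for infinite-length generation. Here is a generic ring-buffer sketch of that idea in TypeScript; the names and shapes are illustrative and this is not LemonSlice's implementation.

    // Fixed-size ring buffer of per-frame keys/values for sliding-window attention.
    class RollingKVCache {
      private keys: Float32Array[];
      private values: Float32Array[];
      private next = 0;   // next slot to overwrite
      private count = 0;  // how many slots are currently filled

      constructor(private window: number, dim: number) {
        this.keys = Array.from({ length: window }, () => new Float32Array(dim));
        this.values = Array.from({ length: window }, () => new Float32Array(dim));
      }

      // Append the K/V for one new frame, evicting the oldest when the window is full.
      push(k: Float32Array, v: Float32Array): void {
        this.keys[this.next].set(k);
        this.values[this.next].set(v);
        this.next = (this.next + 1) % this.window;
        this.count = Math.min(this.count + 1, this.window);
      }

      // Return the cached entries oldest-first, which is what windowed attention reads.
      snapshot(): { keys: Float32Array[]; values: Float32Array[] } {
        const start = this.count < this.window ? 0 : this.next;
        const idx = Array.from({ length: this.count }, (_, i) => (start + i) % this.window);
        return { keys: idx.map((i) => this.keys[i]), values: idx.map((i) => this.values[i]) };
      }
    }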
Show HN: One Human + One Agent = One Browser From Scratch in 20K LOC
Related: https://simonwillison.net/2026/Jan/27/one-human-one-agent-one-browser/
Show HN: Only 1 LLM can fly a drone
Show HN: Bonsplit – Tabs and splits for native macOS apps
Show HN: Coi – A language that compiles to WASM, beats React/Vue
I usually build web games in C++, but using Emscripten always felt like overkill for what I was doing. I don't need full POSIX emulation or a massive standard library just to render some stuff to a canvas and handle basic UI.

The main thing I wanted to solve was the JS/WASM interop bottleneck. Instead of using the standard glue code for every call, I moved everything to a shared-memory architecture using command and event buffers.

The way it works is that I batch all the instructions in WASM and then just send a single "flush" signal to JS. The JS side then reads everything directly out of shared memory in one go. It's way more efficient: I ran a benchmark rendering 10k rectangles on a canvas and the difference was huge - Emscripten hit around 40 FPS, while my setup hit 100 FPS. (A rough JS-side sketch of this command-buffer pattern appears after this post.)

But writing DOM logic in C++ is painful, so I built Coi. It's a component-based language that statically analyzes changes at compile time to enable O(1) reactivity. Unlike traditional frameworks, there is no virtual DOM overhead; the compiler maps state changes directly to specific handles in the command buffer.

I recently benchmarked this against React and Vue on a 1,000-row table: Coi came out on top for row creation, row updating, and element swapping because it avoids the "diffing" step entirely and minimizes bridge crossings. Its bundle size was also the smallest of the three.

One of the coolest things about the architecture is how the standard library works. If I want to support a new browser API (like Web Audio or a new Canvas feature), I just add the definition to my WebCC schema file. When I recompile the Coi compiler, the language automatically gains a new standard library function to access that API. There is zero manual wrapping involved.

I'm really proud of how it's coming along. It combines the performance of a custom WASM stack with a syntax that actually feels good to write (for me at least :P). Plus, since the intermediate step is C++, I'm looking into making it work on the server side too, which would allow for sharing components across the whole stack.

Example (Coi code):

    component Counter(string label, mut int& value) {
        def add(int i) : void {
            value += i;
        }

        style {
            .counter {
                display: flex;
                gap: 12px;
                align-items: center;
            }
            button {
                padding: 8px 16px;
                cursor: pointer;
            }
        }

        view {
            <div class="counter">
                <span>{label}: {value}</span>
                <button onclick={add(1)}>+</button>
                <button onclick={add(-1)}>-</button>
            </div>
        }
    }

    component App {
        mut int score = 0;

        style {
            .app {
                padding: 24px;
                font-family: system-ui;
            }
            h1 {
                color: #1a73e8;
            }
            .win {
                color: #34a853;
                font-weight: bold;
            }
        }

        view {
            <div class="app">
                <h1>Score: {score}</h1>
                <Counter label="Player" &value={score} />
                <if score >= 10>
                    <p class="win">You win!</p>
                </if>
            </div>
        }
    }

    app {
        root = App;
        title = "My Counter App";
        description = "A simple counter built with Coi";
        lang = "en";
    }

Live Demo: https://io-eric.github.io/coi

Coi (The Language): https://github.com/io-eric/coi

WebCC: https://github.com/io-eric/webcc

I'd love to hear what you think. It's still far from finished, but it's a side project I'm really excited about. :)
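As a rough illustration of the command-buffer pattern described in the post above, here is a minimal JS-side reader sketched in TypeScript: the WASM side appends fixed-size commands into a SharedArrayBuffer and then signals one "flush", and the host drains the whole batch in a single pass. The opcodes, buffer layout, and names are invented for illustration and are not Coi/WebCC's actual encoding.

    const OP_SET_TEXT = 1;
    const OP_SET_ATTR = 2;

    const shared = new SharedArrayBuffer(64 * 1024);
    const buf = new Int32Array(shared);   // buf[0] = number of queued commands
    const STRIDE = 3;                     // [opcode, targetHandle, payloadRef]

    const handles = new Map<number, Element>();  // handle -> live DOM node
    // Side table for string payloads; a real system would encode these in shared
    // memory too, but a plain array keeps the sketch short.
    const payloads: string[] = [];

    // Called once when WASM signals "flush": drain every queued command in one pass.
    function flush(): void {
      const n = Atomics.load(buf, 0);
      for (let i = 0; i < n; i++) {
        const base = 1 + i * STRIDE;
        const op = buf[base];
        const el = handles.get(buf[base + 1]);
        if (!el) continue;
        if (op === OP_SET_TEXT) el.textContent = payloads[buf[base + 2]];
        else if (op === OP_SET_ATTR) el.setAttribute("data-x", payloads[buf[base + 2]]);
      }
      Atomics.store(buf, 0, 0); // reset the queue for the next batch
    }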