The best Show HN stories from Hacker News over the past day

Latest posts:

Show HN: An interactive map of US lighthouses and navigational aids

This is an interactive map of US navigational aids and lighthouses. It shows each light's location, color, and characteristic, along with any remarks the Coast Guard has attached.

I was home sick with the flu this weekend and went on a bit of a Wikipedia deep dive about active American lighthouses. Searching around, I found it surprisingly hard to find a single source or interactive map of active beacons, or a description of what the "characteristic" means. The Coast Guard does maintain a list of active lights, though, which it publishes annually (https://www.navcen.uscg.gov/light-list-annual-publication). With some help from Claude Code, it wasn't hard to extract the lat/long and put together a small webapp that shows a map of these light stations and illustrates each characteristic with an animated visualization.

Of course, this shouldn't be used as a navigational aid, merely for informational purposes! That said, having lived in Seattle and San Francisco, I found it quite interesting.
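To illustrate what animating a characteristic involves, here is a minimal TypeScript sketch (my own illustration, not the author's code) that turns a simple flashing characteristic such as "Fl W 6s" (flashing white on a 6-second period) into an on/off schedule a render loop can sample:

    // Hypothetical sketch: map a flashing characteristic to an on/off schedule.
    interface LightPhase {
      on: boolean;        // whether the light is lit during this phase
      durationMs: number; // how long the phase lasts
    }

    // "Fl W 6s": one short flash, then darkness for the rest of the period.
    function flashingSchedule(periodSeconds: number, flashMs = 500): LightPhase[] {
      return [
        { on: true, durationMs: flashMs },
        { on: false, durationMs: periodSeconds * 1000 - flashMs },
      ];
    }

    // Sample the schedule at an absolute time to decide whether to draw the beam.
    function isLit(schedule: LightPhase[], nowMs: number): boolean {
      const total = schedule.reduce((sum, p) => sum + p.durationMs, 0);
      let t = nowMs % total;
      for (const phase of schedule) {
        if (t < phase.durationMs) return phase.on;
        t -= phase.durationMs;
      }
      return false;
    }

    const fl6 = flashingSchedule(6);
    console.log(isLit(fl6, Date.now()));

A real implementation would also need to handle group-flashing, occulting, and isophase patterns from the light list.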

Show HN: A small programming language where everything is pass-by-value

This is a hobby project of mine that I started a few years ago to learn about programming language implementation. It was created 95% without AI, although a few recent commits include code from Gemini CLI.

I started out following Crafting Interpreters, but gradually branched off that until I had almost nothing left in common.

Tech stack: Rust, Cranelift (JIT compilation), LALRPOP (parser).

Original title: "A small programming language where everything is a value" (edited based on comments)

Show HN: TUI for managing XDG default applications

Author here. I made this little TUI program for managing default applications on the Linux desktop.

Maybe some of you will find it useful.

Happy to answer any questions.

Show HN: Bonsplit – Tabs and splits for native macOS apps

Show HN: I built a space travel calculator using Vanilla JavaScript

I built this because measuring my age in years felt boring—I wanted to see the kilometers.

The first version only used Earth's orbital speed (~30 km/s), but the number moved too slowly. To get the "existential dread" feeling, I switched to using the Milky Way's velocity relative to the CMB (~600 km/s). The math takes some liberties (using a scalar sum instead of vectors) to make the speed feel "fast," but it gets the point across.

Under the hood, it's a single HTML file with zero dependencies. No React, no build step. The main challenge was the canvas starfield—I had to pre-allocate the star objects to stop the garbage collector from causing stutters on mobile.

Let me know if the physics makes you angry or if the stars run smoothly on your device.
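The core arithmetic is tiny. Here is a TypeScript sketch of what such a calculation might look like (my assumption of the approach, not the actual source):

    // Assumed constant from the post: Milky Way velocity relative to the CMB.
    const CMB_SPEED_KM_PER_S = 600;

    // Kilometers covered since birth, treating the velocity as a scalar.
    function kilometersTravelled(birthDate: Date, now: Date = new Date()): number {
      const ageSeconds = (now.getTime() - birthDate.getTime()) / 1000;
      return ageSeconds * CMB_SPEED_KM_PER_S;
    }

    // A 30-year-old has covered on the order of 5.7e11 km by this measure.
    console.log(kilometersTravelled(new Date("1995-06-15")).toExponential(2));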

Show HN: Coi – A language that compiles to WASM, beats React/Vue

I usually build web games in C++, but using Emscripten always felt like overkill for what I was doing. I don't need full POSIX emulation or a massive standard library just to render some stuff to a canvas and handle basic UI.

The main thing I wanted to solve was the JS/WASM interop bottleneck. Instead of using the standard glue code for every call, I moved everything to a shared-memory architecture using command and event buffers.

The way it works is that I batch all the instructions in WASM and then just send a single "flush" signal to JS. The JS side then reads everything directly out of shared memory in one go. It's way more efficient. I ran a benchmark rendering 10k rectangles on a canvas, and the difference was huge: Emscripten hit around 40 FPS, while my setup hit 100 FPS.

But writing DOM logic in C++ is painful, so I built Coi. It's a component-based language that statically analyzes changes at compile time to enable O(1) reactivity. Unlike traditional frameworks, there is no Virtual DOM overhead; the compiler maps state changes directly to specific handles in the command buffer.

I recently benchmarked this against React and Vue on a 1,000-row table: Coi came out on top for row creation, row updating, and element swapping because it avoids the "diffing" step entirely and minimizes bridge crossings. Its bundle size was also the smallest of the three.

One of the coolest things about the architecture is how the standard library works. If I want to support a new browser API (like Web Audio or a new Canvas feature), I just add the definition to my WebCC schema file. When I recompile the Coi compiler, the language automatically gains a new standard library function to access that API. There is zero manual wrapping involved.

I'm really proud of how it's coming along. It combines the performance of a custom WASM stack with a syntax that actually feels good to write (for me at least :P). Plus, since the intermediate step is C++, I'm looking into making it work on the server side too, which would allow for sharing components across the whole stack.

Example (Coi code):

    component Counter(string label, mut int& value) {
        def add(int i) : void {
            value += i;
        }

        style {
            .counter { display: flex; gap: 12px; align-items: center; }
            button { padding: 8px 16px; cursor: pointer; }
        }

        view {
            <div class="counter">
                <span>{label}: {value}</span>
                <button onclick={add(1)}>+</button>
                <button onclick={add(-1)}>-</button>
            </div>
        }
    }

    component App {
        mut int score = 0;

        style {
            .app { padding: 24px; font-family: system-ui; }
            h1 { color: #1a73e8; }
            .win { color: #34a853; font-weight: bold; }
        }

        view {
            <div class="app">
                <h1>Score: {score}</h1>
                <Counter label="Player" &value={score} />
                <if score >= 10>
                    <p class="win">You win!</p>
                </if>
            </div>
        }
    }

    app {
        root = App;
        title = "My Counter App";
        description = "A simple counter built with Coi";
        lang = "en";
    }

Live Demo: https://io-eric.github.io/coi

Coi (The Language): https://github.com/io-eric/coi

WebCC: https://github.com/io-eric/webcc

I'd love to hear what you think. It's still far from finished, but it's a side project I'm really excited about :)
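For readers curious about the command-buffer pattern described earlier in the post, here is a rough TypeScript sketch of what a JS-side drain loop can look like, using made-up opcodes rather than the real WebCC protocol:

    // Hypothetical opcodes; the real WebCC schema defines its own layout.
    enum Op { CreateRect = 1, SetText = 2 }

    // Called once per "flush" signal: drain every batched command in one pass,
    // so there is a single bridge crossing instead of one per API call.
    function drainCommandBuffer(view: Int32Array, count: number): void {
      let i = 0;
      for (let n = 0; n < count; n++) {
        const op = view[i++];
        switch (op) {
          case Op.CreateRect: {
            const x = view[i++], y = view[i++], w = view[i++], h = view[i++];
            // ...issue one canvas/DOM call here using x, y, w, h
            break;
          }
          case Op.SetText: {
            const handle = view[i++]; // target element handle
            const len = view[i++];
            i += len; // payload words follow; skipped in this sketch
            break;
          }
        }
      }
    }

In this pattern, the WASM side only appends words to the shared buffer and bumps the count, then invokes the exported flush once per frame.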

Show HN: S2-lite, an open source Stream Store

S2 was on HN for our intro blog post a year ago (https://news.ycombinator.com/item?id=42480105). S2 started out as a serverless API — think S3, but for streams.

The idea of streams as a cloud storage primitive resonated with a lot of folks, but not having an open source option was a sticking point for adoption – especially for projects that were themselves open source! So we decided to build it: https://github.com/s2-streamstore/s2

s2-lite is MIT-licensed, written in Rust, and uses SlateDB (https://slatedb.io) as its storage engine. SlateDB is an embedded LSM-style key-value database on top of object storage, which made it a great match for delivering the same durability guarantees as s2.dev.

You can specify a bucket and path to run against an object store like AWS S3 — or skip that to run entirely in-memory. (This also makes it a great emulator for dev/test environments.)

Why not just open up the backend of our cloud service? s2.dev has a decoupled architecture with multiple components running in Kubernetes, including our own K8s operator – we made tradeoffs that optimize for operating a thoroughly multi-tenant cloud infra SaaS. With s2-lite, our goal was to ship something dead simple to operate. There is a lot of shared code between the two that now lives in the OSS repo.

A few features remain (notably deletion of resources and records), but s2-lite is substantially ready. Try the Quickstart in the README to stream Star Wars using the s2 CLI!

The key difference between S2 and Kafka or Redis Streams: supporting tons of durable streams. I have blogged about the landscape in the context of agent sessions (https://s2.dev/blog/agent-sessions#landscape). Kafka and NATS JetStream treat streams as provisioned resources, and the protocols/implementations are oriented around that assumption. Redis Streams and NATS allow for larger numbers of streams, but without proper durability.

The cloud service is completely elastic, but you can also get pretty far with lite despite it being a single-node binary that needs to be scaled vertically. Streams in lite are "just keys" in SlateDB, and cloud object storage is bottomless – although of course there is metadata overhead.

One thing I am excited to improve in s2-lite is pipelining of writes for performance (already supported behind a knob, but it needs upstream interface changes for safety). It's a technique we use extensively in s2.dev. Essentially, when you are dealing with high latencies like S3, you want to keep data flowing through the pipe between client and storage, rather than going lock-step where you first wait for an acknowledgment and then issue another write. This is why S2 has a session protocol over HTTP/2, in addition to stateless REST.

You can test throughput/latency for lite yourself using the `s2 bench` CLI command. The main factors are: your network quality to the storage bucket region, the latency characteristics of the remote store, SlateDB's flush interval (`SL8_FLUSH_INTERVAL=..ms`), and whether pipelining is enabled (`S2LITE_PIPELINE=true` to taste the future).

I'll be here to get thoughts and feedback, and answer any questions!
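A minimal sketch of the pipelining idea described above, with a hypothetical append function standing in for the real client (illustration only, not the s2 SDK):

    type Append = (batch: Uint8Array) => Promise<void>;

    // Lock-step: pay a full round trip to object storage for every batch.
    async function appendLockStep(append: Append, batches: Uint8Array[]): Promise<void> {
      for (const b of batches) {
        await append(b);
      }
    }

    // Pipelined: keep up to `depth` appends in flight so the pipe stays full.
    async function appendPipelined(append: Append, batches: Uint8Array[], depth = 8): Promise<void> {
      const inFlight: Promise<void>[] = [];
      for (const b of batches) {
        inFlight.push(append(b));
        if (inFlight.length >= depth) {
          await inFlight.shift(); // wait for the oldest acknowledgment first
        }
      }
      await Promise.all(inFlight);
    }

The hard part, as the post notes, is keeping ordering and durability guarantees intact when an in-flight write fails, which is why pipelining currently sits behind a knob.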

Show HN: Interactive physics simulations I built while teaching my daughter

I started teaching my daughter physics by showing her how things actually work - plucking guitar strings to explain vibration, mixing paints to understand light, dropping objects to see gravity in action.

She learned so much faster through hands-on exploration than through books or videos. That's when I realized: what if I could recreate these physical experiments as interactive simulations?

Lumen is the result - an interactive physics playground covering sound, light, motion, life, and mechanics. Each module lets you manipulate variables in real time and see/hear the results immediately.

Try it: https://www.projectlumen.app/

Show HN: Zsweep – Play Minesweeper using only Vim motions

Show HN: Bible translated using LLMs from source Greek and Hebrew

Built an auditable AI (Bible) translation pipeline: Hebrew/Greek source packets -> verse JSON with notes, rolling up to chapters, books, and testaments. Final texts are compiled with metrics (TTR, n-grams).

This is the first full-text example as far as I know (the Gen Z Bible doesn't count).

There are hallucinations and issues, but the overall quality surprised me.

LLMs show a lot of promise for translating ancient texts and making them more accessible.

The technology has a lot of benefit for the faithful that I think is only beginning to be explored.
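As an illustration of the kind of surface metrics mentioned (assumed formulas, not the author's pipeline code), TTR and n-gram counts are simple to compute in TypeScript:

    // Type-token ratio: distinct word forms divided by total tokens.
    function typeTokenRatio(text: string): number {
      const tokens = text.toLowerCase().match(/[\p{L}']+/gu) ?? [];
      return tokens.length === 0 ? 0 : new Set(tokens).size / tokens.length;
    }

    // Raw n-gram counts over a token sequence.
    function ngramCounts(tokens: string[], n: number): Map<string, number> {
      const counts = new Map<string, number>();
      for (let i = 0; i + n <= tokens.length; i++) {
        const gram = tokens.slice(i, i + n).join(" ");
        counts.set(gram, (counts.get(gram) ?? 0) + 1);
      }
      return counts;
    }

    console.log(typeTokenRatio("In the beginning God created the heavens and the earth"));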

Show HN: BrowserOS – "Claude Cowork" in the browser

Hey HN! We're Nithin and Nikhil, twin brothers building BrowserOS (YC S24). We're an open-source, privacy-first alternative to the AI browsers from the big labs.

The big differentiator: on BrowserOS you can use local LLMs or BYOK and run the agent entirely on the client side, so your company/sensitive data stays on your machine!

Today we're launching filesystem access... just like Claude Cowork, our browser agent can read files, write files, and run shell commands! But honestly, we didn't plan for this. It turns out the privacy decision we made 9 months ago accidentally positioned us for this moment.

The architectural bet we made 9 months ago: unlike other AI browsers (ChatGPT Atlas, Perplexity Comet) where the agent loop runs server-side, we decided early on to run our agent entirely on your machine (client side).

But building everything on the client side wasn't smooth. We initially built our agent loop inside a Chrome extension, but we kept hitting walls -- the service worker being single-threaded JS, and not having access to NodeJS libraries. So we made the hard decision 2 months ago to throw everything away and start from scratch.

In the new architecture, our agent loop sits in a standalone binary that we ship alongside our Chromium. And we use gemini-cli for the agent loop with some tweaks! We wrote a neat adapter to translate between Gemini format and Vercel AI SDK format. You can look at our entire codebase here: https://git.new/browseros-agent

How we give the browser access to the filesystem: when Claude Cowork launched, we realized something: because Atlas and Comet run their agent loop server-side, there's no good way for their agent to access your files without uploading them to the server first. But our agent was already local. Adding filesystem access meant just... opening the door (with your permission, of course). Our agent can now read and write files just like Claude Code.

What you can actually do today:

a) Organize files in my desktop folder: https://youtu.be/NOZ7xjto6Uc

b) Open the top 5 HN links, extract the details, and write a summary into an HTML file: https://youtu.be/uXvqs_TCmMQ

Where we are now: if you haven't tried us since the last Show HN (https://news.ycombinator.com/item?id=44523409), give us another shot. The new architecture unlocked a ton of new features, and we've grown to 8.5K GitHub stars and 100K+ downloads:

c) You can now build more reliable workflows using an n8n-like graph: https://youtu.be/H_bFfWIevSY

d) You can also use BrowserOS as an MCP server in Cursor or Claude Code: https://youtu.be/5nevh00lckM

We are very bullish on the browser being the right platform for a Claude Cowork-like agent. The browser is the most commonly used app by knowledge workers (emails, docs, spreadsheets, research, etc.). And even Anthropic recognizes this -- for Claude Cowork, they have a janky integration with the browser via a Chrome extension. But owning the entire stack allows us to build differentiated features that wouldn't be possible otherwise. Example: Browser ACLs.

Agents can do dumb or destructive things, so we're adding browser-level guardrails (think IAM for agents): "role(agent): can never click buy" or "role(agent): read-only access on my bank's homepage."

Curious to hear your take on this and the overall thesis.

We'll be in the comments. Thanks for reading!

GitHub: https://github.com/browseros-ai/BrowserOS

Download: https://browseros.com (available for Mac, Windows, Linux!)
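For readers wondering what the browser ACL idea above could look like in practice, here is a purely hypothetical TypeScript shape (not BrowserOS's actual implementation):

    // Hypothetical rule shape for browser-level agent guardrails.
    type AgentAction = "click" | "type" | "navigate" | "purchase";

    interface AgentAclRule {
      urlPattern: RegExp;           // pages the rule applies to
      deniedActions: AgentAction[]; // explicitly blocked actions
      readOnly?: boolean;           // read-only pages allow navigation only
    }

    const rules: AgentAclRule[] = [
      { urlPattern: /bank\.example\.com/, deniedActions: [], readOnly: true },
      { urlPattern: /.*/, deniedActions: ["purchase"] },
    ];

    function isActionAllowed(url: string, action: AgentAction): boolean {
      for (const rule of rules) {
        if (!rule.urlPattern.test(url)) continue;
        if (rule.readOnly && action !== "navigate") return false;
        if (rule.deniedActions.includes(action)) return false;
      }
      return true;
    }

    console.log(isActionAllowed("https://bank.example.com/accounts", "click")); // false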
