The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: A custom font that displays Cistercian numerals using ligatures

Show HN: Algorithmically finding the longest line of sight on Earth

We're Tom and Ryan, and we teamed up to build an algorithm with Rust and SIMD to exhaustively search for the longest line of sight on the planet. We can confirm that a previously speculated view between Pik Dankova in Kyrgyzstan and the Hindu Kush in China is indeed the longest, at 530 km.

We go into all the details at https://alltheviews.world

There's also an interactive map with over 1 billion longest lines, covering the whole world, at https://map.alltheviews.world. Just click on any point and it'll load its longest line of sight.

Some of you may remember Tom's post [1] from a few months ago about how to efficiently pack visibility tiles for computing the entire planet. Well, now it's done. The compute run itself took hundreds of AMD Turin cores, hundreds of GB of RAM, a few TB of disk, and two days of constant runtime across multiple machines.

If you're interested in the technical details, Ryan and I have written extensively about the algorithm and pipeline that got us here:

* Tom's blog post: https://tombh.co.uk/longest-line-of-sight
* Ryan's technical breakdown: https://ryan.berge.rs/posts/total-viewshed-algorithm

This was a labor of love, and we hope it inspires you, both technically and in nature, to get out and see some of these vast views for yourselves!

1. https://news.ycombinator.com/item?id=45485227
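
To make the core idea concrete, here is a minimal Python sketch (not the authors' Rust/SIMD implementation) of a single line-of-sight test over equally spaced terrain samples, with a standard curvature-and-refraction correction. The refraction coefficient, sampling step, and observer height are illustrative assumptions:

```python
import math

EARTH_RADIUS_M = 6_371_000
REFRACTION_K = 0.13  # typical atmospheric refraction coefficient (assumed)

def curvature_drop(distance_m: float) -> float:
    """Apparent drop of distant terrain below the observer's tangent
    plane, reduced slightly by atmospheric refraction."""
    return (1 - REFRACTION_K) * distance_m ** 2 / (2 * EARTH_RADIUS_M)

def line_of_sight(elevations: list[float], step_m: float,
                  observer_height_m: float = 1.8) -> bool:
    """Is the last sample visible from the first, given terrain
    elevations (metres) spaced step_m apart along the ray?"""
    eye = elevations[0] + observer_height_m
    max_slope = -math.inf  # steepest horizon encountered so far
    for i in range(1, len(elevations)):
        d = i * step_m
        h = elevations[i] - curvature_drop(d)  # curvature-corrected height
        slope = (h - eye) / d
        if i == len(elevations) - 1:
            # The target is visible iff it rises above every horizon.
            return slope >= max_slope
        max_slope = max(max_slope, slope)
    return False
```

An exhaustive search runs a test like this along every candidate ray from every point on the planet, which is what the visibility-tile packing and SIMD work in the write-ups above is about.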

Show HN: Fine-tuned Qwen2.5-7B on 100 films for probabilistic story graphs

Hi HN, I'm a computer systems engineering student in Mexico who switched from film school. I built CineGraphs because my filmmaker friends and I kept hitting the same wall—we'd have a vague idea for a film but no structured way to explore where it could go. Every AI writing tool we tried output generic, formulaic slop. I didn't want to build another ChatGPT wrapper, so I went a different route.

The idea is simple: you input a rough concept, and the tool generates branching narrative paths visualized as a graph. You can sculpt those branches into a structured screenplay format and export to Fountain for use in professional screenwriting software.

Most AI writing tools are trained on generic internet text, which is why they output generic results. I wanted something that understood actual cinematic storytelling—not plot summaries or Wikipedia synopses, but the actual structural DNA of films. So I spent a month curating 100 films I consider high-quality cinema. Not just popular films, but works with distinctive narrative structures: Godard's jump cuts and essay-film digressions, Kurosawa's parallel character arcs, Brakhage's non-linear visual poetry, Tarkovsky's slow-burn temporal structures. The selection was deliberately eclectic because I wanted the model to learn that "story" can mean many things.

Getting useful training data from films is harder than it sounds. I built a 1,000+ line Python pipeline using Qwen3-VL to analyze each film with subtitles enabled. The pipeline extracts scene-level narrative beats, character relationships and how they evolve, thematic threads, and dialogue patterns. The tricky part was getting Qwen3-VL to understand cinematic structure rather than just summarize plot. I had to iterate on the prompts extensively to get it to identify things like "this scene functions as a mirror to the opening" or "this character's arc inverts the protagonist's." That took weeks, and I'm still not fully satisfied with it, but it's good enough to produce useful training data.

From those extractions I generated a 10K-example dataset of prompt-to-branching-narrative pairs, then fine-tuned Qwen2.5-7B-Instruct with a LoRA optimized for probabilistic story branching. The LoRA handles the graph generation—exploring possible narrative directions—while the full 7B model generates the actual technical screenplay format when you export. I chose the 7B model because I wanted something that could run affordably at scale while still being capable enough for nuanced generation. The whole thing is served on a single 4090 GPU using vLLM, and the frontend uses React Flow for the graph visualization. The key insight was that screenwriting is fundamentally about making choices—what if the character goes left instead of right?—but most writing tools force you into a linear document too early. The graph structure lets you explore multiple paths before committing, which matches how writers actually think in early development.

The biggest surprise was how much the film selection mattered. Early versions trained on more mainstream films produced much more formulaic outputs. Adding experimental and international cinema dramatically improved the variety and interestingness of the generations. The model seemed to learn that narrative structure is a design space, not a formula.

We've been using it ourselves to break through second-act problems—when you know where you want to end up but can't figure out how to get there. The branching format forces you to think in possibilities rather than committing too early.

You can try it at https://cinegraphs.ai/ — no signup required to test it out. You get a full project with up to 50 branches without registering, though you'll need to create an account to save it. Registered users get 3 free projects. I'd love feedback on whether the generation quality feels meaningfully different from generic AI tools, and whether the graph interface adds value or just friction.
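
For readers curious what the fine-tuning step looks like, here is a minimal sketch of attaching a LoRA adapter to Qwen2.5-7B-Instruct with the Hugging Face transformers and peft libraries. The rank, alpha, and target modules are illustrative assumptions, not CineGraphs' actual training configuration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# Hypothetical adapter config: the base weights stay frozen and only the
# small low-rank matrices train, so the adapter ships as a tiny file.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# ...train on the prompt -> branching-narrative pairs, then save just the
# adapter weights:
model.save_pretrained("cinegraphs-story-lora")
```

This split also matches the serving setup described above: the same frozen 7B base can answer with the adapter applied (graph generation) or without it (screenplay formatting).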

Show HN: It took 4 years to sell my startup. I wrote a book about it

Show HN: I created a Mars colony RPG based on Kim Stanley Robinson’s Mars books

I built a desktop Mars colony survival game called Underhill, in homage to Kim Stanley Robinson's Mars trilogy. Land on Mars, build solar panels and greenhouses, and try not to pass out during dust storms. Eventually your colonists split into factions: Greens who want to terraform and Reds who want to preserve Mars.

There's Chill Mode for players who just want to build and hang, and Conflict Mode that introduces the Red vs. Green factions. Reds sabotage, and the terrain slowly turns green as the world gets more terraformed.

Feedback welcome, especially on performance and gameplay!

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

I built LocalGPT over 4 nights as a Rust reimagining of the OpenClaw assistant pattern (markdown-based persistent memory, autonomous heartbeat tasks, skills system).

It compiles to a single ~27 MB binary — no Node.js, Docker, or Python required.

Key features:

- Persistent memory via markdown files (MEMORY, HEARTBEAT, and SOUL files), compatible with OpenClaw's format
- Full-text search (SQLite FTS5) plus semantic search (local embeddings, no API key needed)
- Autonomous heartbeat runner that checks tasks on a configurable interval
- CLI, web interface, and desktop GUI
- Multi-provider: Anthropic, OpenAI, Ollama, etc.
- Apache 2.0 licensed

Install: `cargo install localgpt`

I use it daily as a knowledge accumulator, research assistant, and autonomous task runner for my side projects. The memory compounds — every session makes the next one better.

GitHub: https://github.com/localgpt-app/localgpt
Website: https://localgpt.app

Would love feedback on the architecture or feature ideas.
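
The full-text half of the search is plain SQLite FTS5. LocalGPT itself is Rust, but the technique is easy to show with Python's standard sqlite3 module; the schema below is an illustrative assumption, not LocalGPT's actual one:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# One row per markdown memory file, indexed for full-text search.
db.execute("CREATE VIRTUAL TABLE memory USING fts5(path, content)")
db.execute("INSERT INTO memory VALUES (?, ?)",
           ("MEMORY.md", "User prefers Rust. Side project: viewshed maps."))
db.commit()

# FTS5 provides ranked matches and highlighted snippets out of the box.
for path, hit in db.execute(
        "SELECT path, snippet(memory, 1, '[', ']', '...', 8) "
        "FROM memory WHERE memory MATCH ? ORDER BY rank", ("viewshed",)):
    print(path, hit)
```

The semantic half would sit alongside this index, embedding the same markdown chunks locally and ranking by vector similarity.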

Show HN: A luma-dependent chroma compression algorithm (image compression)

Show HN: Slack CLI for Agents

Our team lives in Slack, but we don't have access to the Slack MCP and couldn't find anything out there that worked for us, so we coded our own agent-slack CLI.

* Can paste in Slack URLs
* Token efficient
* Zero-config (auto auth if you use Slack Desktop)

It auto-downloads files and snippets, and can also read Slack canvases as markdown!

MIT License

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

Two clip-paths over the navigation:

- The first clip-path is a circle (top-left corner)
- The second clip-path is a polygon that acts like a ray (hardcoded, can be improved)

The original work by Iventions Events (https://iventions.com/) uses JavaScript, but I found a CSS-only approach more fun.

Here's a demo and the codebase: https://github.com/Momciloo/fun-with-clip-path

Show HN: R3forth, a ColorForth-inspired language with a tiny VM
