The best Hacker News stories from Show from the past day
Latest posts:
Show HN: Termcraft – terminal-first 2D sandbox survival in Rust
I’ve been building termcraft, a terminal-first 2D sandbox survival game in Rust.

The idea is to take the classic early survival progression and adapt it to a side-on terminal format instead of a tile or pixel-art engine.

Current build includes:
- procedural Overworld, Nether, and End generation
- mining, placement, crafting, furnaces, brewing, and boats
- hostile and passive mobs
- villages, dungeons, strongholds, Nether fortresses, and dragon progression

This is still early alpha, but it’s already playable.

Project: https://github.com/pagel-s/termcraft
Docs: https://pagel-s.github.io/termcraft/
Demo: https://youtu.be/kR986Xqzj7E
Show HN: We built a terminal-only Bluesky / AT Proto client written in Fortran
Yes, that Fortran.
Show HN: I made an email app inspired by Arc browser
Email is one of those tools we check daily, but its underlying experience hasn’t evolved much. I use Gmail, as probably most of you reading this do.

The Arc browser brought joy and taste to browsing the web. Cursor created a new UX with agents ready to work for you in a handy right panel.

I use these three tools every day. Since Arc was acquired by Atlassian, I’ve been wondering: what if I built a new interface that applied Arc’s UX to email rather than browser tabs, while making AI agents easily available to help manage emails, events, and files?

I built a frontend PoC to showcase the idea.

Try it: https://demo.define.app

I’m not sure about it, though... Is it worth continuing to explore this idea?
Show HN: Sonar – A tiny CLI to see and kill whatever's running on localhost
Show HN: Browser grand strategy game for hundreds of players on huge maps
Hi HN,

I've been building a browser-based multiplayer strategy game called Borderhold.

Matches run on large maps designed for hundreds of players. Players expand territory, attack neighbors, and adapt as borders shift across the map. You can place buildings, build ships, and launch nukes.

The main thing I wanted to explore was scale: most strategy games have small matches, modest maps, or modest player counts, but here the maps are large and the game works well with hundreds of players.

Matches are relatively short, so you can jump in and see a full game play out.

Curious what people think.

Play: https://borderhold.io/play
Gameplay: https://youtu.be/nrJTZEP-Cw8
Discord: https://discord.gg/xVDNt2G5
Show HN: Playing LongTurn FreeCiv with Friends
Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training
I replicated David Ng's RYS method (https://dnhkng.github.io/posts/rys/) on consumer AMD GPUs (RX 7900 XT + RX 6950 XT) and found something I didn't expect.

Transformers appear to have discrete "reasoning circuits": contiguous blocks of 3-4 layers that act as indivisible cognitive units. Duplicate the right block and the model runs its reasoning pipeline twice. No weights change. No training. The model just thinks longer.

The results on standard benchmarks (lm-evaluation-harness, n=50):

Devstral-24B, layers 12-14 duplicated once:
- BBH Logical Deduction: 0.22 → 0.76
- GSM8K (strict): 0.48 → 0.64
- MBPP (code gen): 0.72 → 0.78
- Nothing degraded

Qwen2.5-Coder-32B, layers 7-9 duplicated once:
- Reasoning probe: 76% → 94%

The weird part: different duplication patterns create different cognitive "modes" from the same weights. Double-pass boosts math. Triple-pass boosts emotional reasoning. Interleaved doubling (13,13,14,14,15,15,16) creates a pure math specialist. Same model, same VRAM, different routing.

The circuit boundaries are sharp: shift by one layer and the effect disappears or inverts. Smaller models (24B) have tighter circuits (3 layers) than larger ones (Ng found 7-layer circuits in a 72B model).

Tools to find circuits in any GGUF model and apply arbitrary layer routing are in the repo. The whole thing (sweep, discovery, validation) took one evening.

Happy to answer questions.
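Layer duplication of this kind amounts to changing the execution order of the model's existing decoder layers without touching any weights. A minimal sketch of how the contiguous and interleaved routing patterns described above can be expressed as layer-index lists (function names and signatures are illustrative, not taken from the linked repo):

```python
def duplicate_block(n_layers, start, end, repeats=2):
    """Execution order that runs layers [start, end] (inclusive)
    `repeats` times as a whole block, e.g. ..., 12,13,14,12,13,14, 15, ..."""
    block = list(range(start, end + 1))
    return list(range(start)) + block * repeats + list(range(end + 1, n_layers))

def interleave_block(n_layers, start, end, repeats=2):
    """Execution order that repeats each layer in [start, end]
    back-to-back, e.g. ..., 13,13,14,14,15,15, ..."""
    doubled = [i for i in range(start, end + 1) for _ in range(repeats)]
    return list(range(start)) + doubled + list(range(end + 1, n_layers))
```

A runtime that supports arbitrary layer routing would then feed the hidden states through the layer stack in this order instead of the default `range(n_layers)`.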
Show HN: Three new Kitten TTS models – smallest less than 25MB
Kitten TTS (https://github.com/KittenML/KittenTTS) is an open-source series of tiny, expressive text-to-speech models for on-device applications. We had a thread here last year: https://news.ycombinator.com/item?id=44807868.

Today we're releasing three new models with 80M, 40M, and 14M parameters.

The largest model (80M) has the highest quality. The 14M variant reaches new SOTA in expressivity among similarly sized models, despite being <25MB in size. This release is a major upgrade from the previous one and supports English text-to-speech in eight voices: four male and four female.

Here's a short demo: https://www.youtube.com/watch?v=ge3u5qblqZA.

Most models are quantized to int8 + fp16, and they use ONNX for the runtime. Our models are designed to run anywhere, e.g. Raspberry Pi, low-end smartphones, wearables, browsers, etc. No GPU required! This release aims to bridge the gap between on-device and cloud models for TTS applications. A multilingual model release is coming soon.

On-device AI is bottlenecked by one thing: a lack of tiny models that actually perform. Our goal is to open-source more models to run production-ready voice agents and apps entirely on-device.

We would love your feedback!
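The "<25MB at 14M parameters" figure lines up with a back-of-envelope size check: at int8 quantization, each parameter takes one byte, so the weight tensors alone come in well under the stated budget. A rough calculation (ignoring graph metadata and any fp16 tensors, so an approximation rather than the actual file layout):

```python
def weights_size_mb(n_params, bytes_per_param):
    """Approximate on-disk size of the weight tensors alone."""
    return n_params * bytes_per_param / 1e6

# 14M parameters at int8 (1 byte each) is ~14 MB, under the 25 MB figure;
# the same model at fp16 (2 bytes each) would be ~28 MB and exceed it.
print(weights_size_mb(14e6, 1))
```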