The best Hacker News stories from Show from the past day

Latest posts:

Show HN: Atomic – Self-hosted, semantically-connected personal knowledge base

Show HN: Baltic shadow fleet tracker – live AIS, cable proximity alerts

Show HN: Red Grid Link – peer-to-peer team tracking over Bluetooth, no servers

I go on a lot of backcountry trips where I barely get cell service. If my group splits up, nobody knows where anyone is until you regroup at camp or at your destination. You can buy Garmin radios or set up ATAK, but ATAK is Android-only and assumes you have a TAK Server running somewhere to make use of all of its functionality. Cool tools, but expensive to set up correctly. I just wanted two iPhones to share their location directly over Bluetooth when cell coverage was lacking.

Red Grid Link does that. Start a session, and anyone nearby running the app shows up on your offline map. When they walk out of range, their marker stays as a "ghost" that slowly fades.

The hard part was making sync reliable over BLE. Connections drop all the time: someone turns a corner, walks behind a vehicle, whatever. I built a CRDT sync layer (LWW register + G-Counter) so there are never merge conflicts. Each update is just under 200 bytes, from what I've tested so far. When a teammate disappears, the app does exponential backoff from 2 to 30 seconds before giving up and marking them as a ghost.

Everything is encrypted (AES-256-GCM, with ECDH P-256 key exchange per peer pair). Sessions can require a PIN or QR code to join. It also offers offline topo maps with MGRS grid coordinates, the same system as in my other app, Red Grid MGRS.

The app is free, and I'm looking for honest feedback from real-world users. Let me know if you have any questions!
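For anyone curious how an LWW register and G-Counter keep replicas conflict-free, here is a minimal Python sketch (not the app's actual Swift code; the class names and the backoff schedule are illustrative):

```python
import time
from dataclasses import dataclass, field

# Last-Writer-Wins register: each position update carries a timestamp;
# merging two replicas keeps the newer value, so concurrent updates
# converge without conflict resolution.
@dataclass
class LWWRegister:
    value: tuple = (0.0, 0.0)   # e.g. (lat, lon)
    ts: float = 0.0

    def set(self, value, ts=None):
        self.ts = ts if ts is not None else time.time()
        self.value = value

    def merge(self, other):
        if other.ts > self.ts:
            self.value, self.ts = other.value, other.ts

# G-Counter: a grow-only counter keyed by peer id; merge takes the
# per-peer maximum, so totals converge on every device.
@dataclass
class GCounter:
    counts: dict = field(default_factory=dict)

    def increment(self, peer_id):
        self.counts[peer_id] = self.counts.get(peer_id, 0) + 1

    def merge(self, other):
        for pid, n in other.counts.items():
            self.counts[pid] = max(self.counts.get(pid, 0), n)

    def total(self):
        return sum(self.counts.values())

def backoff_delays(base=2.0, cap=30.0):
    """Reconnect delays doubling from `base` up to `cap`, then stop
    (matching the 2-to-30-second backoff described in the post)."""
    d = base
    while True:
        yield min(d, cap)
        if d >= cap:
            return
        d *= 2
```

Because both merges are commutative, associative, and idempotent, peers can exchange updates in any order over a flaky BLE link and still agree on the final state.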

Show HN: Joonote – A note-taking app on your lock screen and notification panel

I finally built this app after years of being sick of unlocking my phone every goddamn time I need to take or view my notes. It particularly sucks when I'm doing my grocery shopping and going down the list.

I started building it last June. It's a native app written in Kotlin, and since I'm 100% a web dev guy, I have to say it wouldn't have been possible without AI assistance. But it isn't "vibe-coded": I simply used the chat interface on the Gemini website, manually copy-pasting code to build and integrate every single thing in the app. I used Gemini because I was piggybacking on my last company's enterprise subscription. I personally haven't subscribed to any AI (and still don't, because the free quota seems to be enough for me :)

So I've certainly learned a lot about Android development, architecture patterns, Kotlin syntax, and obeying Google's whims. Can't say I love it all, but for the sake of this app, I will :)

Anyway, I finally have the app I wish existed, and I'm using it every day. It not only does the main thing I needed it to do, there's also all this:

- Make your notes private if you don't want to show them on the lock screen.
- Create check/to-do lists.
- Set one-time or recurring reminders.
- Full-text search your notes in the app.
- Speech-to-text.
- Organize your notes with custom or color labels.
- Pin the app as a widget on your home screen.
- Auto backup and restore your notes on a new install or Android device.
- Works offline.
- And no funny business happening in the background: https://joonote.com/privacy

It's a 30-day trial, then a one-time $9.99 to go Pro forever.

I'd love you all to check it out, FWIW.

Ok, thanks!

Show HN: Termcraft – terminal-first 2D sandbox survival in Rust

I've been building termcraft, a terminal-first 2D sandbox survival game in Rust.

The idea is to take the classic early survival progression and adapt it to a side-on terminal format instead of a tile or pixel-art engine.

The current build includes:

- procedural Overworld, Nether, and End generation
- mining, placement, crafting, furnaces, brewing, and boats
- hostile and passive mobs
- villages, dungeons, strongholds, Nether fortresses, and dragon progression

This is still early alpha, but it's already playable.

Project: https://github.com/pagel-s/termcraft

Docs: https://pagel-s.github.io/termcraft/

Demo: https://youtu.be/kR986Xqzj7E

Show HN: We built a terminal-only Bluesky / AT Proto client written in Fortran

Yes, that Fortran.

Show HN: I made an email app inspired by Arc browser

Email is one of those tools we check daily, but its underlying experience hasn't evolved much. I use Gmail, as probably most of you reading this do.

The Arc browser brought joy and taste to browsing the web. Cursor created a new UX with agents ready to work for you in a handy right panel.

I use these three tools every day. Since Arc was acquired by Atlassian, I've been wondering: what if I built a new interface that applied Arc's UX to email rather than browser tabs, while making AI agents easily available to help manage emails, events, and files?

I built a frontend PoC to showcase the idea.

Try it: https://demo.define.app

I'm not sure about it, though... Is it worth continuing to explore this idea?

Show HN: Sonar – A tiny CLI to see and kill whatever's running on localhost

Show HN: Browser grand strategy game for hundreds of players on huge maps

Hi HN,

I've been building a browser-based multiplayer strategy game called Borderhold.

Matches run on large maps designed for hundreds of players. Players expand territory, attack neighbors, and adapt as borders shift across the map. You can put down buildings, build ships, and launch nukes.

The main thing I wanted to explore was scale: most strategy games stick to small matches, modest maps, or modest player counts, but here the maps are large and the game works well with hundreds of players.

Matches are relatively short, so you can jump in and see a full game play out.

Curious what people think.

https://borderhold.io/play

Gameplay: https://youtu.be/nrJTZEP-Cw8

Discord: https://discord.gg/xVDNt2G5

Show HN: Playing LongTurn FreeCiv with Friends

Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training

I replicated David Ng's RYS method (https://dnhkng.github.io/posts/rys/) on consumer AMD GPUs (RX 7900 XT + RX 6950 XT) and found something I didn't expect.

Transformers appear to have discrete "reasoning circuits": contiguous blocks of 3-4 layers that act as indivisible cognitive units. Duplicate the right block and the model runs its reasoning pipeline twice. No weights change. No training. The model just thinks longer.

The results on standard benchmarks (lm-evaluation-harness, n=50):

Devstral-24B, layers 12-14 duplicated once:

- BBH Logical Deduction: 0.22 → 0.76
- GSM8K (strict): 0.48 → 0.64
- MBPP (code gen): 0.72 → 0.78
- Nothing degraded

Qwen2.5-Coder-32B, layers 7-9 duplicated once:

- Reasoning probe: 76% → 94%

The weird part: different duplication patterns create different cognitive "modes" from the same weights. Double-pass boosts math. Triple-pass boosts emotional reasoning. Interleaved doubling (13,13,14,14,15,15,16) creates a pure math specialist. Same model, same VRAM, different routing.

The circuit boundaries are sharp: shift by one layer and the effect disappears or inverts. Smaller models (24B) have tighter circuits (3 layers) than larger ones (Ng found 7 layers in a 72B).

Tools to find circuits in any GGUF model and apply arbitrary layer routing are in the repo. The whole thing, from sweep to discovery to validation, took one evening.

Happy to answer questions.
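To make the routing patterns concrete, here is a small Python sketch of how a layer-execution order for the "double-pass" and "interleaved doubling" patterns could be built. The function names are hypothetical and this is not the repo's actual tooling, just an illustration of the index lists involved:

```python
def duplicate_block(n_layers, start, end, times=2):
    """Execution order where layers [start, end] run `times` times in
    sequence (the 'double-pass' pattern), e.g. ... 11, 12, 13, 14,
    12, 13, 14, 15 ... for layers 12-14 duplicated once."""
    order = list(range(start))           # layers before the block, unchanged
    for _ in range(times):
        order.extend(range(start, end + 1))  # the block, repeated
    order.extend(range(end + 1, n_layers))   # layers after the block
    return order

def interleave_block(n_layers, start, end):
    """Interleaved doubling: each layer in [start, end] runs twice
    back-to-back, e.g. ... 13, 13, 14, 14, 15, 15, 16 ..."""
    order = list(range(start))
    for i in range(start, end + 1):
        order.extend([i, i])
    order.extend(range(end + 1, n_layers))
    return order
```

An inference engine that accepts a per-layer routing list would then execute the same weights in this order; no tensors are copied or modified, which is why VRAM usage stays the same.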

Show HN: Three new Kitten TTS models – smallest less than 25MB

Kitten TTS (https://github.com/KittenML/KittenTTS) is an open-source series of tiny, expressive text-to-speech models for on-device applications. We had a thread here last year: https://news.ycombinator.com/item?id=44807868

Today we're releasing three new models with 80M, 40M, and 14M parameters.

The largest model (80M) has the highest quality. The 14M variant reaches a new SOTA in expressivity among similarly sized models, despite being under 25MB in size. This release is a major upgrade from the previous one and supports English text-to-speech in eight voices: four male and four female.

Here's a short demo: https://www.youtube.com/watch?v=ge3u5qblqZA

Most models are quantized to int8 + fp16 and use ONNX for the runtime. Our models are designed to run anywhere, e.g. a Raspberry Pi, low-end smartphones, wearables, or browsers. No GPU required! This release aims to bridge the gap between on-device and cloud models for TTS applications. A multilingual model release is coming soon.

On-device AI is bottlenecked by one thing: a lack of tiny models that actually perform. Our goal is to open-source more models to run production-ready voice agents and apps entirely on-device.

We would love your feedback!
