The best Show HN stories on Hacker News from the past week
Latest posts:
Show HN: memEx, a personal knowledge base inspired by Zettelkasten and org-mode
Show HN: Atari Missile Command Game Built Using AI Gemini 2.5 Pro
A modern HTML5 canvas remake of the classic Atari game from 1980. Defend your cities and missile bases from incoming enemy attacks using your missile launchers. Initially built using Google's Gemini 2.5 Pro LLM.
Show HN: Aqua Voice 2 – Fast Voice Input for Mac and Windows
Hey HN - It's Finn and Jack from Aqua Voice (https://withaqua.com). Aqua is fast AI dictation for your desktop and our attempt to make voice a first-class input method.

Video: https://withaqua.com/watch

Try it here: https://withaqua.com/sandbox

Finn is uber dyslexic and has been using dictation software since sixth grade. For over a decade, he's been chasing a dream that never quite worked — using your voice instead of a keyboard.

Our last post (https://news.ycombinator.com/item?id=39828686) about this seemed to resonate with the community - though it turned out that version of Aqua was a better demo than product. But it gave us (and others) a lot of good ideas about what should come next.

Since then, we've remade Aqua from scratch for speed and usability. It now lives on your desktop, and it lets you talk into any text field -- Cursor, Gmail, Slack, even your terminal.

It starts up in under 50ms, inserts text in about a second (sometimes as fast as 450ms), and has state-of-the-art accuracy. It does a lot more, but that's the core. We'd love your feedback — and if you've got ideas for what voice should do next, let's hear them!
Show HN: DrawDB – open-source online database diagram editor (a retro)
One year ago I open-sourced my very first 'real' project and shared it here. I was a college student in my senior year, desperately looking for a job. At the time I couldn't even afford a domain and naively let someone buy the one I had my eyes on. It's been a hell of a year, with the project blowing up, me moving to another country, and switching jobs twice.

In a year we somehow managed to hit 26k stars, grow a 1,000+ person Discord community, and support 37 languages. I couldn't be more grateful for the community that helped this grow, but now I don't know what direction to take the project in.

All of this was an accident. But now I feel like I'm missing out by not using this success. I have been thinking about monetization options, but I'm not sure I want to go down that route. I like the idea of it being free and available for everyone, but I also can't help thinking of everything that could be done with a full-time commitment or even a small team. I keep telling myself (and others) that I'll do something if I meet a co-founder, but doubt and the fear of blowing this up keep holding me back.

How would you proceed?
Show HN: Lux – A luxurious package manager for Lua
Show HN: Browser MCP – Automate your browser using Cursor, Claude, VS Code
Show HN: OCR pipeline for ML training (tables, diagrams, math, multilingual)
Hi HN,

I've been working on an OCR pipeline specifically optimized for machine learning dataset preparation. It's designed to process complex academic materials — including math formulas, tables, figures, and multilingual text — and output clean, structured formats like JSON and Markdown.

Some features:
• Multi-stage OCR combining DocLayout-YOLO, Google Vision, MathPix, and Gemini Pro Vision
• Extracts and understands diagrams, tables, LaTeX-style math, and multilingual text (Japanese/Korean/English)
• Highly tuned for ML training pipelines, including dataset generation and preprocessing for RAG or fine-tuning tasks

Sample outputs and real exam-based examples are included (EJU Biology, UTokyo Math, etc.).
Would love to hear any feedback or ideas for improvement.

GitHub: https://github.com/ses4255/Versatile-OCR-Program
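As a concrete illustration of the multi-stage idea described above, here is a minimal Python sketch of routing layout regions to specialized recognizers and emitting structured JSON. The region labels, stub recognizers, and function names are assumptions for illustration, not the project's actual API.

    # Hypothetical multi-stage OCR sketch: detect layout regions, route each
    # region to a specialized recognizer, and emit structured JSON records.
    # Region labels and recognizer stubs are illustrative, not the real API.
    import json

    def detect_regions(page_image):
        # A layout model (e.g. DocLayout-YOLO) would return labeled boxes;
        # here we fake a single text region so the sketch runs end to end.
        return [{"type": "text", "bbox": [0, 0, 100, 20], "crop": page_image}]

    def recognize(region_type, crop):
        # Route each region type to the recognizer best suited for it.
        recognizers = {
            "math":   lambda c: "E = mc^2",          # math OCR stub
            "table":  lambda c: [["col1", "col2"]],  # table extraction stub
            "figure": lambda c: "diagram caption",   # vision-LLM description stub
        }
        return recognizers.get(region_type, lambda c: "plain text")(crop)

    def process_page(page_image):
        records = []
        for region in detect_regions(page_image):
            records.append({
                "type": region["type"],
                "bbox": region["bbox"],
                "content": recognize(region["type"], region["crop"]),
            })
        return json.dumps(records, ensure_ascii=False, indent=2)

    print(process_page(page_image=None))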
Show HN: I built a word game. My mom thinks it's great. What do you think?
Show HN: Hatchet v1 – A task orchestration platform built on Postgres
Hey HN - this is Alexander from Hatchet. We're building an open-source platform for managing background tasks, using Postgres as the underlying database.

Just over a year ago, we launched Hatchet as a distributed task queue built on top of Postgres with a 100% MIT license (https://news.ycombinator.com/item?id=39643136). The feedback and response we got from the HN community was overwhelming. In the first month after launching, we processed about 20k tasks on the platform — today, we're processing over 20k tasks per minute (>1 billion per month).

Scaling up this quickly was difficult — every task in Hatchet corresponds to at minimum 5 Postgres transactions, and we would see bursts on Hatchet Cloud instances of over 5k tasks/second, which corresponds to roughly 25k transactions/second. As it turns out, a simple Postgres queue using FOR UPDATE SKIP LOCKED doesn't cut it at this scale. After provisioning the largest instance type that Cloud SQL offers, we even discussed potentially moving some load off of Postgres in favor of something trendy like ClickHouse + Kafka.

But we doubled down on Postgres, and spent about 6 months learning how to operate Postgres databases at scale, reading the Postgres manual and several other resources [0] during commutes and at night. We stuck with Postgres for two reasons:

1. We wanted to make Hatchet as portable and easy to administer as possible, and felt that implementing our own storage engine specifically for Hatchet Cloud would be disingenuous at best and, in the worst case, would take our focus away from the open-source community.

2. More importantly, Postgres is general-purpose, which is what makes it both great and hard to scale for some types of workloads. This is also what allows us to offer a general-purpose orchestration platform — we heavily utilize Postgres features like transactions, SKIP LOCKED, recursive queries, triggers, COPY FROM, and much more.

Which brings us to today. We're announcing a full rewrite of the Hatchet engine — still built on Postgres — together with our task orchestration layer, which is built on top of our underlying queue. To be more specific, we're launching:

1. DAG-based workflows that support a much wider array of conditions, including sleep conditions, event-based triggering, and conditional execution based on parent output data [1].

2. Durable execution — durable execution refers to a function's ability to recover from failure by caching intermediate results and automatically replaying them on a retry. We call a function with this ability a durable task. We also support durable sleep and durable events, which you can read more about here [2].

3. Queue features such as key-based concurrency queues (for implementing fair queueing), rate limiting, sticky assignment, and worker affinity.

4. Improved performance across every dimension we've tested, which we attribute to six improvements to the Hatchet architecture: range-based partitioning of time-series tables, hash-based partitioning of task events (for updating task statuses), separating our monitoring tables from our queue, buffered reads and writes, switching all high-volume tables to use identity columns, and aggressive use of Postgres triggers.

We've also removed RabbitMQ as a required dependency for self-hosting.

We'd greatly appreciate any feedback you have and hope you get the chance to try out Hatchet.

[0] https://www.postgresql.org/docs/

[1] https://docs.hatchet.run/home/conditional-workflows

[2] https://docs.hatchet.run/home/durable-execution
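For readers unfamiliar with the queueing pattern mentioned above, here is a minimal, generic sketch of a FOR UPDATE SKIP LOCKED worker in Python with psycopg2. It shows the textbook pattern only; the table name, columns, and connection string are assumptions, not Hatchet's actual schema or code.

    # Generic FOR UPDATE SKIP LOCKED claim function (not Hatchet's schema).
    # Assumes a table like: tasks(id bigserial primary key, payload jsonb, status text).
    import psycopg2

    conn = psycopg2.connect("dbname=queue_demo")  # assumed connection string

    def claim_next_task():
        # Claims at most one pending task without blocking on rows that
        # other workers have already locked.
        with conn, conn.cursor() as cur:
            cur.execute("""
                SELECT id, payload
                FROM tasks
                WHERE status = 'pending'
                ORDER BY id
                LIMIT 1
                FOR UPDATE SKIP LOCKED
            """)
            row = cur.fetchone()
            if row is None:
                return None
            task_id, payload = row
            cur.execute("UPDATE tasks SET status = 'running' WHERE id = %s",
                        (task_id,))
            return task_id, payload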
Show HN: The C3 programming language (C alternative language)
Get it from here: https://github.com/c3lang/c3c

In 2019, while contributing to the C2 language, I started up "C3" as a pet project while waiting for pull requests to be approved...

Now it's 6 years later and C3 is well on its way to 1.0, having released 0.7.0 last week.

Unlike other C alternatives, C3 tries to evolve C – but without concern for backwards compatibility with the latter.

What it adds to C is, among other things:

- Module system

- Semantic macros and compile-time introspection

- Lightweight generic modules

- Zero-overhead errors

- Built-in slices and SIMD types

- Gradual contracts

- Built-in checks in debug mode

You can find more details on the site: https://c3-lang.org
It might be interesting to look at the examples, https://c3-lang.org/language-overview/examples/, to see how the language looks in some simple cases.

Some other links that might be interesting follow:

I've posted about C3 on HN before, notably

- https://news.ycombinator.com/item?id=24108980

- https://news.ycombinator.com/item?id=27876570

- https://news.ycombinator.com/item?id=32005678

Here are some interviews on C3:

- https://www.youtube.com/watch?v=UC8VDRJqXfc

- https://www.youtube.com/watch?v=9rS8MVZH-vA

Here is a series doing various tasks in C3:

- https://ebn.codeberg.page/programming/c3/c3-file-io/

Some projects:

- Game Boy emulator: https://github.com/OdnetninI/Gameboy-Emulator/

- RISC-V bare-metal Hello World: https://www.youtube.com/watch?v=0iAJxx6Ok4E

- "Depths of Daemonheim" roguelike: https://github.com/TechnicalFowl/7DRL-2025
Show HN: OpenNutrition – A free, public nutrition database
Hi HN!

Today I'm excited to launch OpenNutrition: a free, ODbL-licensed nutrition database of everyday generic, branded, and restaurant foods; a search engine that can browse the web to import new foods; and a companion app that bundles the database and search as a free macro-tracking app.

Consistently logging the foods you eat has been shown to support long-term health outcomes (1)(2), but doing so easily depends on having a large, accurate, and up-to-date nutrition database. Free, public databases are often out of date, hard to navigate, and missing critical coverage (like branded restaurant foods). User-generated databases can be unreliable or closed-source. Commercial databases come with ongoing, often per-seat licensing costs and usage restrictions that limit innovation.

As an amateur powerlifter and long-term weight-loss maintainer, helping others pursue their health goals is something I care about deeply. After exiting my previous startup last year, I wanted to investigate using LLMs to create the database and infrastructure required to make a great food-logging app that was cost-engineered for free and accessible distribution, as I believe the availability of these tools is a public good. That led to the dataset I'm releasing today; nutritional data is public record, and its organization and dissemination should be, too.

What's in the database?

- 5,287 common everyday foods, 3,836 prepared and generic restaurant foods, and 4,182 distinct menu items from ~50 popular US restaurant chains; foods have standardized naming, consistent numeric serving sizes, estimated micronutrient profiles, descriptions, and citations/groundings to USDA, AUSNUT, FRIDA, CNF, etc., when possible.

- 313,442 of the most popular US branded grocery products with standardized naming, parsed serving sizes, and additive/allergen data, grounded in branded USDA data; the most popular 1% have estimated micronutrient data, with the goal of full coverage.

Even the largest commercial databases can be frustrating to work with when searching for foods or customizations without existing coverage. To solve this, I created a real-time version of the same approach used to build the core database; it can browse the web to learn about new foods or food customizations if needed (e.g., a highly customized Starbucks order). There is a limited demo on the web, and in-app you can log foods with text search, via barcode scan, or by image, all of which can search the web to import foods for you if needed. Foods discovered via these searches are fed back into the database, and I plan to publish updated versions as coverage expands.

- Search & Explore: https://www.opennutrition.app/search

- Methodology/About: https://www.opennutrition.app/about

- Get the iOS App: https://apps.apple.com/us/app/opennutrition-macro-tracker/id6670272666

- Download the dataset: https://www.opennutrition.app/download

OpenNutrition's iOS app offers free essential logging and a limited number of agentic searches, plus expenditure tracking and ongoing diet recommendations like best-in-class paid apps. A paid tier ($49/year) unlocks additional searches and features (data backup, prioritized micronutrient coverage for logged foods), and helps fund further development and broader library coverage.

I'd love to hear your feedback, questions, and suggestions — whether it's about the database itself, a really great/bad search result, or the app.

1. Burke et al., 2011, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3268700/

2. Patel et al., 2019, https://mhealth.jmir.org/2019/2/e12209/
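As a quick illustration of how the downloadable dataset might be explored, here is a small Python sketch; the file name and column names are assumptions for illustration, not the dataset's documented schema.

    # Hypothetical exploration of an OpenNutrition export with pandas.
    # The file name and column names below are assumed, not documented.
    import pandas as pd

    foods = pd.read_csv("opennutrition_foods.csv")  # assumed export file name

    # Example: find high-protein foods, assuming per-serving nutrient columns.
    high_protein = foods[foods["protein_g"] > 20]
    print(high_protein[["name", "protein_g", "calories"]].head())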
Show HN: I vibecoded a 35k LoC recipe app
Over the last 2-3 weeks, I vibecoded the recipe app that I always wished existed: recipeninja.ai. It now includes a fully interactive voice assistant, so you don't need to get your dirty hands all over your new iPad while you're cooking.

Background: I'm a startup founder turned investor. I taught myself (bad) PHP in 2000 and picked up Ruby on Rails in 2011. I'd guess 2015 was the last time I wrote a line of Ruby professionally. Last month, I decided to use Windsurf to build a Rails 8 API backend and a React front-end app, using OpenAI's realtime API for voice-to-voice responses. Over the last few days, I also used Claude Code and Gemini 2.5 Pro for some of the trickier features. 35,000 LoC later, this is what I built!

The site uses function calling to navigate the site in realtime as you chat with the voice assistant, which I think is pretty neat.

For the long version, see https://tomblomfield.com/post/778601470234918912/vibecoding-a-production-app

I'd love any feedback you have!

Demo video of the voice assistant: https://www.youtube.com/watch?v=kRhVc9D5kcg

Generate and edit new recipes: https://www.youtube.com/watch?v=VwwZF6dHcHg
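To illustrate the function-calling pattern mentioned above (not the app's actual code), here is a minimal Python sketch using OpenAI's Chat Completions tools interface; the navigate_to tool, its parameters, and the page names are hypothetical, and the real app drives this through the realtime voice API instead.

    # Hypothetical sketch of UI navigation via function calling. The
    # navigate_to tool, its parameters, and page names are invented for
    # illustration; the real app uses OpenAI's realtime API for voice.
    import json
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "navigate_to",
            "description": "Navigate the UI to a named page or recipe.",
            "parameters": {
                "type": "object",
                "properties": {
                    "page": {"type": "string",
                             "enum": ["home", "recipe", "shopping_list"]},
                    "recipe_id": {"type": "string"},
                },
                "required": ["page"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Show me the lasagne recipe"}],
        tools=tools,
    )

    # If the model chose to call the tool, dispatch it to the front end.
    for call in resp.choices[0].message.tool_calls or []:
        if call.function.name == "navigate_to":
            args = json.loads(call.function.arguments)
            print("navigate:", args)  # the app would route the UI here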
Show HN: Qwen-2.5-32B is now the best open source OCR model
Last week was big for open source LLMs. We got:

- Qwen 2.5 VL (72b and 32b)

- Gemma-3 (27b)

- DeepSeek-v3-0324

And a couple weeks ago we got the new mistral-ocr model. We updated our OCR benchmark to include the new models.

We evaluated 1,000 documents for JSON extraction accuracy. Major takeaways:

- Qwen 2.5 VL (72b and 32b) are by far the most impressive. Both landed right around 75% accuracy (equivalent to GPT-4o's performance). Qwen 72b was only 0.4% above 32b, within the margin of error.

- Both Qwen models surpassed mistral-ocr (72.2%), which is specifically trained for OCR.

- Gemma-3 (27b) only scored 42.9%. Particularly surprising given that its architecture is based on Gemini 2.0, which still tops the accuracy chart.

The dataset and benchmark runner are fully open source. You can check out the code and reproduction steps here:

- https://getomni.ai/blog/benchmarking-open-source-models-for-ocr

- https://github.com/getomni-ai/benchmark

- https://huggingface.co/datasets/getomni-ai/ocr-benchmark
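For a sense of what "JSON extraction accuracy" can mean in practice, here is a simple field-level comparison sketch in Python; this is a generic illustration, not necessarily the exact scoring the benchmark implements.

    # Generic field-level JSON accuracy: the fraction of ground-truth leaf
    # fields that the extracted JSON reproduces exactly. Illustrative only;
    # not necessarily the benchmark's exact metric.
    def flatten(obj, prefix=""):
        if isinstance(obj, dict):
            for k, v in obj.items():
                yield from flatten(v, f"{prefix}{k}.")
        elif isinstance(obj, list):
            for i, v in enumerate(obj):
                yield from flatten(v, f"{prefix}{i}.")
        else:
            yield prefix.rstrip("."), obj

    def json_accuracy(predicted, truth):
        truth_fields = dict(flatten(truth))
        pred_fields = dict(flatten(predicted))
        correct = sum(1 for k, v in truth_fields.items()
                      if pred_fields.get(k) == v)
        return correct / len(truth_fields) if truth_fields else 1.0

    truth = {"invoice": {"total": 1200, "currency": "USD"}}
    pred = {"invoice": {"total": 1200, "currency": "usd"}}
    print(json_accuracy(pred, truth))  # 0.5: one of two fields matches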
Show HN: Duolingo-style exercises but with real-world content like the news
I've been working on a little side project that combines Duolingo-like listening comprehension exercises with real content.

Every video is transcribed to get much better transcripts than the closed captions. I filter for high-quality transcripts, and afterwards an LLM selects only plausible segments for the exercises. This seems to work well for quality control and seems to be reliable enough for these short exercises.

Would love your thoughts!
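To make the filtering and selection steps concrete, here is a small hypothetical Python sketch of screening transcript segments and asking an LLM to pick usable ones; the thresholds, prompt, and helper names are assumptions, not the project's actual code.

    # Hypothetical sketch: keep only clean transcript segments, then ask an
    # LLM to pick ones suitable as short listening exercises. Thresholds,
    # prompt, and the llm callable are illustrative assumptions.
    def is_clean(segment):
        words = segment["text"].split()
        return (
            5 <= len(words) <= 25                    # short enough to dictate
            and segment.get("confidence", 0) > 0.9   # high transcription confidence
            and not segment["text"].startswith("[")  # skip [music], [applause], etc.
        )

    def select_exercises(segments, llm):
        candidates = [s for s in segments if is_clean(s)]
        prompt = (
            "From the numbered sentences below, return the numbers of those "
            "that are self-contained and suitable as listening exercises:\n"
            + "\n".join(f"{i}. {s['text']}" for i, s in enumerate(candidates))
        )
        chosen = llm(prompt)  # assumed to return a list of indices
        return [candidates[i] for i in chosen]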
Show HN: Nue – Apps lighter than a React button