The best Show HN stories from Hacker News over the past week
Latest posts:
Show HN: Goldbach Conjecture up to 4*10^18+7*10^13
Achieved a new world record in verifying the Goldbach Conjecture using grid computing, extending the verification up to 4 quintillion (4×10¹⁸) plus 70 trillion (7×10¹³).<p>My grid computing system, Gridbach, is a cloud-based distributed computing system accessible from any PC or smartphone. It requires no login or app installation. The high-performance WASM (WebAssembly) binary is downloaded as browser content, enabling computation directly in the user’s browser.<p>[Website]
<a href="https://gridbach.com/" rel="nofollow">https://gridbach.com/</a><p>[Medium]
<a href="https://medium.com/@jay_gridbach/grid-computing-shatters-world-record-for-goldbach-conjecture-verification-1ef3dc58a38d" rel="nofollow">https://medium.com/@jay_gridbach/grid-computing-shatters-wor...</a>
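The verification itself boils down to finding, for every even number in a range, at least one decomposition into two primes. As a toy illustration of what each work unit checks (an illustrative Python sketch, not the project's optimized WASM code; naive trial division is hopeless at 10¹⁸ scale):

```python
def is_prime(n: int) -> bool:
    """Naive trial division; fine for a demo, far too slow at 10^18 scale."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def goldbach_partition(n: int):
    """Return a pair (p, q) of primes with p + q == n, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# "Verified up to N" means every even n in [4, N] has at least one partition.
assert all(goldbach_partition(n) is not None for n in range(4, 1000, 2))
```

Real verification efforts use sieves and only need to find the smallest prime p with n - p prime, which is typically tiny, rather than scanning all candidates.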
Show HN: I built an AI that turns GitHub codebases into easy tutorials
<a href="https://the-pocket.github.io/Tutorial-Codebase-Knowledge/" rel="nofollow">https://the-pocket.github.io/Tutorial-Codebase-Knowledge/</a>
Less Slow C++
Earlier this year, I took a month to reexamine my coding habits and rethink some past design choices. I hope to rewrite and improve my FOSS libraries this year, and I needed answers to a few questions first. Perhaps some of these questions will resonate with others in the community, too.<p><pre><code> - Are coroutines viable for high-performance work?
- Should I use SIMD intrinsics for clarity or drop to assembly for easier library distribution?
- Has hardware caught up with vectorized scatter/gather in AVX-512 & SVE?
- How do secure enclaves & pointer tagging differ on Intel, Arm, & AMD?
- What's the throughput gap between CPU and GPU Tensor Cores (TCs)?
- How costly are misaligned memory accesses & split-loads, and what gains do non-temporal loads/stores offer?
- Which parts of the standard library hit performance hardest?
- How do error-handling strategies compare overhead-wise?
- What's the compile-time vs. run-time trade-off for lazily evaluated ranges?
- What practical, non-trivial use cases exist for meta-programming?
- How challenging is Linux Kernel bypass with io_uring vs. POSIX sockets?
- How close are we to effectively using Networking TS or heterogeneous Executors in C++?
- What are best practices for propagating stateful allocators in nested containers, and which libraries support them?
</code></pre>
These questions span from micro-kernel optimizations (nanoseconds) to distributed systems (micro/millisecond latencies). Rather than tackling them all in one post, I compiled my explorations into a repository—extending my previous Google Benchmark tutorial (<<a href="https://ashvardanian.com/posts/google-benchmark" rel="nofollow">https://ashvardanian.com/posts/google-benchmark</a>>)—to serve as a sandbox for performance experimentation.<p>Some fun observations:<p><pre><code> - Compilers now vectorize 3x3x3 and 4x4x4 single/double precision multiplications well! The smaller one is ~60% slower despite 70% fewer operations, outperforming my vanilla SSE/AVX and coming within 10% of AVX-512.
- Nvidia TCs vary dramatically across generations in numeric types, throughput, tile shapes, thread synchronization (thread/quad-pair/warp/warp-groups), and operand storage. Post-Volta, manual PTX is often needed (as intrinsics lag), though the new TileIR (introduced at GTC) promises improvements for dense linear algebra kernels.
- The AI wave drives CPUs and GPUs to converge in mat-mul throughput & programming complexity. It took me a day to debug TMM register initialization, and SME is equally odd. Sierra Forest packs 288 cores/socket, and AVX10.2 drops 256-bit support for 512-bit... I wonder if discrete Intel GPUs are even needed, given CPU advances?
- In common floating-point ranges, scalar sine approximations can be up to 40x faster than standard implementations, even without SIMD. It's a bit hand-wavy, though; I wish more projects documented error bounds and had 1 & 3.5 ULP variants like Sleef.
- Meta-programming tools like CTRE can outperform typical RegEx engines by 5x and simplify building parsers compared to hand-crafted FSMs.
- Kernel-bypass stacks (DPDK/SPDK) and io_uring were once clearly distinct in complexity and performance, but the gap is narrowing. While pre-5.5 io_uring can boost UDP throughput by 4x on loopback IO, the newer zero-copy and concurrency optimizations remain challenging to use.
</code></pre>
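On the sine-approximation observation above: the repo's benchmarks are C++, but the underlying trick is language-independent. A minimal sketch in Python, assuming a range-limited input (a truncated Taylor polynomial in nested form, accurate to roughly 1.6e-4 on [-π/2, π/2]; production libraries like SLEEF add range reduction and document ULP bounds):

```python
import math

def sin_approx(x: float) -> float:
    """Truncated Taylor series for sin(x), factored to minimize multiplications.
    Only accurate on a limited range, roughly [-pi/2, pi/2]."""
    x2 = x * x
    # x - x^3/3! + x^5/5! - x^7/7! in nested (Horner-like) form
    return x * (1.0 - x2 / 6.0 * (1.0 - x2 / 20.0 * (1.0 - x2 / 42.0)))

# Worst-case error on [-pi/2, pi/2] is bounded by the dropped x^9/9! term,
# about 1.6e-4 at the interval edges.
err = max(abs(sin_approx(k * math.pi / 200.0) - math.sin(k * math.pi / 200.0))
          for k in range(-100, 101))
assert err < 2e-4
```

The speedup comes from trading generality for a polynomial a CPU can evaluate in a handful of fused multiply-adds, which is why documented error bounds matter so much when choosing a variant.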
The repository is loaded with links to favorite CppCon lectures, GitHub snippets, and tech blog posts. Recognizing that many high-level concepts are handled differently across languages, I've also started porting examples to Rust & Python in separate repos. Coroutines look bad everywhere :(<p>Overall, this research project was rewarding! Most questions found answers in code — except pointer tagging and secure enclaves, which still elude me in public cloud. I'd love to hear from others, especially on comparing High-Level Synthesis for small matrix multiplications on FPGAs versus hand-written VHDL/Verilog for integral types. Let me know if you have ideas for other cool, obscure topics to cover!
Show HN: I made a Doom-like game fit inside a QR code
I sometimes pick up random projects just because I can; this was one of those times. I made it as a week-long project earlier this year but never shared it here, so I thought I'd go for it haha.<p>I created a game inspired by Doom and the Backrooms called The Backdooms, under 2.4 kB of minified HTML. (For reference, this entire post would be around 1.8 kB haha.)
I had to use an uncommon approach, GZip with zlib headers (I had to write my own compression script, also in the repo), to eventually convert it into a version 40 QR code that works right in your browser using the DecompressionStream API.<p>This is of course a very oversimplified description; it uses a lot of the same techniques DOOM did, but combining them with infinite seed-based map generation in 2.4 kB (QR codes can only store about 3 kB, which includes format overhead) was pretty hard.<p>Here are some links about it if you want to nerd out and read more:<p>Repository Link (MIT License): <a href="https://github.com/Kuberwastaken/backdooms">https://github.com/Kuberwastaken/backdooms</a><p>A hosted (slightly improved) version of The Backdooms: <a href="https://kuberwastaken.github.io/backdooms/" rel="nofollow">https://kuberwastaken.github.io/backdooms/</a><p>Game Trailer: <a href="https://www.youtube.com/shorts/QWPr10cAuGc" rel="nofollow">https://www.youtube.com/shorts/QWPr10cAuGc</a><p>My LinkedIn post about it: <a href="https://www.linkedin.com/feed/update/urn:li:activity:7295667546089799681/" rel="nofollow">https://www.linkedin.com/feed/update/urn:li:activity:7295667...</a><p>(PS: You'd need something like <a href="https://qrscanner.org/" rel="nofollow">https://qrscanner.org/</a> or another scanner that can handle large QR codes and load the text data into your browser to play it)<p>My blogs documenting the process and development in detail:<p><a href="https://kuberwastaken.github.io/blog/Projects/How-I-Managed-To-Get-Doom-In-A-QR-Code" rel="nofollow">https://kuberwastaken.github.io/blog/Projects/How-I-Managed-...</a>
<a href="https://kuberwastaken.github.io/blog/Projects/How-I-Managed-To-Make-HTML-Game-Compression-So-Much-Better" rel="nofollow">https://kuberwastaken.github.io/blog/Projects/How-I-Managed-...</a>
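For anyone curious about the compression detail: the browser's DecompressionStream accepts 'gzip', 'deflate' (zlib-wrapped), and 'deflate-raw' formats, and the wrapper choice changes the byte overhead. A Python sketch of the trade-off (illustrative, not the author's actual build script):

```python
import gzip
import zlib

payload = b"<!doctype html><body>tiny game goes here</body>"

# Raw DEFLATE stream: no header, no checksum (DecompressionStream('deflate-raw'))
c = zlib.compressobj(level=9, wbits=-zlib.MAX_WBITS)
deflate_raw = c.compress(payload) + c.flush()

# zlib wrapper: 2-byte header + 4-byte Adler-32 trailer (DecompressionStream('deflate'))
deflate_zlib = zlib.compress(payload, 9)

# gzip wrapper: 10-byte header + 8-byte CRC/length trailer (DecompressionStream('gzip'))
gzipped = gzip.compress(payload, 9)

# All three round-trip to the same bytes...
assert zlib.decompress(deflate_raw, -zlib.MAX_WBITS) == payload
assert zlib.decompress(deflate_zlib) == payload
assert gzip.decompress(gzipped) == payload

# ...but raw deflate is smallest, and every byte counts against a ~3 kB QR budget.
assert len(deflate_raw) < len(deflate_zlib) < len(gzipped)
```

The compressed bytes still have to survive a trip through QR encoding and back, which is presumably why a custom compression script was needed rather than an off-the-shelf tool.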
Show HN: Plandex v2 – open source AI coding agent for large projects and tasks
Hey HN! I’m Dane, the creator of Plandex (<a href="https://github.com/plandex-ai/plandex">https://github.com/plandex-ai/plandex</a>), an open source AI coding agent focused especially on tackling large tasks in real world software projects.<p>You can watch a 2 minute demo of Plandex in action here: <a href="https://www.youtube.com/watch?v=SFSu2vNmlLk" rel="nofollow">https://www.youtube.com/watch?v=SFSu2vNmlLk</a><p>And here’s more of a tutorial style demo showing how Plandex can automatically debug a browser application: <a href="https://www.youtube.com/watch?v=g-_76U_nK0Y" rel="nofollow">https://www.youtube.com/watch?v=g-_76U_nK0Y</a>.<p>I launched Plandex v1 here on HN a little less than a year ago (<a href="https://news.ycombinator.com/item?id=39918500">https://news.ycombinator.com/item?id=39918500</a>).<p>Now I’m launching a major update, Plandex v2, which is the result of 8 months of heads down work, and is in effect a whole new project/product.<p>In short, Plandex is now a top-tier coding agent with fully autonomous capabilities. It combines models from Anthropic, OpenAI, and Google to achieve better results, more reliable agent behavior, better cost efficiency, and better performance than is possible by using only a single provider’s models.<p>I believe it is now one of the best tools available for working on large tasks in real world codebases with AI. It has an effective context window of 2M tokens, and can index projects of 20M tokens and beyond using tree-sitter project maps (30+ languages are supported). It can effectively find relevant context in massive million-line projects like SQLite, Redis, and Git.<p>A bit more on some of Plandex’s key features:<p>- Plandex has a built-in diff review sandbox that helps you get the benefits of AI without leaving behind a mess in your project. By default, all changes accumulate in the sandbox until you approve them. The sandbox is version-controlled. 
You can rewind it to any previous point, and you can also create branches to try out alternative approaches.<p>- It offers a ‘full auto mode’ that can complete large tasks autonomously end-to-end, including high level planning, context loading, detailed planning, implementation, command execution (for dependencies, builds, tests, etc.), and debugging.<p>- The autonomy level is highly configurable. You can move up and down the ladder of autonomy depending on the task, your comfort level, and how you weigh cost optimization vs. effort and results.<p>- Models and model settings are also very configurable. There are built-in models and model packs for different use cases. You can also add custom models and model packs, and customize model settings like temperature or top-p. All model changes are version controlled, so you can use branches to try out the same task with different models. The newly released OpenAI models and the paid Gemini 2.5 Pro model will be integrated in the default model pack soon.<p>- It can be easily self-hosted, including a ‘local mode’ for a very fast local single-user setup with Docker.<p>- Cloud hosting is also available for added convenience with a couple of subscription tiers: an ‘Integrated Models’ mode that requires no other accounts or API keys and allows you to manage billing/budgeting/spending alerts and track usage centrally, and a ‘BYO API Key’ mode that allows you to use your own OpenAI/OpenRouter accounts.<p>I’d love to get more HNers in the Plandex Discord (<a href="https://discord.gg/plandex-ai" rel="nofollow">https://discord.gg/plandex-ai</a>). Please join and say hi!<p>And of course I’d love to hear your feedback, whether positive or negative. Thanks so much!
Show HN: Unsure Calculator – back-of-a-napkin probabilistic calculator
Show HN: I made a free tool that analyzes SEC filings and posts detailed reports
(* within a few minutes of SEC filing)<p>Currently does it for 1000+ US companies and specifically earnings related filings.
By US companies, I mean the ones that are obliged to file SEC filings.<p>This was the result of an almost year-long effort and hundreds of prototypes :)<p>It currently auto-publishes for roughly 1,000 US companies by market cap, using an 8-K filing as the trigger.<p>e.g. <a href="https://www.signalbloom.ai/news/NVDA" rel="nofollow">https://www.signalbloom.ai/news/NVDA</a> will take you to NVDA earnings<p>I'd be grateful for some feedback. Especially if you follow a company, check out its reports. Thank you!<p>Some examples:
<a href="https://www.signalbloom.ai/news/AAPL/apple-q1-eps-beats-despite-revenue-miss-china-woes" rel="nofollow">https://www.signalbloom.ai/news/AAPL/apple-q1-eps-beats-desp...</a><p><a href="https://www.signalbloom.ai/news/NVDA/nvidia-revenue-soars-margin-headwinds-emerge" rel="nofollow">https://www.signalbloom.ai/news/NVDA/nvidia-revenue-soars-ma...</a><p><a href="https://www.signalbloom.ai/news/JPM/jpm-beats-estimates-on-cib-strength" rel="nofollow">https://www.signalbloom.ai/news/JPM/jpm-beats-estimates-on-c...</a> (JPM earnings from Friday)<p>Hallucination note: <a href="https://www.signalbloom.ai/hallucination-benchmark" rel="nofollow">https://www.signalbloom.ai/hallucination-benchmark</a>
Show HN: memEx, a personal knowledge base inspired by Zettelkasten and org-mode
Show HN: Atari Missile Command Game Built Using AI Gemini 2.5 Pro
A modern HTML5 canvas remake of the classic Atari game from 1980. Defend your cities and missile bases from incoming enemy attacks using your missile launchers. Initially built using Google's Gemini 2.5 Pro LLM.
Show HN: Aqua Voice 2 – Fast Voice Input for Mac and Windows
Hey HN - It’s Finn and Jack from Aqua Voice (<a href="https://withaqua.com">https://withaqua.com</a>). Aqua is fast AI dictation for your desktop and our attempt to make voice a first-class input method.<p>Video: <a href="https://withaqua.com/watch">https://withaqua.com/watch</a><p>Try it here: <a href="https://withaqua.com/sandbox">https://withaqua.com/sandbox</a><p>Finn is uber dyslexic and has been using dictation software since sixth grade. For over a decade, he’s been chasing a dream that never quite worked — using your voice instead of a keyboard.<p>Our last post (<a href="https://news.ycombinator.com/item?id=39828686">https://news.ycombinator.com/item?id=39828686</a>) about this seemed to resonate with the community - though it turned out that version of Aqua was a better demo than product. But it gave us (and others) a lot of good ideas about what should come next.<p>Since then, we’ve remade Aqua from scratch for speed and usability. It now lives on your desktop, and it lets you talk into any text field -- Cursor, Gmail, Slack, even your terminal.<p>It starts up in under 50ms, inserts text in about a second (sometimes as fast as 450ms), and has state-of-the-art accuracy. It does a lot more, but that’s the core. We’d love your feedback — and if you’ve got ideas for what voice should do next, let’s hear them!
Show HN: DrawDB – open-source online database diagram editor (a retro)
One year ago I open-sourced my very first 'real' project and shared it here. I was a college student in my senior year, desperately looking for a job. At the time of sharing it I couldn't even afford a domain and naively let someone buy the one I had my eyes on lol. It's been a hell of a year, with this blowing up, me moving to another country, and switching 2 jobs.<p>In a year we somehow managed to hit 26k stars, grow a 1000+ person Discord community, and support 37 languages. I couldn't be more grateful for the community that helped this grow, but now I don't know what direction to take this project in.<p>All of this was an accident. But now I feel like I'm missing out by not using this success. I have been thinking about monetization options, but I'm not sure if I wanna go down that route. I like the idea of it being free and available for everyone, but I also can't help but think of everything that could be done if I committed full-time or even had a small team. I keep telling myself (and others) I'll do something if I meet a co-founder, but doubt and fear of blowing this up hold me back.<p>How would you proceed?
Show HN: Lux – A luxurious package manager for Lua
Show HN: Browser MCP – Automate your browser using Cursor, Claude, VS Code
Show HN: OCR pipeline for ML training (tables, diagrams, math, multilingual)
Hi HN,<p>I’ve been working on an OCR pipeline specifically optimized for machine learning dataset preparation. It’s designed to process complex academic materials — including math formulas, tables, figures, and multilingual text — and output clean, structured formats like JSON and Markdown.<p>Some features:
• Multi-stage OCR combining DocLayout-YOLO, Google Vision, MathPix, and Gemini Pro Vision
• Extracts and understands diagrams, tables, LaTeX-style math, and multilingual text (Japanese/Korean/English)
• Highly tuned for ML training pipelines, including dataset generation and preprocessing for RAG or fine-tuning tasks<p>Sample outputs and real exam-based examples are included (EJU Biology, UTokyo Math, etc.)
Would love to hear any feedback or ideas for improvement.<p>GitHub: <a href="https://github.com/ses4255/Versatile-OCR-Program">https://github.com/ses4255/Versatile-OCR-Program</a>
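The multi-stage idea, detecting layout regions first and then routing each region to a specialized recognizer, can be sketched roughly as follows. All names and region types here are hypothetical stand-ins; the real pipeline calls DocLayout-YOLO and the external OCR services listed above:

```python
def detect_layout(page):
    """Stand-in for a layout-detection pass (e.g. DocLayout-YOLO):
    returns (region_type, cropped_source) pairs. Hypothetical output."""
    return [("text", "plain paragraph"), ("math", "x^2 + y^2"), ("table", "a,b")]

# Each region type is dispatched to a recognizer tuned for it; the output
# schema below is illustrative, not the project's actual format.
HANDLERS = {
    "text":  lambda src: {"type": "text",  "content": src},
    "math":  lambda src: {"type": "math",  "latex": f"${src}$"},
    "table": lambda src: {"type": "table", "cells": src.split(",")},
}

def process_page(page):
    """Route every detected region to its specialized recognizer,
    producing structured records suitable for JSON/Markdown export."""
    return [HANDLERS[kind](src) for kind, src in detect_layout(page)]

regions = process_page(page=None)
assert regions[1] == {"type": "math", "latex": "$x^2 + y^2$"}
```

The structured per-region records are what make the output usable downstream for RAG or fine-tuning, since math and tables survive as LaTeX and cell data rather than flattened text.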
Show HN: I built a word game. My mom thinks it's great. What do you think?
Show HN: Hatchet v1 – A task orchestration platform built on Postgres
Hey HN - this is Alexander from Hatchet. We’re building an open-source platform for managing background tasks, using Postgres as the underlying database.<p>Just over a year ago, we launched Hatchet as a distributed task queue built on top of Postgres with a 100% MIT license (<a href="https://news.ycombinator.com/item?id=39643136">https://news.ycombinator.com/item?id=39643136</a>). The feedback and response we got from the HN community was overwhelming. In the first month after launching, we processed about 20k tasks on the platform — today, we’re processing over 20k tasks per minute (>1 billion per month).<p>Scaling up this quickly was difficult — every task in Hatchet corresponds to at minimum 5 Postgres transactions and we would see bursts on Hatchet Cloud instances to over 5k tasks/second, which corresponds to roughly 25k transactions/second. As it turns out, a simple Postgres queue utilizing FOR UPDATE SKIP LOCKED doesn’t cut it at this scale. After provisioning the largest instance type that CloudSQL offers, we even discussed potentially moving some load off of Postgres in favor of something trendy like Clickhouse + Kafka.<p>But we doubled down on Postgres, and spent about 6 months learning how to operate Postgres databases at scale and reading the Postgres manual and several other resources [0] during commutes and at night. We stuck with Postgres for two reasons:<p>1. We wanted to make Hatchet as portable and easy to administer as possible, and felt that implementing our own storage engine specifically on Hatchet Cloud would be disingenuous at best, and in the worst case, would take our focus away from the open source community.<p>2. More importantly, Postgres is general-purpose, which is what makes it both great but hard to scale for some types of workloads. 
This is also what allows us to offer a general-purpose orchestration platform — we heavily utilize Postgres features like transactions, SKIP LOCKED, recursive queries, triggers, COPY FROM, and much more.<p>Which brings us to today. We’re announcing a full rewrite of the Hatchet engine — still built on Postgres — together with our task orchestration layer which is built on top of our underlying queue. To be more specific, we’re launching:<p>1. DAG-based workflows that support a much wider array of conditions, including sleep conditions, event-based triggering, and conditional execution based on parent output data [1].<p>2. Durable execution — durable execution refers to a function’s ability to recover from failure by caching intermediate results and automatically replaying them on a retry. We call a function with this ability a durable task. We also support durable sleep and durable events, which you can read more about here [2]<p>3. Queue features such as key-based concurrency queues (for implementing fair queueing), rate limiting, sticky assignment, and worker affinity.<p>4. 
Improved performance across every dimension we’ve tested, which we attribute to six improvements to the Hatchet architecture: range-based partitioning of time series tables, hash-based partitioning of task events (for updating task statuses), separating our monitoring tables from our queue, buffered reads and writes, switching all high-volume tables to use identity columns, and aggressive use of Postgres triggers.<p>We've also removed RabbitMQ as a required dependency for self-hosting.<p>We'd greatly appreciate any feedback you have and hope you get the chance to try out Hatchet.<p>[0] <a href="https://www.postgresql.org/docs/" rel="nofollow">https://www.postgresql.org/docs/</a><p>[1] <a href="https://docs.hatchet.run/home/conditional-workflows">https://docs.hatchet.run/home/conditional-workflows</a><p>[2] <a href="https://docs.hatchet.run/home/durable-execution">https://docs.hatchet.run/home/durable-execution</a>
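For context on the queueing pattern the post says stops scaling: the textbook single-query Postgres dequeue looks roughly like this (illustrative table and column names, not Hatchet's actual schema):

```python
# The classic Postgres-as-a-queue dequeue the post refers to. Each worker
# runs this in its own transaction; SKIP LOCKED prevents two workers from
# claiming the same row. Table/column names are illustrative.
DEQUEUE_SQL = """
UPDATE tasks
SET    status = 'running', started_at = now()
WHERE  id = (
    SELECT id
    FROM   tasks
    WHERE  status = 'queued'
    ORDER  BY id
    FOR UPDATE SKIP LOCKED  -- skip rows another worker has already locked
    LIMIT  1
)
RETURNING id, payload;
"""

# At tens of thousands of transactions per second, lock contention and
# bloat on a single hot table become the bottleneck; hence the partitioning,
# buffered writes, and identity columns the post describes.
assert "FOR UPDATE SKIP LOCKED" in DEQUEUE_SQL
```

This is a sketch of the baseline, not Hatchet's rewritten engine, which layers DAG orchestration and durable execution on top of its queue.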