The best Show HN stories on Hacker News from the past week

Latest posts:

Show HN: JuryNow – Get an anonymous instant verdict from 12 real people

After 16 years, I have just launched my game JuryNow. Imagine having a truly diverse panel of 12 real people of all ages, far removed from your peer group, around the world, who can give you an instant decision on your question 24/7. No commentary, just a verdict between two choices. You can ask a moral dilemma or a fashion dilemma (you can upload 2 images), or use JuryNow to get an independent perspective on a family argument, a workplace problem, or even a trivial thought. You can also run a mini political poll and receive a global verdict in real time.

It's anonymous, fast (under 3 minutes), and, when there are more than 13 people playing simultaneously, completely AI-free.

How do you pay for this priceless fun? With JuryDuty. While you wait 3 minutes for your verdict, you answer other people's questions. There is no commentary, just a binary choice.

You can ask things like:

"Do I have a moral duty to go to my brother's third wedding? We have no parents."

"Do you feel guilty when you kill mosquitoes?"

"Should I take away my mother's car keys? She is 84 and had two near misses this month."

As a 58F, I built JuryNow because I wanted to create a truly objective place to get outside opinions that are not from my peer group, but from 12 people in 12 different countries: different ages, professions, and cultures. A truly diverse, objective global jury with no algorithms.

Would love your feedback! It's totally free, no sign-up needed for a first play. https://jurynow.app/

If there are fewer than 13 people playing (it only launched last week, and that was just on Reddit!), a popup will appear saying your verdict is simulated by AI. But this is just a TEMPORARY feature of the MVP. As soon as there are regular players, it will be permanently dismantled and we will celebrate the power of collective human intelligence!

Show HN: Undercutf1 – F1 Live Timing TUI with Driver Tracker, Variable Delay

undercutf1 is an F1 live timing app, built as a TUI. It contains traditional timing pages like a Driver Tracker, Timing Tower, and Race Control, along with some more detailed analysis like lap and gap history, so that you can see strategies unfolding.

I started building undercutf1 almost two years ago, after becoming increasingly frustrated with the TV direction and the lack of detailed information coming out of the live feed. Overtakes were often missed, and strategies were often ill-explained or missed entirely. I discovered that F1 live timing data is available over a simple SignalR stream, so I set out to build an app that would let me see all the information I could dream of. Now undercutf1 serves as the perfect companion (like a second Martin Brundle) when I'm watching sessions live.

If you want to test it out, you can replay the Suzuka race easily by downloading the timing data, then starting a simulated session:

1. Download undercutf1 using the installation instructions in the README.

2. Import the Suzuka race session data using `undercutf1 import 2025 -m 1256 -s 10006`.

3. Start the app (`undercutf1`), then press S (Session), then F (Simulated Session), then select Suzuka then Race using the arrow keys, then press Enter.

4. Use the arrow keys to navigate between the timing pages, and use N / Shift+N to fast-forward through the session.

If you want to test it out during this weekend's Jeddah GP, simply install as in the README, then start a live session by pressing S (Session) then L (Live Session).

The app is built for a terminal of roughly 110x30 cells, which probably seems an odd size, but it just so happens to be the size of a fullscreen terminal on a MBP zoomed in far enough that the text is easily glanceable when the laptop is placed on a coffee table some distance away from me :) Other terminal sizes will work fine, but information density/scaling may not be ideal.

If you're using the TUI during a live session, you'll want to synchronise the delay of the timing feed to your TV feed. Use the N/M keys to increase/decrease the delay. During non-race sessions, I find it fairly easy to sync the session clock on TV with the session clock at the bottom left of the timing screen. For race sessions, synchronisation is a little harder. I usually aim to sync the start of the race time (e.g. 13:00 on the timing screen clock) with the start of the formation lap, where the live feed helpfully shows the clock tick over to 0 minutes. I usually delay the feed by 30 to 60 seconds.
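For the curious, the SignalR feed mentioned above uses the legacy ASP.NET SignalR handshake: negotiate a connection token over HTTP, open a websocket, then subscribe to a hub. Here is a minimal Python sketch of that flow; the endpoint, the "Streaming" hub name, and the topic names are assumptions drawn from community clients such as FastF1, not a documented API:

```python
# Minimal sketch of subscribing to the F1 live timing SignalR stream.
# ASSUMPTIONS: endpoint, hub name ("Streaming"), and topic names follow
# what community clients use; this is not an officially documented API.
import json
import urllib.parse

import requests
from websocket import create_connection  # pip install websocket-client

BASE = "https://livetiming.formula1.com/signalr"
CONNECTION_DATA = json.dumps([{"name": "Streaming"}])

# Step 1: negotiate a connection token (legacy ASP.NET SignalR handshake).
resp = requests.get(
    f"{BASE}/negotiate",
    params={"connectionData": CONNECTION_DATA, "clientProtocol": "1.5"},
)
token = resp.json()["ConnectionToken"]
cookie = "; ".join(f"{k}={v}" for k, v in resp.cookies.items())

# Step 2: open the websocket transport using the negotiated token.
qs = urllib.parse.urlencode({
    "transport": "webSockets",
    "connectionToken": token,
    "connectionData": CONNECTION_DATA,
    "clientProtocol": "1.5",
})
ws = create_connection(
    f"wss://livetiming.formula1.com/signalr/connect?{qs}",
    header=[f"Cookie: {cookie}"],
)

# Step 3: subscribe to the timing topics of interest.
ws.send(json.dumps({
    "H": "Streaming",
    "M": "Subscribe",
    "A": [["Heartbeat", "TimingData", "DriverList", "RaceControlMessages"]],
    "I": 1,
}))

while True:
    print(ws.recv())  # raw JSON timing messages arrive here
```

Everything downstream of this (the timing tower, gap history, the variable delay buffer) is then a matter of parsing and replaying these JSON messages.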

Show HN: Goldbach Conjecture up to 4*10^18+7*10^13

Achieved a new world record in verifying the Goldbach Conjecture using grid computing, by extending the verification up to 4 quintillion (4×10¹⁸) + 70 trillion (7×10¹³).

My grid computing system, Gridbach, is a cloud-based distributed computing system accessible from any PC or smartphone. It requires no login or app installation. The high-performance WASM (WebAssembly) binary is downloaded as browser content, enabling computation in the user's browser.

[Website] https://gridbach.com/

[Medium] https://medium.com/@jay_gridbach/grid-computing-shatters-world-record-for-goldbach-conjecture-verification-1ef3dc58a38d
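For readers who want to see the shape of the computation: verifying Goldbach for an even n means finding a prime p with n − p also prime, and record attempts typically record the smallest such p. A minimal Python sketch of the check (nothing like Gridbach's optimized WASM kernel, but the same idea):

```python
# Minimal sketch of Goldbach verification: every even n >= 4 should be
# expressible as p + q with p, q prime. Real record attempts use heavily
# optimized sieves; this just illustrates the check itself.
def primes_up_to(limit: int) -> list[int]:
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, flag in enumerate(sieve) if flag]

def smallest_goldbach_witness(n: int, primes: list[int], prime_set: set[int]) -> int:
    # Smallest prime p with n - p also prime; in practice p is tiny,
    # which is what makes large-scale verification feasible.
    for p in primes:
        if (n - p) in prime_set:
            return p
    raise AssertionError(f"Goldbach fails at {n}!?")  # never reached so far

LIMIT = 1_000_000
primes = primes_up_to(LIMIT)
prime_set = set(primes)
for n in range(4, LIMIT + 1, 2):
    smallest_goldbach_witness(n, primes, prime_set)
print(f"Verified all even numbers up to {LIMIT:,}")
```

The distributed part is then just range-splitting: each browser claims a block of even numbers, verifies it with code like the above, and reports the result back.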

Show HN: I built an AI that turns GitHub codebases into easy tutorials

https://the-pocket.github.io/Tutorial-Codebase-Knowledge/

Less Slow C++

Earlier this year, I took a month to reexamine my coding habits and rethink some past design choices. I hope to rewrite and improve my FOSS libraries this year, and I needed answers to a few questions first. Perhaps some of these questions will resonate with others in the community, too.

- Are coroutines viable for high-performance work?
- Should I use SIMD intrinsics for clarity or drop to assembly for easier library distribution?
- Has hardware caught up with vectorized scatter/gather in AVX-512 & SVE?
- How do secure enclaves & pointer tagging differ on Intel, Arm, & AMD?
- What's the throughput gap between CPU and GPU Tensor Cores (TCs)?
- How costly are misaligned memory accesses & split-loads, and what gains do non-temporal loads/stores offer?
- Which parts of the standard library hit performance hardest?
- How do error-handling strategies compare overhead-wise?
- What's the compile-time vs. run-time trade-off for lazily evaluated ranges?
- What practical, non-trivial use cases exist for meta-programming?
- How challenging is Linux Kernel bypass with io_uring vs. POSIX sockets?
- How close are we to effectively using Networking TS or heterogeneous Executors in C++?
- What are best practices for propagating stateful allocators in nested containers, and which libraries support them?

These questions span from micro-kernel optimizations (nanoseconds) to distributed systems (micro/millisecond latencies). Rather than tackling them all in one post, I compiled my explorations into a repository—extending my previous Google Benchmark tutorial (https://ashvardanian.com/posts/google-benchmark)—to serve as a sandbox for performance experimentation.

Some fun observations:

- Compilers now vectorize 3x3x3 and 4x4x4 single/double precision multiplications well! The smaller one is ~60% slower despite 70% fewer operations, outperforming my vanilla SSE/AVX and coming within 10% of AVX-512.
- Nvidia TCs vary dramatically across generations in numeric types, throughput, tile shapes, thread synchronization (thread/quad-pair/warp/warp-groups), and operand storage. Post-Volta, manual PTX is often needed (as intrinsics lag), though the new TileIR (introduced at GTC) promises improvements for dense linear algebra kernels.
- The AI wave drives CPUs and GPUs to converge in mat-mul throughput & programming complexity. It took me a day to debug TMM register initialization, and SME is equally odd. Sierra Forest packs 288 cores/socket, and AVX10.2 drops 256-bit support for 512-bit... I wonder if discrete Intel GPUs are even needed, given CPU advances?
- In common floating-point ranges, scalar sine approximations can be up to 40x faster than standard implementations, even without SIMD. It's a bit hand-wavy, though; I wish more projects documented error bounds and had 1 & 3.5 ULP variants like Sleef.
- Meta-programming tools like CTRE can outperform typical RegEx engines by 5x and simplify building parsers compared to hand-crafted FSMs.
- Once clearly distinct in complexity and performance (DPDK/SPDK vs. io_uring), the gap is narrowing. While pre-5.5 io_uring can boost UDP throughput by 4x on loopback IO, newer zero-copy and concurrency optimizations remain challenging.

The repository is loaded with links to favorite CppCon lectures, GitHub snippets, and tech blog posts. Recognizing that many high-level concepts are handled differently across languages, I've also started porting examples to Rust & Python in separate repos. Coroutines look bad everywhere :(

Overall, this research project was rewarding! Most questions found answers in code — except pointer tagging and secure enclaves, which still elude me in public cloud. I'd love to hear from others, especially on comparing High-Level Synthesis for small matrix multiplications on FPGAs versus hand-written VHDL/Verilog for integral types. Let me know if you have ideas for other cool, obscure topics to cover!
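To make the sine-approximation observation concrete, here is a small Python sketch (mine, not from the repository) measuring the worst-case error of a degree-7 Taylor polynomial for sin(x) on [-π/2, π/2]; libraries like Sleef instead fit minimax (Remez) coefficients of the same degree, trading a little readability for several extra bits of accuracy:

```python
# Error of a degree-7 polynomial sine approximation on [-pi/2, pi/2].
# Taylor coefficients for readability; minimax (Remez) coefficients,
# as used by libraries like Sleef, achieve lower error at equal cost.
import math

def sin_poly(x: float) -> float:
    x2 = x * x
    # x - x^3/3! + x^5/5! - x^7/7!, evaluated in Horner form
    return x * (1.0 + x2 * (-1.0 / 6.0 + x2 * (1.0 / 120.0 + x2 * (-1.0 / 5040.0))))

max_err = 0.0
steps = 1_000_000
for i in range(steps + 1):
    x = -math.pi / 2 + math.pi * i / steps
    max_err = max(max_err, abs(sin_poly(x) - math.sin(x)))

print(f"max abs error on [-pi/2, pi/2]: {max_err:.2e}")  # ~1.6e-04
```

Range reduction maps arbitrary inputs into an interval like this one first, which is why the quoted speedups apply to "common floating-point ranges."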

Show HN: I made a Doom-like game fit inside a QR code

Show HN: Plandex v2 – open source AI coding agent for large projects and tasks

Hey HN! I'm Dane, the creator of Plandex (https://github.com/plandex-ai/plandex), an open source AI coding agent focused especially on tackling large tasks in real world software projects.

You can watch a 2 minute demo of Plandex in action here: https://www.youtube.com/watch?v=SFSu2vNmlLk

And here's more of a tutorial style demo showing how Plandex can automatically debug a browser application: https://www.youtube.com/watch?v=g-_76U_nK0Y

I launched Plandex v1 here on HN a little less than a year ago (https://news.ycombinator.com/item?id=39918500).

Now I'm launching a major update, Plandex v2, which is the result of 8 months of heads-down work, and is in effect a whole new project/product.

In short, Plandex is now a top-tier coding agent with fully autonomous capabilities. It combines models from Anthropic, OpenAI, and Google to achieve better results, more reliable agent behavior, better cost efficiency, and better performance than is possible by using only a single provider's models.

I believe it is now one of the best tools available for working on large tasks in real world codebases with AI. It has an effective context window of 2M tokens, and can index projects of 20M tokens and beyond using tree-sitter project maps (30+ languages are supported). It can effectively find relevant context in massive million-line projects like SQLite, Redis, and Git.

A bit more on some of Plandex's key features:

- Plandex has a built-in diff review sandbox that helps you get the benefits of AI without leaving behind a mess in your project. By default, all changes accumulate in the sandbox until you approve them. The sandbox is version-controlled. You can rewind it to any previous point, and you can also create branches to try out alternative approaches.

- It offers a 'full auto mode' that can complete large tasks autonomously end-to-end, including high level planning, context loading, detailed planning, implementation, command execution (for dependencies, builds, tests, etc.), and debugging.

- The autonomy level is highly configurable. You can move up and down the ladder of autonomy depending on the task, your comfort level, and how you weigh cost optimization vs. effort and results.

- Models and model settings are also very configurable. There are built-in models and model packs for different use cases. You can also add custom models and model packs, and customize model settings like temperature or top-p. All model changes are version controlled, so you can use branches to try out the same task with different models. The newly released OpenAI models and the paid Gemini 2.5 Pro model will be integrated in the default model pack soon.

- It can be easily self-hosted, including a 'local mode' for a very fast local single-user setup with Docker.

- Cloud hosting is also available for added convenience, with a couple of subscription tiers: an 'Integrated Models' mode that requires no other accounts or API keys and allows you to manage billing/budgeting/spending alerts and track usage centrally, and a 'BYO API Key' mode that allows you to use your own OpenAI/OpenRouter accounts.

I'd love to get more HNers in the Plandex Discord (https://discord.gg/plandex-ai). Please join and say hi!

And of course I'd love to hear your feedback, whether positive or negative. Thanks so much!
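The "tree-sitter project map" idea above, reduced to a toy: walk a repo, parse each file, and keep only top-level symbol signatures, so a large codebase compresses into a small index the model can read. The sketch below uses Python's stdlib `ast` for a single language, purely as a stand-in for tree-sitter (which is what Plandex actually uses, across 30+ languages):

```python
# Toy "project map": compress a codebase into top-level symbol signatures.
# Stdlib `ast` stands in for tree-sitter here (Plandex's real maps cover
# 30+ languages via tree-sitter grammars); this handles Python files only.
import ast
from pathlib import Path

def map_file(path: Path) -> list[str]:
    try:
        tree = ast.parse(path.read_text(encoding="utf-8"))
    except (SyntaxError, UnicodeDecodeError):
        return []  # skip files we can't parse
    symbols = []
    for node in tree.body:  # top level only: that's what keeps the map small
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            symbols.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            methods = [n.name for n in node.body if isinstance(n, ast.FunctionDef)]
            symbols.append(f"class {node.name}: {', '.join(methods)}")
    return symbols

def project_map(root: str) -> dict[str, list[str]]:
    return {str(p): map_file(p) for p in Path(root).rglob("*.py") if p.is_file()}

if __name__ == "__main__":
    for path, syms in project_map(".").items():
        print(path)
        for s in syms:
            print("   ", s)
```

An agent can scan a map like this to decide which files to load into context in full, which is how a 2M-token effective window can sit on top of a 20M-token project.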

Show HN: Unsure Calculator – back-of-a-napkin probabilistic calculator

Show HN: I made a free tool that analyzes SEC filings and posts detailed reports

(* within a few minutes of the SEC filing)

Currently does it for 1,000+ US companies, specifically for earnings-related filings. By US companies, I mean the ones that are obliged to file SEC filings.

This was the result of almost a year-long effort and hundreds of prototypes :)

It currently auto-publishes for the top ~1,000 US companies by market cap, relying on an 8-K filing as the trigger.

e.g. https://www.signalbloom.ai/news/NVDA will take you to NVDA earnings.

Would be grateful to get some feedback. Especially if you follow a company, check its reports out. Thank you!

Some examples:

https://www.signalbloom.ai/news/AAPL/apple-q1-eps-beats-despite-revenue-miss-china-woes

https://www.signalbloom.ai/news/NVDA/nvidia-revenue-soars-margin-headwinds-emerge

https://www.signalbloom.ai/news/JPM/jpm-beats-estimates-on-cib-strength (JPM earnings from Friday)

Hallucination note: https://www.signalbloom.ai/hallucination-benchmark
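Using an 8-K as the trigger presumably means watching SEC EDGAR for new filings. A rough sketch of that half of the pipeline, using EDGAR's public submissions endpoint (which is real); the polling loop and trigger logic are my own illustration, not necessarily how SignalBloom does it:

```python
# Sketch: detect fresh 8-K filings for a company via SEC EDGAR.
# The data.sec.gov submissions endpoint is real and public; the loop
# below is illustrative, not SignalBloom's actual trigger logic.
import time
import requests

HEADERS = {"User-Agent": "example-app contact@example.com"}  # EDGAR requires a UA

def recent_8ks(cik: str) -> list[tuple[str, str]]:
    """Return (accession_number, filing_date) for recent 8-K filings."""
    url = f"https://data.sec.gov/submissions/CIK{int(cik):010d}.json"
    recent = requests.get(url, headers=HEADERS, timeout=30).json()["filings"]["recent"]
    return [
        (acc, date)
        for form, acc, date in zip(
            recent["form"], recent["accessionNumber"], recent["filingDate"]
        )
        if form == "8-K"
    ]

seen: set[str] = set()
while True:
    for acc, date in recent_8ks("320193"):  # 320193 = Apple's CIK
        if acc not in seen:
            seen.add(acc)
            print(f"new 8-K {acc} filed {date}: kick off report generation")
    time.sleep(600)  # poll politely; EDGAR rate-limits aggressive clients
```

From there, the filing's attached exhibits (the earnings release lives in an 8-K exhibit) would feed the analysis and report-generation stages.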

Show HN: memEx, a personal knowledge base inspired by Zettelkasten and org-mode

Show HN: Atari Missile Command Game Built Using AI Gemini 2.5 Pro

A modern HTML5 canvas remake of the classic 1980 Atari arcade game. Defend your cities and missile bases from incoming enemy attacks using your missile launchers. Initially built using Google's Gemini 2.5 Pro model.

Show HN: Aqua Voice 2 – Fast Voice Input for Mac and Windows

Hey HN - It's Finn and Jack from Aqua Voice (https://withaqua.com). Aqua is fast AI dictation for your desktop and our attempt to make voice a first-class input method.

Video: https://withaqua.com/watch

Try it here: https://withaqua.com/sandbox

Finn is uber dyslexic and has been using dictation software since sixth grade. For over a decade, he's been chasing a dream that never quite worked — using your voice instead of a keyboard.

Our last post (https://news.ycombinator.com/item?id=39828686) about this seemed to resonate with the community - though it turned out that version of Aqua was a better demo than product. But it gave us (and others) a lot of good ideas about what should come next.

Since then, we've remade Aqua from scratch for speed and usability. It now lives on your desktop, and it lets you talk into any text field -- Cursor, Gmail, Slack, even your terminal.

It starts up in under 50ms, inserts text in about a second (sometimes as fast as 450ms), and has state-of-the-art accuracy. It does a lot more, but that's the core. We'd love your feedback — and if you've got ideas for what voice should do next, let's hear them!

Show HN: DrawDB – open-source online database diagram editor (a retro)

One year ago I open-sourced my very first 'real' project and shared it here. I was a college student in my senior year, desperately looking for a job. At the time of sharing it I couldn't even afford a domain, and naively let someone buy the one I had my eyes on lol. It's been a hell of a year with this blowing up, me moving to another country, and switching 2 jobs.

In a year we somehow managed to hit 26k stars, grow a 1000+ person Discord community, and support 37 languages. I couldn't be more grateful for the community that helped this grow, but now I don't know what direction to take this project in.

All of this was an accident. But now I feel like I'm missing out by not using this success. I have been thinking of monetization options, but I'm not sure if I wanna go down that route. I like the idea of it being free and available for everyone, but I also can't help but think of everything that could be done if I committed full-time or even had a small team. I keep telling myself (and others) I'll do something if I meet a co-founder, but doubt and fear of blowing this up keep me back.

How would you proceed?

Show HN: Lux – A luxurious package manager for Lua

Show HN: Browser MCP – Automate your browser using Cursor, Claude, VS Code

Show HN: OCR pipeline for ML training (tables, diagrams, math, multilingual)

Hi HN,

I've been working on an OCR pipeline specifically optimized for machine learning dataset preparation. It's designed to process complex academic materials - including math formulas, tables, figures, and multilingual text - and output clean, structured formats like JSON and Markdown.

Some features:

- Multi-stage OCR combining DocLayout-YOLO, Google Vision, MathPix, and Gemini Pro Vision
- Extracts and understands diagrams, tables, LaTeX-style math, and multilingual text (Japanese/Korean/English)
- Highly tuned for ML training pipelines, including dataset generation and preprocessing for RAG or fine-tuning tasks

Sample outputs and real exam-based examples are included (EJU Biology, UTokyo Math, etc.). Would love to hear any feedback or ideas for improvement.

GitHub: https://github.com/ses4255/Versatile-OCR-Program
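A multi-stage design like the one described usually means: detect layout regions first, route each region to the engine best suited to it, then merge the results in reading order. A structural sketch of that orchestration; the region types, routing table, and engine stubs are hypothetical stand-ins for the project's actual DocLayout-YOLO / MathPix / Google Vision calls:

```python
# Structural sketch of a multi-stage OCR pipeline: layout detection,
# per-region engine routing, then merge in reading order. The region
# types, routing table, and engine stubs are HYPOTHETICAL stand-ins
# for the project's real DocLayout-YOLO / MathPix / Vision calls.
from dataclasses import dataclass

@dataclass
class Region:
    kind: str                        # "text" | "math" | "table" | "figure"
    bbox: tuple[int, int, int, int]  # x0, y0, x1, y1 in page pixels

def detect_layout(page_image_path: str) -> list[Region]:
    # Stand-in for a DocLayout-YOLO pass over the page image.
    return [
        Region("text", (0, 0, 800, 200)),
        Region("math", (0, 210, 800, 300)),
        Region("table", (0, 310, 800, 600)),
    ]

def run_engine(engine: str, region: Region) -> str:
    # Stand-in for the per-engine OCR call (Vision, MathPix, etc.).
    return f"<{engine} output for {region.kind} at {region.bbox}>"

ROUTING = {
    "text": "google-vision",
    "math": "mathpix",         # LaTeX-style math
    "table": "gemini-vision",  # structure-aware extraction
    "figure": "gemini-vision",
}

def process_page(page_image_path: str) -> list[dict]:
    """One JSON record per region, sorted into reading order."""
    regions = sorted(detect_layout(page_image_path), key=lambda r: r.bbox[1])
    return [
        {"type": r.kind, "bbox": r.bbox, "content": run_engine(ROUTING[r.kind], r)}
        for r in regions
    ]

print(process_page("exam_page_01.png"))
```

The JSON records can then be serialized directly or rendered to Markdown for RAG and fine-tuning datasets.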

Show HN: I built a word game. My mom thinks it's great. What do you think?

Show HN: Hatchet v1 – A task orchestration platform built on Postgres

Hey HN - this is Alexander from Hatchet. We're building an open-source platform for managing background tasks, using Postgres as the underlying database.

Just over a year ago, we launched Hatchet as a distributed task queue built on top of Postgres with a 100% MIT license (https://news.ycombinator.com/item?id=39643136). The feedback and response we got from the HN community was overwhelming. In the first month after launching, we processed about 20k tasks on the platform — today, we're processing over 20k tasks per minute (>1 billion per month).

Scaling up this quickly was difficult — every task in Hatchet corresponds to at minimum 5 Postgres transactions, and we would see bursts on Hatchet Cloud instances to over 5k tasks/second, which corresponds to roughly 25k transactions/second. As it turns out, a simple Postgres queue utilizing FOR UPDATE SKIP LOCKED doesn't cut it at this scale. After provisioning the largest instance type that CloudSQL offers, we even discussed potentially moving some load off of Postgres in favor of something trendy like Clickhouse + Kafka.

But we doubled down on Postgres, and spent about 6 months learning how to operate Postgres databases at scale, reading the Postgres manual and several other resources [0] during commutes and at night. We stuck with Postgres for two reasons:

1. We wanted to make Hatchet as portable and easy to administer as possible, and felt that implementing our own storage engine specifically on Hatchet Cloud would be disingenuous at best, and in the worst case, would take our focus away from the open source community.

2. More importantly, Postgres is general-purpose, which is what makes it both great and hard to scale for some types of workloads. This is also what allows us to offer a general-purpose orchestration platform — we heavily utilize Postgres features like transactions, SKIP LOCKED, recursive queries, triggers, COPY FROM, and much more.

Which brings us to today. We're announcing a full rewrite of the Hatchet engine — still built on Postgres — together with our task orchestration layer, which is built on top of our underlying queue. To be more specific, we're launching:

1. DAG-based workflows that support a much wider array of conditions, including sleep conditions, event-based triggering, and conditional execution based on parent output data [1].

2. Durable execution — durable execution refers to a function's ability to recover from failure by caching intermediate results and automatically replaying them on a retry. We call a function with this ability a durable task. We also support durable sleep and durable events, which you can read more about here [2].

3. Queue features such as key-based concurrency queues (for implementing fair queueing), rate limiting, sticky assignment, and worker affinity.

4. Improved performance across every dimension we've tested, which we attribute to six improvements to the Hatchet architecture: range-based partitioning of time series tables, hash-based partitioning of task events (for updating task statuses), separating our monitoring tables from our queue, buffered reads and writes, switching all high-volume tables to use identity columns, and aggressive use of Postgres triggers.

We've also removed RabbitMQ as a required dependency for self-hosting.

We'd greatly appreciate any feedback you have and hope you get the chance to try out Hatchet.

[0] https://www.postgresql.org/docs/

[1] https://docs.hatchet.run/home/conditional-workflows

[2] https://docs.hatchet.run/home/durable-execution
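For context on the FOR UPDATE SKIP LOCKED pattern the post says it outgrew: each worker atomically claims the next unclaimed row, so concurrent workers never block on each other's locks. A minimal sketch against a hypothetical `tasks` table (Hatchet's real schema, and its 5-transactions-per-task flow, are far more involved):

```python
# The classic Postgres queue pattern Hatchet started from: workers claim
# the next queued row with FOR UPDATE SKIP LOCKED, so they never block
# on rows another worker has already locked. The `tasks` table here is
# hypothetical; Hatchet's actual schema is much more involved.
import psycopg2

DEQUEUE_SQL = """
WITH next_task AS (
    SELECT id
    FROM tasks
    WHERE status = 'queued'
    ORDER BY created_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
UPDATE tasks
SET status = 'running', started_at = now()
FROM next_task
WHERE tasks.id = next_task.id
RETURNING tasks.id, tasks.payload;
"""

def claim_one(conn):
    """Atomically claim the oldest queued task, or None if none are free."""
    with conn, conn.cursor() as cur:  # `with conn` commits or rolls back
        cur.execute(DEQUEUE_SQL)
        return cur.fetchone()

conn = psycopg2.connect("dbname=queue_demo")
print("claimed:", claim_one(conn))
```

At a few hundred tasks/second this works beautifully; the post's point is that at 5k tasks/second (roughly 25k transactions/second), lock contention and bloat on the hot table forced the partitioning and buffering work described above.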
