The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Dlg – Zero-cost printf-style debugging for Go
Hey HN,

I tend to use printf-style debugging as my primary troubleshooting method and only reach for gdb as a last resort.

While I like its ease of use, printf debugging isn't without its annoyances, namely removing the print statements once you're done.

I used to use trace-level logging from proper logging libraries, but adding trace calls in every corner quickly gets out of control and results in an overwhelming amount of output.

To scratch my own itch I created dlg, a minimal debugging library that disappears completely from production builds.
Its API exposes just a single function, Printf [1].

dlg is optimized for performance in debug builds and, most importantly, when compiled without the dlg build tag, all calls are eliminated by the Go linker as if dlg had never been imported.

For debug builds it adds optional stack-trace generation, configurable via environment variables or linker flags.

GitHub: https://github.com/vvvvv/dlg

Any feedback is much appreciated.

[1]: Actually two functions - there's also SetOutput.
Show HN: Open-source alternative to ChatGPT Agents for browsing
Hey HN,

We are Winston, Edward, and James, and we built Meka Agent, an open-source framework that lets vision-based LLMs execute tasks directly on a computer, just like a person would.

Backstory:

In the last few months we've been building computer-use agents that various teams have used for QA testing, but we realized that the underlying browsing frameworks aren't quite good enough yet. So we've been working on a browsing agent of our own.

We achieved 72.7% on WebArena, compared to the previous state of the art of 65.4% set by OpenAI's new ChatGPT agent. You can read more about it here: https://github.com/trymeka/webarena_evals

Today we are open-sourcing Meka, our state-of-the-art agent, so anyone can build their own powerful, vision-based agents from scratch. We provide the groundwork for the hard parts, so you don't have to:

* True vision-based control: Meka doesn't just read HTML. It looks at the screen, identifies interactive elements, and decides where to click, type, and scroll.

* Full computer access: It's not sandboxed in a browser. Meka operates with OS-level controls, allowing it to handle system dialogs, file uploads, and other interactions that browser-only automation tools can't.

* Extensible by design: We've made it easy to plug in your own LLMs and computer providers.

* State-of-the-art performance: 72.7% on WebArena.

Our goal is to enable developers to create repeatable, robust tasks on any computer just by prompting an agent, without worrying about the implementation details.

We'd love your feedback on how this tool could fit into your automation workflows. Try it out and let us know what you think.

You can find the repo on GitHub and get started quickly with our hosted platform: https://app.withmeka.com/

Thanks,
Winston, Edward, and James
Show HN: Online Ruler – Measuring in inches/centimeters
Show HN: ELF Injector
The ELF Injector lets you "inject" arbitrary-sized relocatable code chunks into ELF executables. The injected chunks run before the executable's original entry point.

Included in the project are sample chunks as well as a step-by-step tutorial on how it works.

It's a mix of C and assembly and currently targets 32-bit ARM, though it's easy to port to other architectures.
Show HN: 433 – How to make a font that says nothing
Show HN: CUDA Fractal Renderer
Show HN: Color Me Same – A new kind of logic game
Show HN: A GitHub Action that quizzes you on a pull request
A little idea I got from playing with AI SWE agents: can AI help make sure we understand the code that our AIs write?

PR Quiz uses AI to generate a quiz from a pull request and blocks you from merging until the quiz is passed. You can configure various options, such as the LLM to use, the maximum number of attempts to pass the quiz, or the minimum diff size that triggers a quiz. In my limited testing, the reasoning models, while more expensive, generated better questions.

Privacy: this GitHub Action runs a local web server and uses ngrok to serve the quiz through a temporary URL. Your code is only sent to the model provider (OpenAI).
Show HN: Terminal-Bench-RL: Training long-horizon terminal agents with RL
After training a calculator agent via RL, I really wanted to go bigger! So I built RL infrastructure for training long-horizon terminal/coding agents that scales from 2x A100s to 32x H100s (~$1M worth of compute!). Without any training, my 32B agent hit #19 on the Terminal-Bench leaderboard, beating Stanford's Terminus-Qwen3-235B-A22B! With training... well, too expensive, but I bet the results would be good!

*What I did*:

- Created a Claude Code-inspired agent (system msg + tools)

- Built Docker-isolated GRPO training where each rollout gets its own container

- Developed a multi-agent synthetic data pipeline to generate and validate training data with Opus-4

- Implemented a hybrid reward signal of unit-test verifiers and a behavioural LLM judge

*Key results*:

- My untrained Qwen3-32B agent achieved 13.75% on Terminal-Bench (#19, beating Stanford's Qwen3-235B MoE)

- I verified that training runs stably on 32x H100s distributed across 4 bare-metal nodes

- I created a mini-eval framework for LLM-judge performance. Sonnet-4 won.

- ~£30-50k would be needed for a full training run of 1000 epochs (I could only afford testing)

*Technical details*:

- The synthetic dataset ranges from easy to extremely hard tasks. An example hard task's prompt:

"I found this mystery program at `/app/program` and I'm completely stumped. It's a stripped binary, so I have no idea what it does or how to run it properly. The program seems to expect some specific input and then produces an output, but I can't figure out what kind of input it needs. Could you help me figure out what this program requires?"

- Simple config presets allow training to run on multiple hardware setups with minimal effort.

- GRPO is used with 16 rollouts per task, up to 32k tokens per rollout.

- The agent uses an XML/YAML format to structure tool calls.

*More details*:

My GitHub repos open-source it all (agent, data, code) and have way more technical details if you are interested:

- Terminal Agent RL repo

- Multi-agent synthetic data pipeline repo

I thought I would share this because I believe long-horizon RL is going to change everybody's lives, so I feel it is important (and super fun!) for us all to share knowledge in this area and enjoy exploring what is possible.

Thanks for reading!

Dan

(Built using the rLLM RL framework, which was brilliant to work with, and evaluated on and inspired by the great Terminal-Bench benchmark)
Show HN: I built an AI that turns any book into a text adventure game
It's a web app that uses AI to turn any book into a playable text adventure: your favorite book, but with your choices, and hence your story. You can even "remix" the genre, like playing Dune as a noir detective story.

Note: work in progress. Suggestions are welcome.
Show HN: Open-source physical rack-mounted GUI for home lab
I have realized that a lot of people nowadays self-host services and set up home labs with mini racks.

One major pain point I have come across personally is quickly getting health status from self-hosted services and machines, and being able to headlessly control a Raspberry Pi inside a mini rack.

That got me thinking about building a built-in GUI that users can easily add to their Raspberry Pi nodes in their (mini or full) racks or elsewhere.

I previously designed this GUI for an open-source project I have been working on (called Ubo pod: github.com/ubopod) and decided to decouple the GUI into its own standalone module for this use case.

The GUI allows headless control of your Raspberry Pi, monitoring of system resources, and application status.

As part of this redesign I am creating a new PCB and enclosure to allow for a new form factor that mounts on server racks.

I am recording my journey of redesigning this, and I would love early feedback from users to better understand what they may need from such a solution, especially on the hardware side.

The software behind the GUI is quite mature (https://github.com/ubopod/ubo_app) and you can actually try it right now, without the hardware, in your web browser, as shown in this video:

https://www.youtube.com/watch?v=9Ob_HDO66_8

All PCB designs are available here:

https://github.com/ubopod/ubo-pcb
Show HN: I built a free tool to find valuable expired domains using AI
Hi HN,

I've been collecting and analyzing expired domains for years, especially those about to drop. Every day, tens of thousands expire. Most are junk, but a few still have traffic, backlinks, SEO value, or just great names. Finding them used to take hours.

Last week I put my internal tools online: https://pendingdelete.domains

- No login, no paywall
- Updated daily
- Combines domain history, traffic, SEO data, and AI-driven insights to identify valuable expirations

The goal: help spot valuable domains quickly and skip the noise.

Still a work in progress; I would love feedback:

- Is this useful?
- What signals or filters would you add?
- Any UI or speed improvements?

Thanks!
Show HN: Allzonefiles.io – get lists of all registered domains in the Internet
This site provides lists of 305M domain names across 1,570 domain zones in the entire Internet. You can download these lists from the website or via an API. Domain lists for the majority of zones are updated daily.
Show HN: I made a tool to generate photomosaics with your pictures
Hi HN!

I wanted to make some photomosaics for an anniversary gift, but I ended up building this tool and turning it into a website that anyone can use.

For those who don't know, a photomosaic is an image made up of many smaller tile images, arranged in a way that forms a larger, recognisable picture.

The best part? Everything runs directly in your browser. No files are uploaded, and there's no sign-up required.