The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: Webctl – Browser automation for agents based on CLI instead of MCP
Hi HN, I built webctl because I was frustrated by the gap between curl and full browser automation frameworks like Playwright.

I initially built this to solve a personal headache: I wanted an AI agent to handle project management tasks on my company’s intranet. I needed it to persist cookies across sessions (to handle SSO) and then scrape a Kanban board.

Existing AI browser tools (like current MCP implementations) often force unsolicited data into the context window, dumping the full accessibility tree, console logs, and network errors whether you asked for them or not.

webctl is an attempt to solve this with a Unix-style CLI:

- Filter before context: You pipe the output to standard tools. webctl snapshot --interactive-only | head -n 20 means the LLM only sees exactly what I want it to see (see the sketch after this post).
- Daemon architecture: It runs a persistent background process. The goal is to keep the browser state (cookies/session) alive while you run discrete, stateless CLI commands.
- Semantic targeting: It uses ARIA roles (e.g., role=button name~="Submit") rather than fragile CSS selectors.

Disclaimer: The daemon logic for state persistence is still a bit experimental, but the architecture feels like the right direction for building local, token-efficient agents.

It’s basically "Playwright for the terminal."
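To make the filter-before-context idea concrete, here is a minimal sketch of how an agent harness might wrap the CLI. The only webctl invocation used is the snapshot command quoted above; the Python wrapper and the 20-line budget are illustrative assumptions, not part of webctl itself.

    # Sketch: filter CLI output before it enters the LLM context.
    # Only "webctl snapshot --interactive-only" comes from the post;
    # the wrapper and the 20-line budget are illustrative assumptions.
    import subprocess

    def snapshot(max_lines: int = 20) -> str:
        """Run webctl and keep only the first max_lines lines,
        mirroring: webctl snapshot --interactive-only | head -n 20"""
        out = subprocess.run(
            ["webctl", "snapshot", "--interactive-only"],
            capture_output=True, text=True, check=True,
        )
        return "\n".join(out.stdout.splitlines()[:max_lines])

    # The prompt now contains only the filtered snapshot, not the full
    # accessibility tree, console logs, or network errors.
    context = snapshot()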
Show HN: Tiny FOSS Compass and Navigation App (<2MB)
Show HN: WebTiles – create a tiny 250x250 website with neighbors around you
There is a large grid of 250x250 tiles, and on each one you can create a tiny website contained within the tile.

You can think of the tile as a mini version of your website, showcasing what your full site offers (but it can be anything). You can link to your full site and use any HTML/CSS/JS inside. The purpose is to create beautiful, interesting tiles that make exploring the indie web easy and engaging.
Show HN: OSS AI agent that indexes and searches the Epstein files
Hi HN,

I built an open-source AI agent that has already indexed and can search the entire Epstein files, roughly 100M words of publicly released documents.

The goal was simple: make a large, messy corpus of PDFs and text files immediately searchable in a precise way, without relying on keyword search or bloated prompts.

What it does:

- The full dataset is already indexed
- You can ask natural language questions
- Answers are grounded and include direct references to source documents
- Supports both exact text lookup and semantic search (a toy sketch of this hybrid lookup follows below)

Discussion around these files is often fragmented. This makes it possible to explore the primary sources directly and verify claims without manually digging through thousands of pages.

Happy to answer questions or go into technical details.

Code: https://github.com/nozomio-labs/nia-epstein-ai
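As a rough illustration of what combining exact lookup with semantic search can look like, here is a toy hybrid retriever. The hashed bag-of-words embedding is a stand-in so the sketch runs; the project's real pipeline (chunking, embedding model, ranking) is in the linked repo.

    # Toy hybrid retrieval: exact substring match plus embedding similarity.
    # The hashed bag-of-words embedding is a runnable stand-in, not the
    # embedding model the project actually uses.
    import numpy as np

    def embed(text: str, dim: int = 256) -> np.ndarray:
        """Toy embedding; a real system would use a trained model."""
        v = np.zeros(dim)
        for word in text.lower().split():
            v[hash(word) % dim] += 1.0
        n = np.linalg.norm(v)
        return v / n if n else v

    def hybrid_search(query: str, docs: list[str], top_k: int = 5) -> list[str]:
        # Exact lookup: documents containing the literal query string.
        exact = [d for d in docs if query.lower() in d.lower()]
        # Semantic search: rank the rest by cosine similarity
        # (vectors are L2-normalized, so a dot product is cosine).
        rest = [d for d in docs if d not in exact]
        q = embed(query)
        rest.sort(key=lambda d: float(q @ embed(d)), reverse=True)
        return (exact + rest)[:top_k]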
Show HN: SnackBase – Open-source, GxP-compliant back end for Python teams
Hi HN, I’m the creator of SnackBase.

I built this because I work in the Healthcare and Life Sciences domain and was tired of spending months building the same "compliant" infrastructure (Audit Logs, Row-Level Security, PII Masking, Auth) before writing any actual product code.

The Problem: Existing BaaS tools (Supabase, Appwrite) are amazing, but they are hard to validate for GxP (FDA regulations) and often force you into a JS/Go ecosystem. I wanted something native to the Python tools I already use.

The Solution: SnackBase is a self-hosted Python (FastAPI + SQLAlchemy) backend that includes:

- Compliance Core: Immutable audit logs with blockchain-style hashing (prev_hash) for integrity (the chaining idea is sketched below).
- Native Python Hooks: You can write business logic in pure Python (no webhooks or JS runtimes required).
- Clean Architecture: Strict separation of layers. No business logic in the API routes.

The Stack:

- Python 3.12 + FastAPI
- SQLAlchemy 2.0 (Async)
- React 19 (Admin UI)

Links:

- Live Demo: https://demo.snackbase.dev
- Repo: https://github.com/lalitgehani/snackbase

The demo resets every hour. I’d love feedback on the DSL implementation or the audit logging approach.
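For readers unfamiliar with hash-chained audit logs, a minimal sketch of the prev_hash idea follows. It illustrates the general technique, not SnackBase’s actual schema or hashing scheme: each entry commits to the hash of the previous one, so altering or removing any historical entry breaks the chain.

    # Minimal sketch of a hash-chained ("blockchain-style") audit log.
    # Illustrates the prev_hash idea only; SnackBase's actual schema
    # and hashing scheme may differ.
    import hashlib
    import json

    GENESIS = "0" * 64  # prev_hash for the first entry

    def entry_hash(entry: dict) -> str:
        # Canonical JSON so the hash is deterministic.
        return hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()

    def append(log: list[dict], event: dict) -> None:
        prev = log[-1]["hash"] if log else GENESIS
        entry = {"event": event, "prev_hash": prev}
        entry["hash"] = entry_hash(entry)  # hash covers event + prev_hash
        log.append(entry)

    def verify(log: list[dict]) -> bool:
        prev = GENESIS
        for e in log:
            body = {"event": e["event"], "prev_hash": e["prev_hash"]}
            if e["prev_hash"] != prev or e["hash"] != entry_hash(body):
                return False  # an entry was altered, inserted, or removed
            prev = e["hash"]
        return True

    log: list[dict] = []
    append(log, {"actor": "alice", "action": "UPDATE", "table": "patients"})
    append(log, {"actor": "bob", "action": "DELETE", "table": "samples"})
    assert verify(log)
    log[0]["event"]["action"] = "SELECT"  # tamper with history...
    assert not verify(log)                # ...and the chain detects it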
Show HN: Ayder – HTTP-native durable event log written in C (curl as client)
Hi HN,

I built Ayder, a single-binary, HTTP-native durable event log written in C. The wedge is simple: curl is the client (no JVM, no ZooKeeper, no thick client libs).

There’s a 2-minute demo that starts with an unclean SIGKILL, then restarts and verifies offsets + data are still there.

Numbers (3-node Raft, real network, sync-majority writes, 64B payload): ~50K msg/s sustained (wrk2 @ 50K req/s), client P99 ~3.46ms. Crash recovery after SIGKILL is ~40–50s with ~8M offsets.

The repo link has the video, benchmarks, and quick start. I’m looking for a few early design partners (any event ingestion/streaming workload).
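The kill-then-recover demo rests on a standard durability pattern: append and fsync before acknowledging a write, then rebuild offsets by scanning the log on restart. A minimal Python sketch of that generic pattern (an assumption about the technique, not Ayder’s actual C implementation):

    # Sketch of the durability pattern behind "SIGKILL, restart, offsets
    # survive": fsync before acking, rescan the log on startup. This is
    # the generic technique, not Ayder's actual C implementation.
    import os
    import struct

    LOG = "events.log"

    def append(payload: bytes) -> int:
        """Append a length-prefixed record; fsync before returning."""
        with open(LOG, "ab") as f:
            f.write(struct.pack(">I", len(payload)) + payload)
            f.flush()
            os.fsync(f.fileno())  # durable before the client is acked
            return f.tell()       # byte offset after this record

    def recover() -> list[bytes]:
        """Rebuild in-memory state by scanning the log after a crash."""
        records = []
        if not os.path.exists(LOG):
            return records
        with open(LOG, "rb") as f:
            while header := f.read(4):
                if len(header) < 4:
                    break  # torn write at the tail: discard
                (n,) = struct.unpack(">I", header)
                body = f.read(n)
                if len(body) < n:
                    break  # incomplete record: discard
                records.append(body)
        return records

    append(b"event-1")
    append(b"event-2")
    print(f"recovered {len(recover())} records")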
Show HN: Nogic – VS Code extension that visualizes your codebase as a graph
I built Nogic, currently a VS Code extension, because AI tools make code grow faster than developers can build a mental model of it by jumping between files. Exploring the structure visually has been helping me onboard to unfamiliar codebases faster.

It’s early and rough, but usable. Would love feedback on whether this is useful and which relationships are most valuable to visualize.
Show HN: An iOS budget app I've been maintaining since 2011
I’ve been building and selling software since the early 2000s, starting with classic shareware. In 2011, I moved into the App Store world and built an iOS budget app because I needed a simple way to track my own expenses.

At the time, my plan was to replace a few larger shareware projects with several smaller apps to spread the risk. That didn’t quite work out: one app, MoneyControl, quickly grew so much that it became my main focus.

Fifteen years later, the app is still on the App Store, still actively developed, and still used by people who started with version 1.0. Many apps from that era are long gone.

Looking back, these are some of the things that mattered most:

Starting early helped, but wasn’t enough on its own. Early visibility made a difference, but long-term maintenance and reliability are what kept users.

Focus beat diversification. I wanted many small apps. I ended up with one large, long-lived product. Deep focus turned out to be more sustainable.

Long-term maintenance is most of the work. Adapting to new iOS versions, migrating data safely, handling edge cases, and keeping old data usable mattered more than flashy features.

Discoverability keeps getting harder. Reaching users on the App Store today is much more difficult than it was years ago. Prices are higher than in the old 99-cent days, but visibility hasn’t improved.

I’m a developer first, not a marketer. I work alone, with occasional help from freelancers. No employees, no growth team. The app could probably have grown more with better marketing, but that was never my strength.

You don’t need to get rich to build something sustainable. I didn’t build this for an exit. I’ve been able to make a living from my work for over 20 years, which feels like success to me.

Building things you actually use keeps you honest. Every product I built was something I personally needed. That authenticity mattered more than any roadmap.

This week I released version 10 with a new design and a major technical overhaul. It feels less like a milestone and more like preparing the app for the next phase.

Happy to answer questions about long-term app maintenance, indie development, or keeping a product alive across many iOS generations.
Show HN: Self-host Reddit – 2.38B posts, works offline, yours forever
Reddit's API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware.

The key point: This doesn't touch Reddit's servers. Ever. Download the Pushshift dataset, run my tool locally, get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone.

What it does: Takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML (a sketch of this pipeline follows below). No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL, still entirely on your machine.

API & AI Integration: Full REST API with 30+ endpoints: posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools.

Self-hosting options:

- USB drive / local folder (just open the HTML files)
- Home server on your LAN
- Tor hidden service (2 commands, no port forwarding needed)
- VPS with HTTPS
- GitHub Pages for small archives

Why this matters: Once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away.

Scale: Tens of millions of posts per instance. The PostgreSQL backend keeps memory constant regardless of dataset size. For the full 2.38B post dataset, run multiple instances by topic.

How I built it: Python, PostgreSQL, Jinja2 templates, Docker. Used Claude Code throughout as an experiment in AI-assisted development. Learned that the workflow is "trust but verify": it accelerates the boring parts but you still own the architecture.

Live demo: https://online-archives.github.io/redd-archiver-example/

GitHub: https://github.com/19-84/redd-archiver (Public Domain)

Pushshift torrent: https://academictorrents.com/details/1614740ac8c94505e4ecb9d88be8bed7b6afddd4
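The post names the stack (Python, Jinja2 templates, .zst dumps), so here is a minimal sketch of that dump-to-static-HTML pipeline for a single file. The field names match the Pushshift NDJSON format, but the template is a stand-in, not redd-archiver’s actual markup.

    # Sketch of the .zst-dump -> static HTML pipeline the post describes.
    # Pushshift dumps are zstd-compressed NDJSON; the template is a
    # stand-in, not redd-archiver's actual output.
    import io
    import json
    import zstandard as zstd
    from jinja2 import Template

    PAGE = Template(
        "<html><body><h1>r/{{ sub }}</h1>"
        "{% for p in posts %}<p>{{ p.title }} ({{ p.score }})</p>{% endfor %}"
        "</body></html>"
    )

    def read_dump(path: str):
        """Stream NDJSON records out of a Pushshift .zst dump."""
        # Pushshift dumps need a large window size to decompress.
        dctx = zstd.ZstdDecompressor(max_window_size=2**31)
        with open(path, "rb") as f:
            reader = io.TextIOWrapper(dctx.stream_reader(f), encoding="utf-8")
            for line in reader:
                yield json.loads(line)

    def render(path: str, subreddit: str, out: str = "index.html") -> None:
        posts = [p for p in read_dump(path) if p.get("subreddit") == subreddit]
        with open(out, "w", encoding="utf-8") as f:
            f.write(PAGE.render(sub=subreddit, posts=posts))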
Show HN: An LLM-optimized programming language
Show HN: 30k IKEA items in flat text
OP here.

I took the unofficial IKEA US dataset (originally scraped by jeffreyszhou) and converted all 30,511 products into a flat, markdown-like protocol called CommerceTXT.

The goal: See if a flatter structure is more efficient for LLM context windows.

The results:

- Size: 30k products across 632 categories.
- Efficiency: The text version uses ~24% fewer tokens (3.6M saved total) compared to the equivalent minified JSON (a measurement sketch follows below).
- Structure: Files are organized in folders (e.g. /products/category/), which helps with testing hierarchical retrieval routers.

The link goes to the dataset on Hugging Face, which has the full benchmarks.

Parser code is here: https://github.com/commercetxt/commercetxt

Happy to answer questions about the conversion logic!
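To reproduce this kind of token comparison on other data, here is a minimal sketch using OpenAI’s tiktoken tokenizer. The flat record format below is only a guessed approximation, not the actual CommerceTXT spec.

    # Sketch: compare token counts of minified JSON vs. a flat rendering.
    # Assumptions: tiktoken's cl100k_base as a stand-in tokenizer, and a
    # guessed flat format, not the actual CommerceTXT spec.
    import json
    import tiktoken

    product = {
        "name": "BILLY",
        "category": "bookcases",
        "price_usd": 59.99,
        "in_stock": True,
    }

    minified = json.dumps(product, separators=(",", ":"))
    flat = "\n".join(f"{k}: {v}" for k, v in product.items())

    enc = tiktoken.get_encoding("cl100k_base")
    json_tokens = len(enc.encode(minified))
    flat_tokens = len(enc.encode(flat))

    print(f"JSON: {json_tokens} tokens, flat: {flat_tokens} tokens")
    print(f"savings: {1 - flat_tokens / json_tokens:.0%}")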
Show HN: Fall asleep by watching JavaScript load
Show HN: Agent-of-empires: OpenCode and Claude Code session manager
Hi! I’m Nathan, an ML engineer at Mozilla.ai. I built agent-of-empires (aoe), a CLI application to help you manage all of your running Claude Code/OpenCode sessions and know when they are waiting for you.

- Written in Rust and relies on tmux for security and reliability
- Monitors the state of CLI sessions to tell you when an agent is running vs. idle vs. waiting for your input (a toy sketch of this kind of polling appears after this post)
- Lets you manage sessions by naming them, grouping them, and configuring profiles for various settings

I'm passionate about getting self-hosted open-weight LLMs to be valid options that compete with proprietary closed models. One roadblock for me is that although tools like OpenCode allow you to connect to local LLMs (Ollama, LM Studio, etc.), they generally run much slower than models hosted by Anthropic and OpenAI. I would start a coding agent on a task, but then while I sat waiting for that task to complete, I would start opening new terminal windows to multitask. Pretty soon, I was spending a lot of time toggling between terminal windows to see which one needed me: to add a clarification, approve a new command, or give it a new task.

That’s why I built agent-of-empires (“aoe”). With aoe, I can launch a bunch of OpenCode and Claude Code sessions and quickly see their status or toggle between them, which saves me from having a lot of terminal windows open or manually attaching and detaching from tmux sessions myself. It’s helping me give local LLMs a fair try, because their being slower is now much less of a bottleneck.

You can install it with

curl -fsSL https://raw.githubusercontent.com/njbrake/agent-of-empires/main/scripts/install.sh | bash

or

brew install njbrake/aoe/aoe

and then launch by simply entering the command `aoe`.

I’m interested in what you think, as well as what features you think would be useful to add!

I’m planning to add some further features around sandboxing (with Docker) as well as support for intuitive git worktrees, and I’m curious whether there are opinions about what should or shouldn’t be in it.

I decided against MCP management or generic terminal usage, to help keep the tool focused on the parts of agentic coding that I haven’t found a usable solution for.

I hit the character limit on this post, which prevented me from including a view of the output, but the readme on the GitHub link has a screenshot showing what it looks like.

Thanks!
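aoe’s real detection heuristics live in its Rust source; as a rough illustration of the general approach, here is a toy Python poller that captures each tmux pane and classifies it by its last line. The "waiting" markers are made-up assumptions, since different agent CLIs print different prompts.

    # Toy sketch of tmux-based status polling (illustrative only; aoe's
    # real detection logic is written in Rust and may work differently).
    import subprocess

    def pane_text(session: str) -> str:
        """Capture the visible contents of a tmux session's active pane."""
        out = subprocess.run(
            ["tmux", "capture-pane", "-p", "-t", session],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    def classify(text: str) -> str:
        # Hypothetical markers; real agent CLIs print different prompts.
        lines = text.rstrip().splitlines()
        last = lines[-1] if lines else ""
        if last.endswith("?") or "approve" in last.lower():
            return "waiting for input"
        if last.endswith("$"):
            return "idle"
        return "running"

    sessions = subprocess.run(
        ["tmux", "list-sessions", "-F", "#{session_name}"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    for s in sessions:
        print(f"{s}: {classify(pane_text(s))}")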