The best Hacker News stories from Show from the past day
Latest posts:
Show HN: Gridland: make terminal apps that also run in the browser
Hi everyone,<p>Gridland is a runtime + ShadCN UI registry that makes it possible to build terminal apps that run in the browser as well as the native terminal. This is useful for demoing TUIs so that users know what they're getting before they are invested enough to install them. And, tbh, it's also just super fun!<p>Gridland is the successor to Ink Web (ink-web.dev) which is the same concept, but using Ink + xterm.js. After building Ink Web, we continued experimenting and found that using OpenTUI and a canvas renderer performed better with less flickering and nearly instant load times.<p>We're excited to continue iterating on this. I expect a lot of criticism from the "why does this need to exist" angle, and tbh, it probably doesn't - it's really mostly just for fun, but we still think the demo use case mentioned previously has potential.<p>- Chris + Jess
Show HN: ProofShot – Give AI coding agents eyes to verify the UI they build
I use AI agents to build UI features daily. The thing that kept annoying me: the agent writes code but never sees what it actually looks like in the browser. It can’t tell if the layout is broken or if the console is throwing errors.<p>So I built a CLI that lets the agent open a browser, interact with the page, record what happens, and collect any errors. Then it bundles everything — video, screenshots, logs — into a self-contained HTML file I can review in seconds.<p><pre><code> proofshot start --run "npm run dev" --port 3000
# agent navigates, clicks, takes screenshots
proofshot stop
</code></pre>
It works with whatever agent you use (Claude Code, Cursor, Codex, etc.) — it’s just shell commands. It's packaged as a skill so your AI coding agent knows exactly how it works. It's built on agent-browser from Vercel Labs which is far better and faster than Playwright MCP.<p>It’s not a testing framework. The agent doesn’t decide pass/fail. It just gives me the evidence so I don’t have to open the browser myself every time.<p>Open source and completely free.<p>Website: <a href="https://proofshot.argil.io/" rel="nofollow">https://proofshot.argil.io/</a>
Show HN: I took back Video.js after 16 years and we rewrote it to be 88% smaller
What do you do when private equity buys your old company and fires the maintainers of the popular open source project you started over a decade ago? You reboot it, and bring along some new friends to do it.<p>Video.js is used by billions of people every month, on sites like Amazon.com, LinkedIn, and Dropbox, and yet it wasn’t in great shape. A skeleton crew of maintainers was doing its best with a dated architecture, but it needed more. So Sam from Plyr, Rahim from Vidstack, and Wes and Christian from Media Chrome jumped in to help me rebuild it better, faster, and smaller.<p>It’s in beta now. Please give it a try and tell us what breaks.
Show HN: Email.md – Markdown to responsive, email-safe HTML
Show HN: Gemini can now natively embed video, so I built sub-second video search
Gemini Embedding 2 can project raw video directly into a 768-dimensional vector space alongside text. No transcription, no frame captioning, no intermediate text. A query like "green car cutting me off" is directly comparable to a 30-second video clip at the vector level.<p>I used this to build a CLI that indexes hours of footage into ChromaDB, then searches it with natural language and auto-trims the matching clip. Demo video on the GitHub README.
Indexing costs ~$2.50/hr of footage. Still-frame detection skips idle chunks, so security camera / sentry mode footage is much cheaper.
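The retrieval side of a pipeline like this is just nearest-neighbor search in the shared vector space. Here's a minimal sketch of that step, using toy cosine-similarity search over synthetic vectors; in the real tool the vectors would come from the Gemini embedding API and live in ChromaDB, and `top_k` is a name invented for illustration, not part of any actual API:

```python
import numpy as np

def top_k(query_vec, index, k=3):
    """Return the k clip ids whose stored embeddings are closest to the query.

    `index` maps clip_id -> embedding vector in the same space as query_vec.
    """
    ids = list(index)
    mat = np.stack([index[i] for i in ids]).astype(float)
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)   # unit-length rows
    q = np.asarray(query_vec, dtype=float)
    q = q / np.linalg.norm(q)
    sims = mat @ q                                           # cosine similarities
    order = np.argsort(-sims)[:k]
    return [(ids[i], float(sims[i])) for i in order]

# Toy demo: 4-d vectors stand in for 768-d video/text embeddings.
index = {
    "clip_001": np.array([1.0, 0.0, 0.0, 0.0]),
    "clip_002": np.array([0.0, 1.0, 0.0, 0.0]),
    "clip_003": np.array([0.9, 0.1, 0.0, 0.0]),
}
print(top_k(np.array([1.0, 0.0, 0.0, 0.0]), index, k=2))
```

The key property the post describes is that the query vector and the clip vectors live in the same space, so this one comparison works without any transcription step in between.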
Show HN: Time Keep – Location timezones, timers, alarms, countdowns in one place
I kept running into this: timer on my laptop, alarm on my phone, timezone / Discord timestamp conversion app in a separate tab. Switching between them was a hassle, and I wanted to be able to set them up and manage them quickly wherever I was.<p>So I built Time Keep. It puts world clocks, timers, alarms, countdowns, a stopwatch, breaks, a sleep planner, Discord timestamps, and more into one always-open place.<p>Works without an account or signup. All tools are fully functional immediately. Sign in to save your data across sessions. Pro adds live cross-device sync.<p>Shared countdown links show the correct time in every viewer's timezone. Built with Next.js, Supabase, Clerk, and Vercel.<p><a href="https://www.timekeep.cc" rel="nofollow">https://www.timekeep.cc</a>
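The timezone-correct shared countdown presumably works by storing the target as a single UTC instant and converting it per viewer. A sketch of that idea (the function name `render_deadline` is invented for illustration; the actual app is Next.js, not Python):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def render_deadline(target_utc: datetime, viewer_tz: str) -> str:
    """Show a shared countdown target in the viewer's local timezone.

    The target is stored once as a UTC instant; each viewer converts it
    locally, so everyone sees the same moment in their own wall-clock time.
    """
    return target_utc.astimezone(ZoneInfo(viewer_tz)).strftime("%Y-%m-%d %H:%M %Z")

launch = datetime(2030, 1, 1, 0, 0, tzinfo=timezone.utc)
print(render_deadline(launch, "America/New_York"))  # 2029-12-31 19:00 EST
print(render_deadline(launch, "Asia/Tokyo"))        # 2030-01-01 09:00 JST
```

Storing a UTC instant rather than a wall-clock time is what keeps every viewer agreeing on the same moment.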
Show HN: Littlebird – Screenreading is the missing link in AI
Show HN: Agent Kernel – Three Markdown files that make any AI agent stateful
Show HN: The King Wen Permutation: [52, 10, 2]
I analyzed two orderings of the 64 I Ching hexagrams and found that the permutation cycle decomposition between them is [52, 10, 2], with zero fixed points. As far as I can tell, nobody has done this kind of analysis before, and this cycle type has not been reported in the literature. You can verify it yourself.
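The verification itself is only a few lines: build the permutation mapping each hexagram's position in one ordering to its position in the other, then walk its cycles. Here is a sketch of the cycle-walking part; the two hexagram orderings aren't reproduced in this post, so the demo uses a tiny stand-in permutation:

```python
def cycle_type(perm):
    """Cycle lengths (descending) of a permutation; perm[i] is where position i maps."""
    seen = [False] * len(perm)
    lengths = []
    for start in range(len(perm)):
        if seen[start]:
            continue
        length, i = 0, start
        while not seen[i]:
            seen[i] = True
            i = perm[i]
            length += 1
        lengths.append(length)
    return sorted(lengths, reverse=True)

# Demo on 5 elements: 0 -> 1 -> 2 -> 0 is a 3-cycle, 3 <-> 4 is a 2-cycle.
print(cycle_type([1, 2, 0, 4, 3]))  # [3, 2]
```

Fixed points show up as cycles of length 1, so "zero fixed points" means no 1s appear in the result; for the claimed [52, 10, 2], the lengths also sum to 64 as they must.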
Show HN: Threadprocs – executables sharing one address space (0-copy pointers)
This project launches multiple independent programs into a single shared virtual address space, while still behaving like separate processes (independent binaries, globals, and lifetimes). When threadprocs share their address space, pointers are valid across them with no code changes for well-behaved Linux binaries.<p>Unlike threads, each threadproc is a standalone and semi-isolated process. Unlike dlopen-based plugin systems, threadprocs run traditional executables with a `main()` function. Unlike POSIX processes, pointers remain valid across threadprocs because they share the same address space.<p>This means that idiomatic pointer-based data structures like `std::string` or `std::unordered_map` can be passed between threadprocs and accessed directly (with the usual data race considerations).<p>This accomplishes a programming model somewhere between pthreads and multi-process shared memory IPC.<p>The implementation relies on directing ASLR and virtual address layout at load time and implementing a user-space analogue of `exec()`, as well as careful manipulation of threadproc file descriptors, signals, etc. It is implemented entirely in unprivileged user space code: <<a href="https://github.com/jer-irl/threadprocs/blob/main/docs/02-implementation.md" rel="nofollow">https://github.com/jer-irl/threadprocs/blob/main/docs/02-imp...</a>>.<p>There is a simple demo demonstrating “cross-threadproc” memory dereferencing at <<a href="https://github.com/jer-irl/threadprocs/tree/main?tab=readme-ov-file#demo" rel="nofollow">https://github.com/jer-irl/threadprocs/tree/main?tab=readme-...</a>>, including a high-level diagram.<p>This is relevant to systems of multiple processes with shared memory (often ring buffers or flat tables). These designs often require serialization or copying, and tend away from idiomatic C++ or Rust data structures. 
Pointer-based data structures cannot be passed directly.<p>There are significant limitations and edge cases, and it’s not clear this is a practical model, but the project explores a way to relax traditional process memory boundaries while still structuring a system as independently launched components.
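For contrast, here is the conventional cost the project is trying to sidestep, sketched in Python rather than C++: in ordinary multi-process designs, a pointer-rich structure has to be flattened into bytes to cross the process boundary, and what arrives on the other side is an independent copy.

```python
import pickle

# In conventional multi-process shared-memory or IPC designs, a pointer-rich
# structure like a dict must be serialized to cross the process boundary,
# producing a rebuilt deep copy on the other side. This serialize/copy step
# is exactly what sharing one address space avoids.
original = {"user": "alice", "roles": ["admin", "ops"]}
blob = pickle.dumps(original)   # flatten to bytes for the wire / shm region
received = pickle.loads(blob)   # rebuild: an equal but distinct object

print(received == original, received is original)  # True False
```

Under the threadprocs model, the receiving component would instead dereference the original pointers directly, subject to the data-race caveats the author notes.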
Show HN: Cq – Stack Overflow for AI coding agents
Hi all, I'm Peter, a Staff Engineer at Mozilla.ai, and I want to share our idea for a standard for shared agent learning; conceptually it fit easily into my mental model as a Stack Overflow for agents.<p>The project is trying to see if we can get agents (any agent, any model) to propose 'knowledge units' (KUs) in a standard schema based on gotchas they run into during use, and to proactively query for existing KUs to get insights which they can verify and confirm if they prove useful.<p>It's currently very much a PoC with a more lofty proposal in the repo; we're trying to iterate from local use, up to team level, and ideally eventually to some kind of public commons.<p>At the team level (see our Docker Compose example), you run a shared API and configure your coding agent to point at its address so KUs are sent there instead - where they can be reviewed by a human in the loop (HITL) via a UI in the browser before they're allowed to appear in queries by other agents on your team.<p>We're learning a lot even from using it locally on various repos internally, not just about the kinds of KUs it generates, but also from a UX perspective on making it easy to get started and to approve KUs in the browser dashboard. There are bigger, complex problems to solve in the future around data privacy, governance, etc.,
but for now we're super focused on getting something that people can see some value from really quickly in their day-to-day.<p>Tech stack:<p>* Skills - markdown<p>* Local Python MCP server (FastMCP) - managing a local SQLite knowledge store<p>* Optional team API (FastAPI, Docker) for sharing knowledge across an org<p>* Installs as a Claude Code plugin or OpenCode MCP server<p>* Local-first by default; your knowledge stays on your machine unless you opt into team sync by setting the address in config<p>* OSS (Apache 2.0 licensed)<p>Here's an example of something that seemed straightforward: when asking Claude Code to write a GitHub Action, it often used actions that were multiple major versions out of date because of its training data. In this case I told the agent what I saw when I reviewed the GitHub Action YAML file it created, and it proposed the knowledge unit to be persisted. The next time, in a completely different repo using OpenCode and an OpenAI model, the cq skill was used up front before the task started; the agent got the information about the training-data gotcha on major versions, checked GitHub proactively, and used the correct, latest major versions. It then confirmed the KU, increasing its confidence score.<p>I guess some folks might say: well, there's a CLAUDE.md in your repo, or in ~/.claude/. But we're looking further than that: we want this to be available to all agents and all models, and, maybe more importantly, we don't want to stuff AGENTS.md or CLAUDE.md with loads of rules that lead to unpredictable behaviour. This is targeted information on a particular task, which seems a lot more useful.<p>Right now it can be installed locally as a plugin for Claude Code and OpenCode:<p>claude plugin marketplace add mozilla-ai/cq
claude plugin install cq<p>This allows you to capture data in your local ~/.cq/local.db (the data doesn't get sent anywhere else).<p>We'd love feedback on this; the repo is open and public, so GitHub issues are welcome. We've posted on some of our social media platforms with a link to the blog post (below), so feel free to reply to us if you found it useful or ran into friction - we want to make this something that's accessible to everyone.<p>Blog post with the full story: <a href="https://blog.mozilla.ai/cq-stack-overflow-for-agents/" rel="nofollow">https://blog.mozilla.ai/cq-stack-overflow-for-agents/</a>
GitHub repo: <a href="https://github.com/mozilla-ai/cq" rel="nofollow">https://github.com/mozilla-ai/cq</a><p>Thanks again for your time.
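To make the "knowledge unit in a local SQLite store" idea concrete, here is a toy sketch. The field names, the naive substring search, and the table layout are all invented for illustration; they are not cq's actual schema or retrieval mechanism:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class KnowledgeUnit:
    # Illustrative fields only -- not cq's actual KU schema.
    title: str
    gotcha: str
    resolution: str
    confidence: float

def save_ku(conn, ku):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS kus "
        "(title TEXT, gotcha TEXT, resolution TEXT, confidence REAL)"
    )
    conn.execute(
        "INSERT INTO kus VALUES (?, ?, ?, ?)",
        (ku.title, ku.gotcha, ku.resolution, ku.confidence),
    )

def query_kus(conn, term):
    """Naive substring match standing in for whatever retrieval cq really uses."""
    return [row[0] for row in conn.execute(
        "SELECT title FROM kus WHERE gotcha LIKE ?", (f"%{term}%",)
    )]

conn = sqlite3.connect(":memory:")
save_ku(conn, KnowledgeUnit(
    title="Pin GitHub Action major versions",
    gotcha="training data suggests stale action versions",
    resolution="check for the latest major version before writing YAML",
    confidence=0.8,
))
print(query_kus(conn, "versions"))  # ['Pin GitHub Action major versions']
```

The flow the post describes layers on top of this: the agent proposes a record like this after a gotcha, a human approves it at the team level, and later queries bump the confidence score when a KU proves useful.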
Show HN: Passport Globe (See where your passport takes you)
Just a cool visual way to see where you can go around the world. It also supports multiple passports.
Show HN: AI SDLC Scaffold, repo template for AI-assisted software development
I built an open-source repo template that brings structure to AI-assisted software development, starting from the pre-coding phases: objectives, user stories, requirements, architecture decisions.<p>It's designed around Claude Code, but the ideas are tool-agnostic. I've been a computer science researcher and full-stack software engineer for 25 years, working mainly in startups. I'd been using this approach on my personal projects for a while; when I decided to package it up as a scaffold for easier reuse, I figured it might be useful to others too. I published it under Apache 2.0 - fork it and make it yours.<p>You can easily try it out: follow the instructions in the README to start using it.<p>The problem it solves:<p>AI coding agents are great at writing code, but they work much better when they have clear context about what to build and why. Most projects jump straight to implementation. This scaffold provides a structured workflow for the pre-coding phases, and organizes the output so that agents can navigate it efficiently across sessions.<p>How it works:<p>Everything lives in the repo alongside source code. The AI guidance is split into three layers, each optimized for context-window usage:<p>1. Instruction files (CLAUDE.md, CLAUDE.<phase>.md): always loaded, kept small. They are organized hierarchically, describe the repo structure, maintain artifact indexes, and define cross-phase rules like traceability invariants.<p>2. Skills (.claude/skills/SDLC-*): loaded on demand. Step-by-step procedures for each SDLC activity: eliciting requirements, gap analysis, drafting architecture, decomposing into components, planning tasks, implementation.<p>3. Project artifacts: structured markdown files that accumulate as work progresses: stakeholders, goals, user stories, requirements, assumptions, constraints, decisions, architecture, data model, API design, task tracking.
Accessed selectively through indexes.<p>This separation matters because instruction files stay in the context window permanently and must be lean, skills can be detailed since they're loaded only when invoked, and artifacts scale with the project but are navigated via indexed tables rather than read in full.<p>Key design choices:<p>Context-window efficiency: artifact collections use markdown index tables (one-line description and trigger conditions) so the agent can locate what it needs without reading everything.<p>Decision capture: decisions made during AI reasoning and human feedback are persisted as a structured artifact, to make them reviewable, traceable, and consistently applied across sessions.<p>Waterfall-ish flow: sequential phases with defined outputs. Tedious for human teams, but AI agents don't mind the overhead, and the explicit structure prevents the unconstrained "just start vibecoding" failure mode.<p>How I use it:<p>Short, focused sessions. Each session invokes one skill, produces its output, and ends. The knowledge organization means the next session picks up without losing context. I've found that free-form prompting between skills is usually a sign the workflow is missing a piece.<p>Current limitations:<p>I haven't found a good way to integrate Figma MCP for importing existing UI/UX designs into the workflow. Suggestions welcome.<p>Feedback, criticism, and contributions are very welcome!
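To make the index-table idea concrete, an artifact index might look something like the following; the artifact names, IDs, and columns are invented for illustration and are not the scaffold's actual format:

```markdown
## Requirements index
| ID      | Summary                             | Read when                         |
|---------|-------------------------------------|-----------------------------------|
| REQ-001 | Users can export reports as PDF     | touching export or reporting code |
| REQ-002 | All API errors return problem+json  | changing any HTTP handler         |
```

The agent scans the one-line summaries and trigger conditions, then opens only the artifacts relevant to the current task, which is what keeps the always-loaded instruction layer small.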
Show HN: Codala, a social network built on scanning barcodes
Hi HN. I poured months into making a mobile app called Codala, but sadly, it's getting practically zero downloads. I have no marketing budget at all.<p>I'm starting to question if the app is just bad, but the core idea feels solid to me: you scan any QR code or barcode, and it opens up a space to chat, leave reviews, and discuss that specific product or place.<p>I know the app needs users leaving comments to feel alive, but it's been 3 weeks and things are dead quiet. If you want to take a look, I’m totally open to any honest feedback.<p>It's only available on Google Play for now.
Show HN: Revise – An AI Editor for Documents
I started building this 10 months ago, largely using agentic coding tools. I've stayed very involved in the code base and architecture, and have never moved faster in my life as a dev.<p>The word processor engine and rendering layer are all built from scratch - the only 3rd party library I used was the excellent Y.js for the CRDT stack.<p>Would love some feedback!