The best Show HN stories from Hacker News from the past day


Latest posts:

Show HN: An interactive physics simulator with 1000s of balls, in your terminal

IP over Avian Carriers with Quality of Service (1999)

g(old)

Show HN: Artificial Ivy in the Browser

This is just a goofy thing I cooked up over the weekend. It's kind of like a screensaver, but with more reading and sliders. (It's not terribly efficient, so expect phone batteries to take a hit!)

Show HN: Ocrbase – pdf → .md/.json document OCR and structured extraction API

Show HN: Subth.ink – write something and see how many others wrote the same

Hey HN, this is a small Haskell learning project that I wanted to share. It's a website where you can see how many people wrote the exact same text as you (I thought it was a fun idea).

It's built using Scotty, SQLite, Redis, and Caddy. It currently runs on a small DigitalOcean droplet (1 GB RAM).

Using Haskell for web development (specifically with Scotty) was slightly easier than I expected, but still relatively hard compared to other languages. One of my main friction points was Haskell's multiple string-like types: String, Text (and lazy Text), ByteString (and lazy ByteString), with each library choosing a different one to consume. There is also a soft requirement to learn monad transformers (e.g. to understand what liftIO is doing), which made the initial development more difficult.

Show HN: Pipenet – A Modern Alternative to Localtunnel

Hey HN!

localtunnel's server needs random ports per client. That doesn't work on Fly.io or behind strict firewalls.

We rewrote it in TypeScript and added multiplexing over a single port. Open source and 100% self-hostable.

There's a public instance at *.pipenet.dev if you don't want to self-host.

Built at Glama for our MCP Inspector, but it's a generic tunnel with no ties to our infra.

https://github.com/punkpeye/pipenet
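Multiplexing many tunnel clients over one port generally comes down to tagging each chunk of stream data with a stream ID before it travels over the shared connection. A minimal sketch of that idea in Python — purely illustrative, not Pipenet's actual wire format; `encode_frame` and `decode_frames` are hypothetical names:

```python
import struct

# Hypothetical frame layout: 4-byte stream ID + 4-byte payload length + payload.
# Real multiplexers (and Pipenet itself) may differ; this only shows the concept.
HEADER = struct.Struct("!II")

def encode_frame(stream_id: int, payload: bytes) -> bytes:
    """Prefix a payload with its stream ID and length."""
    return HEADER.pack(stream_id, len(payload)) + payload

def decode_frames(buffer: bytes):
    """Yield (stream_id, payload) pairs back out of a buffer of frames."""
    offset = 0
    while offset + HEADER.size <= len(buffer):
        stream_id, length = HEADER.unpack_from(buffer, offset)
        offset += HEADER.size
        yield stream_id, buffer[offset:offset + length]
        offset += length

# Two logical streams share a single connection's byte stream.
wire = encode_frame(1, b"GET / HTTP/1.1\r\n") + encode_frame(2, b"ping")
frames = list(decode_frames(wire))  # [(1, b"GET / HTTP/1.1\r\n"), (2, b"ping")]
```

Because every frame carries its stream ID, the server can demultiplex any number of clients behind one listening port, which is exactly what strict firewalls and single-port platforms require.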

Show HN: Agent Skills Leaderboard

Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs

Hi HN, we're Sam, Shane, and Abhi.

Almost a year ago, we first shared Mastra here (https://news.ycombinator.com/item?id=43103073). It's kind of fun looking back, since we were only a few months into building at the time. The HN community gave a lot of enthusiasm and some helpful feedback.

Today, we released Mastra 1.0 in stable, so we wanted to come back and talk about what's changed.

If you're new to Mastra, it's an open-source TypeScript agent framework that also lets you create multi-agent workflows, run evals, inspect in a local studio, and emit observability.

Since our last post, Mastra has grown to over 300k weekly npm downloads and 19.4k GitHub stars. It's now Apache 2.0 licensed and runs in prod at companies like Replit, PayPal, and Sanity.

Agent development is changing quickly, so we've added a lot since February:

- Native model routing: You can access 600+ models from 40+ providers by specifying a model string (e.g., `openai/gpt-5.2-codex`) with TS autocomplete and fallbacks.

- Guardrails: Low-latency input and output processors for prompt injection detection, PII redaction, and content moderation. The tricky thing here was the low-latency part.

- Scorers: An async eval primitive for grading agent outputs. Users were asking how they should do evals. We wanted to make it easy to attach to Mastra agents, runnable in Mastra studio, and save results in Mastra storage.

- Plus a few other features like AI tracing (per-call costing for Langfuse, Braintrust, etc.), memory processors, a `.network()` method that turns any agent into a routing agent, and server adapters to integrate Mastra within an existing Express/Hono server.

(That last one took a bit of time: we went down the ESM/CJS bundling rabbit hole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.)

Anyway, we'd love for you to try Mastra out and let us know what you think. You can get started with `npm create mastra@latest`.

We'll be around and happy to answer any questions!

Show HN: GibRAM – an in-memory ephemeral GraphRAG runtime for retrieval

Hi HN,

I have been working with regulation-heavy documents lately, and one thing kept bothering me. Flat RAG pipelines often fail to retrieve related articles together, even when they are clearly connected through references, definitions, or clauses.

After trying several RAG setups, I subjectively felt that GraphRAG was a better mental model for this kind of data. The Microsoft GraphRAG paper and reference implementation were helpful starting points. However, in practice, I found one recurring friction point: graph storage and vector indexing are usually handled by separate systems, which felt unnecessarily heavy for short-lived analysis tasks.

To explore this tradeoff, I built GibRAM (Graph in-buffer Retrieval and Associative Memory). It is an experimental, in-memory GraphRAG runtime where entities, relationships, text units, and embeddings live side by side in a single process.

GibRAM is intentionally ephemeral. It is designed for exploratory tasks like summarization or conversational querying over a bounded document set. Data lives in memory, scoped by session, and is automatically cleaned up via TTL. There are no durability guarantees, and recomputation is considered cheaper than persistence for the intended use cases.

This is not a database and not a production-ready system. It is a casual project, largely vibe-coded, meant to explore what GraphRAG looks like when memory is the primary constraint instead of storage. Technical debt exists, and many tradeoffs are explicit.

The project is open source, and I would really appreciate feedback, especially from people working on RAG, search infrastructure, or graph-based retrieval.

GitHub: https://github.com/gibram-io/gibram

Happy to answer questions or hear why this approach might be flawed.
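To make the "entities, relationships, and embeddings side by side in one process" idea concrete, here is a toy sketch in Python. This is not GibRAM's actual API — the `Session` class and every method on it are invented for illustration, under the assumptions the post states (session-scoped memory, TTL cleanup, vector lookup plus graph hops in one place):

```python
import math
import time

class Session:
    """Toy session holding a graph and its embeddings in one process,
    expiring via TTL. Illustrative only; not GibRAM's real structure."""

    def __init__(self, ttl_seconds: float):
        self.expires_at = time.monotonic() + ttl_seconds
        self.entities = {}   # entity name -> embedding vector
        self.edges = []      # (source, relation, target) triples

    def expired(self) -> bool:
        return time.monotonic() > self.expires_at

    def add_entity(self, name, vector):
        self.entities[name] = vector

    def relate(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def nearest(self, query):
        """Cosine similarity directly over the in-memory vectors."""
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)
        return max(self.entities, key=lambda n: cos(self.entities[n], query))

    def neighbors(self, name):
        """Graph hop: entities directly related to `name`."""
        return [t for s, _, t in self.edges if s == name]

# Regulation-style example: two articles connected by a reference.
s = Session(ttl_seconds=60)
s.add_entity("Article 12", [1.0, 0.0])
s.add_entity("Article 30", [0.9, 0.1])
s.relate("Article 12", "references", "Article 30")

hit = s.nearest([1.0, 0.05])   # vector lookup...
related = s.neighbors(hit)     # ...then graph expansion, no second system
```

The point of the sketch is the retrieval shape the post argues for: a flat RAG pipeline would stop at `nearest`, while the graph hop also pulls in the referenced article — without round-tripping between a vector store and a separate graph database.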

Show HN: I built a tool to help AI agents know when a PR is good to go

I've been using Claude Code heavily, and kept hitting the same issue: the agent would push changes, respond to reviews, wait for CI... but never really know when it was done.

It would poll CI in loops. Miss actionable comments buried among 15 CodeRabbit suggestions. Or declare victory while threads were still unresolved.

The core problem: no deterministic way for an agent to know a PR is ready to merge.

So I built gtg (Good To Go). One command, one answer:

    $ gtg 123
    OK PR #123: READY
    CI: success (5/5 passed)
    Threads: 3/3 resolved

It aggregates CI status, classifies review comments (actionable vs. noise), and tracks thread resolution. Returns JSON for agents or human-readable text.

The comment classification is the interesting part – it understands CodeRabbit severity markers, Greptile patterns, and Claude's blocking/approval language. "Critical: SQL injection" gets flagged; "Nice refactor!" doesn't.

MIT licensed, pure Python. I use this daily in a larger agent orchestration system – would love feedback from others building similar workflows.
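Severity-marker classification of review comments can be sketched in a few lines. The patterns and `is_actionable` function below are illustrative guesses, not gtg's actual rules — just one way to separate a "Critical:" finding from a pleasantry:

```python
import re

# Hypothetical marker patterns in the spirit of review-bot output.
# Real tools (CodeRabbit, Greptile, etc.) have richer formats than this.
ACTIONABLE = [
    re.compile(r"^\s*(critical|major|warning)\s*:", re.IGNORECASE),
    re.compile(r"\bsecurity\b|\bsql injection\b", re.IGNORECASE),
]

def is_actionable(comment: str) -> bool:
    """Flag a comment if any severity pattern matches; otherwise treat as noise."""
    return any(p.search(comment) for p in ACTIONABLE)

comments = [
    "Critical: SQL injection in the search endpoint",
    "Nice refactor!",
    "warning: unused import",
]
flagged = [c for c in comments if is_actionable(c)]  # keeps the 1st and 3rd
```

A deterministic rule set like this is what lets an agent get a yes/no answer instead of re-reading the whole review thread each poll.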

Show HN: Xenia – A monospaced font built with a custom Python engine

I'm an engineer who spent the last year fixing everything I hated about monofonts (especially that double-story 'a').<p>I built a custom Python-based procedural engine to generate the weights because I wanted more logical control over the geometry. It currently has 700+ glyphs and deep math support.<p>Regular weight is free for the community. I'm releasing more weights based on interest.

Show HN: What if your menu bar was a keyboard-controlled command center?

Hey Hacker News! Those who know me here know that I am a productivity geek.

After DockFlow to manage my Dock, and ExtraDock, which gives me more space to manage my apps and files, I decided to tackle the macOS big boss: the menu bar.

I spend ~40% of my day context-switching between apps: Zoom meetings, Slack channels, code projects, and Figma designs. My macOS menu bar has too many icons I almost never use.

So I thought to myself: how can I use this area to improve my workflows?

Most solutions (Bartender, Ice) require screen recording permissions, and didn't really solve my issues. I wanted custom menus in the apps, not the ones that the developers decided for me.

After a few iterations and exploring different solutions, ExtraBar was created. Instead of just hiding icons, what if the menu bar became a keyboard-controlled command center with the actions I need? No permissions. No telemetry. Just local actions.

This is ExtraBar: set up the menu with the apps and actions YOU need, and use a hotkey to bring it up with full keyboard navigation built in.

What you can do:
- Jump into your next Zoom call with a keystroke
- Open specific Slack channels instantly (no menu clicking)
- Launch VS Code projects directly
- Trigger Apple Shortcuts workflows
- Integrate with Raycast for advanced automation
- Custom deep links to Figma, Spotify, or any URL

Real-world example: I've removed my menu bar icons. Everything is keyboard-controlled: cmd+B → 2 (Zoom) → 4 (my personal meeting) → I'm in.

Why it's different: Bartender and Ice hide icons. ExtraBar *uses* your menu bar to do things. Bartender requires screen recording permissions; Ice requires accessibility permissions. ExtraBar works offline with zero permissions (accessibility permission is optional, for enhanced functionality).

Technical:
- Written in SwiftUI; native on Apple Silicon and Intel
- Zero OS permissions required (optional accessibility for enhanced keyboard nav)
- All data stored locally (no cloud, no telemetry)
- Very customizable: built-in configurations for popular apps, plus fully custom actions
- Import/export action configurations

The app is improving weekly based on community feedback. We're also building configuration sharing so users can share setups.

I've already gotten some great feedback from Reddit and Product Hunt, and I can't wait to get yours!

Website: https://extrabar.app
Product Hunt: https://www.producthunt.com/products/extrabar

Show HN: ChunkHound, a local-first tool for understanding large codebases

ChunkHound's goal is simple: local-first codebase intelligence that helps you pull deep, core-dev-level insights on demand, generate always-up-to-date docs, and scale from small repos to enterprise monorepos, all while staying free, open source, and provider-agnostic (VoyageAI / OpenAI / Qwen3, Anthropic / OpenAI / Gemini / Grok, and more).

I'd love your feedback – and if you've already shared some, thank you for being part of the journey!

Show HN: Figma-use – CLI to control Figma for AI agents

I'm Dan, and I built a CLI that lets AI agents design in Figma.

What it does: 100 commands to create shapes, text, frames, and components, modify styles, and export assets. JSX importing that's ~100x faster than any plugin API import. Works with any LLM coding assistant.

Why I built it: the official Figma MCP server can only read files. I wanted AI to actually design – create buttons, build layouts, generate entire component systems. Existing solutions were either read-only or required verbose JSON schemas that burn through tokens.

Demo (45 sec): https://youtu.be/9eSYVZRle7o

Tech stack: Bun + Citty for the CLI, an Elysia WebSocket proxy, and a Figma plugin. The render command connects to Figma's internal multiplayer protocol via Chrome DevTools for extra performance when dealing with large groups of objects.

Try it:

    bun install -g @dannote/figma-use

Looking for feedback on CLI ergonomics, missing commands, and whether the JSX syntax feels natural.

Show HN: Beats, a web-based drum machine

Hello all!

I've been an avid fan of Pocket Operators by Teenage Engineering [0] since I found out about them. I even own an EP-133 K.O. II today, which I love.

A couple of months ago, Reddit user andiam03 shared a Google Sheet with some drum patterns [1]. I thought it was a very cool way to share and understand beats.

During the weekend I coded a basic version of this app I am sharing today. I iterated on it in my free time, and yesterday I felt I had a pretty good version to share with y'all.

It's not meant to be a sequencer but rather a way to experiment with beats and basic sounds, save them, and use them in your songs. It also has a sharing feature with a link.

It was built using Tone.js [2] and Stimulus [3], and deployed on Render [4] as a static website. I used an LLM to read the Tone.js documentation and generate sounds, since I have no knowledge of sound production, and modified from there.

Anyway, hope you like it! I had a blast building it.

[0]: https://teenage.engineering
[1]: https://docs.google.com/spreadsheets/d/1GMRWxEqcZGdBzJg52soeVaY7iUSj1YncfIJZIPScBhM/edit
[2]: https://tonejs.github.io
[3]: https://stimulus.hotwired.dev
[4]: http://render.com

Show HN: Lume 0.2 – Build and Run macOS VMs with unattended setup

Hey HN, Lume is an open-source CLI for running macOS and Linux VMs on Apple Silicon. Since launch (https://news.ycombinator.com/item?id=42908061), we've been using it to run AI agents in isolated macOS environments. We needed VMs that could set themselves up, so we built that.

Here's what's new in 0.2:

*Unattended Setup* – Go from IPSW to a fully configured VM without touching the keyboard. We built a VNC + OCR system that clicks through the macOS Setup Assistant automatically. No more manual setup before pushing to a registry:

    lume create my-vm --os macos --ipsw latest --unattended tahoe

You can write custom YAML configs to set up any macOS version your way.

*HTTP API + Daemon* – A REST API on port 7777 that runs as a background service. Your scripts and CI pipelines can manage VMs that persist even if your terminal closes:

    curl -X POST localhost:7777/lume/vms/my-vm/run -d '{"noDisplay": true}'

*MCP Server* – Native integration with Claude Desktop and AI coding agents. Claude can create, run, and execute commands in VMs directly:

    # Add to Claude Desktop config
    "lume": { "command": "lume", "args": ["serve", "--mcp"] }

    # Then just ask: "Create a sandbox VM and run my tests"

*Multi-location Storage* – macOS disk space is always tight, so based on user feedback we added support for external drives. Add an SSD, move VMs between locations:

    lume config storage add external-ssd /Volumes/ExternalSSD/lume
    lume clone my-vm backup --source-storage default --dest-storage external-ssd

*Registry Support* – Pull and push VM images from GHCR or GCS. Create a golden image once, share it across your team.

We're seeing people use Lume for:
- Running Claude Code in an isolated VM (your host stays clean; reset mistakes by cloning)
- CI/CD pipelines for Apple platform apps
- Automated UI testing across macOS versions
- Disposable sandboxes for security research

To get started:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/trycua/cua/main/libs/lume/scripts/install.sh)"
    lume create sandbox --os macos --ipsw latest --unattended tahoe
    lume run sandbox --shared-dir ~/my-project

Lume is MIT licensed and Apple Silicon only (M1/M2/M3/M4) since it uses Apple's native Virtualization framework directly, with no emulation.

Lume runs on EC2 Mac instances and Scaleway if you need cloud infrastructure. We're also working on a managed cloud offering for teams that need macOS compute on demand; if you're interested, reach out.

We're actively developing this as part of Cua (https://github.com/trycua/cua), our Computer Use Agent SDK. We'd love your feedback, bug reports, or feature ideas.

GitHub: https://github.com/trycua/cua
Docs: https://cua.ai/docs/lume

We'll be here to answer questions!

Show HN: Streaming gigabyte medical images from S3 without downloading them

Show HN: Dock – Slack minus the bloat, tax, and 90-day memory loss

Hey HN – I built Dock after years of team chat frustrations as a founder. Free forever for teams of up to 5. Unlimited search, unlimited history. No "upgrade to see messages older than 90 days" nonsense. Built for teams who work both async and in real time when it matters. Runs on SOC 2 infrastructure on Cloudflare, with encryption in transit and at rest.

Early stage – would love feedback from anyone who's felt the same pain.
