The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: Xenia – A monospaced font built with a custom Python engine

I'm an engineer who spent the last year fixing everything I hated about monofonts (especially that double-story 'a').

I built a custom Python-based procedural engine to generate the weights because I wanted more logical control over the geometry. It currently has 700+ glyphs and deep math support.

Regular weight is free for the community. I'm releasing more weights based on interest.
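
The post doesn't publish the engine's internals, but procedural weight generation is often done by interpolating between compatible "master" outlines. A toy sketch of that idea, under the assumption that masters share point structure (the real engine is Python, and Xenia's actual approach may differ):

    // Toy multiple-master interpolation: NOT Xenia's engine, just the
    // general technique. Assumes both masters have matching contours
    // and point counts.
    type Point = { x: number; y: number };
    type Glyph = Point[][]; // one point array per contour

    // t = 0 gives the light master, t = 1 the bold one; values between
    // produce the intermediate weights.
    function interpolateGlyph(light: Glyph, bold: Glyph, t: number): Glyph {
      return light.map((contour, ci) =>
        contour.map((p, pi) => ({
          x: p.x + (bold[ci][pi].x - p.x) * t,
          y: p.y + (bold[ci][pi].y - p.y) * t,
        }))
      );
    }

    // A rectangular stem whose bold master is simply wider.
    const lightStem: Glyph = [[{ x: 100, y: 0 }, { x: 140, y: 0 }, { x: 140, y: 700 }, { x: 100, y: 700 }]];
    const boldStem: Glyph  = [[{ x:  80, y: 0 }, { x: 160, y: 0 }, { x: 160, y: 700 }, { x:  80, y: 700 }]];

    console.log(interpolateGlyph(lightStem, boldStem, 0.5)); // a "medium" stem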

Show HN: What if your menu bar was a keyboard-controlled command center?

Hey Hacker News! Those who know me here know that I am a productivity geek.

After DockFlow to manage my Dock and ExtraDock, which gives me more space to manage my apps and files, I decided to tackle the macOS big boss: the menu bar.

I spend ~40% of my day context-switching between apps – Zoom meetings, Slack channels, Code projects, and Figma designs. My macOS menu bar has too many icons I almost never use.

So I thought to myself: how can I use this area to improve my workflows?

Most solutions (Bartender, Ice) require screen recording permissions, and didn't really solve my issue. I wanted custom menus for my apps, not the ones their developers decided for me.

After a few iterations and exploring different solutions, ExtraBar was created. Instead of just hiding icons, what if the menu bar became a keyboard-controlled command center with the actions I need? No permissions. No telemetry. Just local actions.

This is ExtraBar: set up the menu with the apps and actions YOU need, and use a hotkey to bring it up, with full keyboard navigation built in.

What you can do:

- Jump into your next Zoom call with a keystroke
- Open specific Slack channels instantly (no menu clicking)
- Launch VS Code projects directly
- Trigger Apple Shortcuts workflows
- Integrate with Raycast for advanced automation
- Custom deep links to Figma, Spotify, or any URL

Real-world example: I've removed my menu bar icons. Everything is keyboard-controlled: cmd+B → 2 (Zoom) → 4 (my personal meeting) → I'm in.

Why it's different: Bartender and Ice hide icons. ExtraBar *uses* your menu bar to do things. Bartender requires screen recording permissions; Ice requires accessibility permissions. ExtraBar works offline with zero permissions (an optional accessibility permission enhances functionality, but it isn't required).

Technical:

- Written in SwiftUI; native on Apple Silicon and Intel
- Zero OS permissions required (optional accessibility for enhanced keyboard navigation)
- All data stored locally (no cloud, no telemetry)
- Built-in configurations for popular apps, plus fully customizable actions
- Import/export action configurations

The app is improving weekly based on community feedback. We're also building configuration sharing so users can share setups.

I've already gotten some great feedback from Reddit and Product Hunt, and I can't wait to get yours!

Website: https://extrabar.app
Product Hunt: https://www.producthunt.com/products/extrabar
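
For a feel of the cmd+B → 2 → 4 flow, here is a toy model of a keyed menu dispatching deep links, written in TypeScript/Node purely for illustration: the app itself is SwiftUI, its real configuration format isn't shown in the post, and the meeting/channel IDs below are placeholders.

    // Hypothetical menu config and dispatcher; not ExtraBar's actual code.
    import { execFile } from "node:child_process";

    type Action = { key: string; label: string; url: string };
    type Menu = { key: string; label: string; actions: Action[] };

    const menus: Menu[] = [
      {
        key: "2",
        label: "Zoom",
        // zoommtg:// is Zoom's real URL scheme; the meeting number is made up.
        actions: [{ key: "4", label: "Personal meeting", url: "zoommtg://zoom.us/join?confno=1234567890" }],
      },
      {
        key: "3",
        label: "Slack",
        // slack:// deep links exist; team/channel IDs here are placeholders.
        actions: [{ key: "1", label: "#general", url: "slack://channel?team=T000&id=C000" }],
      },
    ];

    // Resolve a two-keystroke sequence and hand the deep link to macOS `open`.
    function dispatch(menuKey: string, actionKey: string): void {
      const action = menus
        .find((m) => m.key === menuKey)
        ?.actions.find((a) => a.key === actionKey);
      if (action) execFile("open", [action.url]);
    }

    dispatch("2", "4"); // "cmd+B → 2 → 4 → I'm in."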

Show HN: ChunkHound, a local-first tool for understanding large codebases

ChunkHound's goal is simple: local-first codebase intelligence that helps you pull deep, core-dev-level insights on demand, generate always-up-to-date docs, and scale from small repos to enterprise monorepos – while staying free + open source and provider-agnostic (VoyageAI / OpenAI / Qwen3 for embeddings; Anthropic / OpenAI / Gemini / Grok for LLMs; and more).

I'd love your feedback – and if you have any, thank you for being part of the journey!
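
"Provider-agnostic" presumably means a common interface with vendor adapters behind it. A minimal sketch of that pattern; the endpoint and model names are illustrative, and ChunkHound's real abstraction may differ:

    // One interface, many vendors: indexing code never sees who computes
    // the embeddings. Sketch only; not ChunkHound's actual code.
    interface EmbeddingProvider {
      embed(texts: string[]): Promise<number[][]>;
    }

    // Any OpenAI-compatible /v1/embeddings endpoint (OpenAI itself, or a
    // local Qwen3 server exposing the same route) fits behind it.
    class OpenAICompatibleProvider implements EmbeddingProvider {
      constructor(private baseUrl: string, private model: string, private apiKey: string) {}

      async embed(texts: string[]): Promise<number[][]> {
        const res = await fetch(`${this.baseUrl}/v1/embeddings`, {
          method: "POST",
          headers: { "Content-Type": "application/json", Authorization: `Bearer ${this.apiKey}` },
          body: JSON.stringify({ model: this.model, input: texts }),
        });
        const json = await res.json();
        return json.data.map((d: { embedding: number[] }) => d.embedding);
      }
    }

    const provider: EmbeddingProvider = new OpenAICompatibleProvider(
      "https://api.openai.com", "text-embedding-3-small", process.env.OPENAI_API_KEY ?? ""
    );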

Show HN: Figma-use – CLI to control Figma for AI agents

I'm Dan, and I built a CLI that lets AI agents design in Figma.

What it does: 100 commands to create shapes, text, frames, components, modify styles, export assets. JSX importing that's ~100x faster than any plugin API import. Works with any LLM coding assistant.

Why I built it: The official Figma MCP server can only read files. I wanted AI to actually design – create buttons, build layouts, generate entire component systems. Existing solutions were either read-only or required verbose JSON schemas that burn through tokens.

Demo (45 sec): https://youtu.be/9eSYVZRle7o

Tech stack: Bun + Citty for CLI, Elysia WebSocket proxy, Figma plugin. The render command connects to Figma's internal multiplayer protocol via Chrome DevTools for extra performance when dealing with large groups of objects.

Try it: bun install -g @dannote/figma-use

Looking for feedback on CLI ergonomics, missing commands, and whether the JSX syntax feels natural.
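
The token argument is easy to see side by side: the same hypothetical card as JSX-ish markup versus the verbose JSON node tree a plugin-API import needs. Both shapes below are illustrative, not figma-use's exact formats.

    // Rough size comparison of the two input styles (illustrative shapes).
    const asJsx = `<frame name="Card" w={320} h={120} fill="#fff">
      <text size={18}>Hello, Figma</text>
    </frame>`;

    const asJson = JSON.stringify({
      type: "FRAME", name: "Card", width: 320, height: 120,
      fills: [{ type: "SOLID", color: { r: 1, g: 1, b: 1 } }],
      children: [{ type: "TEXT", characters: "Hello, Figma", style: { fontSize: 18 } }],
    }, null, 2);

    // The JSON form is several times longer before it even round-trips
    // through an LLM's context window.
    console.log(asJsx.length, asJson.length);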

Show HN: Beats, a web-based drum machine

Hello all!

I've been an avid fan of Pocket Operators by Teenage Engineering [0] since I found out about them. I even own an EP-133 K.O. II today, which I love.

A couple of months ago, Reddit user andiam03 shared a Google Sheet with some drum patterns [1]. I thought it was a very cool way to share and understand beats.

Over a weekend I coded a basic version of the app I am sharing today. I iterated on it in my free time, and yesterday I felt I had a pretty good version to share with y'all.

It's not meant to be a sequencer but rather a way to experiment with beats and basic sounds, save them, and use them in your songs. It also has a link-based sharing feature.

It was built using Tone.js [2] and Stimulus [3], and deployed on Render [4] as a static website. Since I have no knowledge of sound production, I used an LLM to read the Tone.js documentation and generate sounds, then modified them from there.

Anyway, hope you like it! I had a blast building it.

[0]: https://teenage.engineering
[1]: https://docs.google.com/spreadsheets/d/1GMRWxEqcZGdBzJg52soeVaY7iUSj1YncfIJZIPScBhM/edit
[2]: https://tonejs.github.io
[3]: https://stimulus.hotwired.dev
[4]: http://render.com
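
The post credits Tone.js [2] for the audio. As a flavor of how a pattern grid maps to sound, here is a minimal 16-step kick/snare loop using Tone.js's Sequence and synth APIs. It's a generic sketch rather than the app's actual code, and the #play button is assumed.

    import * as Tone from "tone";

    const kick = new Tone.MembraneSynth().toDestination();
    const snare = new Tone.NoiseSynth({ envelope: { decay: 0.15, sustain: 0 } }).toDestination();

    // 1 = hit, 0 = rest; a basic beat across one bar of sixteenth notes.
    const kickRow  = [1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0];
    const snareRow = [0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0];

    new Tone.Sequence((time, step: number) => {
      if (kickRow[step])  kick.triggerAttackRelease("C1", "8n", time);
      if (snareRow[step]) snare.triggerAttackRelease("16n", time);
    }, [...Array(16).keys()], "16n").start(0);

    // Browsers require a user gesture before audio can start.
    document.querySelector("#play")?.addEventListener("click", async () => {
      await Tone.start();
      Tone.getTransport().bpm.value = 110;
      Tone.getTransport().start();
    });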

Show HN: Lume 0.2 – Build and Run macOS VMs with unattended setup

Hey HN, Lume is an open-source CLI for running macOS and Linux VMs on Apple Silicon. Since launch (https://news.ycombinator.com/item?id=42908061), we've been using it to run AI agents in isolated macOS environments. We needed VMs that could set themselves up, so we built that.

Here's what's new in 0.2:

*Unattended Setup* – Go from IPSW to a fully configured VM without touching the keyboard. We built a VNC + OCR system that clicks through the macOS Setup Assistant automatically. No more manual setup before pushing to a registry:

    lume create my-vm --os macos --ipsw latest --unattended tahoe

You can write custom YAML configs to set up any macOS version your way.

*HTTP API + Daemon* – A REST API on port 7777 that runs as a background service. Your scripts and CI pipelines can manage VMs that persist even if your terminal closes:

    curl -X POST localhost:7777/lume/vms/my-vm/run -d '{"noDisplay": true}'

*MCP Server* – Native integration with Claude Desktop and AI coding agents. Claude can create, run, and execute commands in VMs directly:

    # Add to Claude Desktop config
    "lume": { "command": "lume", "args": ["serve", "--mcp"] }
    # Then just ask: "Create a sandbox VM and run my tests"

*Multi-location Storage* – macOS disk space is always tight, so based on user feedback we added support for external drives. Add an SSD, move VMs between locations:

    lume config storage add external-ssd /Volumes/ExternalSSD/lume
    lume clone my-vm backup --source-storage default --dest-storage external-ssd

*Registry Support* – Pull and push VM images from GHCR or GCS. Create a golden image once, share it across your team.

We're seeing people use Lume for:

- Running Claude Code in an isolated VM (your host stays clean; reset mistakes by cloning)
- CI/CD pipelines for Apple platform apps
- Automated UI testing across macOS versions
- Disposable sandboxes for security research

To get started:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/trycua/cua/main/libs/lume/scripts/install.sh)"
    lume create sandbox --os macos --ipsw latest --unattended tahoe
    lume run sandbox --shared-dir ~/my-project

Lume is MIT licensed and Apple Silicon only (M1/M2/M3/M4), since it uses Apple's native Virtualization Framework directly – no emulation.

Lume runs on EC2 Mac instances and Scaleway if you need cloud infrastructure. We're also working on a managed cloud offering for teams that need macOS compute on demand – if you're interested, reach out.

We're actively developing this as part of Cua (https://github.com/trycua/cua), our Computer Use Agent SDK. We'd love your feedback, bug reports, or feature ideas.

GitHub: https://github.com/trycua/cua
Docs: https://cua.ai/docs/lume

We'll be here to answer questions!
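
For scripts that would rather not shell out to curl, the same REST call from the HTTP API section looks like this in TypeScript. The route and body are taken verbatim from the post; the response shape beyond the status code isn't documented here, so it is just printed.

    // Start a VM headless via the Lume daemon's REST API (port 7777).
    const res = await fetch("http://localhost:7777/lume/vms/my-vm/run", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ noDisplay: true }), // e.g. for CI runs
    });
    console.log(res.status, await res.text());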

Show HN: Streaming gigabyte medical images from S3 without downloading them

Show HN: Dock – Slack minus the bloat, tax, and 90-day memory loss

Hey HN – I built Dock after years of team chat frustrations as a founder. Free forever for teams of up to 5. Unlimited search, unlimited history. No "upgrade to see messages older than 90 days" nonsense. Built for teams who work both async and in real time when it matters. Runs on SOC 2 infrastructure on Cloudflare, with encryption in transit and at rest.

Early stage – would love feedback from anyone who's felt the same pain.

Show HN: mdto.page – Turn Markdown into a shareable webpage instantly

Hi HN,

I built mdto.page because I often needed a quick way to share Markdown notes or documentation as a proper webpage, without setting up a GitHub repo or configuring a static site generator.

I wanted something dead simple: upload Markdown -> get a shareable public URL.

Key features:

- Instant publishing: no login or setup required.
- Flexible expiration: you can set links to expire automatically after 1 day, 7 days, 2 weeks, or 30 days. Great for temporary sharing.

It's free to use. I'd love to hear your feedback!

Show HN: 1Code – Open-source Cursor-like UI for Claude Code

Hi, we're Sergey and Serafim. We've been building dev tools at 21st.dev and recently open-sourced 1Code (https://1code.dev), a local UI for Claude Code.

Here's a video of the product: https://www.youtube.com/watch?v=Sgk9Z-nAjC0

Claude Code has been our go-to for 4 months. When Opus 4.5 dropped, parallel agents stopped needing so much babysitting. We started trusting it with more: building features end to end, adding tests, refactors – stuff you'd normally hand off to a developer. We started running 3-4 at once. Then the CLI became annoying: too many terminals, hard to track what's where, diffs scattered everywhere.

So we built 1Code.dev, an app to run your Claude Code agents in parallel that works on Mac and web. On Mac: run locally, with or without worktrees. On web: run in remote sandboxes with live previews of your app, mobile included, so you can check on agents from anywhere. Running multiple Claude Codes in parallel dramatically sped up how we build features.

What's next: a bug bot for identifying issues based on your changes; a QA agent that checks that new features don't break anything; adding OpenCode, Codex, and other models and coding agents; and an API for starting Claude Codes in remote sandboxes.

Try it out! We're open source, so you can just bun build it. If you want something hosted, Pro ($20/mo) gives you the web version with live browser previews hosted on remote sandboxes. We're also working on API access for running Claude Code sessions programmatically.

We'd love to hear your feedback!

Show HN: Reversing YouTube’s “Most Replayed” Graph

Hi HN,

I recently noticed a recurring visual artifact in the "Most Replayed" heatmap on the YouTube player: the highest peaks were always surrounded by two dips. I got curious about why they were there, so I decided to reverse engineer the feature to find out.

This post documents the deep dive. It starts with a system-design recreation, moves through reverse engineering the rendering code, and ends with the mathematics.

This is also my first attempt at writing an interactive article. I would love to hear your thoughts on the investigation and the format.

Show HN: Gambit, an open-source agent harness for building reliable AI agents

Hey HN!

We wanted to show our open-source agent harness, Gambit.

If you're not familiar, agent harnesses are sort of like an operating system for an agent: they handle tool calling, planning, and context window management, and don't require as much developer orchestration.

Normally you might see an agent orchestration framework pipeline like:

compute -> compute -> compute -> LLM -> compute -> compute -> LLM

With an agent harness, we invert this, so it's more like:

LLM -> LLM -> LLM -> compute -> LLM -> LLM -> compute -> LLM

Essentially, you describe each agent in either a self-contained markdown file or a TypeScript program. Your root agent can bring in other agents as needed, and we create a typesafe way for you to define the interfaces between those agents. We call these decks.

Agents can call agents, and each agent can be designed with whatever model params make sense for your task.

Additionally, each step of the chain gets automatic evals, which we call graders. A grader is another deck type, but it's designed to evaluate and score conversations (or individual conversation turns).

We also have test agents you can define on a deck-by-deck basis that are designed to mimic scenarios your agent would face and generate synthetic data for either humans or graders to grade.

Prior to Gambit, we had built an LLM-based video editor, and we weren't happy with the results, which is what brought us down this path of improving inference-time LLM quality.

We know it's missing some obvious parts, but we wanted to get this out there to see how it could help people or start conversations. We're really happy with how it's working with some of our early design partners, and we think it's a way to implement a lot of interesting applications:

- Truly open-source agents and assistants, where logic, code, and prompts can be easily shared with the community.
- Rubric-based grading to guarantee you (for instance) don't leak PII accidentally.
- Spin up a usable bot in minutes and have Codex or Claude Code use our command line runner / graders to build a first version that is pretty good with very little human intervention.

We'll be around if y'all have any questions or thoughts. Thanks for checking us out!

Walkthrough video: https://youtu.be/J_hQ2L_yy60
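
To make the inversion concrete, here is a toy harness loop in TypeScript where LLM turns drive the flow and deterministic compute is just another step. This is a generic illustration of the idea, not Gambit's actual deck API; callLLM is a stand-in for a real model call.

    type Step =
      | { kind: "llm"; prompt: string }
      | { kind: "compute"; fn: () => string };

    // Placeholder for a real model call (e.g. an OpenAI-compatible API).
    async function callLLM(prompt: string): Promise<string> {
      return `LLM response to: ${prompt}`;
    }

    // The harness walks whatever sequence the agent produces; compute steps
    // are deterministic islands inside an LLM-driven flow.
    async function runHarness(steps: Step[]): Promise<string[]> {
      const transcript: string[] = [];
      for (const step of steps) {
        transcript.push(step.kind === "llm" ? await callLLM(step.prompt) : step.fn());
      }
      return transcript;
    }

    // LLM -> LLM -> compute -> LLM, as in the pipeline diagram above.
    await runHarness([
      { kind: "llm", prompt: "plan the task" },
      { kind: "llm", prompt: "draft a solution" },
      { kind: "compute", fn: () => "ran tests: 12 passed" },
      { kind: "llm", prompt: "summarize results" },
    ]);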

Show HN: I built a text-based business simulator to replace video courses

I am a solo developer, and I built Core MBA because I was frustrated with the "video course" default in business education.

I wanted to build a "compiler for business logic": a tool where I could read a concept in 5 minutes and immediately test it in a hostile environment to see if my strategy actually compiles or throws a runtime error.

The project is a business simulator built on React 19 and TypeScript. The core technical innovation isn't just using AI; it's the architecture of a closed loop between a deterministic economic engine and a generative AI validation layer.

The biggest technical hurdle was building the Market Engine. I needed it to be mathematically rigorous, not a hallucinating chatbot. I wrote a custom `useMarketEngine.ts` hook that runs a discrete-event simulation. Every "run cycle," it solves a system of equations, including a specific Ad Fatigue formula – `1 / (1 + (power - 1) * fatigueFactor)` – to force diminishing returns.

I also coded the "Theory of Constraints" directly into the state management: the system enforces bottlenecks between Inventory, Demand, and Capacity. For instance, a single employee has a hard cap of 7 operations per day. If you scale demand beyond that without hiring, the system burns your cash on lost orders.

To handle the educational content, I moved away from hardcoded quizzes. I built a module that pipes the static lesson text into Gemini Flash to generate unique "Combat Cases" on the fly. The AI validates your strategy against the specific principles of the lesson (like LTV/CAC) rather than generic business advice.

These two engines are connected by a "Liquidity Loop": passing the AI cases earns you virtual capital ($500), which is the only fuel for the Market Engine. You literally cannot play the game if you don't learn the theory.

If you go bankrupt, my heuristic Advisor analyzes your crash data – comparing `lostRevenue` vs `lostCapacity` – and links you back to the exact lesson you ignored.

I am inviting you to test the full loop: read a brief, pass the AI simulation (Combat Cases), and try to survive in the Market Engine.

I specifically need feedback on:

1. The content: I aimed for maximum density – are the lessons *too* dry?
2. The AI simulation: does it accurately validate your logic based on the lesson?
3. The market economy: does the math feel balanced, or is the "Ad Fatigue" too punishing?
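
The post gives the Ad Fatigue multiplier and the 7-operations-per-employee cap explicitly, so they can be wired up as the description reads. How the multiplier is applied inside `useMarketEngine.ts` is my assumption, and the function names below are mine, not the project's (TypeScript, matching its stack).

    // Effective ad power after diminishing returns: at power = 1 there is
    // no penalty; pushing power higher is progressively discounted.
    function effectiveAdPower(power: number, fatigueFactor: number): number {
      return power * (1 / (1 + (power - 1) * fatigueFactor));
    }

    // Theory of Constraints bottleneck: demand beyond 7 ops per employee
    // per day is lost orders, not extra sales.
    function fulfilledOrders(demand: number, employees: number): number {
      return Math.min(demand, employees * 7);
    }

    console.log(effectiveAdPower(4, 0.5)); // 4 * (1 / 2.5) = 1.6, heavy fatigue
    console.log(fulfilledOrders(20, 2));   // 14 fulfilled, 6 lost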

Show HN: A 10KiB kernel for cloud apps

Show HN: Tabstack – Browser infrastructure for AI agents (by Mozilla)

Hi HN,

My team and I are building Tabstack to handle the "web layer" for AI agents. Launch post: https://tabstack.ai/blog/intro-browsing-infrastructure-ai-agents

Maintaining a complex infrastructure stack for web browsing is one of the biggest bottlenecks in building reliable agents. You start with a simple fetch, but quickly end up managing a complex stack of proxies, handling client-side hydration, debugging brittle selectors, and writing custom parsing logic for every site.

Tabstack is an API that abstracts that infrastructure. You send a URL and an intent; we handle the rendering and return clean, structured data for the LLM.

How it works under the hood:

- Escalation logic: We don't spin up a full browser instance for every request (which is slow and expensive). We attempt lightweight fetches first, escalating to full browser automation only when the site requires JS execution/hydration.
- Token optimization: Raw HTML is noisy and burns context window tokens. We process the DOM to strip non-content elements and return a markdown-friendly structure that is optimized for LLM consumption.
- Infrastructure stability: Scaling headless browsers is notoriously hard (zombie processes, memory leaks, crashing instances). We manage the fleet lifecycle and orchestration so you can run thousands of concurrent requests without maintaining the underlying grid.

On ethics: since we are backed by Mozilla, we are strict about how this interacts with the open web.

- We respect robots.txt rules.
- We identify our User Agent.
- We do not use requests/content to train models.
- Data is ephemeral and discarded after the task.

The linked post goes into more detail on the infrastructure and why we think browsing needs to be a distinct layer in the AI stack.

This is obviously a very new space and we're all learning together. There are plenty of known unknowns (and likely even more unknown unknowns) when it comes to agentic browsing, so we'd genuinely appreciate your feedback, questions, and tips.

Happy to answer questions about the stack, our architecture, or the challenges of building browser infrastructure.
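
A sketch of what the described "URL plus intent in, structured data out" call could look like from TypeScript. The endpoint path, field names, and response handling below are invented for illustration; consult Tabstack's own docs for the real API.

    // Hypothetical Tabstack call: send a URL and an intent, get back an
    // LLM-ready structure instead of raw HTML.
    const res = await fetch("https://api.tabstack.ai/v1/extract", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.TABSTACK_API_KEY}`,
      },
      body: JSON.stringify({
        url: "https://example.com/pricing",
        intent: "List each plan with its monthly price",
      }),
    });
    console.log(await res.json()); // markdown-friendly structure, per the post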

Show HN: TinyCity – A tiny city SIM for MicroPython (Thumby micro console)
