The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: A new benchmark for testing LLMs for deterministic outputs

When building workflows that rely on LLMs, we commonly use structured output for programmatic use cases like converting an invoice into rows, meeting transcripts into tickets, or even complex PDFs into database entries.<p>The model may return the schema you want, but with hallucinated values: an `invoice_date` off by 2 months, or the transcript array in the wrong order. The JSON is valid, but the values are not.<p>Structured output is a big part of using LLMs today, especially when building deterministic workflows.<p>Current structured output benchmarks (e.g., JSONSchemaBench) only validate the pass rate for JSON schema and types, not the actual values within the produced JSON.<p>So we designed the Structured Output Benchmark (SOB), which fixes this by measuring schema pass rate, type correctness, and value accuracy across three modalities: text, image, and audio.<p>For our test set, every record is paired with a JSON Schema and a ground-truth answer that was verified against the source context manually by a human and an LLM cross-check, so a missing or hallucinated value counts as wrong.<p>Open source is doing well, with GLM-4.7 coming in at number 2, right after GPT-5.4.<p>We noticed the rankings shift across modalities: GLM-4.7 leads text, Gemma-4-31B leads images, Gemini-2.5-Flash leads audio.<p>For example, GPT-5.4 ranks 3rd on text but 9th on images.<p>Model size is not a predictor, either: Qwen3.5-35B and GLM-4.7 beat GPT-5 and Claude-Sonnet-4.6 on value accuracy, and Phi-4 (14B) beats GPT-5 and GPT-5-mini on text.<p>Structured hallucinations are the hardest bug. Such values are type-correct, schema-valid, and plausible, so they slip through most guardrails. For example, in one audio record, the ground truth is "target_market_age": "15 to 35 years", and a model returns "25 to 35". This is invisible without field-level checks.<p>Our goal is to build the best general model for deterministic tasks, and a key aspect of determinism is a controllable, consistent output structure. The first step to making structured output better is to measure it and hold ourselves to the best.
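A minimal sketch of the distinction (my own illustration, not SOB's harness; the schema, dates, and field names are made up): a schema-only benchmark accepts the output below, while a field-level value check against ground truth catches the hallucinated date.

```python
# Hypothetical invoice-extraction record: schema-valid output with one wrong value.
schema = {"invoice_date": str, "total": float}
ground_truth = {"invoice_date": "2026-01-15", "total": 1200.0}
model_output = {"invoice_date": "2026-03-15", "total": 1200.0}  # date off by 2 months

def schema_valid(out, schema):
    # Schema pass: right keys, right types -- says nothing about the values.
    return set(out) == set(schema) and all(
        isinstance(out[k], t) for k, t in schema.items())

def value_accuracy(out, truth):
    # Field-level check: fraction of fields that exactly match ground truth.
    return sum(out.get(k) == v for k, v in truth.items()) / len(truth)

print(schema_valid(model_output, schema))          # passes a schema-only benchmark
print(value_accuracy(model_output, ground_truth))  # field-level check flags the miss
```

The same output scores 100% on a schema-only metric and 50% on value accuracy, which is exactly the gap the benchmark is meant to expose.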

Show HN: Adblock-rust Manager – Firefox extension to enable the Brave ad blocker

Firefox 149 ships adblock-rust (Brave's Rust engine, MPL-2.0) completely disabled with no UI. It's controlled by two about:config prefs with no WebExtension API, so you can't touch them programmatically from a standard extension.<p>This extension gives it a UI: ETP toggle (via browser.privacy API, instant), filter list manager with clipboard helpers for the manual about:config steps, and 8 preset lists. You can also add your own if you so desire.

Show HN: Rocky – Rust SQL engine with branches, replay, column lineage

Hi HN, I'm Hugo. I've been building Rocky over the past month, shipping fast in the open. The binary is on GitHub Releases, `dagster-rocky` on PyPI, and the VS Code extension on the Marketplace. I held off on a broader announcement until the trust-system surface was coherent enough to talk about as one thing. The governance waveplan — column classification, per-env masking, 8-field audit trail on every run, `rocky compliance` rollup, role-graph reconciliation, retention policies — landed end-to-end last week in engine-v1.16.0 and rounded out in v1.17.4 (tagged 2026-04-26). That's the milestone I'd been waiting for.<p>The pitch: keep Databricks or Snowflake. Bring Rocky for the DAG. Rocky is a Rust-based control plane for warehouse pipelines. Storage and compute stay with your warehouse. Rocky owns the graph — dependencies, compile-time types, drift, incremental logic, cost, lineage, governance. The things your current stack can't give you because it doesn't own the DAG.<p>A few things I think are interesting:<p>- Branches + replay. `rocky branch create stg` gives you a logical copy of a pipeline's tables (schema-prefix today; native Delta SHALLOW CLONE and Snowflake zero-copy are next). `rocky replay <run_id>` reconstructs which SQL ran against which inputs. Git-grade workflow on a warehouse.<p>- Column-level lineage from the compiler, not a post-hoc graph crawl. The type checker traces columns through joins, CTEs, and windows. VS Code surfaces it inline via LSP.<p>- Governance as a first-class surface. Column classification tags plus per-env masking policies, applied to the warehouse via Unity Catalog (Databricks) or masking policies (Snowflake). 8-field audit trail on every run. `rocky compliance` rollup that CI can gate on. Role-graph reconciliation via SCIM + per-catalog GRANT. Retention policies with a warehouse-side drift probe.<p>- Cost attribution. Every run produces per-model cost (bytes, duration). 
`[budget]` blocks in `rocky.toml`; breaches fire a `budget_breach` hook event.<p>- Compile-time portability + blast radius. Dialect-divergence lint across Databricks / Snowflake / BigQuery / DuckDB (12 constructs). `SELECT *` downstream-impact lint.<p>- Schema-grounded AI. Generated SQL goes through the compiler — AI suggestions type-check before they can land.<p>What Rocky isn't:<p>- Not a warehouse — it's the control plane on top.<p>- Not a Fivetran replacement. `rocky load` handles files (CSV/Parquet/JSONL); for SaaS sources use Fivetran, Airbyte, or warehouse-native CDC.<p>- Not dbt Cloud — no hosted UI, no managed scheduler. First-class Dagster integration if you need orchestration.<p>Adapters: Databricks (GA), Snowflake (Beta), BigQuery (Beta), DuckDB (local dev / playground). Apache 2.0.<p>I'd love feedback on the trust-system framing, the governance surface (particularly classification-to-masking resolution in `rocky compile` and the `rocky compliance` CI gate), the branches/replay design, the cost-attribution primitives, or anything else that catches your eye. Happy to go deep in the thread.
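To make the column-lineage idea concrete, here is a toy propagation sketch (my illustration, not Rocky's compiler; the model and column names are invented): each model maps its output columns to the upstream columns they derive from, and a column's lineage is the transitive closure of that mapping down to raw sources.

```python
# Hypothetical dependency map: output column -> list of (upstream model, column).
models = {
    "stg_orders":  {"order_id": [("raw_orders", "id")],
                    "amount":   [("raw_orders", "amount_usd")]},
    "fct_revenue": {"order_id": [("stg_orders", "order_id")],
                    "revenue":  [("stg_orders", "amount")]},
}

def lineage(model, column):
    """Trace a column back to source tables not defined as models."""
    sources = set()
    for up_model, up_col in models.get(model, {}).get(column, []):
        if up_model in models:
            sources |= lineage(up_model, up_col)  # keep walking upstream
        else:
            sources.add((up_model, up_col))       # hit a raw source table
    return sources

print(lineage("fct_revenue", "revenue"))  # traces back through stg_orders
```

A compiler-backed version does this from the typed SQL AST rather than a hand-declared map, which is what lets it follow columns through joins, CTEs, and windows.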

Show HN: Rip.so – a graveyard for dead internet things

Show HN: Auto-Architecture: Karpathy's Loop, pointed at a CPU

Show HN: AgentSwift – Open-source iOS builder agent

I'm working on a coding agent for building iOS apps. It's built on openspec and xcodebuildmcp. It's free and open source.

Show HN: Drive any macOS app in the background without stealing the cursor

Hi HN, Francesco from Cua here. I hacked this project together last weekend, inspired by the Codex Computer-Use release and lessons learned from deploying GUI-operating agents for our customers.<p>The main problem: when a UI automation process controls a desktop app today, it usually takes over the human’s session. Your cursor moves, keyboard focus gets stolen, windows jump to the front, and you have to stop working until the agent is done. That is why we have historically avoided encouraging users to run these processes directly on their host machine, instead relying on VMs or GUI containers for concurrency and background execution.<p>But computer-use - the tools we give agents to operate computers like humans - does not scale cleanly that way. As models get smarter, agents need to share hosts safely, run in the background, and avoid collisions with the human or other agents using the same machine.<p>We realized macOS has no first-class API for "drive this app without touching the cursor". CGEventPost routes through the hardware input stream, so it moves your cursor. CGEvent.postToPid avoids the cursor warp, but Chromium treats those events as untrusted and silently drops clicks at the renderer boundary. Activating the target app first raises the window and pulls focus, defeating the point of background execution.<p>Cua Driver is our attempt at a real fix: a background computer-use driver for macOS that lets an agent click, type, scroll, and read native apps while your cursor, frontmost app, and Space stay where they are. The default interface is a CLI, so it is easy to script or call from any coding agent shell.<p>Try it on macOS 14+:<p>/bin/bash -c "$(curl -fsSL <a href="https://raw.githubusercontent.com/trycua/cua/main/libs/cua-driver/scripts/install.sh" rel="nofollow">https://raw.githubusercontent.com/trycua/cua/main/libs/cua-d...</a>)"<p>The first internal use case was delegated demo recording. 
We ask Claude Code to drive an app while 'cua-driver recording start' captures the trajectory, screenshots, actions, and click markers. The result is a Screen Studio-style, agent-generated product demo.<p>Other things we have used it for:<p>- Replacing Vercel’s agent-browser and other browser-use CLIs. With Claude Code and Cua Driver, you do not need Chrome DevTools Protocol at all.<p>- A dev-loop QA agent that reproduces a visual bug, edits code, rebuilds, and verifies the UI while my editor stays frontmost.<p>- Personal-assistant flows that use iMessage from Claude Code, Hermes, or other general-purpose agent CLIs.<p>- Pulling visual context from Chrome, Figma, Preview, or YouTube windows I am not looking at, without relying on their APIs.<p>What made this harder than expected:<p>- CGEventPost warps the cursor because it goes through the HID stream.<p>- CGEvent.postToPid does not warp the cursor, but Chromium drops it at the renderer IPC boundary.<p>- Activating the target first raises the window and can drag you across Spaces.<p>- Without a private remote-aware SPI, Electron apps stop keeping useful AX trees alive when windows are occluded.<p>The unlock was SkyLight. SLEventPostToPid is a sibling of the public per-PID call, but it travels through a WindowServer channel Chromium accepts as trusted. Pair it with yabai’s focus-without-raise pattern, plus an off-screen primer click at (-1, -1), and the click lands without the window ever raising.<p>One thing we learned: the right addressing mode depends on the app. Native macOS apps usually have rich AX trees, Chromium-family apps often need a hybrid of AX and screenshots, and apps like Blender or CAD tools may expose almost no useful AX surface.
The mistake is defaulting to pixels everywhere - or defaulting to AX everywhere.<p>Long technical writeup: <a href="https://github.com/trycua/cua/blob/main/blog/inside-macos-window-internals.md" rel="nofollow">https://github.com/trycua/cua/blob/main/blog/inside-macos-wi...</a><p>I would like feedback from people building Mac automation, agent harnesses, or accessibility tooling. If it breaks on a macOS app you care about, that is useful data for us.
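The per-app addressing decision can be sketched as a simple dispatch (a hedged illustration only; the heuristic, threshold, and function names are mine, not Cua Driver's actual logic):

```python
# Pick how an agent should address UI elements in a given app:
# rich AX tree -> accessibility; Chromium-family -> hybrid; bare AX -> pixels.
def pick_mode(ax_node_count: int, is_chromium_family: bool) -> str:
    if is_chromium_family:
        return "hybrid"          # AX for structure, screenshots for content
    if ax_node_count >= 50:      # arbitrary cutoff for "rich" AX trees
        return "accessibility"   # native apps usually expose rich AX trees
    return "pixels"              # Blender/CAD-style apps expose little AX

print(pick_mode(500, False))  # a typical native macOS app
print(pick_mode(10, False))   # a 3D/CAD-style app with a bare AX surface
```

The point of the sketch is the shape of the decision, not the numbers: a single default (all pixels or all AX) fails on some app family, so the driver has to classify per target.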

Show HN: Live Sun and Moon Dashboard with NASA Footage

Show HN: Startup Equity Adventure Game

I put this together (with Claude) as a semi-gamified way for folks to learn about startup equity. Take a look, and share your scorecard :)

Show HN: A free ESG stock screener that publishes its losses and methodology

Hey HN, JSS (JumpstartSignal) is a free, ESG-filtered daily stock screener. I built it after some really badly timed quantum computing stock buys, which made me want to learn about systematic, longer-horizon approaches and the underlying technicals instead of chasing themes. Three things about it that might be of interest:<p>1. The methodology is fully documented at <a href="https://jumpstartsignal.com/how-it-works/" rel="nofollow">https://jumpstartsignal.com/how-it-works/</a>: a 5-stage pipeline, 54 signals tested individually plus 1,836 combinations evaluated, and walk-forward validation across 25 hold periods. Nothing is hand-tuned to a single backtest window.<p>2. Wins, misses, and losses are published as case studies. For example, <a href="https://jumpstartsignal.com/case-studies/nvda/" rel="nofollow">https://jumpstartsignal.com/case-studies/nvda/</a> walks through the 32 times the system flagged NVDA, starting at $5.44 in 2018. <a href="https://jumpstartsignal.com/case-studies/sedg/" rel="nofollow">https://jumpstartsignal.com/case-studies/sedg/</a> shows a -49% loss, and <a href="https://jumpstartsignal.com/case-studies/tsla/" rel="nofollow">https://jumpstartsignal.com/case-studies/tsla/</a> explains why the system <i>never</i> flagged Tesla (it passed Stages 1 and 2 on 207 days but only peaked at 20/100 in scoring vs. the 70 needed for the OPPORTUNITY tier). <a href="https://jumpstartsignal.com/results/" rel="nofollow">https://jumpstartsignal.com/results/</a> also shows the 10 best entries alongside the 10 worst.<p>3. A genetic algorithm picked the signal weights, but was constrained to maintain alpha across multiple market regimes (otherwise it overfits to a single bull market). The constraint dropped some "best in backtest" configurations that only worked 2018-2021.<p>Topline: a 2012-2025 backtest at the SPOTLIGHT + OPPORTUNITY tiers produced +163% alpha vs. SPY (the results page has the per-trade breakdown).<p>The daily watchlist is emailed free; reports, results, and case studies are publicly browsable without signup.<p>Happy to take questions about the methodology, what the system gets wrong, or why specific tickers landed where they did.
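The walk-forward idea can be sketched in a few lines (an illustrative generator under my own assumptions; the window sizes and function name are not from the site's actual pipeline): each configuration is fit on one window and scored only on the period that follows it, so nothing is tuned on its own test data.

```python
# Generate (fit_window, holdout_window) pairs that roll forward through time.
def walk_forward_splits(n_periods, train, test):
    splits = []
    start = 0
    while start + train + test <= n_periods:
        splits.append((range(start, start + train),
                       range(start + train, start + train + test)))
        start += test  # advance by the holdout length so holdouts never overlap
    return splits

# 12 periods, fit on 6, hold out the next 2 -> three non-overlapping holdouts.
for fit, hold in walk_forward_splits(12, train=6, test=2):
    print(list(fit), "->", list(hold))
```

A configuration that only shines in one regime fails the later holdouts, which is how "best in backtest" candidates get dropped.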

Show HN: The Unix Magic poster, annotated (updated)

This is a site that maps the references on Gary Overacre's 1980s UNIX Magic poster to short write-ups with sources. I posted an earlier version about a year ago [1]. Since then I rewrote some of the annotations, added deep-linking to individual markers and a frame/sidebar view, gave the site a terminal-style redesign, and fixed historical inaccuracies (daemon etymology, nroff origin, B language vs. Multics, etc.).<p>Contributions and comments welcome; each marker is a GitHub issue.<p>site: <a href="https://unixmagic.net" rel="nofollow">https://unixmagic.net</a><p>[1] <a href="https://news.ycombinator.com/item?id=43019136">https://news.ycombinator.com/item?id=43019136</a>

Show HN: Tiao, A two-player turn-based board game

Hi HN,<p>I built this digital version of Tiao, a two-player, turn-based strategy board game. Think Checkers meets Go. It's free, runs in the browser, and has multiplayer, AI, an over-the-board mode, and a lot of other neat things. The source is on GitHub (AGPL).<p>The game was originally designed by my friend Andreas Edmeier. He created the rules and has been playtesting and refining the game design for years. I built the website for it; the core took about 2 weeks using TypeScript, Next.js, Express, WebSockets, and MongoDB. Fully dockerized, deployed on a Hetzner VPS with Coolify. Authentication with better-auth. Real-time gameplay, Elo matchmaking, OpenPanel analytics, and a fully functional achievements system.<p>Play it: <a href="https://playtiao.com" rel="nofollow">https://playtiao.com</a> Source: <a href="https://github.com/trebeljahr/tiao" rel="nofollow">https://github.com/trebeljahr/tiao</a><p>Happy to answer questions about the tech, the game design, or anything else.<p>My hope is that more people will play this game, because I think it is genuinely fun, and it would be cool to one day see people play it on a Go board or on their phones/computers.<p>Have a good one.

Show HN: Utilyze – an open source GPU monitoring tool more accurate than nvtop

The standard GPU utilization metric reported by nvidia-smi, nvtop, Weights & Biases, Amazon CloudWatch, Google Cloud Monitoring, and Azure Monitor is highly misleading. It reports the fraction of time that any kernel is running on the GPU, which means a GPU can report 100% utilization even if only a small portion of its compute capacity is actually being used. In practice, we've seen workloads with ~1–10% real compute throughput while dashboards show 100%.<p>This becomes a problem when teams rely on that metric for capacity planning or optimization decisions: it can make underutilized systems look saturated.<p>We're releasing an open-source (Apache 2.0) tool, Utilyze, to measure GPU utilization differently. It samples hardware performance counters and reports compute and memory throughput relative to the hardware's theoretical limits. It also estimates an attainable utilization ceiling for a given workload.<p>GitHub link: <a href="https://github.com/systalyze/utilyze" rel="nofollow">https://github.com/systalyze/utilyze</a><p>We'd love to hear your thoughts!
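The gap between the two metrics is easy to see with toy numbers (my illustration, not Utilyze's code; the samples and peak figure are invented): a memory-bound workload can keep some kernel resident in every sample while doing a tiny fraction of peak FLOPs.

```python
# Ten hypothetical counter samples: a kernel is always resident (so the
# time-based metric reads 100%), but each slice does only 4 GFLOPs.
samples = [{"kernel_active": True, "flops": 4e9} for _ in range(10)]
PEAK_FLOPS_PER_SLICE = 1e11  # made-up theoretical peak per sampling slice

# nvidia-smi-style metric: fraction of slices with any kernel running.
time_util = sum(s["kernel_active"] for s in samples) / len(samples)

# Throughput-style metric: FLOPs done relative to the theoretical peak.
compute_util = sum(s["flops"] for s in samples) / (PEAK_FLOPS_PER_SLICE * len(samples))

print(f"{time_util:.0%} busy, {compute_util:.0%} of peak compute")
```

Both numbers come from the same samples; only the denominator differs, which is why "100% utilization" and "4% of peak" can describe the same workload.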

Show HN: A terminal spreadsheet editor with Vim keybindings

While speccing out this spreadsheet tool, I realized that I never had to think about the keybindings. It all just came naturally from Vim. Normal/insert/visual modes, hjkl navigation, dd/yy/p, :w, :q. The usual muscle memory works.<p>It supports CSV/TSV import and export, and a native .cell format that preserves formulas. The formula engine handles SUM, AVERAGE, COUNT, MIN, MAX, and IF with range references.<p>The codebase is a Cargo workspace: a pure cell-sheet-core library (no TUI dependency) and a cell-sheet-tui crate on top of ratatui. Early days, but it's usable.<p>To try it out: cargo install cell-sheet-tui<p>Feedback of any kind is greatly appreciated!
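A toy illustration of what resolving a range reference like SUM(A1:A3) involves (my sketch, not cell-sheet's implementation): translate A1-style references into coordinates, then fold the aggregate over the rectangle they span.

```python
import re

def parse_ref(ref):
    """Translate an A1-style reference into zero-based (col, row)."""
    m = re.fullmatch(r"([A-Z]+)(\d+)", ref)
    col = 0
    for ch in m.group(1):          # letters are a base-26 column index
        col = col * 26 + (ord(ch) - ord("A") + 1)
    return col - 1, int(m.group(2)) - 1

def eval_sum(grid, rng):
    """SUM over a range; grid maps (col, row) -> number, empty cells count as 0."""
    (c1, r1), (c2, r2) = (parse_ref(r) for r in rng.split(":"))
    return sum(grid.get((c, r), 0)
               for c in range(c1, c2 + 1) for r in range(r1, r2 + 1))

grid = {parse_ref("A1"): 1, parse_ref("A2"): 2, parse_ref("A3"): 3}
print(eval_sum(grid, "A1:A3"))  # 6
```

A real engine adds a parser, dependency tracking, and recalculation order on top, but the range-to-coordinates step is the core of every range-aware function.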
