The best Hacker News stories from Show from the past day

Latest posts:

Show HN: WhatHappened – HN summaries, heatmaps, and contrarian picks

Hi HN,

I built WhatHappened (whathappened.tech) because I have a love/hate relationship with this site. I love the content, but the "wall of text" UI gives me FOMO. I was spending too much time clicking into vague titles ("Project X") or wading through flame wars just to find technical insights.

I built this tool to act as a filter. It generates a card for the top daily posts with a few specific features to cut the noise:

1. AI Summaries: It generates a technical TL;DR (3 bullet points) and an ELI5 version for every post.

2. The Heat Meter: I analyze the comment section to visualize the distribution: Constructive vs. Technical vs. Flame War. If a thread is 90% Flame War, I know to skip it (or grab popcorn).

3. Contrarian Detection: To break the echo chamber, the AI specifically hunts for the most upvoted disagreement or critique in the comments and pins it to the card.

4. Mobile-First PWA: I mostly read HN on my phone, so I designed this as a PWA. It supports swipe gestures and installs to the home screen without an app store.

Stack: Next.js, Gemini, Supabase.

It currently supports English and Chinese. Any feedback will be appreciated! My original X post: https://x.com/marsw42/status/1997087957556318663. Please share if you like it or find it helpful! :D

Thanks!
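
The Heat Meter boils down to a per-thread classification and tally. Below is a minimal TypeScript sketch of that aggregation step, assuming the LLM classifier (the part the site presumably does with Gemini) is passed in as a function; the HN Firebase endpoints are public, everything else is illustrative:

```typescript
// Sketch: tally a "heat" distribution for one HN thread.
// classify() is a stand-in for whatever LLM call the real site makes.
type Label = "constructive" | "technical" | "flame";

async function fetchItem(id: number): Promise<any> {
  const res = await fetch(`https://hacker-news.firebaseio.com/v0/item/${id}.json`);
  return res.json();
}

export async function heatMeter(
  storyId: number,
  classify: (text: string) => Promise<Label>
): Promise<Record<Label, number>> {
  const story = await fetchItem(storyId);
  const counts: Record<Label, number> = { constructive: 0, technical: 0, flame: 0 };

  // Only top-level comments here; the real tool may walk the whole tree.
  for (const kid of story.kids ?? []) {
    const comment = await fetchItem(kid);
    if (!comment?.text || comment.deleted) continue;
    counts[await classify(comment.text)]++;
  }

  // Normalize to percentages for the card UI.
  const total = Object.values(counts).reduce((a, b) => a + b, 0) || 1;
  return Object.fromEntries(
    Object.entries(counts).map(([k, v]) => [k, Math.round((100 * v) / total)])
  ) as Record<Label, number>;
}
```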

Show HN: Local Privacy Firewall – blocks PII and secrets before ChatGPT sees them

OP here.

I built this because I recently caught myself almost pasting a block of logs containing AWS keys into Claude.

The Problem: I need the reasoning capabilities of cloud models (GPT/Claude/Gemini), but I can't trust myself not to accidentally leak PII or secrets.

The Solution: A Chrome extension that acts as a local middleware. It intercepts the prompt and runs a local BERT model (via a Python FastAPI backend) to scrub names, emails, and keys before the request leaves the browser.

A few notes up front (to set expectations clearly):

Everything runs 100% locally. Regex detection happens in the extension itself. Advanced detection (NER) uses a small transformer model running on localhost via FastAPI.

No data is ever sent to a server. You can verify this in the code and in the DevTools network panel.

This is an early prototype. There will be rough edges. I'm looking for feedback on UX, detection quality, and whether the local-agent approach makes sense.

Tech Stack:

- Manifest V3 Chrome Extension
- Python FastAPI (localhost)
- HuggingFace dslim/bert-base-NER

Roadmap / Request for Feedback: Right now, the Python backend adds some friction. I received feedback on Reddit yesterday suggesting I port the inference to transformer.js to run entirely in-browser via WASM.

I decided to ship v1 with the Python backend for stability, but I'm actively looking into the ONNX/WASM route for v2 to remove the local server dependency. If anyone has experience running NER models via transformer.js in a Service Worker, I'd love to hear about the performance vs. native Python.

The repo is MIT licensed.

Very open to ideas, suggestions, or alternative approaches.
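
As a rough illustration of the regex layer described above (the in-extension pass that runs before the NER model sees anything), here is a minimal TypeScript sketch; the patterns and placeholders are illustrative, not the extension's actual rules:

```typescript
// Sketch of regex-based scrubbing only; the NER pass is a separate step.
// Patterns here are common examples, not the extension's real rule set.
const PATTERNS: Array<[RegExp, string]> = [
  [/AKIA[0-9A-Z]{16}/g, "[AWS_ACCESS_KEY]"],            // AWS access key IDs
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"],              // email addresses
  [/-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----/g,
   "[PRIVATE_KEY]"],                                     // PEM private keys
];

export function scrubPrompt(prompt: string): { clean: string; hits: number } {
  let clean = prompt;
  let hits = 0;
  for (const [re, placeholder] of PATTERNS) {
    clean = clean.replace(re, () => {
      hits++;
      return placeholder;
    });
  }
  return { clean, hits }; // hits lets the UI warn the user before sending
}
```

The returned hit count is what a content script could use to block or warn on submit before the request ever reaches the chat site.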

Show HN: Sim – Apache-2.0 n8n alternative

Hey HN, Waleed here. We're building Sim (https://sim.ai/), an open-source visual editor to build agentic workflows. Repo here: https://github.com/simstudioai/sim/. Docs here: https://docs.sim.ai.

You can run Sim locally using Docker, with no execution limits or other restrictions.

We started building Sim almost a year ago after repeatedly troubleshooting why our agents failed in production. Code-first frameworks felt hard to debug because of implicit control flow, and workflow platforms added more overhead than they removed. We wanted granular control and easy observability without piecing everything together ourselves.

We launched Sim [1][2] as a drag-and-drop canvas around 6 months ago. Since then, we've added:

- 138 blocks: Slack, GitHub, Linear, Notion, Supabase, SSH, TTS, SFTP, MongoDB, S3, Pinecone, ...
- Tool calling with granular control: forced, auto
- Agent memory: conversation memory with sliding window support (by last n messages or tokens)
- Trace spans: detailed logging and observability for nested workflows and tool calling
- Native RAG: upload documents; we chunk, embed with pgvector, and expose vector search to agents
- Workflow deployment versioning with rollbacks
- MCP support, human-in-the-loop block
- Copilot to build workflows using natural language (just shipped a new version that also acts as a superagent and can call into any of your connected services directly, not just build workflows)

Under the hood, the workflow is a DAG with concurrent execution by default. Nodes run as soon as their dependencies (upstream blocks) are satisfied. Loops (for, forEach, while, do-while) and parallel fan-out/join are also first-class primitives.

Agent blocks are pass-through to the provider. You pick your model (OpenAI, Anthropic, Gemini, Ollama, vLLM), and we pass through prompts, tools, and response format directly to the provider API. We normalize response shapes for block interoperability, but we're not adding layers that obscure what's happening.

We're currently working on our own MCP server and the ability to deploy workflows as MCP servers. Would love to hear your thoughts and where we should take it next :)

[1] https://news.ycombinator.com/item?id=43823096

[2] https://news.ycombinator.com/item?id=44052766
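
For readers curious what "nodes run as soon as their upstream blocks are satisfied" looks like in practice, here is a minimal TypeScript sketch of that scheduling idea. It is not Sim's code, and it ignores loops, fan-out, and error handling:

```typescript
// Minimal DAG scheduler sketch: each node starts as soon as all of its
// upstream dependencies have resolved, so independent branches run concurrently.
type Node = {
  id: string;
  deps: string[];
  run: (inputs: Record<string, unknown>) => Promise<unknown>;
};

export async function execute(nodes: Node[]): Promise<Record<string, unknown>> {
  const done = new Map<string, Promise<unknown>>();

  const start = (node: Node): Promise<unknown> =>
    Promise.all(node.deps.map((d) => done.get(d)!)).then((values) =>
      node.run(Object.fromEntries(node.deps.map((d, i) => [d, values[i]])))
    );

  // For this sketch, nodes must be listed in a valid topological order.
  for (const node of nodes) done.set(node.id, start(node));

  const results = await Promise.all([...done.values()]);
  return Object.fromEntries([...done.keys()].map((k, i) => [k, results[i]]));
}
```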

The highest quality codebase

Show HN: VoxCSS – A DOM-based voxel engine

Show HN: Wirebrowser – A JavaScript debugger with breakpoint-driven heap search

Hi HN!

I'm building a JavaScript debugger called Wirebrowser. It combines network inspection, request rewriting, heap snapshots, and live object search.

The main experimental feature is BDHS (Breakpoint-Driven Heap Search): it hooks into the JavaScript debugger, automatically captures a heap snapshot at every pause, and performs a targeted search for the value or structure of interest. This reveals the moment a value appears in memory and the user-land function responsible for creating it.

Another interesting feature is the Live Object Search: it inspects runtime objects (not just snapshots), supports regex and object similarity, and lets you patch objects directly at runtime.

Whitepaper: https://fcavallarin.github.io/wirebrowser/BDHS-Origin-Trace

Feedback very welcome, especially on whether BDHS would help your debugging workflow.
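
For context, the BDHS loop can be approximated with the raw Chrome DevTools Protocol: pause, snapshot, search, resume. A rough TypeScript sketch using chrome-remote-interface follows; Wirebrowser's own implementation is certainly richer, and breakpoint setup is omitted:

```typescript
// Rough BDHS-style sketch over the Chrome DevTools Protocol.
// Assumes Chrome was started with --remote-debugging-port and that a
// breakpoint has been set elsewhere so Debugger.paused actually fires.
import CDP from "chrome-remote-interface";

export async function bdhs(needle: string): Promise<void> {
  const client = await CDP();
  const { Debugger, HeapProfiler } = client;

  let snapshot = "";
  client.on("HeapProfiler.addHeapSnapshotChunk", (params: { chunk: string }) => {
    snapshot += params.chunk; // snapshot arrives as streamed JSON chunks
  });

  await Debugger.enable();
  await HeapProfiler.enable();

  client.on("Debugger.paused", async () => {
    snapshot = "";
    await HeapProfiler.takeHeapSnapshot({ reportProgress: false });
    if (snapshot.includes(needle)) {
      // The value is live in the heap at this pause; the current call stack
      // is a candidate for where it was created.
      console.log(`"${needle}" is present in the heap at this breakpoint`);
    }
    await Debugger.resume();
  });
}
```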

Show HN: A 2-row, 16-key keyboard designed for smartphones

Mobile keyboards today are almost entirely based on the 26-key, 3-row QWERTY layout. Here’s a new 2-row, 16-key alternative designed specifically for smartphones.

Show HN: Automated license plate reader coverage in the USA

Built this over the last few days, based on a Rust codebase that parses the latest ALPR reports from OpenStreetMap, calculates navigation statistics from every tagged residential building to nearby amenities, and tests each route for intersection with those ALPR cameras (Flock being the most widespread).

These have gotten more controversial in recent months due to their indiscriminate, large-scale data collection, with 404 Media publishing many original pieces (https://www.404media.co/tag/flock/) about their adoption and (ab)use across the country. I wanted to use open-source datasets to track the rapid expansion, especially per county, as this data can be crucial for 'deflock' movements to petition counties and city governments to ban and remove them.

In some counties, the tracking has become so widespread that most people can't go anywhere without being photographed. This includes possibly sensitive areas, like places of worship and medical facilities.

The argument for their legality rests on the notion that these cameras are equivalent to 'mere observation', but the enormous scope and the data-sharing agreements in place to share and access millions of records without warrants blur the lines of the Fourth Amendment.
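
For anyone wanting to reproduce the data-gathering step, camera locations can be pulled from OpenStreetMap via the Overpass API. A small TypeScript sketch, assuming the common ALPR tagging convention (man_made=surveillance with surveillance:type=ALPR); the original project does this in Rust and layers routing analysis on top:

```typescript
// Sketch: fetch ALPR camera nodes inside a bounding box from Overpass.
// Tag names follow the usual OSM surveillance convention; adjust if the
// dataset you target is tagged differently.
export async function fetchAlprCameras(
  bbox: [number, number, number, number] // [south, west, north, east]
): Promise<Array<{ id: number; lat: number; lon: number }>> {
  const [s, w, n, e] = bbox;
  const query = `
    [out:json][timeout:60];
    node["man_made"="surveillance"]["surveillance:type"="ALPR"](${s},${w},${n},${e});
    out body;`;

  const res = await fetch("https://overpass-api.de/api/interpreter", {
    method: "POST",
    body: new URLSearchParams({ data: query }),
  });
  const json = await res.json();

  return json.elements.map((el: any) => ({ id: el.id, lat: el.lat, lon: el.lon }));
}
```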

Show HN: Persistent memory for Claude Code sessions
