The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: I made a spreadsheet where formulas also update backwards
Hello HN! I'm happy to release this project today. It's a bidirectional calculator (hence the name bidicalc).<p>I've been obsessed with the idea of making a spreadsheet where you can update both inputs and outputs, instead of regular spreadsheets where you can only update inputs.<p>Please let me know what you think! Especially if you find bugs or good example use cases.
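The bidirectional idea can be sketched in a few lines. This is a hypothetical two-cell example to illustrate the concept, not bidicalc's actual implementation: a formula like c = a + b is paired with an inverse, so editing the output solves for a designated input instead of being overwritten on recompute.

```python
# Hypothetical sketch of bidirectional evaluation, not bidicalc's actual code.
# A cell c = a + b can be updated in either direction: editing c solves for
# one designated free input instead of being clobbered on the next recompute.

def forward(a, b):
    """Normal spreadsheet direction: inputs -> output."""
    return a + b

def backward(c, b):
    """Reverse direction: given the output and one input, solve for the other."""
    return c - b

a, b = 2.0, 3.0
c = forward(a, b)       # c == 5.0
a = backward(7.0, b)    # user edits c to 7.0; a is solved to 4.0
assert forward(a, b) == 7.0
```

The interesting design questions start when a formula has several free inputs (which one absorbs the edit?) or is not invertible in closed form.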
Show HN: tomcp.org – Turn any URL into an MCP server
Prepend tomcp.org/ to any URL to instantly turn it into an MCP server.<p>You can either chat directly with the page or add the config to Cursor/Claude to pipe the website/docs straight into your context.<p>Why MCP? Using MCP is better than raw scraping or copy-pasting because it converts the page into clean Markdown. This helps the AI understand the structure better and uses significantly fewer tokens.<p>How it works: It is a proxy that fetches the URL, removes ads and navigation, and exposes the clean content as a standard MCP Resource.<p>Repo: <a href="https://github.com/Ami3466/tomcp" rel="nofollow">https://github.com/Ami3466/tomcp</a> (Inspired by GitMCP, but for the general web)
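The "clean content" step can be approximated with a standard-library HTML parser. This is a minimal sketch under assumed behavior (drop script/style/nav/header/footer, keep visible text); the real tomcp proxy does more, including Markdown conversion and ad removal.

```python
# Minimal sketch of the content-cleaning step, assuming an HTMLParser-based
# stripper; the real tomcp proxy is more involved (ads, nav, Markdown output).
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    SKIP = {"script", "style", "nav", "header", "footer"}

    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting depth inside skipped elements
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if not self.depth and data.strip():
            self.chunks.append(data.strip())

def clean_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)

page = "<nav>Menu</nav><h1>Docs</h1><p>Hello</p><script>x()</script>"
assert clean_text(page) == "Docs\nHello"
```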
Show HN: Epstein's emails reconstructed in a message-style UI (OCR and LLMs)
This project reconstructs the Epstein email records from the recent U.S. House Oversight Committee releases using only public-domain documents (23,124 image files + 2,800 OCR text files).<p>Most email pages contain only one real message, buried under layers of repeated headers/footers. I wanted to rebuild the conversations without all the surrounding noise.<p>I used an OCR + vision-LLM pipeline to extract individual messages from the email screenshots, normalize senders/recipients, rebuild timestamps, detect duplicates, and map threads. The output is a structured SQLite database that runs client-side via SQL.js (WebAssembly).<p>The repository includes the full extraction pipeline, data cleaning scripts, schema, limitations, and implementation notes. The interface is a lightweight PWA that displays the reconstructed messages in a phone-style UI, with links back to every original source image for verification.<p>Live demo: <a href="https://epsteinsphone.org" rel="nofollow">https://epsteinsphone.org</a><p>All source data is from the official public releases; no leaks or private material.<p>Happy to answer questions about the pipeline, LLM extraction, threading logic, or the PWA implementation.
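The duplicate-detection step described above can be sketched as aggressive normalization plus hashing, so near-identical OCR scans of the same message collapse to one key. This is an illustrative reduction of the idea; the pipeline's actual heuristics are more elaborate.

```python
# Hypothetical sketch of duplicate detection for OCR'd messages: normalize
# aggressively, then hash, so noisy re-scans of the same message collapse
# to one key. The real pipeline's matching is more elaborate.
import hashlib
import re

def message_key(text):
    normalized = re.sub(r"\s+", " ", text.lower()).strip()
    # Drop punctuation, which OCR frequently garbles
    normalized = re.sub(r"[^a-z0-9 ]", "", normalized)
    return hashlib.sha256(normalized.encode()).hexdigest()

scan_a = "Re:  Meeting\nSee you at 5."
scan_b = "re: meeting see you at 5,"
assert message_key(scan_a) == message_key(scan_b)
```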
Show HN: Gotui – a modern Go terminal dashboard library
I’ve been working on gotui, a modern fork of the unmaintained termui, rebuilt on top of tcell for TrueColor, mouse support, and proper resize handling. It keeps the simple termui-style API, but adds a bunch of new widgets (charts, gauges, world map, etc.), nicer visuals (collapsed borders, rounded corners), and input components for building real dashboards and tools. Under the hood the renderer’s been reworked for much better performance, and I’d love feedback on what’s missing for you to use it in production.
Show HN: Tripwire: A new anti-evil-maid defense
If you have heard of [Haven](<a href="https://github.com/guardianproject/haven" rel="nofollow">https://github.com/guardianproject/haven</a>), then Tripwire fills the void left for a robust anti-evil-maid solution after Haven went dormant.<p>The GitHub repo describes both the concept and the setup process in great detail. For a quick overview, read up to the demo video.<p>There is also a presentation of Tripwire on the Counter Surveil podcast: <a href="https://www.youtube.com/watch?v=s-wPrOTm5qo" rel="nofollow">https://www.youtube.com/watch?v=s-wPrOTm5qo</a>
Show HN: Tiny VM sandbox in C with apps in Rust, C and Zig
Show HN: WhatHappened – HN summaries, heatmaps, and contrarian picks
Hi HN,<p>I built WhatHappened (whathappened.tech) because I have a love/hate relationship with this site. I love the content, but the "wall of text" UI gives me FOMO. I was spending too much time clicking into vague titles ("Project X") or wading through flame wars just to find technical insights.<p>I built this tool to act as a filter. It generates a card for the top daily posts with a few specific features to cut the noise:<p>1. AI Summaries: It generates a technical TL;DR (3 bullet points) and an ELI5 version for every post.<p>2. The Heat Meter: I analyze the comment section to visualize the distribution: Constructive vs. Technical vs. Flame War. If a thread is 90% Flame War, I know to skip it (or grab popcorn).<p>3. Contrarian Detection: To break the echo chamber, the AI specifically hunts for the most upvoted disagreement or critique in the comments and pins it to the card.<p>4. Mobile-First PWA: I mostly read HN on my phone, so I designed this as a PWA. It supports swipe gestures and installs to the home screen without an app store.<p>Stack: Next.js, Gemini, Supabase.<p>It currently supports English and Chinese. Any feedback will be appreciated! My original X post: <a href="https://x.com/marsw42/status/1997087957556318663" rel="nofollow">https://x.com/marsw42/status/1997087957556318663</a>, please share if you like it or find it helpful! :D<p>Thanks!
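The Heat Meter reduces to computing a label distribution over classified comments. The sketch below uses illustrative label names, not WhatHappened's actual schema, and assumes the per-comment classifications already exist (however the LLM produced them).

```python
# Sketch of the "Heat Meter" aggregation step. Label names are illustrative,
# not WhatHappened's schema; classification itself is assumed done upstream.
from collections import Counter

def heat_meter(labels):
    """Return each label's share of the comment section as a percentage."""
    counts = Counter(labels)
    total = len(labels)
    return {label: round(100 * n / total) for label, n in counts.items()}

labels = ["constructive", "technical", "flame", "flame", "technical"]
assert heat_meter(labels) == {"constructive": 20, "technical": 40, "flame": 40}
```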
Show HN: Local Privacy Firewall – blocks PII and secrets before ChatGPT sees them
OP here.<p>I built this because I recently caught myself almost pasting a block of logs containing AWS keys into Claude.<p>The Problem: I need the reasoning capabilities of cloud models (GPT/Claude/Gemini), but I can't trust myself not to accidentally leak PII or secrets.<p>The Solution: A Chrome extension that acts as a local middleware. It intercepts the prompt and runs a local BERT model (via a Python FastAPI backend) to scrub names, emails, and keys before the request leaves the browser.<p>A few notes up front (to set expectations clearly):<p>Everything runs 100% locally.
Regex detection happens in the extension itself.
Advanced detection (NER) uses a small transformer model running on localhost via FastAPI.<p>No data is ever sent to a server.
You can verify this in the code + DevTools network panel.<p>This is an early prototype.
There will be rough edges. I’m looking for feedback on UX, detection quality, and whether the local-agent approach makes sense.<p>Tech Stack:
Manifest V3 Chrome Extension
Python FastAPI (Localhost)
HuggingFace dslim/bert-base-NER
Roadmap / Request for Feedback:
Right now, the Python backend adds some friction. I received feedback on Reddit yesterday suggesting I port the inference to transformers.js to run entirely in-browser via WASM.<p>I decided to ship v1 with the Python backend for stability, but I'm actively looking into the ONNX/WASM route for v2 to remove the local server dependency. If anyone has experience running NER models via transformers.js in a Service Worker, I’d love to hear about the performance vs. native Python.<p>The repo is MIT licensed.<p>Very open to ideas, suggestions, or alternative approaches.
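The regex layer described above can be sketched as a pattern table plus substitution. The patterns here are common public formats (AWS access key IDs, email addresses), not the extension's exact rules, and the NER layer is a separate localhost model.

```python
# Illustrative regex scrubbing layer only; these are well-known public
# formats, not the extension's actual rule set. NER handles the rest.
import re

PATTERNS = {
    "aws_access_key": re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scrub(text):
    """Replace each detected secret/PII span with a labeled placeholder."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[{name.upper()}]", text)
    return text

log = "user=alice@example.com key=AKIAABCDEFGHIJKLMNOP"
assert scrub(log) == "user=[EMAIL] key=[AWS_ACCESS_KEY]"
```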
Show HN: Sim – Apache-2.0 n8n alternative
Hey HN, Waleed here. We're building Sim (<a href="https://sim.ai/">https://sim.ai/</a>), an open-source visual editor to build agentic workflows. Repo here: <a href="https://github.com/simstudioai/sim/" rel="nofollow">https://github.com/simstudioai/sim/</a>. Docs here: <a href="https://docs.sim.ai">https://docs.sim.ai</a>.<p>You can run Sim locally using Docker, with no execution limits or other restrictions.<p>We started building Sim almost a year ago after repeatedly troubleshooting why our agents failed in production. Code-first frameworks felt hard to debug because of implicit control flow, and workflow platforms added more overhead than they removed. We wanted granular control and easy observability without piecing everything together ourselves.<p>We launched Sim [1][2] as a drag-and-drop canvas around 6 months ago. Since then, we've added:<p>- 138 blocks: Slack, GitHub, Linear, Notion, Supabase, SSH, TTS, SFTP, MongoDB, S3, Pinecone, ...<p>- Tool calling with granular control: forced, auto<p>- Agent memory: conversation memory with sliding window support (by last n messages or tokens)<p>- Trace spans: detailed logging and observability for nested workflows and tool calling<p>- Native RAG: upload documents, we chunk, embed with pgvector, and expose vector search to agents<p>- Workflow deployment versioning with rollbacks<p>- MCP support, Human-in-the-loop block<p>- Copilot to build workflows using natural language (just shipped a new version that also acts as a superagent and can call into any of your connected services directly, not just build workflows)<p>Under the hood, the workflow is a DAG with concurrent execution by default. Nodes run as soon as their dependencies (upstream blocks) are satisfied. Loops (for, forEach, while, do-while) and parallel fan-out/join are also first-class primitives.<p>Agent blocks are pass-through to the provider. 
You pick your model (OpenAI, Anthropic, Gemini, Ollama, vLLM), and we pass prompts, tools, and response format directly through to the provider API. We normalize response shapes for block interoperability, but we're not adding layers that obscure what's happening.<p>We're currently working on our own MCP server and the ability to deploy workflows as MCP servers. Would love to hear your thoughts and where we should take it next :)<p>[1] <a href="https://news.ycombinator.com/item?id=43823096">https://news.ycombinator.com/item?id=43823096</a><p>[2] <a href="https://news.ycombinator.com/item?id=44052766">https://news.ycombinator.com/item?id=44052766</a>
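The scheduling model ("nodes run as soon as their upstream blocks are satisfied") can be sketched as a Kahn-style ready-set scheduler that launches every ready node concurrently. This is a toy illustration of the idea only, not Sim's executor.

```python
# Toy sketch of DAG execution where every node whose upstream dependencies
# are satisfied runs concurrently. Illustrates the scheduling idea only,
# not Sim's actual executor.
import asyncio

async def run_dag(deps, work):
    """deps: node -> set of upstream nodes; work: node -> async callable."""
    done, order = set(), []

    async def run(node):
        await work[node]()
        order.append(node)

    while len(done) < len(deps):
        # Everything whose dependencies are all complete is launched together
        ready = [n for n, d in deps.items() if n not in done and d <= done]
        await asyncio.gather(*(run(n) for n in ready))
        done.update(ready)
    return order

async def noop():
    await asyncio.sleep(0)

# Diamond: a fans out to b and c, which join at d
deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
order = asyncio.run(run_dag(deps, {n: noop for n in deps}))
assert order.index("a") < order.index("b") < order.index("d")
assert order.index("a") < order.index("c") < order.index("d")
```

Loops and fan-out/join then become variations on how the ready set is computed; the sketch above assumes an acyclic graph.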
The highest quality codebase