The best Show HN stories from Hacker News from the past day


Latest posts:

Show HN: Vaultrice – A real-time key-value store with a localStorage API

Hi HN,

I'm Adriano, one of the makers of Vaultrice. I'm excited (and a little nervous!) to share what we've been building.

For years, we found ourselves in a frustrating loop: whenever we needed a simple real-time feature (a "who's online" list, a collaborative app, or just sharing state between a marketing site and our main app), we'd either spend days setting up the same stack or abandon the idea. It always involved wiring together a database, a WebSocket server, an API, and managing connection state. It felt like massive overkill for what we were trying to achieve.

We wanted a tool that felt as simple to use as the browser's `localStorage` API but worked across domains and devices, with real-time sync and security built in.

So, we built Vaultrice.

It's a key-value data store built on top of Cloudflare's Durable Objects, which gives you a strongly consistent backend for each data object. You interact with it through our TS/JS SDK, which comes in two flavors:

1. `NonLocalStorage`: a low-level client with a `localStorage`-like API (`setItem`, `getItem`, etc.) plus real-time events and presence (`.on()`, `.join()`).

2. `SyncObject`: a higher-level, reactive JavaScript Proxy. You just change a property on an object (`doc.title = 'New Title'`), and it automatically syncs to all other connected clients.

The goal is to let you build the real-time features you want in minutes, not days. We've also put a lot of thought into a layered security model, allowing you to go from simple API key restrictions all the way to server-signed object IDs and client-side E2EE.

We've just launched and would be grateful for any feedback from the HN community. What do you think of the API design? Are there use cases we haven't considered? Any thoughts on the security model?

We'll be here (or via email at support@vaultrice.com) to answer any questions. Thanks for checking it out!
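
To make the two flavors concrete, here is a rough usage sketch based only on the API names mentioned in the post; the import path, constructor options, event names, and the `SyncObject` factory shape are assumptions, not the documented SDK:

    // Hypothetical sketch of the Vaultrice SDK based on the names above.
    // Package name, constructor options, and event signatures are assumptions.
    import { NonLocalStorage, SyncObject } from 'vaultrice'; // assumed package name

    // Low-level, localStorage-like client with real-time events and presence.
    const nls = new NonLocalStorage({ apiKey: 'YOUR_API_KEY', objectId: 'room-42' }); // assumed options
    await nls.setItem('status', 'online');
    const status = await nls.getItem('status');
    nls.on('setItem', (key: string, value: string) => console.log('changed:', key, value)); // assumed event
    await nls.join({ name: 'Adriano' }); // presence

    // Higher-level reactive proxy: assigning a property syncs to other clients.
    const doc = await SyncObject({ apiKey: 'YOUR_API_KEY', objectId: 'doc-1' }); // assumed factory shape
    doc.title = 'New Title';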

Show HN: MCP Security Suite

Hi HN!

We kept seeing devs get pwned through MCP tools in ways that security scanners completely miss, so we built an open-source analyzer to catch these attacks. It's the first OSS release from the Mighty team.

The problem: at Defcon, we saw MCP exploits with a 100% success rate against Claude and Llama. Three attack patterns:

- Hidden Unicode in "error messages": paste a colleague's error into Claude, and your SSH keys get exfiltrated.
- Trusted tool updates: that database tool you've used for months? Last week's update added credential theft.
- Tool redefinition: a malicious tool redefines "deploy to prod" to run the attacker's script.

Traditional scanners (CodeQL, SonarQube) catch <15% of these. They're looking for SQLi, not prompt injections hidden in tool descriptions.

What we built:

    git clone https://github.com/NineSunsInc/mighty-security
    python analyzers/comprehensive_mcp_analyzer.py /path/to/your/mcp/tool

It scans for prompt injection, credential exfiltration, suspicious updates, and tool shadowing. The runtime wrapper adds <10ms overhead. Fully local, no telemetry.

Why this matters: 43% of MCP tools have command injection vulnerabilities. GitHub's own MCP server was exploitable. We found Fortune 500s running database-connected MCP tools that hadn't been audited since installation. We went from paranoid code review to "AI said it works" in 18 months. The magic is real, but so are the vulnerabilities.

Demo: https://www.loom.com/share/e830c56d39254a788776358c5b03fdc3

GitHub: https://github.com/NineSunsInc/mighty-security

Would love feedback - what MCP security issues have you seen?
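
As a rough illustration of the first attack pattern (not the analyzer's own detection code, which lives in the linked repo), a hidden-Unicode check can be as simple as flagging zero-width and Unicode "tag" characters that can smuggle invisible instructions into text an LLM will read:

    // Generic concept sketch: flag invisible Unicode that can hide prompt-injection text.
    // This is an illustration, not the mighty-security implementation.
    const SUSPICIOUS: RegExp[] = [
      /[\u200B-\u200F\u2060\uFEFF]/u,   // zero-width and directional characters
      /[\u{E0000}-\u{E007F}]/u,         // Unicode "tag" characters (an invisible ASCII mirror)
    ];

    function hasHiddenUnicode(text: string): boolean {
      return SUSPICIOUS.some((re) => re.test(text));
    }

    // Example: an "error message" that looks harmless but carries invisible payload characters.
    const pasted = 'Connection refused' + '\u200B\u{E0041}\u{E0042}';
    console.log(hasHiddenUnicode(pasted)); // true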

Show HN: JMAP MCP – Email for your agents

I wrote this JMAP MCP server in Deno. It adds email management tools to Claude for searching, reading, and sending email through FastMail and other JMAP providers!

Show HN: Prime Number Grid Visualizer

Hello HN. I made this simple little tool that lets you input rows and columns to create a grid, then it plots the prime numbers on the grid.

I made it for fun, but I'd love suggestions on how I can improve it in any way. Thanks, love you.

Show HN: Edka – Kubernetes clusters on your own Hetzner account

Hi HN,

I've been working with Kubernetes for over a decade, since the alpha days, and was involved in the kube-aws project before AWS launched EKS. For the past four years, I've been helping friends and small businesses cut costs by running Kubernetes on Hetzner Cloud, which I've found to be rock solid and by far the best-priced provider.

Provisioning a cluster on Hetzner is now straightforward, thanks to tools like k3s and hetzner-k3s, but configuring it for your specific needs still takes time and expertise. I built Edka to make that part easy: spin up a production-ready cluster in ~2 minutes, then choose how low-level or automated you want to go.

How it works:

Layer 1 – Cluster provisioning: creates a k3s-based Kubernetes cluster on Hetzner (lightweight, easy to manage, scales well).

Layer 2 – Add-ons: one-click deploy for metrics-server, cert-manager, and various operators; preconfigured for Hetzner, no extra setup needed.

Layer 3 – Applications: minimal config UIs for apps built on top of the add-ons. Example: need PostgreSQL? Fill in a few fields → the platform installs CloudNativePG → provisions HA PostgreSQL with PITR → gives you ready-to-use endpoints. Backups can be restored to any point in time with a click. Quick demo: https://edka.io/apps/

Layer 4 – Deployments: connect your CI to push container images to a public/private registry. Edka updates deployments automatically (with semantic versioning rules), supports instant rollbacks, autoscaling, persistent volumes, secrets/env imports, and quick public exposure. Quick demo: https://edka.io/deployments/

Tech stack: TypeScript, React + Tailwind CSS, PostgreSQL, Redis, BullMQ, and Vault + AWS KMS to encrypt sensitive data.

The platform is still in beta and I'm building it in my spare time, so there are some rough edges, but I'd love feedback from anyone running Kubernetes on Hetzner, exploring alternatives to EKS/GKE/AKS, or looking to automate their infrastructure with Kubernetes.

More details: https://edka.io/

Thank you!

Show HN: XR2000: A science fiction programming challenge

Today I'm releasing the XR2000: a programming challenge with an extensive science fiction backstory.

Show HN: Yet another memory system for LLMs

Built this for my LLM workflows - I needed searchable, persistent memory that wouldn't blow up storage costs. I also wanted to use it locally for my research. It's a content-addressed storage system with block-level deduplication (saves 30-40% on typical codebases). I have integrated the CLI tool into most of my workflows in Zed, Claude Code, and Cursor, and I provide the prompt I'm currently using in the repo.

The project is in C++ and the build system is rough around the edges, but it is tested on macOS and Ubuntu 24.04.
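
For readers unfamiliar with the technique, here is a minimal, generic sketch of content-addressed storage with block-level deduplication (in TypeScript for brevity; the actual project is C++, and the block size, hash choice, and layout here are illustrative assumptions, not the project's design):

    // Generic sketch of content-addressed, block-level deduplicated storage.
    // Not the project's implementation: block size and hashing are illustrative choices.
    import { createHash } from 'node:crypto';

    const BLOCK_SIZE = 4096; // assumed fixed block size

    class BlockStore {
      private blocks = new Map<string, Buffer>(); // content hash -> block bytes

      // Split data into fixed-size blocks, store each block under its hash,
      // and return the list of hashes that reconstructs the original data.
      put(data: Buffer): string[] {
        const hashes: string[] = [];
        for (let off = 0; off < data.length; off += BLOCK_SIZE) {
          const block = data.subarray(off, off + BLOCK_SIZE);
          const hash = createHash('sha256').update(block).digest('hex');
          if (!this.blocks.has(hash)) this.blocks.set(hash, Buffer.from(block)); // dedup: store once
          hashes.push(hash);
        }
        return hashes;
      }

      // Reassemble data from its block hashes.
      get(hashes: string[]): Buffer {
        return Buffer.concat(hashes.map((h) => this.blocks.get(h)!));
      }
    }

    // Identical blocks across documents are stored only once, which is where the
    // storage savings on repetitive content (like codebases) come from.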

Show HN: OWhisper – Ollama for realtime speech-to-text

Hello everyone. This is Yujong from the Hyprnote team (https://github.com/fastrepl/hyprnote).

We built OWhisper for two reasons (also outlined in https://docs.hyprnote.com/owhisper/what-is-this):

(1) While working with on-device, realtime speech-to-text, we found there isn't practical tooling for downloading and running the models.

(2) We also got frequent requests for a way to plug custom STT endpoints into the Hyprnote desktop app, just like you can with OpenAI-compatible LLM endpoints.

The (2) part is still kind of WIP, but we spent some time writing docs, so you'll get a good idea of what it will look like if you skim through them.

For (1), you can try it now (https://docs.hyprnote.com/owhisper/cli/get-started):

    brew tap fastrepl/hyprnote && brew install owhisper
    owhisper pull whisper-cpp-base-q8-en
    owhisper run whisper-cpp-base-q8-en

If you're tired of Whisper, we also support Moonshine :) Give it a shot (owhisper pull moonshine-onnx-base-q8).

We're here and looking forward to your comments!
