The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: See the carbon impact of your cloud as you code
Hey folks, I’m Hassan, one of the co-founders of Infracost (https://www.infracost.io). Infracost helps engineers see and reduce the cloud cost of each infrastructure change before they merge their code.
The way Infracost works is we gather pricing data from Amazon Web Services, Microsoft Azure and Google Cloud into what we call a ‘Pricing Service’, which now holds around 9 million live price points (!!). Then we map these prices to infrastructure code. Once the mapping is done, we can show the cost impact of a code change before it is merged, directly in GitHub, GitLab etc. Kind of like a checkout screen for cloud infrastructure.

We’ve been building since 2020 (we were part of the YC W21 batch), iterating on the product, building out a team etc. Back then, one of our users asked if we could also show the carbon impact alongside costs.

It has been itching my brain since then. The biggest challenge has always been the carbon data. Mapping carbon data to infrastructure is time consuming, but it is possible, since we’ve done it with cloud costs. But we need the raw carbon data first. Discussions over the last few years finally led me to a company called Greenpixie in the UK. A few of our existing customers were using them already, so I immediately connected with the founder, John.

Greenpixie said they have the data (AHA!!), and their data is verified (ISO 14064 and aligned with the Greenhouse Gas Protocol). As soon as I had talked to a few of their customers, I asked my team to see if we could actually, finally, do this and build it.

My thinking is this: some engineers will care, and some will not (or maybe some will love it and some will hate it!). For those who care, cost and carbon are actually linked; if you reduce carbon, you usually reduce cloud cost too. It can act as another motivating factor.

And now it is here, and I’d love your feedback. Try it out by going to https://dashboard.infracost.io/, creating an account, setting up the GitHub app or GitLab app, and sending a pull request with Terraform changes (you can use our example Terraform file). It will then show you the cost impact alongside the carbon impact, and how you can optimize it.

I’d especially love to hear whether you think carbon is a big driver for engineers within your teams, or for your company as a whole (i.e. is there anything top-down about carbon).

AMA - I’ll be monitoring the thread :)

Thanks
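To make the mapping idea concrete, here is a rough sketch in Python (hypothetical keys and illustrative prices; not the actual Pricing Service schema or code). Once each resource in an infrastructure change resolves to a price point, the cost diff is just arithmetic over the before/after states:

```python
# Hypothetical price keys; the on-demand prices below are illustrative.
PRICES = {
    ("aws", "ec2", "us-east-1", "m5.large"): 0.096,   # USD per hour
    ("aws", "ec2", "us-east-1", "m5.xlarge"): 0.192,  # USD per hour
}

HOURS_PER_MONTH = 730

def monthly_cost(resource):
    # resource: (vendor, service, region, instance_type), as parsed from IaC
    return PRICES[resource] * HOURS_PER_MONTH

def cost_diff(before, after):
    # The "checkout screen" number: monthly cost delta of the proposed change
    return monthly_cost(after) - monthly_cost(before)

delta = cost_diff(("aws", "ec2", "us-east-1", "m5.large"),
                  ("aws", "ec2", "us-east-1", "m5.xlarge"))
print(f"+${delta:.2f}/month")  # +$70.08/month
```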
Show HN: yolo-cage – AI coding agents that can't exfiltrate secrets
I made this for myself, and it seemed like it might be useful to others. I'd love some feedback, both on the threat model and the tool itself. I hope you find it useful!

Backstory: I've been using many agents in parallel as I work on a somewhat ambitious financial analysis tool. I was juggling agents working on epics for the linear solver, the persistence layer, the front-end, and planning for the second-generation solver. I was losing my mind playing whack-a-mole with the permission prompts. YOLO mode felt so tempting. And yet.

Then it occurred to me: what if YOLO mode isn't so bad? Decision fatigue is a thing. If I could cap the blast radius of a confused agent, maybe I could just review once. Wouldn't that be safer?

So that day, while my kids were taking a nap, I decided to see if I could put YOLO-mode Claude inside a sandbox that blocks exfiltration and regulates git access. The result is yolo-cage.

Also: the AI wrote its own containment system from inside the system's own prototype. Which is either very aligned or very meta, depending on how you look at it.
Show HN: RatatuiRuby wraps Rust Ratatui as a RubyGem – TUIs with the joy of Ruby
TerabyteDeals – Compare storage prices by $/TB
I built a simple tool to compare hard drive and SSD prices by price per terabyte.

I kept having to calculate $/TB manually when shopping for NAS drives, so I made this to save myself the trouble.

It pulls prices from Amazon (US, CA, AU, and EU stores), calculates $/TB, and lets you filter by drive type, interface, form factor, and capacity.

Nothing fancy — just a sortable table updated daily.

Any feedback is more than welcome; I hope someone will find it useful!
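The core math is exactly as simple as it sounds; a minimal sketch (the listings below are made up, not live data):

```python
def usd_per_tb(price_usd, capacity_tb):
    # The one number worth sorting storage listings by
    return price_usd / capacity_tb

# Illustrative listings only
drives = [
    ("20TB NAS HDD", 279.99, 20),
    ("4TB NVMe SSD", 249.99, 4),
]
for name, price, tb in sorted(drives, key=lambda d: usd_per_tb(d[1], d[2])):
    print(f"{name}: ${usd_per_tb(price, tb):.2f}/TB")
```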
Show HN: Rails UI
Show HN: ChartGPU – WebGPU-powered charting library (1M points at 60fps)
Creator here. I built ChartGPU because I kept hitting the same wall: charting libraries that claim to be "fast" but choke past 100K data points.

The core insight: Canvas2D is fundamentally CPU-bound. Even WebGL chart libraries still do most computation on the CPU. So I moved everything to the GPU via WebGPU:

- LTTB downsampling runs as a compute shader (see the CPU reference sketch below)
- Hit-testing for tooltips/hover is GPU-accelerated
- Rendering uses instanced draws (one draw call per series)

The result: 1M points at 60fps with smooth zoom/pan.

Live demo: https://chartgpu.github.io/ChartGPU/examples/million-points/

Currently supports line, area, bar, scatter, pie, and candlestick charts. MIT licensed, available on npm: `npm install chartgpu`

Happy to answer questions about WebGPU internals or architecture decisions.
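For the curious, LTTB (Largest-Triangle-Three-Buckets, Steinarsson 2013) is a published algorithm; here is a plain-CPU Python reference of the idea the compute shader parallelizes (the textbook algorithm, not ChartGPU's shader code):

```python
def lttb(points, threshold):
    """Downsample (x, y) points sorted by x to `threshold` points via LTTB."""
    n = len(points)
    if threshold >= n or threshold < 3:
        return list(points)

    sampled = [points[0]]                   # always keep the first point
    bucket_size = (n - 2) / (threshold - 2)
    a = 0                                   # index of the last point we kept

    for i in range(threshold - 2):
        # Average of the *next* bucket serves as the third triangle vertex.
        nxt_lo = int((i + 1) * bucket_size) + 1
        nxt_hi = min(int((i + 2) * bucket_size) + 1, n)
        avg_x = sum(p[0] for p in points[nxt_lo:nxt_hi]) / (nxt_hi - nxt_lo)
        avg_y = sum(p[1] for p in points[nxt_lo:nxt_hi]) / (nxt_hi - nxt_lo)

        # In the current bucket, keep the point forming the largest triangle
        # with the last kept point and the next bucket's average.
        lo = int(i * bucket_size) + 1
        hi = int((i + 1) * bucket_size) + 1
        ax, ay = points[a]
        best, best_area = lo, -1.0
        for j in range(lo, hi):
            bx, by = points[j]
            # Twice the triangle area; the factor doesn't affect the argmax
            area = abs((bx - ax) * (avg_y - ay) - (avg_x - ax) * (by - ay))
            if area > best_area:
                best_area, best = area, j
        sampled.append(points[best])
        a = best

    sampled.append(points[-1])              # always keep the last point
    return sampled
```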
Show HN: An interactive physics simulator with 1000s of balls, in your terminal
IP over Avian Carriers with Quality of Service (1999)
g(old)
Show HN: Artificial Ivy in the Browser
This is just a goofy thing I cooked up over the weekend. It's kind of like a screensaver, but with more reading and sliders. (It's not terribly efficient, so expect phone batteries to take a hit!)
Show HN: Ocrbase – PDF → .md/.json document OCR and structured extraction API
Show HN: Subth.ink – write something and see how many others wrote the same
Hey HN, this is a small Haskell learning project that I wanted to share. It's just a website where you can see how many people write the exact same text as you (I thought it was a fun idea).

It's built using Scotty, SQLite, Redis and Caddy. Currently it's running in a small DigitalOcean droplet (1 GB RAM).

Using Haskell for web development (specifically with Scotty) was slightly easier than I thought, but still a relatively hard task compared to other languages. One of my main friction points was Haskell's multiple string-like types: String, Text (and its lazy variant), ByteString (and its lazy variant), with each library choosing to consume a different one. There is also a soft requirement to learn monad transformers (e.g. to understand what liftIO is doing), which made the initial development more difficult.
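The data model underneath is tiny; here is the upsert-and-count idea sketched in Python/SQLite terms (the site itself is Haskell + Scotty, so this is an illustration of the logic, not the actual code):

```python
import sqlite3

conn = sqlite3.connect("subthink.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS thoughts (text TEXT PRIMARY KEY, count INTEGER NOT NULL)"
)

def submit(text):
    # Bump the counter for this exact text, then report how many people wrote it
    conn.execute(
        "INSERT INTO thoughts (text, count) VALUES (?, 1) "
        "ON CONFLICT(text) DO UPDATE SET count = count + 1",
        (text,),
    )
    conn.commit()
    (count,) = conn.execute(
        "SELECT count FROM thoughts WHERE text = ?", (text,)
    ).fetchone()
    return count
```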
Show HN: Pipenet – A Modern Alternative to Localtunnel
Hey HN!

localtunnel's server needs random ports per client. That doesn't work on Fly.io or behind strict firewalls.

We rewrote it in TypeScript and added multiplexing over a single port. Open source and 100% self-hostable.

Public instance at *.pipenet.dev if you don't want to self-host.

Built at Glama for our MCP Inspector, but it's a generic tunnel with no ties to our infra.

https://github.com/punkpeye/pipenet
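Multiplexing over one port essentially means tagging every chunk with its logical stream. A toy frame format in Python (the general idea only; not Pipenet's actual wire protocol):

```python
import struct

HEADER = struct.Struct(">IH")  # 4-byte stream id + 2-byte payload length

def encode_frame(stream_id, payload):
    # Tag each chunk with its logical stream so many tunnels share one port
    assert len(payload) <= 0xFFFF
    return HEADER.pack(stream_id, len(payload)) + payload

def decode_frames(buf):
    # Yield (stream_id, payload) for each complete frame; stop at a partial
    # frame so the caller can keep the remainder for the next read.
    offset = 0
    while offset + HEADER.size <= len(buf):
        stream_id, length = HEADER.unpack_from(buf, offset)
        if offset + HEADER.size + length > len(buf):
            break  # incomplete frame, wait for more data
        start = offset + HEADER.size
        yield stream_id, buf[start:start + length]
        offset = start + length
```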
Show HN: Agent Skills Leaderboard
Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs
Hi HN, we're Sam, Shane, and Abhi.

Almost a year ago, we first shared Mastra here (https://news.ycombinator.com/item?id=43103073). It’s kind of fun looking back, since we were only a few months into building at the time. The HN community gave a lot of enthusiasm and some helpful feedback.

Today, we released Mastra 1.0 in stable, so we wanted to come back and talk about what’s changed.

If you’re new to Mastra, it's an open-source TypeScript agent framework that also lets you create multi-agent workflows, run evals, inspect in a local studio, and emit observability data.

Since our last post, Mastra has grown to over 300k weekly npm downloads and 19.4k GitHub stars. It’s now Apache 2.0 licensed and runs in prod at companies like Replit, PayPal, and Sanity.

Agent development is changing quickly, so we’ve added a lot since February:

- Native model routing: You can access 600+ models from 40+ providers by specifying a model string (e.g., `openai/gpt-5.2-codex`) with TS autocomplete and fallbacks.

- Guardrails: Low-latency input and output processors for prompt injection detection, PII redaction, and content moderation. The tricky thing here was the low-latency part.

- Scorers: An async eval primitive for grading agent outputs. Users were asking how they should do evals. We wanted to make it easy to attach to Mastra agents, runnable in Mastra studio, and to save results in Mastra storage.

- Plus a few other features, like AI tracing (per-call costing for Langfuse, Braintrust, etc.), memory processors, a `.network()` method that turns any agent into a routing agent, and server adapters to integrate Mastra within an existing Express/Hono server.

(That last one took a bit of time: we went down the ESM/CJS bundling rabbit hole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.)

Anyway, we'd love for you to try Mastra out and let us know what you think. You can get started with `npm create mastra@latest`.

We'll be around and happy to answer any questions!
Show HN: GibRAM, an in-memory ephemeral GraphRAG runtime for retrieval
Hi HN,

I have been working with regulation-heavy documents lately, and one thing kept bothering me. Flat RAG pipelines often fail to retrieve related articles together, even when they are clearly connected through references, definitions, or clauses.

After trying several RAG setups, I subjectively felt that GraphRAG was a better mental model for this kind of data. The Microsoft GraphRAG paper and reference implementation were helpful starting points. However, in practice, I found one recurring friction point: graph storage and vector indexing are usually handled by separate systems, which felt unnecessarily heavy for short-lived analysis tasks.

To explore this tradeoff, I built GibRAM (Graph in-buffer Retrieval and Associative Memory). It is an experimental, in-memory GraphRAG runtime where entities, relationships, text units, and embeddings live side by side in a single process.

GibRAM is intentionally ephemeral. It is designed for exploratory tasks like summarization or conversational querying over a bounded document set. Data lives in memory, scoped by session, and is automatically cleaned up via TTL. There are no durability guarantees, and recomputation is considered cheaper than persistence for the intended use cases.

This is not a database and not a production-ready system. It is a casual project, largely vibe-coded, meant to explore what GraphRAG looks like when memory is the primary constraint instead of storage. Technical debt exists, and many tradeoffs are explicit.

The project is open source, and I would really appreciate feedback, especially from people working on RAG, search infrastructure, or graph-based retrieval.

GitHub: https://github.com/gibram-io/gibram

Happy to answer questions or hear why this approach might be flawed.
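The "everything side by side, scoped by session, reaped by TTL" shape is roughly this (a hypothetical sketch, not GibRAM's actual API):

```python
import time

class Session:
    # Entities, relationships, text units, and embeddings live together in one process
    def __init__(self, ttl_seconds):
        self.expires_at = time.monotonic() + ttl_seconds
        self.entities = {}       # id -> attributes
        self.relationships = []  # (source_id, target_id, label)
        self.text_units = {}     # id -> raw chunk
        self.embeddings = {}     # id -> vector

class GraphStore:
    def __init__(self, ttl_seconds=900):
        self.ttl = ttl_seconds
        self.sessions = {}

    def create(self, session_id):
        self.sessions[session_id] = Session(self.ttl)
        return self.sessions[session_id]

    def get(self, session_id):
        self._reap()
        return self.sessions.get(session_id)  # None once the TTL has passed

    def _reap(self):
        # No durability: expired sessions simply vanish; recomputation is the recovery story
        now = time.monotonic()
        self.sessions = {k: s for k, s in self.sessions.items() if s.expires_at > now}
```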
Show HN: I built a tool to help AI agents know when a PR is good to go
I've been using Claude Code heavily, and kept hitting the same issue: the agent would push changes, respond to reviews, wait for CI... but never really know when it was done.

It would poll CI in loops. Miss actionable comments buried among 15 CodeRabbit suggestions. Or declare victory while threads were still unresolved.

The core problem: no deterministic way for an agent to know a PR is ready to merge.

So I built gtg (Good To Go). One command, one answer:

$ gtg 123
OK PR #123: READY
CI: success (5/5 passed)
Threads: 3/3 resolved

It aggregates CI status, classifies review comments (actionable vs. noise), and tracks thread resolution. Returns JSON for agents or human-readable text.

The comment classification is the interesting part — it understands CodeRabbit severity markers, Greptile patterns, and Claude's blocking/approval language. "Critical: SQL injection" gets flagged; "Nice refactor!" doesn't.

MIT licensed, pure Python. I use this daily in a larger agent orchestration system — would love feedback from others building similar workflows.
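The readiness decision itself is deliberately boring; conceptually it reduces to something like this (hypothetical data shapes, not gtg's internals):

```python
def is_ready(ci_checks, review_threads):
    # ci_checks: e.g. [{"name": "tests", "status": "success"}, ...]
    # review_threads: e.g. [{"resolved": True, "actionable": True}, ...]
    ci_ok = bool(ci_checks) and all(c["status"] == "success" for c in ci_checks)
    # Unresolved noise ("Nice refactor!") shouldn't block; unresolved actionable threads should
    threads_ok = all(t["resolved"] or not t["actionable"] for t in review_threads)
    return ci_ok and threads_ok

checks = [{"name": "tests", "status": "success"}, {"name": "lint", "status": "success"}]
threads = [{"resolved": True, "actionable": True}, {"resolved": False, "actionable": False}]
print("READY" if is_ready(checks, threads) else "NOT READY")  # READY
```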