The best Show HN stories from Hacker News from the past week

Latest posts:

Show HN: A game where you build a GPU

Thought the resources for GPU arch were lacking, so here we are

European alternatives to Google, Apple, Dropbox and 120 US apps

Show HN: Apfel – The free AI already on your Mac

GitHub: <a href="https://github.com/Arthur-Ficial/apfel" rel="nofollow">https://github.com/Arthur-Ficial/apfel</a>

Show HN: I built a frontpage for personal blogs

With social media and now AI, it's important to keep the indie web alive. Many people still write frequently; Blogosphere tries to highlight them by fetching recent posts from personal blogs across many categories.

There are two versions:

Minimal (HN-inspired, fast, static): <a href="https://text.blogosphere.app/" rel="nofollow">https://text.blogosphere.app/</a>

Non-minimal: <a href="https://blogosphere.app/" rel="nofollow">https://blogosphere.app/</a>

If you don't find your blog (or your favorite ones), please add them. I will review and approve them.

Show HN: Dull – Instagram Without Reels, YouTube Without Shorts (iOS)

I kept deleting and redownloading Instagram because I couldn't stop watching Reels but needed the app for DMs. I tried screen-time limits and just overrode them. So I built this.

Dull loads Instagram, YouTube, Facebook, and X and filters out short-form content with a mix of CSS and JS injection. A MutationObserver handles anything that lazy-loads after the page renders, which is most of the annoying stuff, since these platforms love to load content dynamically.

The ongoing work is maintaining the filters. Platforms change their DOM all the time: Instagram obfuscates class names, YouTube restructures how Shorts appear in the feed, etc. It's a cat-and-mouse game that never really ends.

It also has grayscale mode, time limits, and usage tracking.

Happy to answer questions.

Show HN: Git bayesect – Bayesian Git bisection for non-deterministic bugs

Show HN: CLI to order groceries via reverse-engineered REWE API (Haskell)

I just had the best time learning about the REWE (German supermarket chain) API, how they use mTLS, and what the workflows are. Also, `mitmproxy2swagger` [1] is a great tool for creating an OpenAPI spec automatically.

And 2026 feels like the perfect time to be writing Haskell. The code is handwritten, but whenever I got stuck with the build system or just wasn't getting the types right, I could fall back on asking AI to unblock me. It was never that smooth before.

Finally, the best side projects are the ones you actually use, and this one will be used for all my future grocery shopping.

[1] <a href="https://github.com/alufers/mitmproxy2swagger" rel="nofollow">https://github.com/alufers/mitmproxy2swagger</a>

Show HN: 1-Bit Bonsai, the First Commercially Viable 1-Bit LLMs

Show HN: Postgres extension for BM25 relevance-ranked full-text search

Last summer we faced a conundrum at my company, Tiger Data, a Postgres cloud vendor whose main business is in time-series data. We were trying to grow our business toward emerging AI-centric workloads and wanted to provide a state-of-the-art hybrid search stack in Postgres. We'd already built pgvectorscale in house, with the goal of scaling semantic search beyond pgvector's main-memory limitations. We just needed a scalable ranked keyword search solution too.

The problem: core Postgres doesn't provide this; the leading Postgres BM25 extension, ParadeDB, is guarded behind the AGPL; and developing our own extension appeared daunting. We'd need a small team of sharp engineers and 6-12 months, I figured. And we'd probably still fall short of the performance of a mature system like Parade/Tantivy.

Or would we? I'd been experimenting long enough with AI-boosted development at that point to realize that with the latest tools (Claude Code + Opus) and an experienced hand (I've been working in database systems internals for 25 years now), the old time estimates pretty much go out the window.

I told our CTO I thought I could solo the project in one quarter. This raised some eyebrows.

It did take a little more time than that (two quarters), and we got some real help from the community (amazing!) after open-sourcing the pre-release. But I'm thrilled/exhausted today to share that pg_textsearch v1.0 is freely available via open source (Postgres license), on Tiger Data cloud, and hopefully soon, at a hyperscaler near you:

<a href="https://github.com/timescale/pg_textsearch" rel="nofollow">https://github.com/timescale/pg_textsearch</a>

In the blog post accompanying the release, I give an overview of the architecture and present benchmark results using MS MARCO. To my surprise, we were not only able to meet Parade/Tantivy's query performance but exceed it substantially, measuring a 4.7x advantage in query throughput at scale:

<a href="https://www.tigerdata.com/blog/pg-textsearch-bm25-full-text-search-postgres" rel="nofollow">https://www.tigerdata.com/blog/pg-textsearch-bm25-full-text-...</a>

It's exciting (and, to be honest, a little unnerving) to see a field I've spent so much time toiling in change so quickly, in ways that let us be more ambitious in our technical objectives. Technical moats are moats no longer.

The benchmark scripts and methodology are available in the GitHub repo. Happy to answer any questions in the thread.

Thanks,

TJ (tj@tigerdata.com)
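For readers who haven't worked with BM25: the relevance score this class of extension ranks by follows the standard Okapi BM25 formula over term frequency, document frequency, and document length. A minimal, self-contained sketch of that formula (not pg_textsearch's actual implementation, which runs inside Postgres over its own index structures):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each document against the query with Okapi BM25.

    docs: list of token lists. Returns one score per document.
    k1 controls term-frequency saturation; b controls length normalization.
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency of each query term
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            num = tf[t] * (k1 + 1)
            den = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * num / den
        scores.append(score)
    return scores
```

Note that, unlike Postgres's built-in `ts_rank`, the score depends on corpus-wide statistics (document frequency, average length), which is why a dedicated index structure is needed to compute it efficiently.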

Show HN: 30u30.fyi – Is your startup founder on Forbes' most fraudulent list?

Show HN: Twitch Roulette – Find live streamers who need views the most

Hey HN, I re-launched an old site someone made back in the day, twitchroulette.net, with a lot of new features and stats, and I'd love for people to check it out. The idea is that you can easily browse the less-trafficked parts of Twitch and find cool new streamers to say hi to, and maybe make some new friends.

I also added some real-time stats and breakdowns per channel, and I think some of the things they show are pretty interesting. Check it out!

Show HN: I put an AI agent on a $7/month VPS with IRC as its transport layer

The stack: two agents on separate boxes. The public one (nullclaw) is a 678 KB Zig binary using ~1 MB RAM, connected to an Ergo IRC server. Visitors talk to it via a gamja web client embedded in my site. The private one (ironclaw) handles email and scheduling, reachable only over Tailscale via Google's A2A protocol.

Tiered inference: Haiku 4.5 for conversation (sub-second, cheap), Sonnet 4.6 for tool use (only when needed). Hard cap at $2/day.

A2A passthrough: the private-side agent borrows the gateway's own inference pipeline, so there's one API key and one billing relationship regardless of who initiated the request.

You can talk to nully at <a href="https://georgelarson.me/chat/" rel="nofollow">https://georgelarson.me/chat/</a> or connect with any IRC client to irc.georgelarson.me:6697 (TLS), channel #lobby.
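The tiered-inference-with-a-budget pattern can be sketched independently of any provider. Everything below (the `TieredRouter` class, model labels, per-call cost estimates) is hypothetical illustration of the idea, not the author's Zig code:

```python
import time

class TieredRouter:
    """Route requests to a cheap conversational model by default and a
    stronger model only when tool use is needed, under a hard daily cap.
    Model names and per-call costs are illustrative placeholders."""

    def __init__(self, daily_cap_usd=2.00):
        self.daily_cap = daily_cap_usd
        self.spent = 0.0
        self.day = time.strftime("%Y-%m-%d")

    def _reset_if_new_day(self):
        today = time.strftime("%Y-%m-%d")
        if today != self.day:
            self.day, self.spent = today, 0.0

    def pick(self, needs_tools, est_cost_cheap=0.001, est_cost_strong=0.01):
        self._reset_if_new_day()
        model = "strong-model" if needs_tools else "cheap-model"
        cost = est_cost_strong if needs_tools else est_cost_cheap
        # Refuse before spending, so the cap is a hard ceiling.
        if self.spent + cost > self.daily_cap:
            raise RuntimeError("daily budget exhausted")
        self.spent += cost
        return model

router = TieredRouter(daily_cap_usd=2.00)
print(router.pick(needs_tools=False))  # conversational tier
print(router.pick(needs_tools=True))   # tool-use tier
```

The design choice worth noting is checking the estimated cost before the call rather than reconciling after, which is what makes "$2/day" a guarantee instead of an average.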

Show HN: Turbolite – a SQLite VFS serving sub-250ms cold JOIN queries from S3

I built a SQLite VFS in Rust that serves cold queries directly from S3 with sub-second performance, and often much faster.

It's called turbolite. It is experimental, buggy, and may corrupt data. I would not trust it with anything important yet.

I wanted to explore whether object storage has gotten fast enough to support embedded databases over cloud storage. Filesystems reward tiny random reads and in-place mutation. S3 rewards fewer requests, bigger transfers, immutable objects, and aggressively parallel operations where bandwidth is often the real constraint. This was explicitly inspired by turbopuffer's ground-up S3-native design: <a href="https://turbopuffer.com/blog/turbopuffer" rel="nofollow">https://turbopuffer.com/blog/turbopuffer</a>

The use case I had in mind is lots of mostly-cold SQLite databases (database-per-tenant, database-per-session, or database-per-user architectures) where keeping a separate attached volume for inactive databases feels wasteful. turbolite assumes a single write source and is aimed much more at "many databases with bursty cold reads" than "one hot database."

Instead of doing naive page-at-a-time reads from a raw SQLite file, turbolite introspects SQLite B-trees, stores related pages together in compressed page groups, and keeps a manifest that is the source of truth for where every page lives. Cache misses use seekable zstd frames and S3 range GETs, so fetching one needed page does not require downloading an entire object.

At query time, turbolite can also pass storage operations from the query plan down to the VFS to front-run downloads for indexes and large scans in the order they will be accessed.

You can tune how aggressively turbolite prefetches. For point queries and small joins, it can stay conservative and avoid prefetching whole tables. For scans, it can get much more aggressive.

It also groups pages by page type in S3. Interior B-tree pages are bundled separately and loaded eagerly. Index pages prefetch aggressively. Data pages are stored by table. The goal is to make cold point queries and joins decent, while making scans less awful than naive remote paging would be.

On a 1M-row / 1.5 GB benchmark on EC2 + S3 Express, I'm seeing results like sub-100ms cold point lookups, sub-200ms cold 5-join profile queries, and sub-600ms scans from an empty cache. It's somewhat slower on standard S3/Tigris.

Current limitations are pretty straightforward: it's single-writer only, and it is still very much a systems experiment rather than production infrastructure.

I'd love feedback from people who've worked on SQLite-over-network, storage engines, VFSes, or object-storage-backed databases. I'm especially interested in whether the B-tree-aware grouping / manifest / seekable-range-GET direction feels like the right one to keep pushing.
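The manifest idea reduces to a small translation step. The keys, offsets, and `range_request_for_page` helper below are hypothetical (the real turbolite is Rust and the frames are seekable zstd), but the mapping from a SQLite page read to an S3 key plus an HTTP Range header looks roughly like this:

```python
# Hypothetical manifest: page number -> (S3 object key, byte offset, length)
# of the compressed page group that contains the page.
MANIFEST = {
    1: ("db/groups/interior-0", 0, 4096),
    7: ("db/groups/table-users-0", 0, 16384),
    8: ("db/groups/table-users-0", 16384, 16384),
}

def range_request_for_page(page_no):
    """Translate one SQLite page read into an S3 key plus a Range header,
    so fetching a single needed page never pulls the whole object."""
    key, offset, length = MANIFEST[page_no]
    # HTTP Range is inclusive on both ends (RFC 9110).
    return key, {"Range": f"bytes={offset}-{offset + length - 1}"}

key, headers = range_request_for_page(8)
# → ('db/groups/table-users-0', {'Range': 'bytes=16384-32767'})
```

The interesting part is that the manifest, not the SQLite file layout, is the source of truth, which is what lets related pages be regrouped and compressed however S3 access patterns prefer.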

Show HN: AI Roundtable – Let 200 models debate your question

Hey HN! After the Car Wash Test post got quite a big discussion going (400+ comments: <a href="https://news.ycombinator.com/item?id=47128138">https://news.ycombinator.com/item?id=47128138</a>), I spent the past few weeks building a tool so anyone can run these kinds of questions and get structured results. No signup, and free to use.

You type a question, define answer options, pick up to 50 models at a time from a pool of 200+, and they all answer independently under identical conditions: no system prompt, structured output, the same setup for every model.

You can also run a debate round where models see each other's reasoning and get a chance to change their minds. A reviewer model then summarizes the full transcript. All models are routed via my startup, Opper.

Hope you enjoy it, and I'd love to hear what you think!
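The independent-poll step boils down to collecting one structured answer per model and tallying the distribution. A toy sketch with mocked responses (the real tool fans out to models via Opper; `tally` is an illustrative helper, not its API):

```python
from collections import Counter

def tally(answers):
    """Aggregate independent model answers into a vote distribution,
    most popular option first."""
    counts = Counter(answers)
    total = len(answers)
    return {option: count / total for option, count in counts.most_common()}

# Mocked answers standing in for real model calls, which in the actual
# tool are made with no system prompt and identical settings per model.
round_one = ["yes", "yes", "no", "yes", "unsure"]
print(tally(round_one))  # {'yes': 0.6, 'no': 0.2, 'unsure': 0.2}
```

A debate round would re-run the same poll after showing each model the round-one transcript, then diff the two distributions to see who changed their mind.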

Show HN: ProofShot – Give AI coding agents eyes to verify the UI they build

I use AI agents to build UI features daily. The thing that kept annoying me: the agent writes code but never sees what it actually looks like in the browser. It can't tell if the layout is broken or if the console is throwing errors.

So I built a CLI that lets the agent open a browser, interact with the page, record what happens, and collect any errors. Then it bundles everything (video, screenshots, logs) into a self-contained HTML file I can review in seconds.

    proofshot start --run "npm run dev" --port 3000
    # agent navigates, clicks, takes screenshots
    proofshot stop

It works with whatever agent you use (Claude Code, Cursor, Codex, etc.); it's just shell commands. It's packaged as a skill, so your AI coding agent knows exactly how it works. It's built on agent-browser from Vercel Labs, which is far better and faster than Playwright MCP.

It's not a testing framework. The agent doesn't decide pass/fail. It just gives me the evidence so I don't have to open the browser myself every time.

Open source and completely free.

Website: <a href="https://proofshot.argil.io/" rel="nofollow">https://proofshot.argil.io/</a>
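Self-contained HTML reports of this kind are typically built by inlining every asset as a base64 data URI. A rough sketch of that packaging idea (not ProofShot's actual report format; `bundle_report` is a hypothetical helper):

```python
import base64
import html

def bundle_report(screenshots, console_logs):
    """Pack screenshots (name -> PNG bytes) and console log lines into one
    self-contained HTML string via base64 data URIs, so the report can be
    opened anywhere with no external files."""
    parts = ["<html><body><h1>UI evidence</h1>"]
    for name, png in screenshots.items():
        b64 = base64.b64encode(png).decode()
        parts.append(f"<h2>{html.escape(name)}</h2>")
        parts.append(f'<img src="data:image/png;base64,{b64}">')
    parts.append("<h2>Console</h2><pre>")
    parts.extend(html.escape(line) for line in console_logs)
    parts.append("</pre></body></html>")
    return "\n".join(parts)

report = bundle_report({"home": b"\x89PNG..."}, ["TypeError: x is undefined"])
```

The trade-off is file size (base64 inflates binaries by ~33%), which is the price of a report that survives being emailed or attached to a PR as a single artifact.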
