The best Show HN stories from Hacker News from the past day


Latest posts:

Show HN: Full Python GUI apps in the browser – no JavaScript, no server

I have been working on Dear ImGui Bundle since 2022, but this is the first time I've talked about it here. It is a framework around Dear ImGui for building interactive applications in Python and C++. It comes with batteries included: plotting, image inspection, Markdown, node editors, 3D gizmos, knobs, toggles, etc.

https://imgui-bundle.pages.dev

It now also runs smoothly in the browser via Pyodide: the playground below is a Python app running in your browser (no server, no JavaScript). You can edit the code on the left and click Run. It even works on mobile.

https://imgui-bundle.pages.dev/playground

I have a strong interest in providing tools that help others express their creativity. This project aims to be a step in that direction, as it helps you develop GUIs where the code is extremely readable and hackable.

Some of the goals it addresses:

- Bring true immediate-mode GUI to Python and C++
- A versatile range of high-quality libraries: widgets, plots, image analysis, node editing, Markdown rendering
- Multiplatform apps in C++: works on all platforms (desktop, mobile, Emscripten)
- Deploy Python apps to the web
- High-quality Python bindings that are always up to date (because they are auto-generated)
- A smooth transition between C++ and Python (same APIs for both)

I'd be happy to answer questions!
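For readers unfamiliar with the immediate-mode GUI paradigm mentioned above: the UI is re-declared from application state every frame, and input is handled inline, rather than building a persistent tree of widget objects. Here is a toy sketch of that pattern in plain Python (my own illustration of the concept, not imgui-bundle's actual API):

```python
# Toy illustration of the immediate-mode pattern: every frame, the UI
# is rebuilt from application state, and widget events are handled
# inline, in the same call that declares the widget.

def button(label, clicked_this_frame):
    # An immediate-mode "widget" is just a function call that reports
    # whether it was activated during this frame.
    return clicked_this_frame

def render_frame(state, clicked):
    widgets = [("text", f"Count: {state['count']}")]
    if button("Increment", clicked):
        state["count"] += 1          # state change handled inline
    widgets.append(("button", "Increment"))
    return widgets

state = {"count": 0}
render_frame(state, clicked=False)   # frame 1: nothing pressed
render_frame(state, clicked=True)    # frame 2: button pressed
print(state["count"])                # only the state persists between frames
```

The point of the pattern is that there are no widget objects to keep in sync with your data; the application state is the single source of truth, which is what makes the resulting code so readable and hackable.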

Show HN: Social Network for Corporate Cringe

I built a social network for making fun of corporate cringe. Post humblebrag content and react to it with blunt emotions.

Show HN: Stage CLI – An easier way of reading your AI generated changes locally

Hey HN! We're Charles and Dean. A few weeks ago we posted about Stage, a code review tool that guides you through reading a PR step by step: https://news.ycombinator.com/item?id=47796818

We got a lot of great feedback, but we also heard from many people that they wanted the chapters experience even before opening a PR… so we built the Stage CLI as the local, open-source version that anyone can try.

Here's a quick demo video: https://www.tella.tv/video/stage-cli-demo-f55q

It works with any coding agent of your choice. The skill instructs the agent to read your current branch's changes, break them down into separate logical chapters, and open them in a local browser.

We've found that reading changes this way is a lot easier than reading them in an IDE or in similar CLI tools, which present diffs in repository tree order. You can see a few examples of what it feels like here: https://stagereview.app/explore

Try it out and let us know what you think! We'd love to hear any feedback :)
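To make the "logical chapters vs. repository tree order" idea concrete, here is a rough sketch of the regrouping step (my own toy illustration, not Stage's actual implementation): split a unified diff into per-file chunks, then apply an ordering heuristic instead of alphabetical tree order.

```python
# Toy sketch: split a unified diff into per-file chunks, then let an
# ordering function (here, a hypothetical "implementation before
# tests" heuristic) pick the reading order.

def split_diff(diff_text):
    chapters, current = [], None
    for line in diff_text.splitlines():
        if line.startswith("diff --git"):
            current = {"file": line.split(" b/")[-1], "lines": []}
            chapters.append(current)
        elif current is not None:
            current["lines"].append(line)
    return chapters

def reading_order(chapters):
    # Read implementation files before their tests; a real tool would
    # use far richer signals (call graphs, the agent's own grouping).
    return sorted(chapters, key=lambda c: "test" in c["file"])

diff = """diff --git a/tests/test_api.py b/tests/test_api.py
+assert handler(1) == 2
diff --git a/api.py b/api.py
+def handler(x): return x + 1
"""
ordered = reading_order(split_diff(diff))
print([c["file"] for c in ordered])  # ['api.py', 'tests/test_api.py']
```

Tree order would show `api.py` and `tests/` wherever they fall alphabetically; a chapter ordering can instead present the change in the order it is easiest to understand.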

Show HN: Agent-skills-eval – Test whether Agent Skills improve outputs

Show HN: PHP-fts – Full-text search engine in pure PHP, no extensions

Show HN: TRUST – Coding Rust like it's 1989

Show HN: I built an open-source email builder, alternative to Beefree/Unlayer

Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem

Show HN: Hallucinopedia

Red Squares – GitHub outages as contributions

Show HN: I indexed 8,643 BSides talks across 227 chapters and 6 continents

Hi HN,

I'm Roland, and for the past few weeks I've been building AllBSides, a directory of every BSides conference talk uploaded to YouTube. As of today: 8,643 talks from 5,927 speakers across 227 chapters in 68 countries. The combined runtime is 280 days, and the transcripts come to about 60 million words.

The archive came together in stages:

1. Manually map every BSides chapter's YouTube channel
2. Pull every video and transcript from Supabase
3. Run each transcript through Haiku for tag extraction (tools, topics, difficulty, team, talk style, research method, and much more)
4. Run the results through Sonnet for categorization and dedup
5. Run a final verification pass through Opus
6. Do a manual verification: at one point the pipeline showed over 16k AI suggestions for manual review; today, most are resolved

Total LLM cost so far: about €200. The whole pipeline is rebuildable from scratch.

Each talk gets its own page with embedded video, full transcript, speakers, tags, and "related talks." Each tool/framework/protocol/standard mentioned across the corpus gets its own page (3,968 distinct technologies tracked).

Some interesting facts I gathered while building it:

(A) The site is currently 94% bot traffic. Of that, about 80,000 hits/month are AI training crawlers (ClaudeBot, GPTBot, meta-externalagent). Within 7 days of the talks archive going live, all major AI labs had ingested the entire corpus. The discovery cascade was startling to watch in real time.

(B) The taxonomy work was the hardest part. Distinguishing "tools" from "frameworks" from "protocols" from "concepts" sounds easy until you have 5,000 ambiguous extracted entities. The 3-tier LLM pipeline helped a lot: Haiku alone was too noisy, Opus alone was too expensive.

(C) Top tools mentioned: Wireshark (343), PowerShell (342), Metasploit (332), Burp Suite (322), GitHub (296), VirusTotal (273), Docker (253), Splunk (251), Nmap (247), MITRE ATT&CK (237). The list reflects what BSides talks actually discuss, not what vendors curate.

(D) May is the peak BSides month: 29 events, 17% of all events with dates.

(E) The top 1% of talks (86 videos by view count) account for 51% of all viewership. The other 99% are deeply niche, often the only video record of a specific technique.

The stack is intentionally lean: Go, SQLite, vanilla JavaScript, BunnyCDN. Static rendering at build time. No frameworks, no client-side state. The site costs about €50/month to run.

The data behind this post, and much more, can be found in the site footer under the "stats" link.

Happy to answer questions about the data pipeline, the taxonomy decisions, or what the AI crawler patterns looked like as the archive went live. Feedback on what to build next is genuinely welcome; I'm a solo dev figuring this out as I go.

— Roland (parkado)
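As a footnote on the dedup step described above: before any LLM pass can rule on ambiguous entities, spelling variants of the same tool have to be collapsed. Here is a simplified sketch of that normalization stage (my own toy illustration, not the AllBSides pipeline):

```python
# Toy sketch of entity dedup for extracted tool names: normalize
# spelling variants to a canonical key, then merge mention counts.
# A real pipeline also needs alias tables ("Burp" -> "Burp Suite")
# and an LLM pass for the genuinely ambiguous remainder.
from collections import defaultdict

def canonical(name):
    # Lowercase and drop punctuation/whitespace so that trivial
    # variants collapse to the same key.
    return "".join(ch for ch in name.lower() if ch.isalnum())

def merge_mentions(raw_counts):
    merged = defaultdict(int)
    display = {}
    for name, count in raw_counts:
        key = canonical(name)
        merged[key] += count
        display.setdefault(key, name)   # keep first-seen spelling
    return {display[k]: v for k, v in merged.items()}

raw = [("Burp Suite", 200), ("BurpSuite", 80), ("burp suite", 42),
       ("Wireshark", 343)]
print(merge_mentions(raw))  # {'Burp Suite': 322, 'Wireshark': 343}
```

String normalization alone handles the easy majority cheaply, which is what leaves a model like Haiku or Opus only the hard cases, keeping the LLM bill near the €200 mark rather than far above it.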

Show HN: Brainio – Markdown notepad that turns notes into visual mind maps

Show HN: I Built a Museum Exhibit

Show HN: I built a new word game, Wordtrak

Hi HN! Looking for feedback on this 1v1 and daily word-dueling game I've built over the last few months.

Play here: https://wordtrak.com/

Or on iOS here: https://apps.apple.com/us/app/wordtrak/id6760442363 (Android version soon!)

Show HN: Airbyte Agents – context for agents across multiple data sources

I'm Michel, co-founder and CEO of Airbyte (https://airbyte.com/). We've spent the last six years building data connectors. Today we're launching Airbyte Agents (https://docs.airbyte.com/ai-agents/), a unified data layer for agents to discover information and take action across operational systems.

Here's a quick walkthrough: https://www.youtube.com/watch?v=ZosDytyf1fg

As agents move into real workflows, they need access to more tools (e.g. Slack, Salesforce, Linear). That means a ton of API plumbing: authentication, pagination, filters, handling schemas, and matching entities across systems.

Most MCPs don't fix this. They're thin wrappers over APIs, so agents inherit their weak primitives and still get it wrong most of the time, especially when working across tools.

An even deeper issue is that APIs assume you already know what to query (think endpoints, object IDs, fields), whereas agents usually start one step earlier: they first need to discover what matters before they can even start reasoning.

So we built Airbyte Agents to be a context layer between your agents and all of your data. The core of this is something we call the Context Store: a data index optimized for agentic search, populated by our replication connectors. All that work on data connectors over the last six years comes in handy here!

This gives agents a structured way to discover data, while still allowing them to read from and write to the upstream system directly when needed.

What got us working on this was an insane trace from an agent we were migrating to our new SDK. It was supposed to answer "which customers are at risk of leaving this quarter?" The trace had 47 steps. Most were API calls. The agent first had to find a bunch of accounts, then map them to the right customers, then look for tickets, and so on. When the agent finally responded, the answer sounded OK but was wrong. Not only that, it was excruciatingly slow. So we had to do something about it.

That 47-step agent is one example of a question where Airbyte Agents does particularly well. Other examples:

- "Show me all enterprise deals closing this month with open support tickets."
- "Find every support ticket that doesn't have a GitHub issue opened."

Some of these might sound simple, but the quality of the answer changes dramatically when the agent doesn't have to assemble all that context at runtime.

Once we had an early version of the product, I spent a weekend building a benchmark harness to see if it worked. (Also for fun; I like writing benchmarks!) I compared calling the Airbyte Agents MCP vs. calling a bunch of vendor MCPs directly, testing retrieval and search.

For the sake of simplicity, I used token consumption as the unit of measure. I think that's a good proxy for how well agents are working: a failing agent (like the one that took 47 steps) will churn through lots of tokens while getting nowhere, while a successful one will get straight to the point.

Here's what I found when measuring: for Gong, it used up to 80% fewer tokens than their own MCP; for Zendesk, up to 90% fewer; for Linear, up to 75%; and for Salesforce, up to 16% (Salesforce's own SOQL does a good job here).

Of course there is the usual obvious bias: we are the builders of what we are benchmarking. So we made the test harness public: https://github.com/airbytehq/airbyte-agents-benchmarks. Feel free to poke at it, and please tell us what you find if you do!

It's still early and some parts are rough, but we wanted to share this with the community ASAP. We'd love to hear from people building agents:

- Are you indexing data ahead of time, or letting the agent call APIs live?
- How are you matching entities across systems?

We'd also love to hear any thoughts, comments, or ideas on how we could make this better, and whether there are obvious things we're missing. For now, we're excited to keep building!
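Token consumption as a success proxy is straightforward to instrument. Here is a minimal sketch of the measurement idea (my own illustration with a crude stand-in tokenizer; the real harness is in the linked repo):

```python
# Toy sketch of comparing two agent runs by token consumption.
# `count_tokens` is a crude stand-in; a real harness would use the
# model provider's tokenizer and actual agent traces.

def count_tokens(text):
    return len(text.split())  # whitespace proxy, illustration only

def tokens_for_run(steps):
    # A run is a list of (request, response) strings; the total is
    # everything the model had to read and produce across all steps.
    return sum(count_tokens(req) + count_tokens(resp)
               for req, resp in steps)

def savings(baseline_steps, candidate_steps):
    base = tokens_for_run(baseline_steps)
    cand = tokens_for_run(candidate_steps)
    return 1 - cand / base

# 47 meandering API calls vs. one targeted context-store lookup
baseline = [("list accounts page %d" % i, "id " * 50) for i in range(47)]
candidate = [("search context store for at-risk customers", "id " * 60)]
print(f"{savings(baseline, candidate):.0%} fewer tokens")
```

The proxy rewards agents that get to the answer in few, targeted steps, which is exactly the behavior a pre-built context index is supposed to enable.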
