The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: Making a cross-platform game in Go using WebRTC Datachannels

Show HN: Robot MCP Server – Connect Any Language Model and ROS Robots Using MCP

We've open-sourced the Robot MCP Server, a tool that lets large language models (LLMs) talk directly to robots running ROS1 or ROS2.

What it does:
- Connects any LLM to existing ROS robots via the Model Context Protocol (MCP)
- Natural language → ROS topics, services, and actions, with the ability to read any of them back
- Works without changing robot source code

Why it matters:
- Makes robots accessible from natural language interfaces
- Opens the door to rapid prototyping of AI-robot applications
- We are trying to create a common interface for safe AI ↔ robot communication

This is too big to develop alone; we'd love feedback, contributors, and partners from both the robotics and AI communities.
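
[Editor's note: the post includes no code, so here is a minimal sketch of the LLM-to-ROS pattern it describes, assuming the official `mcp` Python SDK (FastMCP) and ROS1's `rospy`. The server name, tool, and `/cmd_vel` topic are illustrative placeholders, not taken from the actual Robot MCP Server.]

    # Hypothetical sketch: expose a ROS topic publisher as an MCP tool that
    # an LLM client can call. Names here are illustrative only.
    import rospy
    from geometry_msgs.msg import Twist
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("robot-mcp")
    rospy.init_node("mcp_bridge", anonymous=True)
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)

    @mcp.tool()
    def drive(linear_x: float, angular_z: float) -> str:
        """Publish a velocity command to the robot's /cmd_vel topic."""
        msg = Twist()
        msg.linear.x = linear_x
        msg.angular.z = angular_z
        pub.publish(msg)
        return f"published linear={linear_x} angular={angular_z}"

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio to an MCP-capable LLM client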

Show HN: CrabCamera – Cross-platform camera plugin for Tauri desktop apps

After building several Tauri desktop apps, I kept hitting the same wall: there's no reliable way to access cameras across Windows, macOS, and Linux. Every project meant reinventing camera integration, dealing with platform-specific APIs, and debugging permission issues.

So I built CrabCamera, a Tauri plugin that handles all the camera complexity for you.

What it does:
- One API, three platforms: the same Rust code works on Windows (DirectShow), macOS (AVFoundation), and Linux (V4L2)
- Permission handling: automatically requests camera permissions on each platform
- Format conversion: takes care of the messy bits between platform formats and what your app needs
- Error handling: proper Rust error types instead of mysterious crashes
- Hot-plugging: detects when cameras are connected/disconnected

The problem it solves: before CrabCamera, adding camera support to a Tauri app meant:
1. Writing separate native code for each platform
2. Managing three different permission systems
3. Handling format conversions manually
4. Debugging platform-specific edge cases
5. Maintaining it all as OS APIs change

Now it's just:

    use crabcamera::Camera;

    let camera = Camera::new()?;
    let frame = camera.capture_frame().await?;

Why I built it: I was working on a plant monitoring app (botanica) that needed reliable camera access for time-lapse photography. Existing solutions were either abandoned, platform-specific, or required complex native bindings. The Tauri ecosystem is growing fast, but camera support was this obvious gap. Every desktop app eventually needs camera access: video calls, document scanning, AR features, security monitoring.

Technical highlights:
- Uses nokhwa for the heavy lifting but wraps it in Tauri-friendly APIs
- Proper async/await support throughout
- Memory-efficient streaming for video capture
- Built-in image processing pipeline
- Extensible plugin architecture

What's next:
- WebRTC integration for video calls
- Built-in barcode/QR code scanning
- Face detection hooks
- Performance optimizations for 4K streams

The crate is MIT licensed and available on crates.io. I'd love feedback from other Tauri developers who've wrestled with camera integration.

Links:
- Crates.io: https://crates.io/crates/crabcamera
- GitHub: https://github.com/Michael-A-Kuykendall/crabcamera
- Documentation: https://docs.rs/crabcamera

Show HN: HumanAlarm – Real people knock on your door to wake you up

I built HumanAlarm because I'm a heavy sleeper who's missed important things despite multiple phone alarms.

It's exactly what it sounds like: you book a wake-up time, we send someone to knock on your door for 2 minutes. If you don't answer, they wait 3-5 minutes and knock again. Simple as that.

We're live in select cities.

Would love feedback on the concept and execution!

Show HN: Small Transfers – charge from 0.000001 USD per request for your SaaS

Show HN: Haystack – Review pull requests like you wrote them yourself

Hi HN! We're Akshay and Jake. We put together a tool called Haystack to make pull requests straightforward to read.

What Haystack does:

- Builds a clear narrative. Changes in Haystack aren't just arranged as unordered diffs. Instead, they unfold in a logical order, each paired with an explanation in plain, precise language.
- Focuses attention where it counts. Routine plumbing and refactors are put into skimmable sections so you can spend your time on design and correctness.
- Provides full cross-file context. Every new or changed function/variable is traced across the codebase, showing how it's used beyond the immediate diff.

Here's a quick demo: https://youtu.be/w5Lq5wBUS-I

If you'd like to give it a spin, head over to haystackeditor.com/review! We set up some demo PRs that you should be able to understand and review even if you've never seen the repos before!

We used to work at big companies, where reviewing non-trivial pull requests felt like reading a book with its pages out of order. We would jump and scroll between files, trying to piece together the author's intent before we could even start reviewing. And, as authors, we would spend time restructuring our own commits just to make them readable. AI has made this even trickier: today it's not uncommon for a pull request to contain code the author doesn't fully understand themselves!

So, we built Haystack to help reviewers spend less time untangling code and more time giving meaningful feedback. We would love to hear whether it gets the job done for you!

How we got here:

Haystack began as (yet another) VS Code fork where we experimented with visualizing code changes on a canvas. At first, it was a neat way to show how pieces of code worked together. But customers started laying out their entire codebase just to make sense of it. That's when we realized the deeper problem: understanding a codebase is hard, and engineers need better ways to quickly understand unfamiliar code.

As we kept building, another insight emerged: with AI woven into workflows, engineers don't always need to master every corner of a codebase to ship features. But in code review, deep and continuous context still matters, especially to separate what's important to review from plumbing and follow-on changes.

So we pivoted. We took what we'd learned and worked closely with engineers to refine the idea. We started with simple code analysis (using language servers, tree-sitter, etc.) to show how changes relate. Then we added AI to explain and organize those changes and to trace how data moves through a pull request. Finally, we fused the two by empowering AI agents to use static analyses. Step by step, that became the Haystack we're showing today.

We'd love to hear your thoughts, feedback, or suggestions!
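
[Editor's note: the post mentions tree-sitter as the starting point for its code analysis. Here is a toy sketch of that kind of first step, listing the functions defined in a changed file, assuming the `py-tree-sitter` bindings and the `tree-sitter-python` grammar package. This is illustrative only, not Haystack's code.]

    # Toy sketch: enumerate function definitions in a source file with
    # tree-sitter, a building block for relating changes across a PR.
    import tree_sitter_python
    from tree_sitter import Language, Parser

    parser = Parser(Language(tree_sitter_python.language()))

    def defined_functions(source: bytes) -> list[str]:
        """Return the names of all function definitions in the source."""
        tree = parser.parse(source)
        names = []
        stack = [tree.root_node]
        while stack:
            node = stack.pop()
            if node.type == "function_definition":
                name = node.child_by_field_name("name")
                if name is not None:
                    names.append(name.text.decode())
            stack.extend(node.children)
        return names

    print(defined_functions(b"def foo():\n    pass\n\ndef bar():\n    pass\n"))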

Show HN: TailGuard – Bridge your WireGuard router into Tailscale via a container

My elderly parents are behind a 5G connection in a rural area, and I help them manage their network from overseas. I found a reasonably priced 5G router that supports the external antennas required for it to work, but the only reasonable ways to get access to it are OpenVPN or WireGuard, the latter of which is much more lightweight and preferable given the memory constraints of the device.

The problem with WireGuard is that it requires handling key management oneself and configuring the keys on every device you want to access it from. It also doesn't play nicely with other VPNs, meaning I ended up connecting and disconnecting VPNs whenever I wanted to use them. This is especially evident on my phone, which only allows one VPN app at a time.

I was already using Tailscale as an easy way to handle homelab access with SSO, even if some computers are behind ISP CGNAT, and came up with the idea of spinning up a Docker container to connect the two. I found some suggestions for it online, but nothing ready to use. It ended up being more work than I expected to fine-tune the routing, IPv6, firewall settings, re-resolving the router's DNS name on IP address changes, etc.

I got it very stable eventually though, and wanted to share it with everyone else. I think it's cool to have the WireGuard router looking like any other Tailscale node in my tailnet now.
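
[Editor's note: the DNS re-resolution point deserves a sketch. A WireGuard interface resolves a peer's endpoint hostname only once, so something must watch for IP changes and repoint the peer. Below is the general pattern using the standard `wg` CLI; the interface name, peer key, and hostname are placeholders, and this is not TailGuard's exact code.]

    # Minimal sketch: periodically re-resolve the router's DNS name and
    # update the WireGuard peer endpoint if the IP changed.
    import socket
    import subprocess
    import time

    INTERFACE = "wg0"
    PEER_KEY = "ROUTER_PUBLIC_KEY_BASE64"   # placeholder
    HOSTNAME = "router.example.net"         # placeholder
    PORT = 51820

    last_ip = None
    while True:
        try:
            ip = socket.gethostbyname(HOSTNAME)
            if ip != last_ip:
                # `wg set <iface> peer <key> endpoint <host:port>` repoints
                # a peer without restarting the tunnel.
                subprocess.run(
                    ["wg", "set", INTERFACE, "peer", PEER_KEY,
                     "endpoint", f"{ip}:{PORT}"],
                    check=True,
                )
                last_ip = ip
        except (socket.gaierror, subprocess.CalledProcessError):
            pass  # transient failure; retry on the next tick
        time.sleep(60)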

Show HN: Superagents – connect spreadsheets to any database, API or MCP server

Hi HN, I'm Eoin, founder of Sourcetable (https://sourcetable.com).

Today, we're launching Superagents. You can now connect your spreadsheet to any database, API or MCP server on the Internet. All of that data is available inside your spreadsheet, and you can use AI to analyze it and build models, reports and visualizations.

The reason I started the company is that I spent 10 years at startups across engineering and operations roles and realized that Excel and Sheets weren't architected for the modern information environment. This creates a tremendous amount of nuisance and busywork cobbling together SaaS tools, reporting suites, and the misery of endless coordination meetings to make it all happen. (Boo meetings!)

Spreadsheets aren't just a business application: they're the original thinking tool. The quality of these tools has a downstream impact on analytical thinking and creativity writ large, so this is a problem worth solving. Fast forward to today: we're a 6-person team taking on Excel, Sheets and ChatGPT, and we're excited to hear what you think!

Who are Superagents for? Analysts, operators, and anyone doing data-centric work in spreadsheets. We see a tonne of finance people, of course, but also students, researchers and mom & pop shops. Sourcetable's superagents democratize data access and analysis, which is nice because our company's mission is to make data accessible to everyone.

Why "Superagents"? Because they can plan and orchestrate other task-specific agents to complete your work for you. We have a lot of different AI tools and agents inside Sourcetable, but there's a whole lot more on the Agentic Web. Superagents are like the conductor that coordinates them all and calls on them when needed. Also, it's a fun feature name (thanks, Alyssa!)

If you remember the linked-data dream of the semantic web movement, that future is now: all of your business data is available and connected in Sourcetable.

How does it work? Sourcetable runs a Python virtual machine under the hood. Everything is sandboxed, and there are hundreds of AI tools and libraries our AI can access. Superagents also do code-gen on the fly to solve problems. The closest system we have found is Replit's sandboxed operating systems. Beyond that, Mixtral, ChatGPT and Anthropic offer some limited data connectivity features, except these AI chat services lack the storage, compute, and code execution that Sourcetable and Replit provide. This is all very new.

How is this different from your previous data connectors? We started out using ETL services to sync data and provide a GUI-driven, PowerBI-like experience in your spreadsheet. This was useful for people who knew SQL and how to write joins to combine fragmented data, but for everyone else (read: practically everyone), this solution just didn't provide the frictionless, self-serve experience that we wanted.

Our choices were to switch the GTM motion or change the product, so we shelved that reporting suite, focused on our AI spreadsheet, and waited for models to catch up with our ambitions. Now that they have, we're re-launching Sourcetable with our original goal in mind: building a spreadsheet-based operating system for the Agentic Web, with fully networked data access for everyone on your team.

AI is the great UX enabler.

Caveats:
- We heavily use Postgres, Google Analytics, Stripe and Google Search Console with Superagents.
- We haven't tested every endpoint on the Internet. We find that mainstream, well-documented applications work best.
- Yes, you can write data back to 3rd-party applications and databases. We generally advise against this unless you understand the risks involved in giving AI write-access to your data.

Bonus round:
- All data connectors added during this launch week are FREE. (Regular AI messaging limits still apply.)

Product feedback? eoin@sourcetable.com
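
[Editor's note: to make the "sandboxed Python plus on-the-fly code-gen" idea concrete, here is a deliberately simple stand-in for the pattern. Sourcetable's actual VM is not public, and real sandboxing needs far stronger isolation (containers, seccomp, resource limits) than this subprocess sketch provides.]

    # Illustrative only: run model-generated Python in a separate
    # interpreter process with a timeout, and capture its output.
    import subprocess
    import sys
    import tempfile

    def run_generated_code(code: str, timeout_s: float = 5.0) -> str:
        """Execute untrusted generated Python in a fresh interpreter."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, ignores env/site
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.stdout if result.returncode == 0 else result.stderr

    # e.g. code an LLM might produce from "sum the revenue column":
    print(run_generated_code("print(sum([120, 340, 99]))"))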

Show HN: An Open Source XR (AR/VR) Operating System

We're two college students building an XR (AR/VR) native operating system with a custom kernel. We're also open source, so feel free to check out our GitHub repository: https://github.com/manaskamal/XenevaOS

The journey hasn't exactly been easy; we've been criticized by a lot of people saying that what we're doing is impractical and that we're too ambitious. Regardless, we've stayed committed to reaching our goal.

We're here to answer all questions and doubts. Answering one question beforehand, because we know someone is going to ask it:

Q: Why use your own kernel? Why don't you use Linux? Why are you trying to reinvent the wheel?

A: Using our own kernel lets us shed the baggage of legacy code, deliver optimal performance on our target hardware (XR/AR/VR), and achieve more efficiency than we would have on an existing kernel.

We're not trying to reinvent the wheel, just building Formula One racing tyres for it.

Show HN: Term.everything – Run any GUI app in the terminal

I made a built-from-scratch Wayland compositor to display any GUI app* in the terminal! I think there is a lot of unexplored potential in custom Wayland compositors: a lot of really cool things you can embed existing applications into! So I started with embedding apps into the terminal, because that is the easiest input/output (output is just UTF-8, and I use the great chafa library for that; for input I just read from stdin).

If you have any other ideas for cool Wayland compositors, let me know. I purposely wrote 80% of the app in TypeScript to appeal to the most developers and attract cool contributions (I do all drawing with the familiar Canvas2D API, so if there is interest, I can also fork this out into a cool terminal canvas; let me know!)

I have a blog post about how I did it, but it's pretty high-level and non-technical, so please ask if you have any questions.

How I Did It: https://github.com/mmulet/term.everything/blob/main/resources/HowIDidIt.md

*Technically only Wayland apps and X11 apps with Xwayland. But on Linux, that's mostly everything.
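
[Editor's note: the "output is just UTF-8" trick is easy to demo: sample the image down to the character grid and emit ANSI-colored half-blocks, which is roughly what chafa does far more capably. A minimal sketch, assuming Pillow is installed; "screenshot.png" is a placeholder input.]

    # Each character cell shows two pixels via the upper-half-block glyph:
    # foreground color = top pixel, background color = bottom pixel.
    from PIL import Image

    def render(path: str, cols: int = 80) -> None:
        img = Image.open(path).convert("RGB")
        # terminal cells are ~twice as tall as wide; keep aspect, even rows
        rows = max(2, int(img.height * cols / img.width / 2) * 2)
        img = img.resize((cols, rows))
        for y in range(0, rows, 2):
            line = []
            for x in range(cols):
                tr, tg, tb = img.getpixel((x, y))       # top pixel -> fg
                br, bg, bb = img.getpixel((x, y + 1))   # bottom pixel -> bg
                line.append(f"\x1b[38;2;{tr};{tg};{tb}m"
                            f"\x1b[48;2;{br};{bg};{bb}m\u2580")
            print("".join(line) + "\x1b[0m")            # reset at line end

    render("screenshot.png")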

Show HN: OpenCV over WebRTC (in Go)

Show HN: Vicinae – a native, Raycast-compatible launcher for Linux

Hi HN!

I've always been a fan of application launchers, and I was impressed by the approach the Raycast team took, especially their extension system. About six months ago I started building something similar for Linux, aiming to integrate deeply at the OS level and give extensions a lot of power.

Vicinae is written in C++ with Qt Widgets. I chose Widgets over QML for more imperative control of the UI, especially around extension handling. So far that's worked well; modern C++ is great.

To support my goals I built a number of custom widgets, including a fully virtualized list that can efficiently render tens of thousands of items. That gave me a lot of respect for Qt: it's a powerful framework that mostly stayed out of my way.

A key feature is support for Raycast extensions (React + TypeScript), most of which can be installed and used directly inside the launcher (though not all features are implemented yet). There's also a native API package (@vicinae/api) for writing Vicinae-specific extensions with additional capabilities. This required writing a custom React reconciler, which was surprisingly straightforward, though still unpolished.

Like Raycast, Vicinae ships with powerful built-in modules, but the goal isn't to make a clone. I want it to grow into its own project that fits the FOSS model better, while staying compatible with the Raycast ecosystem. I also plan to bring it to other OSes eventually.

I'd love feedback on the technical approach, and suggestions for what would make this useful to you. Contributions are very welcome; I've already been pleasantly surprised by how quickly people started helping.

Docs: https://docs.vicinae.com
Repo: https://github.com/vicinaehq/vicinae
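
[Editor's note: for readers curious what "fully virtualized" means here, the general trick is to materialize only the rows intersecting the viewport plus a small overscan margin, so cost is O(visible) rather than O(total). Sketched below in Python for brevity; Vicinae's implementation is C++/Qt and is not shown in the post.]

    # General idea behind a virtualized list (illustrative, not Vicinae's
    # code): from the scroll offset, compute which rows to instantiate.
    def visible_range(scroll_px: int, viewport_px: int, row_px: int,
                      total_rows: int, overscan: int = 3) -> range:
        first = max(0, scroll_px // row_px - overscan)
        last = min(total_rows,
                   (scroll_px + viewport_px + row_px - 1) // row_px + overscan)
        return range(first, last)

    # 10,000 rows, but only ~20 widgets ever exist at a time:
    rows = [f"item {i}" for i in range(10_000)]
    for i in visible_range(scroll_px=42_000, viewport_px=600, row_px=40,
                           total_rows=len(rows)):
        pass  # create/update the widget for rows[i]; recycle the rest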
