The best Hacker News stories from Show from the past week

Latest posts:

Show HN: ESP32 RC Cars

This is a project I started that blends the fun of a split-screen multiplayer driving game with controlling real RC cars.

The cars can also be controlled via Bluetooth gamepads, and the project is meant to be easily hackable.

Show HN: Uscope, a new Linux debugger written from scratch

Hi! I've been building a debugger on my nights and weekends because it's fun, and I personally need a better debugger for my work. GDB and LLDB pain me greatly; we can and will do better!

As explained in the README, it's still very early days and it's not ready for use yet, but check back often because it's improving all the time!

Check out https://calabro.io/uscope for a more detailed explanation.

Thanks for taking a look!

Show HN: Audiocube – A 3D DAW for Spatial Audio

I’ve recently released my solo project Audiocube.

I wanted to make a 3D DAW, where spatial audio, physics, and virtual acoustics are all directly integrated into the engine.

This makes it easy to create music in 3D, and experiment with new techniques which aren’t possible in traditional DAWs and plugins.

I’d love to get any feedback on this software (Mac/Windows) to make it better.

You can download it for free through the website.

Thanks, Noah

Show HN: Meelo, self-hosted music server for collectors and music maniacs

I've been working on this alternative to Plex for almost 3 years now. Its main selling point is that it correctly handles multiple versions of albums and songs. As of today, it only has a web client.

It tries to be as flexible as possible, but still requires a bit of configuration (including regexes, though if metadata is embedded into the files, that can be skipped).

I just released v3.0, making videos first-class data and speeding up scanning and metadata matching.

Show HN: I Made an iOS Podcast Player with Racket

Show HN: Bagels – TUI expense tracker

Hi! I'm Jax, and I've been building this cool little terminal app for myself to track my expenses and budgets!

Other than challenging myself to learn Python, I built this mainly around the habit of budget tracking at the end of the day. (I tried tracking on the go, but the balance was always out of sync.) All data is stored in a single SQLite file, so you can export and process it all you want!

The app is built using the Textual framework for Python! Awesome framework which feels like I'm doing webdev haha.

You can check out some screenshots on GitHub: https://github.com/EnhancedJax/Bagels

Thanks!

Show HN: 3D printing giant things with a Python jigsaw generator

Show HN: DeepSeek My User Agent

Show HN: Using YOLO to Detect Office Chairs in 40M Hotel Photos

I used the YOLO object detection library from Ultralytics to scan over 40 million hotel photos and identify images with office chairs. This helped me create a map showing hotels suitable for remote work.

Map: https://www.tripoffice.com/maps

YOLO: https://www.ultralytics.com/yolo

The whole process was done on a home Mac without the use of any LLMs. It's based on traditional object detection technology.
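
A rough sketch of that filtering step, assuming the Ultralytics Python API with COCO-pretrained weights ("chair" is a standard COCO class; spotting office chairs specifically would presumably need a custom-trained model, which isn't shown here):

```python
def detections_contain(class_ids, names, target="chair"):
    """True if any detected class id maps to the target label."""
    return any(names[int(c)] == target for c in class_ids)

def photo_has_chair(path: str) -> bool:
    """Run YOLO on one photo; requires `pip install ultralytics`."""
    from ultralytics import YOLO  # lazy import keeps the helper above stdlib-only
    model = YOLO("yolo11n.pt")  # pretrained weights; the author's model choice is unknown
    result = model(path)[0]
    return detections_contain(result.boxes.cls, result.names)
```

Batching this over 40M images then reduces to mapping `photo_has_chair` across the photo set and keeping the hotels whose photos match.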

Show HN: Onit – Source-available ChatGPT Desktop with local mode, Claude, Gemini

Hey Hacker News, it’s Tim Lenardo and I’m launching v1 of Onit today!

Onit is ChatGPT Desktop, but with local mode and support for other model providers (Anthropic, Google AI, etc.). It's also like Cursor Chat, but everywhere on your computer, not just in your IDE!

Onit is open source! You can download a pre-built version from our website: www.getonit.ai

Or build directly from the source code: https://github.com/synth-inc/onit

We built this because we believe:

- Universal access: AI assistants should be accessible from anywhere on my computer, not just in the browser or in specific apps.
- Provider freedom: Consumers should have the choice between providers (Anthropic, OpenAI, etc.) and not be locked into a single one (ChatGPT Desktop only has OpenAI models).
- Local first: AI is more useful with access to your data. But that doesn't count for much if you have to upload personal files to an untrusted server. Onit will always provide options for local processing. No personal data leaves your computer without approval.
- Customizability: Onit is your assistant. You should be able to configure it to your liking.
- Extensibility: Onit should allow the community to build and share extensions, making it more useful for everyone.

The features for v1 include:

- Local mode: chat with any model running locally on Ollama! No internet connection required.
- Multi-provider support: top models from OpenAI, Anthropic, xAI, and Google AI.
- File upload: add images or files for context (bonus: drag & drop works too!).
- History: revisit prior chats through the history view or with a simple up/down arrow shortcut.
- Customizable shortcut: you pick your hotkey to launch the chat window (Command+Zero by default).

Anticipated questions:

What data are you collecting? Onit v1 does not have a server. Local requests are handled locally, and remote requests are sent to model providers directly from the client. We collect crash reports through Firebase and a single "chat sent" event through PostHog analytics. We don't store your prompts or responses.

How does Onit support local mode? To use local mode, run Ollama. You can get Ollama here: https://ollama.com/ Onit gets a list of your local models through Ollama’s API.

Which models do you support? For remote models, Onit v1 supports Anthropic, OpenAI, xAI, and Google AI. Default models include o1, o1-mini, GPT-4o, Claude 3.5 Sonnet, Claude 3.5 Haiku, Gemini 2.0, Grok 2, and Grok 2 Vision. For local mode, Onit supports any models you can run locally on Ollama!

What license is Onit under? We’re releasing v1 under a Creative Commons Non-Commercial license. We believe the transparency of open source is critical. We also want to make sure individuals can customize Onit to their needs (please submit PRs!). However, we don’t want people to sell the code as their own.

Where is the monetization? We’re not monetizing v1. In the future we may add paid premium features. Local chat will, of course, always remain free. If you disagree with a monetized feature, you can always build from source!

Why not Linux or Windows? Gotta start somewhere! If the reception is positive, we’ll work hard to add further support.

Who are we? We are Synth, Inc., a small team of developers in San Francisco building at the frontier of AI progress. Other projects include Checkbin (www.checkbin.dev) and Alias (deprecated - www.alias.inc).

We’d love to hear from you! Feel free to reach out at contact@getonit dot ai.

Future roadmap includes:

- Autocontext: automatically pull context from your computer, rather than having to repeatedly upload.
- Local RAG: let users index and create context from their files without uploading anything.
- Local typeahead: i.e., Cursor Tab, but for everywhere.
- Additional support: Linux/Windows, Mistral/DeepSeek, etc.
- (Maybe) bundle Ollama to avoid a double download.
- And lots more!
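
That model-listing step can be sketched like so (an illustration, not Onit's actual client code): Ollama's HTTP API exposes installed models at GET /api/tags, and the client just parses the JSON it returns.

```python
import json

def model_names(tags_json: str) -> list[str]:
    """Extract installed model names from Ollama's /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

# Against a running Ollama daemon (default port 11434):
#   import urllib.request
#   with urllib.request.urlopen("http://localhost:11434/api/tags") as r:
#       print(model_names(r.read().decode()))
```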

Show HN: Trolling SMS spammers with Ollama

I've been working on a side project to generate responses to spam with various funny LLM personas, such as a millennial gym bro and a 19th-century British gentleman. By request, I've made a write-up on my website, which has some humorous screenshots, and made the code available on GitHub for others to try out [0].

A brief outline of the system:

- An Android app listens for incoming SMS events and forwards them over MQTT to a server running Ollama, which generates responses.
- Conversations are whitelisted and manually assigned a persona. The LLM has access to the last N messages of the conversation for additional context.

[0]: https://github.com/evidlo/sms_llm

I'm aware that replying can encourage/allow the sender to send more spam. Hopefully reporting the numbers after the conversation is a reasonable compromise.

Show HN: Lightpanda, an open-source headless browser in Zig

We’re Francis and Pierre, and we're excited to share Lightpanda (https://lightpanda.io), an open-source headless browser we’ve been building for the past 2 years from scratch in Zig (not dependent on Chromium or Firefox). It’s a faster and lighter alternative for headless operations without any graphical rendering.

Why start over? We’ve worked a lot with headless Chrome at our previous company, scraping millions of web pages per day. While it’s powerful, it’s also heavy on CPU and memory usage. For scraping at scale, building AI agents, or automating websites, the overheads are high. So we asked ourselves: what if we built a browser that only did what’s absolutely necessary for headless automation?

Our browser is made of the following main components:

- an HTTP loader
- an HTML parser and DOM tree (based on the Netsurf libs)
- a JavaScript runtime (V8)
- partial Web APIs support (currently DOM and XHR/Fetch)
- and a CDP (Chrome DevTools Protocol) server to allow plug-and-play connection with existing scripts (Puppeteer, Playwright, etc.).

The main idea is to avoid any graphical rendering and just work with data manipulation, which in our experience covers a wide range of headless use cases (excluding some, like screenshot generation).

In our current test case, Lightpanda is roughly 10x faster than headless Chrome while using 10x less memory.

It's a work in progress; there are hundreds of Web APIs, and for now we just support some of them. It's a beta version, so expect most websites to fail or crash. The plan is to increase coverage over time.

We chose Zig for its seamless integration with C libs and its comptime feature, which allows us to generate bi-directional native-to-JS APIs (see our zig-js-runtime lib: https://github.com/lightpanda-io/zig-js-runtime). And of course for its performance :)

As a company, our business model is based on a managed cloud, browser as a service. Currently, this is primarily powered by Chrome, but as we integrate more Web APIs it will gradually transition to Lightpanda.

We would love to hear your thoughts and feedback. Where should we focus our efforts next to support your use cases?
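
Since Lightpanda speaks CDP, an existing Playwright script can in principle point at it instead of launching Chrome. A minimal sketch, assuming a CDP server listening on a hypothetical local endpoint (not a documented Lightpanda default) and `playwright` installed:

```python
# Connect an ordinary Playwright script to a CDP endpoint instead of
# launching Chrome; the address below is an assumption for illustration.
CDP_ENDPOINT = "ws://127.0.0.1:9222"

def fetch_title(url: str, endpoint: str = CDP_ENDPOINT) -> str:
    """Open a page over CDP and return its title (no graphical rendering)."""
    from playwright.sync_api import sync_playwright  # needs `pip install playwright`
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(endpoint)
        try:
            page = browser.new_page()
            page.goto(url)
            return page.title()
        finally:
            browser.close()
```

This is the "plug & play" promise of the CDP server: the automation script is unchanged; only the connection target moves from Chrome to Lightpanda.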

Show HN: Lightpanda, an open-source headless browser in Zig

We’re Francis and Pierre, and we're excited to share Lightpanda (<a href="https://lightpanda.io" rel="nofollow">https://lightpanda.io</a>), an open-source headless browser we’ve been building for the past 2 years from scratch in Zig (not dependent on Chromium or Firefox). It’s a faster and lighter alternative for headless operations without any graphical rendering.<p>Why start over? We’ve worked a lot with Chrome headless at our previous company, scraping millions of web pages per day. While it’s powerful, it’s also heavy on CPU and memory usage. For scraping at scale, building AI agents, or automating websites, the overheads are high. So we asked ourselves: what if we built a browser that only did what’s absolutely necessary for headless automation?<p>Our browser is made of the following main components:<p>- an HTTP loader<p>- an HTML parser and DOM tree (based on Netsurf libs)<p>- a Javascript runtime (v8)<p>- partial web APIs support (currently DOM and XHR/Fetch)<p>- and a CDP (Chrome Debug Protocol) server to allow plug & play connection with existing scripts (Puppeteer, Playwright, etc).<p>The main idea is to avoid any graphical rendering and just work with data manipulation, which in our experience covers a wide range of headless use cases (excluding some, like screenshot generation).<p>In our current test case Lightpanda is roughly 10x faster than Chrome headless while using 10x less memory.<p>It's a work in progress, there are hundreds of Web APIs, and for now we just support some of them. It's a beta version, so expect most websites to fail or crash. The plan is to increase coverage over time.<p>We chose Zig for its seamless integration with C libs and its <i>comptime</i> feature that allow us to generate bi-directional Native to JS APIs (see our zig-js-runtime lib <a href="https://github.com/lightpanda-io/zig-js-runtime">https://github.com/lightpanda-io/zig-js-runtime</a>). 
And of course for its performance :)<p>As a company, our business model is based on a Managed Cloud, browser as a service. Currently, this is primarily powered by Chrome, but as we integrate more web APIs it will gradually transition to Lightpanda.<p>We would love to hear your thoughts and feedback. Where should we focus our efforts next to support your use cases?

< 1 2 3 4 5 6 7 8 ... 134 135 136 >