The best Hacker News stories from Show from the past day

Latest posts:

Show HN: Geofenced chat communities anyone can create

Hi HN,

I built a location-based chatroom with Discord-like servers. This started as a portfolio project to learn WebSockets but has spiraled into something else entirely.

How it works:

There are two features right now.

Drops: single chatrooms that can only be seen within a specified radius and last for a user-chosen duration of up to 48 hours.

Hubs: geofenced servers modeled after Discord. These are not time-restricted. Anyone can create one, and the creator becomes the admin, able to add channels and set rules. When a user enters the location's area, they can join the hub and continue seeing messages even after leaving. Hubs cannot overlap, so once one exists in an area, another cannot be created on top of it. A hub persists as long as it is actively used; if unused for two weeks, it will be deleted. (Still implementing this deletion aspect, so it is not on the landing page at the moment.)

Why I built this:

I do not like the feel of most social media anymore, but I really like my university's Discord server. I wanted something more general that provided similar interactions, so I thought a more general social app tied to location might work.

I think if it is done right it can recreate the atmosphere that I liked. I thought a lot about what that atmosphere is. For social media to feel natural, it needs a "third thing": a shared interest or object that creates a connection between two people, or a neutral ground for communication.

Having something in common just makes the interactions better and more useful. I think location can serve as a general thing in common, especially if the servers are curated by locals. It could also be a good way for people to immediately connect in a new place.

Right now, I'm just having fun building this thing. I would honestly like to use it if other people were on there... and it were built better and packaged as an app.

Feedback:

I'm looking for any feedback: what's a good idea and what's a bad idea. This is really just a prototype, so there are some rough edges, and I am actively working on it. If you find any bugs and feel like communicating them, please do. You can reach me at nhowar@uwo.ca
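The post doesn't say how the radius check works, but a minimal sketch of how Drop visibility could be tested server-side, using the standard haversine great-circle distance and hypothetical field names (`lat`, `lon`, `radius_m`), might look like:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def drop_visible(user, drop):
    """A Drop is visible only to users inside its radius (hypothetical schema)."""
    dist = haversine_m(user["lat"], user["lon"], drop["lat"], drop["lon"])
    return dist <= drop["radius_m"]
```

At city scale this per-request check is cheap; a real deployment would more likely use a geospatial index (e.g. MongoDB or PostGIS) to avoid scanning every drop.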

Show HN: I built a self-hosted error tracker in Rails

This project is inspired by 37signals’ ONCE idea. I replicated the whole process and have already sold a few copies (the testimonials are real).

Show HN: Command-line YouTube downloader, a universal media solution for everyone

m2m is a purely command-line Bash application that lets you download any video or playlist from YouTube, Dailymotion, and pretty much anything else yt-dlp supports.

Show HN: I built an HTTP client that perfectly mimics Chrome 142

Built on BoringSSL and nghttp2. It matches JA3N, JA4, and JA4_R fingerprints, supports HTTP/2 and async/await, and works with Cloudflare-protected sites. Not trying to compete with curl_cffi; it's just a learning project that turned into something functional.
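For context on what "matching JA3N" means: JA3 concatenates the ClientHello's TLS version, cipher suites, extensions, curves, and point formats (decimal values, dash-separated within a field, commas between fields) and MD5-hashes the result; JA3N additionally sorts the extension list so Chrome's extension-order randomization doesn't change the hash. A minimal sketch, not taken from this project:

```python
import hashlib

def ja3n(version, ciphers, extensions, curves, point_formats):
    """Compute a JA3N fingerprint from ClientHello fields.
    Identical to JA3 except the extension list is sorted."""
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, sorted(extensions))),  # sorting is the "N" part
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()
```

The hard part of a Chrome-mimicking client is not this hash but emitting a ClientHello whose fields actually match Chrome's, which is why the project needs BoringSSL (Chrome's own TLS library) rather than stock OpenSSL.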

Show HN: Find matching acrylic paints for any HEX color

Show HN: OSS implementation of Test Time Diffusion that runs on a 24 GB GPU

Show HN: VoxConvo – "X but it's only voice messages"

Hi HN,

I saw this tweet: "Hear me out: X but it's only voice messages (with AI transcriptions)" and couldn't stop thinking about it. So I built VoxConvo.

Why this exists:

AI-generated content is drowning social media. ChatGPT replies, bot threads, AI slop everywhere. When you hear someone's actual voice (their tone, hesitation, excitement) you know it's real. That authenticity is what we're losing. So I built a simple platform where voice is the ONLY option.

The experience:

Every post is voice + transcript with word-level timestamps. In read mode you scan the transcript like normal text; in listen mode you hit play and words highlight in real time. You get the emotion of voice with the scannability of text.

Key features:

- Voice shorts
- Real-time transcription
- Visual voice editing: clicking a word in the transcript deletes that audio segment, to remove filler words, mistakes, and pauses
- Word-level timestamp sync
- No LLM content generation

Technical details:

Backend running on a Mac Mini M1:

- TypeGraphQL + Apollo Server
- MongoDB + Atlas Search (community mongo + mongot)
- Redis pub/sub for GraphQL subscriptions
- Docker containerization, ready to scale

Transcription:

- VOSK real-time gigaspeech model, which eats about 7 GB of RAM
- WebSocket streaming for real-time partial results
- Word-level timestamp extraction plus a punctuation model

Storage:

- Audio files are stored in AWS S3
- Everything else is local

Why a Mac Mini for the MVP? Validation first, scaling later. The architecture is containerized and ready to migrate, but I'd rather prove demand on gigabit fiber than burn cloud budget.
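The click-a-word-to-cut-it editing described above can be sketched roughly. This assumes VOSK-style word results (each word with start/end seconds) and treats the audio as a flat sample list, which is a simplification of real audio handling:

```python
def delete_word(words, idx, sample_rate, samples):
    """Remove word `idx`'s audio span and shift later word timestamps left.
    `words` is a list of {"text", "start", "end"} dicts with times in
    seconds, roughly the shape of VOSK word-level results."""
    w = words[idx]
    s0 = int(w["start"] * sample_rate)
    s1 = int(w["end"] * sample_rate)
    cut = w["end"] - w["start"]
    new_samples = samples[:s0] + samples[s1:]  # splice the segment out
    new_words = []
    for i, other in enumerate(words):
        if i == idx:
            continue
        o = dict(other)
        if o["start"] >= w["end"]:  # words after the cut shift earlier
            o["start"] -= cut
            o["end"] -= cut
        new_words.append(o)
    return new_words, new_samples
```

In practice the cut would land on encoded audio (so re-encoding or edit-list tricks are needed), and a small crossfade at the splice point avoids audible clicks; none of that is shown here.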

Show HN: Ambient light sensor control of keyboard and screen brightness in Linux

I have always wanted cool features in Linux because I use it day to day as my OS, and I have always wanted to implement this one properly: a program that automatically adjusts the keyboard and LCD backlights using data from the ambient light sensor.

I enjoy low-level programming a lot, so I delved into writing this program in C. It came out well and works seamlessly on my device. Currently it only supports keyboard lights, but I designed it so that LCD support can come in seamlessly in the future.

In the real world, people have different kinds of devices, so I made sure to follow the kernel's IIO implementation through sysfs. I would like feedback. :)
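For readers unfamiliar with the IIO sysfs ABI mentioned above: illuminance channels typically appear as `in_illuminance_raw` files under `/sys/bus/iio/devices`. A rough sketch (in Python rather than the project's C, with made-up lux thresholds) of locating the sensor and mapping a reading to a backlight level:

```python
import glob

def find_als():
    """Locate an ambient light sensor channel via the kernel IIO sysfs ABI."""
    matches = glob.glob("/sys/bus/iio/devices/iio:device*/in_illuminance_raw")
    return matches[0] if matches else None

def lux_to_brightness(lux, max_brightness, dark=5.0, bright=400.0):
    """Map a lux value onto [0, max_brightness], linearly between two
    hypothetical thresholds, clamped outside that range."""
    if lux <= dark:
        return 0
    if lux >= bright:
        return max_brightness
    return int((lux - dark) / (bright - dark) * max_brightness)
```

A daemon would poll (or use IIO buffered events), then write the result to the backlight device's `brightness` file, whose ceiling comes from the sibling `max_brightness` file.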

Show HN: Three Emojis, a daily word puzzle for language learners

I'm in the process of learning German and wanted to play a German version of the NYT’s Spelling Bee. It was awful, I was very bad at it, it was not fun. So I built my own version of Spelling Bee meant for people like me.

Three Emojis is a daily word game designed for language learners. You get seven letters and a list of blanked-out words to find. When you discover shorter words, they automatically fill into longer ones, like a crossword, which turns out to be really useful for languages like German.

Each word also gets three emojis assigned to it as a clue, created by GPT-5 to try and capture the word’s meaning (this works surprisingly well, most of the time). If you get stuck, you can get text/audio hints as well.

It supports German and English, with new puzzles every day. You can flag missing words or suggest additions directly in the game. The word lists include slang, abbreviations, and chat-speak, because those are, in my opinion, a big part of real language learning too (just nothing vulgar, too obscure or obsolete).

Every word you find comes with its definition and pronunciation audio. If you want infinite hints or (coming soon) archive access, you can upgrade to Pro.

Feedback is very welcome, it's my first game and I'm certainly not a frontend guy. Happy spelling!
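The crossword-style fill described above (a found word revealing its letters inside longer answers) can be sketched like this; the substring rule is an assumption about how the game actually works:

```python
def fill_in(found, answers, revealed):
    """Reveal `found` inside every other answer where it appears as a
    substring. `revealed` maps each answer to a per-letter boolean mask."""
    for ans in answers:
        if ans == found:
            continue
        i = ans.find(found)
        while i != -1:  # reveal every occurrence, not just the first
            for j in range(i, i + len(found)):
                revealed[ans][j] = True
            i = ans.find(found, i + 1)
    return revealed

def display(ans, mask):
    """Render an answer with unrevealed letters blanked out."""
    return "".join(c if m else "_" for c, m in zip(ans, mask))
```

German compounds make this pay off: finding a short word like "haus" immediately uncovers most of "haustier" (shown lowercase here for simplicity).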

Show HN: Dynamic code and feedback walkthroughs with your coding Agent in VSCode

I've been programming since I was 6 and I don't want to quit. Since Agents came into existence I've been increasingly building more of my random ideas.

BUT, like many, I kept getting stuck and frustrated when I wanted to make changes with the Agent that I *knew* I could've made without it, but I had no clue how things worked.

I created Intraview to help me build and maintain a mental model of what I was building (or had vibed) so I could use my knowledge to either fix it myself or provide more directed instruction. It grew into something that's transformed my workflow in a pleasant way.

Intraview is a VS Code extension that gives you:

- Dynamic code tours built by your existing Agent
- Storage and sharing of tours (it's a file)
- Batch feedback/commenting inline in the IDE, both in-tour and without (it's also a file)

Here's a video walkthrough for the show-vs-tell crowd, where I jump into a random open source repo (Plotly JS) and build a tour to get started: https://www.youtube.com/watch?v=ROBvFlG6vtY

Talking tech design, this is very different from most because the whole app is cloudless. Not serverless: there are no external APIs (outside basic usage telemetry).

- Basic TypeScript app, JS/CSS/HTML
- Localhost MCP server inside VS Code (one per open workspace)

Three of the biggest challenges I faced were:

- Reconsidering the user experience given there's no database
- Trying to build a reasonable experience for managing the MCP connection across so many different setups
- Testing the many forks, Agents, and themes, because I wanted to make it look native (I'll probably reverse course here in future iterations)

What I'm curious about is where you see the value:

- New project/developer onboarding
- PR reviews
- Keeping up with Agentic code
- Perf reviews (for EMs): you could build a tour of the biggest contributions by a GitHub handle
- Planning alignment and review with your Agent

You can see the extension page in VS Code with these custom links (note: these redirect and require permission to open VS Code; they won't actually install, that takes another click):

- For VS Code: https://intraview.ai/install?app=vscode
- For Cursor: https://intraview.ai/install?app=cursor

Once it's installed and you confirm MCP is connected to your local server, just ask your Agent:

- "Create an Intraview of the onboarding for this app."
- "Let's use Intraview to gather my feedback on [whatever you created]. Break down steps such that I can provide good granular feedback."

Looking forward to your feedback and discussion.

And because this is HN, a relevant quotable from PG:

"Your code is your understanding of the problem you’re exploring. So it’s only when you have your code in your head that you really understand the problem." — Paul Graham
