The best Show HN stories on Hacker News from the past day

Latest posts:

Show HN: OSS implementation of Test Time Diffusion that runs on a 24 GB GPU

Show HN: VoxConvo – "X but it's only voice messages"

Hi HN,

I saw this tweet: "Hear me out: X but it's only voice messages (with AI transcriptions)" - and couldn't stop thinking about it. So I built VoxConvo.

Why this exists: AI-generated content is drowning social media. ChatGPT replies, bot threads, AI slop everywhere. When you hear someone's actual voice - their tone, hesitation, excitement - you know it's real. That authenticity is what we're losing. So I built a simple platform where voice is the ONLY option.

The experience: every post is voice + transcript with word-level timestamps. In read mode you scan the transcript like normal text; in listen mode you hit play and the words highlight in real time. You get the emotion of voice with the scannability of text.

Key features:
- Voice shorts
- Real-time transcription
- Visual voice editing: clicking a word in the transcript deletes that audio segment, so you can remove filler words, mistakes, and pauses
- Word-level timestamp sync
- No LLM content generation

Technical details. Backend running on a Mac Mini M1:
- TypeGraphQL + Apollo Server
- MongoDB + Atlas Search (community mongo + mongot)
- Redis pub/sub for GraphQL subscriptions
- Docker containerization, ready to scale

Transcription:
- VOSK real-time gigaspeech model (eats about 7 GB of RAM)
- WebSocket streaming for real-time partial results
- Word-level timestamp extraction plus a punctuation model

Storage:
- Audio files are stored on AWS S3
- Everything else is local

Why a Mac Mini for the MVP? Validation first, scaling later. The architecture is containerized and ready to migrate, but I'd rather prove demand on gigabit fiber than burn cloud budget.
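
As an illustration of the word-level editing idea (not the actual VoxConvo code, which streams audio over WebSocket and runs the gigaspeech model), here is a minimal sketch using VOSK's Python API on a 16 kHz mono WAV; the model directory is a placeholder:

    # Get per-word timestamps from VOSK, then "delete a word" by cutting
    # its sample range out of the WAV file.
    import json
    import wave
    from vosk import Model, KaldiRecognizer

    def transcribe_words(wav_path, model_dir="model"):
        wf = wave.open(wav_path, "rb")
        rec = KaldiRecognizer(Model(model_dir), wf.getframerate())
        rec.SetWords(True)  # ask VOSK for per-word start/end times
        words = []
        while True:
            data = wf.readframes(4000)
            if not data:
                break
            if rec.AcceptWaveform(data):
                words += json.loads(rec.Result()).get("result", [])
        words += json.loads(rec.FinalResult()).get("result", [])
        return words  # [{"word": ..., "start": ..., "end": ..., "conf": ...}, ...]

    def delete_word(wav_path, out_path, word):
        # Remove the audio span of the first occurrence of `word`.
        hit = next(w for w in transcribe_words(wav_path) if w["word"] == word)
        wf = wave.open(wav_path, "rb")
        frames = wf.readframes(wf.getnframes())
        frame_bytes = wf.getsampwidth() * wf.getnchannels()
        rate = wf.getframerate()
        a = int(hit["start"] * rate) * frame_bytes
        b = int(hit["end"] * rate) * frame_bytes
        out = wave.open(out_path, "wb")
        out.setparams(wf.getparams())
        out.writeframes(frames[:a] + frames[b:])
        out.close()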

Show HN: Ambient light sensor control of keyboard and screen brightness in Linux

I have always wanted cool features in Linux, since I use it day to day as my OS, and this is one I wanted to implement properly: a tool that automatically adjusts the keyboard and LCD backlights using data from the ambient light sensor.

I enjoy low-level programming a lot, so I wrote the program in C. It came out well and works seamlessly on my device. Currently it only controls the keyboard lights, but I designed it so that LCD support can be added seamlessly in the future.

In the real world, though, people have different kinds of devices, so I made sure to follow the kernel's iio implementation through sysfs. I would like feedback. :)
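
For a feel of the sysfs interface involved, here is a minimal sketch of the same idea in Python (the project itself is written in C; the paths below vary by machine and are examples only):

    # Read the ambient light sensor via the kernel's iio sysfs interface and
    # map it to a keyboard backlight level. Writing the LED brightness needs
    # root or a udev rule.
    import glob
    import time

    def read_lux():
        # Most ALS drivers expose in_illuminance_raw or in_illuminance_input.
        for name in ("in_illuminance_raw", "in_illuminance_input"):
            for path in glob.glob(f"/sys/bus/iio/devices/iio:device*/{name}"):
                with open(path) as f:
                    return float(f.read())
        raise FileNotFoundError("no ambient light sensor under /sys/bus/iio")

    def set_keyboard_backlight(level):
        [led] = glob.glob("/sys/class/leds/*::kbd_backlight")  # e.g. tpacpi::kbd_backlight
        with open(f"{led}/max_brightness") as f:
            max_level = int(f.read())
        with open(f"{led}/brightness", "w") as f:
            f.write(str(min(level, max_level)))

    while True:
        # Simple policy: backlight off in a bright room, on in a dark one.
        set_keyboard_backlight(0 if read_lux() > 50 else 2)
        time.sleep(5)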

Show HN: Three Emojis, a daily word puzzle for language learners

I'm in the process of learning German and wanted to play a German version of the NYT's Spelling Bee. It was awful; I was very bad at it, and it was not fun. So I built my own version of Spelling Bee meant for people like me.

Three Emojis is a daily word game designed for language learners. You get seven letters and a list of blanked-out words to find. When you discover shorter words, they automatically fill into longer ones - like a crossword - which turns out to be really useful for languages like German.

Each word also gets three emojis assigned to it as a clue, created by GPT-5 to try to capture the word's meaning (this works surprisingly well, most of the time). If you get stuck, you can get text and audio hints as well.

It supports German and English, with new puzzles every day. You can flag missing words or suggest additions directly in the game. The word lists include slang, abbreviations, and chat-speak, because those are, in my opinion, a big part of real language learning too (just nothing vulgar, too obscure, or obsolete).

Every word you find comes with its definition and pronunciation audio. If you want infinite hints or (coming soon) archive access, you can upgrade to Pro.

Feedback is very welcome; it's my first game and I'm certainly not a frontend guy. Happy spelling!
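
The "shorter words fill into longer ones" mechanic is easy to picture with a tiny sketch (an illustration only, not the game's actual code):

    # Reveal a found word inside longer, still-hidden answers.
    def reveal(found: str, words: dict[str, list[bool]]) -> None:
        """words maps each answer to a per-letter 'revealed' mask."""
        for answer, mask in words.items():
            start = answer.find(found)
            while start != -1:
                for i in range(start, start + len(found)):
                    mask[i] = True
                start = answer.find(found, start + 1)

    # With German compounds this pays off: finding HAUS also fills BAUMHAUS.
    puzzle = {"HAUS": [False] * 4, "BAUMHAUS": [False] * 8}
    reveal("HAUS", puzzle)
    print(puzzle["BAUMHAUS"])  # [False, False, False, False, True, True, True, True]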

Show HN: Dynamic code and feedback walkthroughs with your coding Agent in VSCode

I've been programming since I was 6 and I don't want to quit. Since agents came into existence I've been building more and more of my random ideas.

BUT, like many, I kept getting stuck and frustrated when I wanted to make changes with the agent that I knew I could have made without it, but I had no clue how things worked.

I created Intraview to help me build and maintain a mental model of what I was building (or had vibed) so I could use my knowledge to either fix it myself or provide more directed instruction. It grew into something that has transformed my workflow in a pleasant way.

Intraview is a VS Code extension that gives you:
- Dynamic code tours built by your existing agent
- Storage and sharing of tours (it's a file)
- Batch feedback/commenting inline in the IDE, in-tour and without (it's also a file)

Here's a video walkthrough for the show-vs-tell crowd, where I jump into a random open-source repo (Plotly JS) and build a tour to get started: https://www.youtube.com/watch?v=ROBvFlG6vtY

Talking tech design, this is very different from most tools because the whole app is cloudless. Not serverless: there are no external APIs (outside basic usage telemetry).
- Basic TypeScript app, JS/CSS/HTML
- Localhost MCP server inside VS Code (one per open workspace)

The three biggest challenges I faced were:
- Reconsidering the user experience given there's no database
- Building a reasonable experience for managing the MCP connection across so many different setups
- Testing the many forks, agents, and themes, because I wanted it to look native (I'll probably reverse course here in future iterations)

What I'm curious about is where you see the value:
- New project/developer onboarding
- PR reviews
- Keeping up with agentic code
- Perf reviews (for EMs): you could build a tour of the biggest contributions by a GitHub handle
- Planning alignment and review with your agent

You can see the extension page in VS Code with these custom links (note: these redirect and require permission to open VS Code; they won't actually install, that takes another click):
- For VS Code: https://intraview.ai/install?app=vscode
- For Cursor: https://intraview.ai/install?app=cursor

Once it's installed and you've confirmed MCP is connected to your local server, just ask your agent:
- "Create an Intraview of the onboarding for this app."
- "Let's use Intraview to gather my feedback on [whatever you created]. Break down the steps such that I can provide good granular feedback."

Looking forward to your feedback and discussion.

And because this is HN, a relevant quotable from PG:

"Your code is your understanding of the problem you're exploring. So it's only when you have your code in your head that you really understand the problem." — Paul Graham
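
As a concept sketch of the architecture described above - a localhost MCP server that lets the agent write tour steps into a plain file the editor can render - here is a minimal Python version. Intraview itself is a TypeScript VS Code extension, and the file format and tool name below are hypothetical:

    # Minimal MCP server with one tool that appends steps to a tour file.
    import json
    from pathlib import Path
    from mcp.server.fastmcp import FastMCP  # official MCP Python SDK

    mcp = FastMCP("tour-sketch")
    TOUR_FILE = Path(".tours/onboarding.json")

    @mcp.tool()
    def add_tour_step(title: str, file: str, line: int, note: str) -> str:
        """Append one step (file + line + explanation) to the tour file."""
        tour = json.loads(TOUR_FILE.read_text()) if TOUR_FILE.exists() else {"steps": []}
        tour["steps"].append({"title": title, "file": file, "line": line, "note": note})
        TOUR_FILE.parent.mkdir(parents=True, exist_ok=True)
        TOUR_FILE.write_text(json.dumps(tour, indent=2))
        return f"added step {len(tour['steps'])}: {title}"

    if __name__ == "__main__":
        mcp.run()  # stdio transport; the real extension runs one server per open workspace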

Show HN: Flutter_compositions: Vue-inspired reactive building blocks for Flutter

Show HN: TabPFN-2.5 – SOTA foundation model for tabular data

I am excited to announce the release of TabPFN-2.5, our tabular foundation model, which now scales to datasets of up to 50,000 samples and 2,000 features - a 5x increase over TabPFN v2, published in Nature earlier this year. TabPFN-2.5 delivers state-of-the-art predictions in one forward pass, without hyperparameter tuning, across classification and regression tasks.

What's new in 2.5: TabPFN-2.5 keeps the core approach of v2 - a pretrained transformer, trained on more than a hundred million synthetic datasets, that performs in-context learning and outputs a predictive distribution for the test data. It natively supports missing values, categorical features, text, and numerical features, and it is robust to outliers and uninformative features.

The major improvements:
- 5x scale increase: now handles 50,000 samples × 2,000 features (up from 10,000 × 500 in v2)
- SOTA performance: TabPFN-2.5 outperforms tuned tree-based methods and matches the performance of a complex ensemble (AutoGluon 1.4) tuned for 4 hours, which itself includes TabPFN v2. Tuning the model improves performance further, outperforming AutoGluon 1.4 on regression tasks.
- Rebuilt API: a new REST interface and Python SDK with dedicated fit and predict endpoints, making deployment and integration more developer-friendly
- A distillation engine that converts TabPFN-2.5 into a compact MLP or tree ensemble while preserving accuracy and offering low-latency inference

There are still some limitations. The model is designed for datasets up to 50K samples; it can handle larger datasets, but that hasn't been our focus with TabPFN-2.5. The distillation engine is not yet available through the API, only through licenses (though we do show its performance in the model report).

We're actively working on removing these limitations and intend to release newer models focused on context reasoning, causal inference, graph networks, larger data, and time series. TabPFN-2.5 is available via API and as a package on Hugging Face. Would love for you to try it and give us your feedback!

Model report: https://priorlabs.ai/technical-reports/tabpfn-2-5-model-report

Package: https://github.com/PriorLabs/TabPFN

Client: https://github.com/PriorLabs/tabpfn-client

Docs: https://docs.priorlabs.ai/quickstart
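
For a sense of the workflow, here is a quick-start sketch using the package's scikit-learn-style interface (see the docs link above for the exact 2.5 API and the hosted client; the dataset here is just a stand-in):

    # TabPFN as a drop-in scikit-learn estimator: one forward pass, no tuning.
    from sklearn.datasets import load_breast_cancer
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from tabpfn import TabPFNClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = TabPFNClassifier()
    clf.fit(X_train, y_train)   # "fit" mostly stores the training context for in-context learning
    print(accuracy_score(y_test, clf.predict(X_test)))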

Show HN: See chords as flags – Visual harmony of top composers on musescore

I designed a relative, piano-roll-based music notation. I use 12 colors arranged in a specific way to make visible the main effects and oppositions of Western tonal harmony. The tonic is always white, so a manual annotation/interpretation is required for each MIDI file.

All chords are flags of three to four colors. The minor mode is darker, the major mode is lighter. Colors are arranged in thirds.

I sorted the pieces from simple to complex harmony. I also wrote a bit of text to explain what you may see. There's also a corpus of structures: hyperlinks of tags that let you find similar patterns throughout my corpus of 3000+ popular pieces.

My method makes chord progressions memorizable and instantly visible in the scores. No preparation of Roman numeral analysis or chord symbol analysis is required. After a bit of training, the chords will stare right at you.

It's not synesthesia; it's a missing script for tonal music, one which makes harmonically identical things look the same (or similar).

I've also recorded lectures on my method in Russian (https://www.youtube.com/playlist?list=PLzQrZe3EemP5pVPYMwBJGtiejiN3qtCce). I'm sorry I haven't yet found time to re-record them in English.

I've also sketched a friendlier intro: https://vpavlenko.github.io/d/

Sorry, but this thing won't make any sense if you're color-blind.

It's open source: https://github.com/vpavlenko/rawl

Earlier context: https://news.ycombinator.com/item?id=39165596 (back then the colors were less logical, and there was no corpus of 3000+ annotated pieces yet)
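
The "relative to the tonic" idea can be sketched in a few lines. This is an illustration only: the project has its own palette, so the colors below are placeholders, and only "the tonic is white" is taken from the description above.

    # Map a pitch to a color by its pitch class relative to the tonic.
    PALETTE = {  # semitones above the tonic -> display color (placeholder choices)
        0: "white", 1: "rose", 2: "maroon", 3: "crimson", 4: "red", 5: "orange",
        6: "yellow", 7: "lime", 8: "green", 9: "cyan", 10: "blue", 11: "purple",
    }

    def color_of(midi_pitch: int, tonic_pitch_class: int) -> str:
        return PALETTE[(midi_pitch - tonic_pitch_class) % 12]

    # A C major triad in the key of C (tonic pitch class 0): the root is white,
    # and the same chord in any other key gets exactly the same three colors.
    print([color_of(p, 0) for p in (60, 64, 67)])  # ['white', 'red', 'lime']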

Show HN: qqqa – A fast, stateless LLM-powered assistant for your shell

I built qqqa as an open-source project because I was tired of bouncing between the shell and ChatGPT / the browser for rather simple commands. It comes with two binaries: qq and qa.

qq means "quick question" - it is read-only, perfect for the commands I always forget.

qa means "quick agent" - it is qq's sibling that can run things, but only after showing its plan and getting approval from the user.

It is built entirely around the Unix philosophy of focused tools and is stateless by default - pretty much the opposite of what most coding agents focus on.

Personally I've had the best experience using Groq + gpt-oss-20b, as it feels almost instant (up to 1k tokens/s according to Groq) - but any OpenAI-compatible API will do.

Curious if the HN crowd finds it useful - and of course, AMA.
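
The plan-then-approve pattern qa describes is roughly this, shown here as a Python sketch against an OpenAI-compatible endpoint (not qqqa's actual code; the base URL and model name are placeholders):

    # Ask the model for one shell command, show it, and only run it on approval.
    import subprocess
    from openai import OpenAI

    client = OpenAI(base_url="https://api.groq.com/openai/v1")  # API key read from the environment
    MODEL = "openai/gpt-oss-20b"

    def quick_agent(task: str) -> None:
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": "Reply with a single shell command and nothing else."},
                {"role": "user", "content": task},
            ],
        )
        command = resp.choices[0].message.content.strip()
        print(f"Plan: {command}")
        if input("Run it? [y/N] ").lower() == "y":  # nothing executes without approval
            subprocess.run(command, shell=True)

    quick_agent("list the five largest files in this directory")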
