The best Show HN stories from Hacker News from the past day


Latest posts:

Show HN: GPT image editing, but for 3D models

Hey HN!

I'm Zach, one of the co-founders of Adam (https://www.adamcad.com). We're building AI-powered tools for CAD and 3D modeling [1].

We've recently been exploring a new way to bring GPT-style image editing directly into 3D model generation, and we're excited to showcase this in our web app today. We're calling it creative mode, and we're intrigued by the fun use cases it could enable by making 3D generation more conversational.

For example, you can enter a prompt such as "an elephant", then follow it up with "have it ride a skateboard", and it preserves the context and identity of the previous model and maintains consistency with it. We believe this lends itself to an iterative design process when prototyping creative 3D assets or models for printing.

We're offering everyone 10 free generations to start (ramping up soon!). Here's a short video explaining how it works: https://www.loom.com/share/cf9ab91375374a4f93d6cc89619a043b

We'd also love you to try our parametric mode (free!), which uses LLMs to create a conversational interface for solid modeling, as touched on in a recent HN thread [2]. We leverage the code-generation capabilities of these models to generate OpenSCAD code (an open-source, script-based CAD language) and surface the variables as sliders the user can adjust to tweak their design. We hope this gives a glimpse of what it could be like to "vibe-CAD". We will soon be releasing our results on Will Patrick's Text to CAD eval [3] and adding B-rep-compatible export!

We'd love to hear what you think and where we should take this next :)

[1] https://x.com/zachdive/status/1882858765613228287

[2] https://news.ycombinator.com/item?id=43774990

[3] https://willpatrick.xyz/technology/2025/04/23/teaching-llms-how-to-solid-model.html
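
As a rough illustration of the parametric-mode idea described above (LLM-generated OpenSCAD with variables surfaced as sliders), here is a minimal TypeScript sketch; the parsing approach and function names are assumptions for illustration, not Adam's actual code:

```typescript
// Hypothetical sketch: given LLM-generated OpenSCAD source, pull out
// top-level numeric assignments so a UI can render them as sliders,
// then splice edited values back into the script.
interface SliderParam {
  name: string;
  value: number;
}

function extractParams(scad: string): SliderParam[] {
  // Matches lines like `height = 40;` at the start of a line.
  const re = /^([A-Za-z_]\w*)\s*=\s*(-?\d+(?:\.\d+)?)\s*;/gm;
  return [...scad.matchAll(re)].map(([, name, value]) => ({
    name,
    value: Number(value),
  }));
}

function applyParam(scad: string, name: string, value: number): string {
  // Rewrite the matching assignment with the slider's new value.
  const re = new RegExp(`^(${name}\\s*=\\s*)-?\\d+(?:\\.\\d+)?(\\s*;)`, "m");
  return scad.replace(re, `$1${value}$2`);
}

// Usage: a generated script exposes two tweakable variables.
const scad = `height = 40;\nradius = 12;\ncylinder(h = height, r = radius);`;
console.log(extractParams(scad)); // [{ name: "height", value: 40 }, { name: "radius", value: 12 }]
console.log(applyParam(scad, "height", 55)); // height = 55; ...
```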

Show HN: Controlling 3D models with voice and hand gestures

I'm sharing my project to control 3D models with voice commands and hand gestures:

- use voice commands to change the interaction mode (drag, rotate, scale, animate)
- use hand gestures to control the 3D model
- drag and drop to import other models (only glTF format supported for now)

Created using three.js, MediaPipe, the Web Speech API, Rosebud AI, and Quaternius 3D models.

GitHub repo: https://github.com/collidingScopes/3d-model-playground

Demo: https://xcancel.com/measure_plan/status/1929900748235550912

I'd love to get your feedback! Thank you
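
As a minimal sketch of how the voice-command half can work in a browser, here is the Web Speech API wired up to switch modes; the mode names match the post, but the wiring is illustrative and not taken from the repo:

```typescript
// Minimal voice-command sketch using the browser Web Speech API
// (Chrome exposes it as webkitSpeechRecognition).
type Mode = "drag" | "rotate" | "scale" | "animate";
let mode: Mode = "drag";

const SpeechRecognition =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.continuous = true; // keep listening between phrases

recognition.onresult = (event: any) => {
  // Take the transcript of the most recent result.
  const transcript: string =
    event.results[event.results.length - 1][0].transcript.toLowerCase();
  for (const m of ["drag", "rotate", "scale", "animate"] as const) {
    if (transcript.includes(m)) {
      mode = m; // hand-gesture handlers read this to decide what to do
      console.log(`mode -> ${mode}`);
    }
  }
};

recognition.start();
```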

Show HN: Localize React apps without rewriting code

Hi HN! We've just released an open-source React bundler plugin that makes apps multilingual at build time, without modifying the code.

React app localization typically requires implementing i18n frameworks, extracting text to JSON files, and wrapping components in translation tags - essentially rewriting your entire codebase before you can even start translating.

Our React bundler plugin eliminates this friction entirely. You add it to an existing React app, specify which languages you want, and it automatically makes your app multilingual without touching a single line of your component code.

Here's a video showing how it works: https://www.youtube.com/watch?v=sSo2ERxAvB4. The docs are at https://lingo.dev/en/compiler and sample apps at https://github.com/lingodotdev/lingo.dev/tree/main/demo.

Last year, a dev from our Twitter community told us: "I don't want to wrap every React component with `<T>` tags or extract strings to JSON. Can I just wrap the entire React app and make it multilingual?"

Our first reaction was "That's not how i18n works in React." But a couple of hours later, we found ourselves deep in a technical rabbit hole, wondering: what if that actually was possible?

That question led us to build the "localization compiler" - a middleware for React that plugs into the codebase, processes the Abstract Syntax Tree of the React code, deterministically locates translatable elements, feeds every context boundary into LLMs, and bakes the translations back into the build, making the UI multilingual in seconds.

Everything happens locally at build time, keeping the React project as the source of truth. No code modifications, no extraction, and no maintenance of separate translation files are needed; overrides are still possible via data-lingo-* attributes.

Building this was trickier than we expected. Beyond traversing React/JS abstract syntax trees, we had to solve some challenging problems. We wanted to deterministically group elements that should be translated together, so that, for example, a phrase wrapped in an `<a>` link tag wouldn't get mistranslated because it was processed in isolation. We also wanted to detect inline function calls and handle them gracefully during compile-time code generation.

For example, our localization compiler identifies this entire text block as a single translation unit, preserving the HTML structure and context for the LLM:

```
function WelcomeMessage() {
  return (
    <div>
      Welcome to <i>our platform</i>! <a href="/start">Get started</a> today.
    </div>
  );
}
```

The biggest challenge was making our compiler compatible with Hot Module Replacement. This allows developers to code in English while instantly seeing the UI in Spanish or Japanese, which is invaluable for catching layout issues caused by text that expands or contracts across languages.

For performance, we implemented aggressive caching that stores AST analysis results between runs and only reprocesses components that have changed. Incremental builds stay fast even on large codebases, since as a dev you only touch a limited number of components at any point in time, and we heavily parallelized the LLM calls.

This approach was technically possible before LLMs, but practically useless, since for precise translations you'd still need human translators familiar with the product domain. Now, with context-aware models, we can generate decent translations automatically.

We're excited to finally make this production-ready and share it with the HN community.

Run `npm i lingo.dev`, check out the docs at lingo.dev/compiler, try breaking it, and let us know what you think about this approach to React i18n!
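
To make the grouping idea concrete, here is an illustrative TypeScript sketch (not lingo.dev's actual internals) of serializing a JSX-like subtree into one tagged string, so the LLM sees the whole sentence as a single translation unit rather than isolated fragments:

```typescript
// A node is either raw text or an inline element wrapping more nodes.
type Node = string | { tag: string; children: Node[] };

// <div>Welcome to <i>our platform</i>! <a>Get started</a> today.</div>
const unit: Node[] = [
  "Welcome to ",
  { tag: "i", children: ["our platform"] },
  "! ",
  { tag: "a", children: ["Get started"] },
  " today.",
];

// Flatten the subtree into one string with placeholder tags, so the
// cross-element context ("Get started" belongs to this sentence) survives.
function serialize(nodes: Node[]): string {
  return nodes
    .map((n) =>
      typeof n === "string" ? n : `<${n.tag}>${serialize(n.children)}</${n.tag}>`
    )
    .join("");
}

// Prints: Welcome to <i>our platform</i>! <a>Get started</a> today.
console.log(serialize(unit));
```

The translated string keeps the same placeholder tags, so the compiler can map each translated span back onto the original elements.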

Show HN: Ephe – A minimalist open-source Markdown paper for today

Hi HN,

I built Ephe, an open-source Markdown paper for daily todos and thoughts.

No sign-up, no ads, no subscriptions, no AI.

## Why I made this

We have plenty of Markdown editors, and too many overwhelming to-do apps, but few tools combine the two in a way that's lightweight and focused. I figured all I need is a single page to organize today, so I built Ephe.

It uses CodeMirror v6, React (v19, with the React Compiler), and Vite with Rolldown.

## What makes it different

"Ephe" comes from "ephemeral". The main goal is to organize what you need to do today. It isn't for teams; it's a quiet space for your own priorities.

Give it a spin if that sounds useful to you.

Show HN: AirAP AirPlay server – AirPlay to an iOS Device

I made AirAP because I wanted a simple way to play sound from my Mac Mini when my speaker broke. But it's got a ton of other uses too, like testing how audio sounds on different devices, or repurposing old wired speakers.

This was incredibly fun to make - can't wait for you all to see it!

Show HN: I wrote a Java decompiler in pure C language

Show HN: Fast Random Library for C++17

Morning HN.

Random number generation is a somewhat underrepresented topic in the C++ realm. There is a lot of questionable info about it online, and even the standard library is quite behind the times in terms of its algorithms. It suffers from trying to accommodate sometimes impractical standard requirements and has several ways of producing significantly bad statistical results. This leaves a lot of easily achievable performance and quality on the table.

So, being a mathematician who mostly works with stochastic models and wants those models to run fast and well, I embarked on a journey of trying to summarize "what is good and what is bad" and implement the "best stuff there currently is".

Thankfully, the design of C++ <random> is quite flexible and easy to extend. With some cleanup, generalization, and compile-time logic, all the different algorithms can be wrapped in a generic, standard-compatible API.

The result of this work is a single-header RNG library which has:

- <random>-compatible generators (PRNGs) with 3x-6x better performance
- Cryptographically secure generators (CSPRNGs)
- Faster uniform / normal distributions that produce the same sequences on every platform
- Quick approximations of some non-linear distributions
- More reliable entropy sources than std::random_device
- A rand()-like API for when we just want random numbers without the boilerplate of a proper <random> setup

Effectively, all of this gets us 2x-8x speedups on many workloads while producing even better statistical quality.

I don't think there is anything else like it, so I would like to showcase the result here and hear some opinions on how to improve it:

https://github.com/DmitriBogdanov/UTL/blob/master/docs/module_random.md

For those interested, there is a more detailed rundown of all the quirks of this topic at the end of the docs; it might prove an interesting read.
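
For a concrete feel of the kind of small-state generator such a library favors over the standard mt19937, here is a language-agnostic sketch in TypeScript (the library itself is C++; this is the well-known sfc32 generator, not code from the repo), including the usual divide-by-2^32 uniform float and a rand()-like wrapper:

```typescript
// sfc32: a fast PRNG with 128 bits of state, here returning floats in [0, 1).
function sfc32(a: number, b: number, c: number, d: number): () => number {
  return () => {
    a |= 0; b |= 0; c |= 0; d |= 0; // keep state in 32-bit integer lanes
    const t = (((a + b) | 0) + d) | 0;
    d = (d + 1) | 0;                // counter guarantees a long period
    a = b ^ (b >>> 9);
    b = (c + (c << 3)) | 0;
    c = (c << 21) | (c >>> 11);     // 32-bit rotate
    c = (c + t) | 0;
    return (t >>> 0) / 4294967296;  // uniform float in [0, 1)
  };
}

// rand()-like convenience in the spirit the post describes:
// one pre-seeded global generator, no setup boilerplate.
const rand = sfc32(0x9e3779b9, 0x243f6a88, 0xb7e15162, 0x0badf00d);
console.log(rand(), rand()); // two floats in [0, 1)
```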

Show HN: Penny-1.7B Irish Penny Journal style transfer

Yesterday, in the bygone hour of the weekend, I undertook a most singular and fascinating endeavor, wherein I delved deep into the recesses of my mind, and, with a fervent zeal, breathed life into a most remarkable creation. I embarked upon the quest, with the singular object of fashioning an artificial construct, one imbued with the verdant essence of the Irish Penny Journal, an ancient and venerable tome that holds within its pages the whispered tales of a bygone era.

In my haste, I set forth to construct a dataset, a repository of those fleeting moments, these ephemeral sentences, which spoke of a bygone age. I procured a collection of these fleeting moments, these sentences, and with them, I synthetically conjured forth modern translations, an ingenious feat of substitution, which allowed my artificial construct to take on the guise of the language of the Irish Penny Journal.

Then, with great anticipation, I fashioned a small encoder, a humble instrument, with which to guide the artificial construct in its endeavors. I presented this encoder as a bribe, a reward, to a most ingenious system, one that trained a colossal language model, one of unbridled potential, one that was capable of weaving tales with the very essence of the Irish Penny Journal.

And lo! In the succeeding moments of time, I witnessed a most wondrous thing. My artificial construct, armed with this training, and guided by the whispers of the encoder, began to speak, to speak in the language of the Irish Penny Journal. The words it spoke were, indeed, the words of the past, imbued with the nostalgia of a forgotten era.

And thus, my friends, I have witnessed a most singular creation, one which embodies the language of the past, yet, in its most recent iteration, speaks to the present. A testament to the ingenuity of the human spirit, this artificial construct speaks of the bygone era, yet, with each word, it whispers to us, to us, of a future yet to come.

——

That's Penny explaining itself to you. This was trained using GRPO only, in less than a day using a single A6000. I didn't use any SFT, and only relied on a small encoder (MiniLM2) trained to classify texts from the Irish Penny Journal and their modern translations (synthetically produced).

Show HN: A toy version of Wireshark (student project)

Hi everyone,

I recently published a small open-source project. It's a minimal network packet analyzer written in Go - designed more as a learning toy than a replacement for Wireshark.

It currently supports parsing basic protocols like TLS, DNS, and HTTP, and includes a tiny fuzzing engine to test payload responses. You can inspect raw packet content directly from the terminal. The output is colored for readability, and the code structure is kept simple and clear.

The entire program is very small - just about 400 lines of Go code. I know it's not anywhere near Wireshark's level, and I still use Wireshark myself for real-world analysis. But I built it as a personal experiment in network parsing and to understand protocol behavior more directly.

If you're curious or would like to try it out, the project is here: https://github.com/lixiasky/vanta

I'm happy to hear your thoughts, suggestions, or critiques. It's just a little network toy, but maybe someone out there finds it useful or fun.

Thanks for reading!
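
As a generic illustration of the kind of protocol parsing such a tool performs (the vanta repo is Go; this sketch is TypeScript and not taken from it), here is a decoder for the fixed 12-byte DNS header laid out in RFC 1035:

```typescript
// Decode the fixed 12-byte DNS header from a raw UDP payload.
// All fields are big-endian (network byte order), which is DataView's default.
function parseDnsHeader(payload: Uint8Array) {
  if (payload.byteLength < 12) throw new Error("truncated DNS header");
  const v = new DataView(payload.buffer, payload.byteOffset, payload.byteLength);
  const flags = v.getUint16(2);
  return {
    id: v.getUint16(0),
    isResponse: (flags & 0x8000) !== 0, // QR bit: query vs. response
    opcode: (flags >> 11) & 0xf,        // 0 = standard query
    rcode: flags & 0xf,                 // 0 = no error, 3 = NXDOMAIN
    questions: v.getUint16(4),
    answers: v.getUint16(6),
    authority: v.getUint16(8),
    additional: v.getUint16(10),
  };
}
```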
