The best Show HN stories on Hacker News from the past day

Latest posts:

Show HN: Air Lab – A portable and open air quality measuring device

Hi HN!

I’ve been working on an air quality measuring device called Air Lab for the past three years. It measures CO2, temperature, relative humidity, air pollutants (VOC, NOx), and atmospheric pressure. You can log and analyze the data directly on the device — no smartphone or laptop needed.

To better show what the device can do and what it feels like, I spent the past week developing a web-based simulator using Emscripten. It runs the stock firmware with most features available except for networking. Check it out and let me know what you think!

The firmware will be open-source and available once the first batch of devices ships. We’re currently finishing up our crowdfunding campaign on Crowd Supply. If you want to get one, now is the time to support the project: https://www.crowdsupply.com/networked-artifacts/air-lab

We started building the Air Lab because most air quality measuring devices we found were locked down or hard to tinker with. Air quality is a growing concern, and we’re hoping a more open, playful approach can help make the topic more accessible. It is important to us that there is a low bar for customizing and extending the Air Lab. Until we ship, we plan to create rich documentation and further tools, like the simulator, to make this as easy as possible.

The technical details: the device is powered by the popular ESP32-S3 microcontroller and equipped with a precise CO2, temperature, and relative humidity sensor (SCD41) as well as a VOC/NOx sensor (SGP41) and an atmospheric pressure sensor (LPS22). The support circuitry provides built-in battery charging, a real-time clock, an RGB LED, a buzzer, an accelerometer, and capacitive touch, which makes the Air Lab a powerful stand-alone device. The firmware itself is written on top of esp-idf and uses LVGL for rendering the UI.

If you want more high-level info, here are some videos covering the project:

- https://www.youtube.com/watch?v=oBltdMLjUyg (Introduction)
- https://www.youtube.com/watch?v=_tzjVYPm_MU (Product Update)

Would love your feedback — on the device, hardware choices, potential use cases, or anything else worth improving. If you want to get notified of project updates, subscribe on Crowd Supply.

Happy to answer any questions!
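
For anyone curious how a web-based Emscripten simulator like this gets embedded, the loading side is small. A minimal sketch, assuming a MODULARIZE-style Emscripten build; the factory name, file name, and canvas id below are illustrative, not the simulator's actual API:

```js
// Embedding an Emscripten-compiled firmware build in a page.
// createAirLabModule, air-lab-sim.js, and "simulator-canvas" are assumed
// names for illustration; the real simulator's loader may differ.
import createAirLabModule from "./air-lab-sim.js";

const sim = await createAirLabModule({
  canvas: document.getElementById("simulator-canvas"), // LVGL draws here
  print: (line) => console.log("[sim]", line),         // firmware stdout
  printErr: (line) => console.error("[sim]", line),    // firmware stderr
});
```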

Show HN: Gradle plugin for faster Java compiles

Hey HN,

We've written a pretty cool Gradle plugin I wanted to share.

It turns out that if you native-image the Java and Kotlin compilers, you get a serious speedup, especially for "smaller" projects (under 10,000 classes).

By compiling the compiler with Native Image, the JIT warmup normally experienced under Gradle/Maven et al. is skipped. Startup time is extremely fast, since Native Image seals the heap into the binary itself. The native version of javac produces identical outputs from identical inputs. It's the same exact code, just AOT-compiled, translated to machine code, and pre-optimized by GraalVM.

Of course, Native Image isn't optimal in all cases. Warm JIT still outperforms it, but I think most projects *never hit* fully warmed JIT through Gradle or Maven, because the VM running the compiler so rarely survives long enough.

Elide (the tool used by this plugin) also supports fetching Maven dependencies. When active, it prepares a local m2 root where Gradle can find your dependencies already on disk when it needs them. Preliminary benchmarking shows a 100x+ gain, since lockfiles prevent needless re-resolution, and native-imaging the resolver yields a gain similar to the compiler's.

We (the authors) are very much open to feedback on improving this Gradle plugin or the underlying toolchain. Please let us know what you think!

Show HN: App.build, an open-source AI agent that builds full-stack apps

Show HN: PinSend – Share text between devices using a PIN (P2P, no login)

Hi HN,

I built PinSend (https://pinsend.app), a free web app for instantly sharing text between devices using a simple 6-character PIN.

- No login, no account, no install.
- Peer-to-peer WebRTC transfer (no server relay, no cloud).
- Cross-platform: works in any modern browser.

I built PinSend for myself while developing web apps. I was always copying ngrok links and sending error logs between my laptop and mobile devices, and I wanted a frictionless, instant way to move links and text between anything.

Demo:

1. Open https://pinsend.app on your phone and laptop
2. Paste or type some text and hit "Send", then enter the PIN on the other device
3. Instant sync!
4. No more emailing or WhatsApping notes to yourself

Would love feedback!
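
Under the hood, a P2P text share like this boils down to a WebRTC data channel. A minimal sketch of the sender side, assuming the PIN-keyed signaling (publishing the offer and fetching the answer) is handled by a small rendezvous server, which is elided here:

```js
// Sender side: open a data channel and produce an offer.
// How the offer/answer and ICE candidates travel between peers is the
// PIN-keyed signaling step, elided here (assumed rendezvous server).
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});
const channel = pc.createDataChannel("text");

channel.onopen = () => channel.send("hello from my laptop");

const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
// -> publish pc.localDescription under the 6-character PIN, then apply the
//    receiver's answer with pc.setRemoteDescription(answer)
```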

Show HN: Hacker News historic upvote and score data

Hi y'all!

I've been using Hacker News for a while, but one of the things I started wanting recently was the ability to get alerts for any stories I post.

The thing that pushed me over the edge was Hack Club's Shipwrecked, https://shipwrecked.hackclub.com/ (a hackathon in Boston Bay for anyone who can make 4 projects over the summer and get at least one of them to go viral). One of the options for "going viral" was to get to the front page of Hacker News, but I was constantly scared that I would miss it getting on there, lol, so I whipped up a quick Slack bot to send alerts to a channel. It was dead simple, but it did work.

Once I had the bot, I realized I could do wayyyy more with the data I was collecting, so I decided to add some historical data. Initially I thought I would generate graphs and embed them in the message, but then I decided to quickly try using Bun.serve to host a quick dashboard, mainly because I wanted to see how the developer experience was. Spoiler: it is amazing. I've gotten really inspired by web components and the idea of only using universally supported `html`, `css`, and `js`. Bun results in an amazingly nice developer experience where you can just import the `index.html`, assign it to your root route, and be done. Sorry for shilling for Bun, but it truly was one of my favorite parts of building this, besides Drizzle.

The dashboard has a graph of the points earned and position on the leaderboard over time (updated every 5 minutes), plus the expected stats like peak points, peak position, author, and comment count.

Also, btw, all the code is of course open source, on both my Tangled repo (https://tangled.sh/@dunkirk.sh/hn-alerts) and a GitHub repo (https://github.com/taciturnaxolotl/hn-alerts), and you can try the hosted version at https://hn.dunkirk.sh. I'm planning to add the ability to just install the Slack bot to any workspace and have workspace-specific leaderboards, but that will require a bit of refactoring and probably abandoning the slack-edge package.

Also, you can view a specific item's data by simply replacing news.ycombinator.com with hn.dunkirk.sh, like: https://hn.dunkirk.sh/item?id=44115853
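
The polling side needs nothing exotic. A minimal sketch against the official HN Firebase API; the item id is the one from the link above, and notify() is a placeholder, not the actual hn-alerts internals:

```js
// Poll an HN item's score every 5 minutes via the official Firebase API.
// notify() is a placeholder; hn-alerts posts to Slack instead.
const ITEM_ID = 44115853;

async function poll() {
  const res = await fetch(
    `https://hacker-news.firebaseio.com/v0/item/${ITEM_ID}.json`,
  );
  const item = await res.json();
  console.log(`"${item.title}": ${item.score} points, ${item.descendants} comments`);
  // notify(item); // e.g. send a Slack message here
}

poll();
setInterval(poll, 5 * 60 * 1000);
```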

Show HN: GPT image editing, but for 3D models

Hey HN!

I’m Zach, one of the co-founders of Adam (https://www.adamcad.com). We're building AI-powered tools for CAD and 3D modeling [1].

We’ve recently been exploring a new way to bring GPT-style image editing directly into 3D model generation and are excited to showcase it in our web app today. We’re calling it creative mode, and we're intrigued by the fun use cases this could create by making 3D generation more conversational!

For example, you can enter a prompt such as “an elephant” and then follow up with “have it ride a skateboard”, and the new model preserves the context and identity of the previous one, maintaining consistency across edits. We believe this lends itself better to an iterative design process when prototyping creative 3D assets or models for printing.

We’re offering everyone 10 free generations to start (ramping up soon!). Here’s a short video explaining how it works: https://www.loom.com/share/cf9ab91375374a4f93d6cc89619a043b

We’d also love you to try our parametric mode (free!), which uses LLMs to create a conversational interface for solid modeling, as touched on in a recent HN thread [2]. We are leveraging the code-generation capabilities of these models to generate OpenSCAD code (an open-source, script-based CAD language) and surfacing the variables as sliders the user can adjust to tweak their design. We hope this can give a glimpse of what it could be like to “vibe-CAD”. We will soon be releasing our results on Will Patrick's Text to CAD eval [3] and adding B-rep-compatible export!

We’d love to hear what you think and where we should take this next :)

[1] https://x.com/zachdive/status/1882858765613228287
[2] https://news.ycombinator.com/item?id=43774990
[3] https://willpatrick.xyz/technology/2025/04/23/teaching-llms-how-to-solid-model.html
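
To give a flavor of the parametric idea: OpenSCAD exposes designs through top-level variables, so surfacing sliders can start with a simple scan of the generated source. A rough sketch, not Adam's actual implementation:

```js
// Pull top-level numeric assignments out of generated OpenSCAD source so
// they can be rendered as sliders. A toy parser, not Adam's actual one.
const scad = `
cup_height = 80;   // mm
wall = 2.4;        // mm
facets = 64;
`;

const params = [...scad.matchAll(/^(\w+)\s*=\s*([\d.]+)\s*;/gm)].map(
  ([, name, value]) => ({ name, value: Number(value) }),
);

console.log(params);
// [ { name: "cup_height", value: 80 },
//   { name: "wall", value: 2.4 },
//   { name: "facets", value: 64 } ]
```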

Show HN: Controlling 3D models with voice and hand gestures

I'm sharing my project to control 3D models with voice commands and hand gestures:

- use voice commands to change the interaction mode (drag, rotate, scale, animate)
- use hand gestures to control the 3D model
- drag and drop to import other models (only the GLTF format is supported for now)

Created using three.js, MediaPipe, the Web Speech API, Rosebud AI, and Quaternius 3D models.

GitHub repo: https://github.com/collidingScopes/3d-model-playground

Demo: https://xcancel.com/measure_plan/status/1929900748235550912

I'd love to get your feedback! Thank you
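
The voice side rides on the browser's Web Speech API. A minimal sketch of just the mode-switching piece; the three.js/MediaPipe wiring that consumes the mode is elided:

```js
// Switch interaction modes from speech (Chrome needs the webkit prefix).
// The three.js/MediaPipe code that reads `mode` is elided.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;
const recognizer = new SpeechRecognition();
recognizer.continuous = true;

const MODES = ["drag", "rotate", "scale", "animate"];
let mode = "drag";

recognizer.onresult = (event) => {
  const last = event.results[event.results.length - 1];
  const heard = last[0].transcript.trim().toLowerCase();
  if (MODES.includes(heard)) {
    mode = heard;
    console.log("interaction mode:", mode);
  }
};

recognizer.start();
```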

Show HN: Localize React apps without rewriting code

Hi HN! We've just released an open-source React bundler plugin that makes apps multilingual at build time, without modifying the code.

React app localization typically requires implementing i18n frameworks, extracting text to JSON files, and wrapping components in translation tags, essentially rewriting your entire codebase before you can even start translating.

Our React bundler plugin eliminates this friction entirely. You add it to an existing React app, specify which languages you want, and it automatically makes your app multilingual without touching a single line of your component code.

Here's a video showing how it works: https://www.youtube.com/watch?v=sSo2ERxAvB4. The docs are at https://lingo.dev/en/compiler and sample apps are at https://github.com/lingodotdev/lingo.dev/tree/main/demo.

Last year, a dev from our Twitter community told us: "I don't want to wrap every React component with `<T>` tags or extract strings to JSON. Can I just wrap the entire React app and make it multilingual?"

Our first reaction was "That's not how i18n works in React." But a couple of hours later, we found ourselves deep in a technical rabbit hole, wondering: what if that actually was possible?

That question led us to build the "localization compiler": a middleware for React that plugs into the codebase, processes the abstract syntax tree of the React code, deterministically locates translatable elements, feeds every context boundary into LLMs, and bakes the translations back into the build, making the UI multilingual in seconds.

Everything happens locally during build time, keeping the React project as the source of truth. No code modifications, no extraction, and no maintenance of separate translation files are needed; overrides are still possible via data-lingo-* attributes.

Building this was trickier than we expected. Beyond traversing React/JS abstract syntax trees, we had to solve some challenging problems. We wanted to find a way to deterministically group elements that should be translated together, so that, for example, a phrase wrapped in an `<a>` link tag wouldn't get mistranslated because it was processed in isolation. We also wanted to detect inline function calls and handle them gracefully during compile-time code generation.

For example, our localization compiler identifies this entire text block as a single translation unit, preserving the HTML structure and context for the LLM:

```jsx
function WelcomeMessage() {
  return (
    <div>
      Welcome to <i>our platform</i>! <a href="/start">Get started</a> today.
    </div>
  );
}
```

The biggest challenge was making our compiler compatible with Hot Module Replacement. This allows developers to code in English while instantly seeing the UI in Spanish or Japanese, which is invaluable for catching layout issues caused by text expansion or contraction in languages that take more or less space on screen.

For performance, we implemented aggressive caching that stores AST analysis results between runs and only reprocesses components that have changed. Incremental builds stay fast even on large codebases, since at any point in time a dev updates only a limited number of components, and we heavily parallelized the LLM calls.

This approach was technically possible before LLMs, but practically useless, since for precise translations you'd still need human translators familiar with the product domain. Now, with context-aware models, we can generate decent translations automatically.

We're excited to finally make it production-ready and share it with the HN community.

Run `npm i lingo.dev`, check out the docs at lingo.dev/compiler, try breaking it, and let us know what you think about this approach to React i18n!
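
For reference, wiring the compiler into a Vite build looks roughly like this. A sketch only: the option names below (sourceLocale, targetLocales) are illustrative and may not match the current API, so check the docs at lingo.dev/compiler:

```js
// vite.config.js -- rough sketch of enabling the localization compiler.
// Option names are illustrative; consult lingo.dev/compiler for the real API.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import lingoCompiler from "lingo.dev/compiler";

export default defineConfig(() =>
  lingoCompiler.vite({
    sourceLocale: "en",
    targetLocales: ["es", "ja"],
  })({
    plugins: [react()],
  }),
);
```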

Show HN: Ephe – A minimalist open-source Markdown paper for today

Hi HN,

I built Ephe, an open-source Markdown paper for daily todos and thoughts.

No sign-up, no ads, no subscriptions, no AI.

## Why I made this

We have plenty of Markdown editors, and too many overwhelming to-do apps, but few tools combine both in a way that's lightweight and focused. I realized that all I need is a single page to organize today. So I built Ephe.

It uses CodeMirror v6, React (v19, with the React Compiler), and Vite with Rolldown.

## What makes it different

"Ephe" comes from "ephemeral". The main goal is to organize what you need to do today. It isn't for teams. It's a quiet space for your own priorities.

Give it a spin if that sounds useful to you.
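
For the curious, the editor core of a tool like this is small in CodeMirror v6. A generic sketch, not Ephe's actual source:

```js
// A generic CodeMirror v6 Markdown editor -- not Ephe's actual source.
import { EditorView, basicSetup } from "codemirror";
import { markdown } from "@codemirror/lang-markdown";

new EditorView({
  doc: "- [ ] organize today\n",
  extensions: [basicSetup, markdown()],
  parent: document.body,
});
```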
