The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Built a desktop app to organize photos locally with duplicate detection
Show HN: I built this to talk Danish to my girlfriend – works with any language
I'm in my 4th year living in Denmark as an expat, and I finally decided it’s time to properly learn Danish. I do have a Danish girlfriend, after all. One way I’ve been practicing is by trying to text only in Danish, but I often find myself stuck. I start my message in Danish, then hit a wall because I don’t know a word or how to fit something naturally into the sentence.

Especially in those cases, I used to give up and translate the entire message from English, which kind of defeats the purpose and interrupts the learning process.

So I started prompting GPT. I’d write my message with wildcards or notes for the parts I didn’t know, and it would return a corrected version. That worked well, but reusing the prompt each time became tedious.

So I built a wrapper around it.

Now I can type in the target language, mark unclear parts with curly braces {like this}, and get an instant corrected version with explanations. I also added a history feature so I can review what I got wrong, and I plan to build more on that soon (e.g. a summary of areas or words to review).

This app is for language learners who want to practice writing without feeling insecure about mistakes or breaking their flow by switching to a translator.

I hope you find it useful!
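For a sense of what such a wrapper involves, here is a minimal Python sketch of the curly-brace idea, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment. The prompt wording, model name, and function are illustrative assumptions, not the author's implementation.

    # Minimal sketch: send a message with {curly-brace} gaps to a model and
    # get back a corrected version with explanations.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are a language tutor. The user writes in their target language and "
        "marks words or phrases they are unsure about with curly braces, e.g. "
        "{how do I say 'see you later'?}. Return a corrected, natural version of "
        "the whole message, then briefly explain each correction."
    )

    def correct(message: str, target_language: str = "Danish") -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": f"Target language: {target_language}\n\n{message}"},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(correct("Jeg vil gerne {pick up groceries} efter arbejde i dag."))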
Show HN: FFmpeg in plain English – LLM-assisted FFmpeg in the browser
I found that I am using ChatGPT more and more to get the FFmpeg command I need, but the process can be a bit tedious: copy-pasting commands, dealing with input file names and locations, making sure the prompt contains enough info about the input files.

This site attempts to solve that. You just describe what you want to do, pick the input files, and an LLM (currently DeepSeek) generates the FFmpeg command. You can then run it directly in your browser or use the command elsewhere.
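A rough Python sketch of the core flow described above: a plain-English request plus basic file information goes to an LLM, which returns a single ffmpeg command. It assumes the OpenAI Python SDK pointed at an OpenAI-compatible endpoint; the DeepSeek base URL, model name, and prompt are assumptions for illustration, and running the command in the browser (the site's approach) is not shown.

    from openai import OpenAI

    # Assumed endpoint and model name; substitute your own key.
    client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")

    def ffmpeg_command(request: str, input_files: list[str]) -> str:
        prompt = (
            "Write a single ffmpeg command, and nothing else.\n"
            f"Input files: {', '.join(input_files)}\n"
            f"Task: {request}"
        )
        response = client.chat.completions.create(
            model="deepseek-chat",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content.strip()

    if __name__ == "__main__":
        print(ffmpeg_command("extract the audio as a 128 kbps mp3", ["talk.mp4"]))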
Show HN: Bedrock – An 8-bit computing system for running programs anywhere
Hey everyone, this is my latest project.

Bedrock is a lightweight program runtime: programs assemble down to a few kilobytes of bytecode that can run on any computer, console, or handheld. The runtime is tiny, it can be implemented from scratch in a few hours, and the I/O devices for accessing the keyboard, screen, networking, etc. can be added on as needed.

I designed Bedrock to make it easier to maintain programs as a solo developer. It's deeply inspired by Uxn and PICO-8, but it makes significant departures from Uxn to provide more capabilities to programs and to be easier to implement.

Let me know if you try it out or have any questions.
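To make the "tiny runtime, tiny bytecode, pluggable devices" idea concrete, here is a toy Python stack machine. None of its opcodes are Bedrock's (see the project docs for the actual instruction set); it only illustrates the general shape of a bytecode interpreter with an I/O-device hook.

    # Toy bytecode VM: a stack machine with a few opcodes and a device table.
    PUSH, ADD, DEVICE_WRITE, HALT = 0x01, 0x02, 0x03, 0xFF

    def run(bytecode: bytes, devices: dict) -> None:
        stack, pc = [], 0
        while pc < len(bytecode):
            op = bytecode[pc]; pc += 1
            if op == PUSH:                 # push the next byte as a literal
                stack.append(bytecode[pc]); pc += 1
            elif op == ADD:                # pop two values, push their sum (mod 256)
                b, a = stack.pop(), stack.pop()
                stack.append((a + b) & 0xFF)
            elif op == DEVICE_WRITE:       # send top of stack to the device named by the next byte
                dev = bytecode[pc]; pc += 1
                devices[dev](stack.pop())
            elif op == HALT:
                break

    # Program: push 40, push 2, add, write the result to device 0 (a console device here).
    program = bytes([PUSH, 40, PUSH, 2, ADD, DEVICE_WRITE, 0, HALT])
    run(program, devices={0: lambda value: print("device 0:", value)})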
Show HN: Refine – A Local Alternative to Grammarly
Show HN: Ten years of running every day, visualized
Today marks ten years, 3653 consecutive days, of running at least one mile every day under the USRSA rules [1]. To celebrate, I built an interactive dashboard that turns a decade of GPX files into charts you can explore.

Running has truly changed my life: I've made lifelong friends, explored beautiful places, and, more importantly, invested in my own health and fitness, whose positive benefits I'm starting to see as I get older.

The stack is pretty simple: a Next.js app with a Postgres database that keeps all my running data. All the stats are pre-computed and cached in Redis, so I effectively only hit the database once a day, when a new run is ingested. On the frontend, I toyed with the idea of using D3 or pre-existing data viz libraries, but ended up rolling my own using SVGs directly; it gave me more control over the visualizations.

I used the Strava bulk export to pre-populate the database, and I'm using their webhook API to do incremental updates. I also tap into OpenWeatherMap and OpenCageData to enrich the running data a little bit.

Happy to answer anything about the stack, data pipeline, or how I stayed motivated for 10 years!

[1] https://www.runeveryday.com (Run Streak Association rules: ≥ 1 mile per day)
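A minimal Python sketch of the compute-once, cache-in-Redis pattern described above (not the author's code, which is a Next.js app): stats are recomputed only when a new run is ingested, and reads are served from the cache. It assumes redis-py and psycopg; the table, column, and key names are made up for illustration.

    import json
    import psycopg
    import redis

    r = redis.Redis(decode_responses=True)
    STATS_KEY = "run_stats"

    def compute_stats() -> dict:
        # One expensive pass over the runs table, done at most once per ingest.
        with psycopg.connect("dbname=running") as conn:
            total_miles, streak_days = conn.execute(
                "SELECT COALESCE(SUM(distance_miles), 0), COUNT(DISTINCT run_date) FROM runs"
            ).fetchone()
        return {"total_miles": float(total_miles), "streak_days": streak_days}

    def get_stats() -> dict:
        cached = r.get(STATS_KEY)
        if cached:
            return json.loads(cached)
        stats = compute_stats()
        r.set(STATS_KEY, json.dumps(stats))
        return stats

    def on_new_run_ingested() -> None:
        # Called from the Strava webhook handler: drop the cache so the
        # next read recomputes the stats.
        r.delete(STATS_KEY)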
Show HN: I made a JSFiddle-style playground to test and share prompts fast
I built this out of frustration as I lead the development of AI features at Yola.com.

Prompt testing should be simple and straightforward. All I wanted was a simple way to test prompts with variables and Jinja2 templates across different models, ideally something I could open during a call, run a few tests in, and share the results with my team. But every tool I tried hit me with a clunky UI, required login and API keys, or forced a lengthy setup process.

And that's not all.

Then came the pricing. The last quote I got for one of the tools on the market was $6,000/year for a team of 16 people, on a use-it-or-lose-it basis. For a tool we use maybe 2–3 times per sprint. That’s just ridiculous!

IMO, it should be something more like JSFiddle: a simple prompt playground that doesn't require you to sign up, doesn't require API keys, and lets you experiment instantly, i.e. you just enter a URL in your browser and start working. And mainly, something that costs me nothing if my team or I aren't using it.

Eventually I gave up looking for a solution and decided to build it myself.

Here it is: https://langfa.st

Help me find what's wrong, missing, or not working from your perspective.

P.S. I haven't put any limits or restrictions in place yet, so test it wisely. Please don't make me go broke.
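For readers curious what "variables and Jinja2 templates across different models" boils down to, here is a small Python sketch of that render-and-fan-out loop, assuming Jinja2 and a single OpenAI-compatible provider. The model names and template are illustrative, not taken from langfa.st.

    from jinja2 import Template
    from openai import OpenAI

    client = OpenAI()
    MODELS = ["gpt-4o-mini", "gpt-4o"]  # assumed model list for illustration

    template = Template(
        "Summarize the following {{ doc_type }} in {{ n }} bullet points:\n\n{{ text }}"
    )

    def run_prompt(variables: dict) -> dict:
        # Render the template once, then send the same prompt to each model.
        prompt = template.render(**variables)
        results = {}
        for model in MODELS:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            results[model] = response.choices[0].message.content
        return results

    outputs = run_prompt({"doc_type": "changelog", "n": 3, "text": "..."})
    for model, answer in outputs.items():
        print(f"--- {model} ---\n{answer}\n")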
Show HN: ArchGW – An intelligent edge and service proxy for agents
Hey HN!

This is Adil, Salman and Jose, and we’re behind archgw [1]: an intelligent proxy server designed as an edge and AI gateway for agents, one that natively knows how to handle prompts, not just network traffic. We’ve made several sweeping changes since we last posted, so we’re sharing the project again.

A bit of background on why we built this project. Building AI agent demos is easy, but to create something production-ready there is a lot of repeated low-level plumbing work that everyone is doing. You’re applying guardrails to make sure unsafe or off-topic requests don’t get through. You’re clarifying vague input so agents don’t make mistakes. You’re routing prompts to the right expert agent based on context or task type. You’re writing integration code to quickly and safely add support for new LLMs. And every time a new framework hits the market or is updated, you’re validating or re-implementing that same logic, again and again.

Putting all the low-level plumbing code in a framework gets messy to manage and harder to update and scale. Low-level work isn't business logic. That’s why we built archgw: an intelligent proxy server that handles prompts during ingress and egress and offers several related capabilities from a single software service. It lives outside your app runtime, so you can keep your business logic clean and focus on what matters. Think of it like a service mesh, but for AI agents.

Prior to building archgw, the team spent time building Envoy [2] at Lyft, API Gateway at AWS, specialized NLP models at Microsoft Research, and working on safety at Meta. archgw was born out of the belief that rule-based, single-purpose tools that handle the work around resiliency, processing, and routing of prompts should move into a dedicated infrastructure layer for agents, built on the battle-tested foundation of Envoy Proxy.

The intelligence in archgw comes from our fast, task-specific LLMs [3] that can handle things like agent routing and hand-off, guardrails, and preference-based intelligent LLM calling. Here are some additional details about the open source project. archgw is written in Rust, and the request path has three main parts:

* Listener subsystem, which handles downstream (ingress) and upstream (egress) request processing.
* Prompt handler subsystem. This is where archgw makes decisions on the safety of the incoming request via its prompt_guard hooks and identifies where to forward the conversation via its prompt_target primitive.
* Model serving subsystem, the interface that hosts all the lightweight LLMs engineered in archgw and offers a framework for things like hallucination detection for these models.

We loved building this open source project, and our belief is that this infra primitive will help developers build faster, safer, and more personalized agents without all the manual prompt engineering and systems integration work needed to get there. We hope to invite other developers to use and improve Arch. Please give it a shot and leave feedback here, or in our Discord channel [4].
Also, here is a quick demo of the project in action [5]. You can check out our public docs at [6]. Our models are also available at [7].

[1] https://github.com/katanemo/archgw

[2] https://www.envoyproxy.io/

[3] https://huggingface.co/collections/katanemo/arch-function-66...

[4] https://discord.com/channels/1292630766827737088/12926307682...

[5] https://www.youtube.com/watch?v=I4Lbhr-NNXk

[6] https://docs.archgw.com/

[7] https://huggingface.co/katanemo
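As a hedged sketch of what sitting behind a prompt gateway like this can look like from the application side: the app speaks a plain OpenAI-style chat API to a local proxy address and leaves guardrails, routing, and model selection to the gateway. The address, port, and model alias below are assumptions for illustration only; see the archgw docs [6] for the actual listener configuration and supported client setup.

    from openai import OpenAI

    # Point the standard client at the local gateway instead of a provider
    # (assumed address and port; check the archgw docs for real values).
    client = OpenAI(base_url="http://localhost:12000/v1", api_key="unused")

    response = client.chat.completions.create(
        model="summarize-agent",  # hypothetical routing alias, not a real provider model
        messages=[{"role": "user", "content": "Summarize yesterday's deployment incidents."}],
    )
    print(response.choices[0].message.content)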
Show HN: I built an LLM chat app because we shouldn't need 10 AI subscriptions
I'm lost between ChatGPT, Claude, and Gemini... which subscription should I get? With Cursor and all these specialized AI tools, I just wanted one simple chat app where I can use any model and pay only when I use it.

Couldn't find one, so I built one.

Pay only for what you use. Your prompts, docs, and knowledge bases work with every model; no more copy-pasting between apps.

Started as a personal project, but I thought someone else might benefit from this too.

https://prismharmony.com/chat

What do you think?