The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: Open-Source DocumentAI with Ollama
Show HN: Ariana – A time travel debugger for PY/JS right in VSCode
Hello HN!

I've recently released and open-sourced a time travel debugging VSCode extension for Python, JavaScript & TypeScript.

https://github.com/dedale-dev/ariana

It was born from the pain of spending hours reproducing bugs, struggling to read parallel streams of logging across client/server, and managing print/console.log statements.

You can see a short video here: https://www.youtube.com/watch?v=M2gZv7IOo7s

Basically, it's two parts:

One part is a CLI called `ariana` that you install with npm/pip and run alongside your code's run command, for instance `ariana python main.py` or `ariana npm run dev`. It then instruments your code using our specialized parsers & small language models (a self-hosted version of the server that does that is coming soon).

The other part is a VSCode extension (1). It picks up the traces left from running the code with the CLI. Then it lets you highlight the parts of the code that ran and, just by hovering any expression (or subpart of a complex expression), see which values it took.

Our goals with this are:

1. Make time-travel debugging easy to use for new coders/vibe coders who would never use a normal debugger, let alone advanced logging.

2. Allow debugging across the stack, across components, across languages, and across parallel data flows super easily (a typical pain point of maintaining AI agent codebases, multiplayer web games, or RL training setups). Even in prod some day, once we have a more robust feature set.

3. Experiment with agents using time-travel debugging to fix code accurately in one shot, without re-running the code or spending tokens producing print/log statements.

4. Make time-travel debugging applicable to fullstack & frontend development (we plan to sync your frontend's visual state with the traces).

Some may ask: why rewrite code with tracing instead of interfacing with debuggers' APIs? I think it gives us maximal granularity and expressivity in the traces we get from the code, which lets us minimize performance issues and avoid looking at nonsensical things. It also opens the door to using this in production in the future. Of course, I'd be happy to discuss that further with you if you've worked on similar projects in the past :)

(1) https://marketplace.visualstudio.com/items?itemName=dedale-dev.ariana

Thank you very much for your attention!
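To make the rewriting-with-tracing idea concrete, here is a tiny conceptual sketch, written in Rust purely for illustration (Ariana itself targets Python and JavaScript, and this `trace` helper is hypothetical, not Ariana's actual instrumentation): each expression is rewritten into a call that records the value and passes it through unchanged, so the editor can later show what any subexpression evaluated to.

```rust
// Hypothetical illustration of "instrumenting code by rewriting expressions
// with tracing" -- not Ariana's real mechanism, just the general idea.

/// Record an expression's value, then return it unchanged.
fn trace<T: std::fmt::Debug>(expr: &str, value: T) -> T {
    println!("TRACE {expr} = {value:?}");
    value
}

fn main() {
    let (price, qty, shipping) = (3, 4, 10);

    // Original:             let total = price * qty + shipping;
    // After instrumenting: every subexpression is wrapped, so a tool can
    // show the value any part of the expression took at runtime.
    let total = trace(
        "price * qty + shipping",
        trace("price * qty", trace("price", price) * trace("qty", qty))
            + trace("shipping", shipping),
    );
    println!("{total}");
}
```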
Show HN: IEMidi – Cross-platform MIDI map editor for arbitrary controllers
Show HN: A big tech dev experience for an open source CMS
Hey HN! We're building an open-source CMS designed to help creators with every part of the content production pipeline.

We're showing our tiny first step: a tool designed to take in a Twitter username and produce an "identity card" based on it. We expect to use an approach similar to [Constitutional AI] with an explicit focus on repeatability, testability, and verification of an "identity card." We think this approach could be used to create finetuning examples for training changes, serve as inference-time insight for LLMs, or most likely a combination of the two.

The tooling we're showing today is extremely simplistic (and the AI is frankly bad), but this is intentional. We're more focused on showing the dev experience and community aspects. We'd like to make it easier to contribute to this project than to edit Wikipedia. Communities are frustrated with things like WordPress, Apache, and other open source foundations focusing on things other than software. We have a lot of community ideas (governance via vote by jury is perhaps the most interesting).

We're a team of 5, and we've bounced around a few companies with each other. We're all professional creators (video + music), and we're creating tooling for ourselves first.

Previously, we did a startup called Vidpresso (YC W14) that was acquired by Facebook in 2018. We all worked at Facebook for 5 years on creator tooling and have since left to start this thing.

After leaving FB, it was painful to leave the warm embrace of the Facebook infra team, where we had amazing tooling. Since then, we've pivoted a bunch of times trying to figure out our "real" product. While we think we've finally nailed it, the developer experience we built is one we think others could benefit from.

Our tooling is designed so any developer can easily jump in and start contributing. It's an AI-first dev environment designed with a few key principles in mind:

1. You should be able to discover any command you need to run without looking at docs.

2. To make a change, as much context as possible should be provided as close to the code as possible.

3. AIs are "people too", in the sense that they benefit from focused context and from not being distracted by having to search deeply through multiple files or documentation to make changes.

We have a few non-traditional elements to our stack which we think are worth exploring. [Isograph] helps us simplify our component usage with GraphQL. [Replit] lets people use AI coding without needing to set up any additional tooling. We've learned how to treat it like a junior developer, and we think it will be the best platform for AI-first open source projects going forward. We use [Sapling] (together with Git) for version control. It might sound counterintuitive, but we use Git to manage agent interactions and Sapling to manage "purposeful" commits.

My last [Show HN post in 2013] ended up helping me find my Vidpresso cofounder, so I have high hopes for this one. I'm excited to meet anyone (developers, creators, or nice people in general) and start working with them to make this project work. I have good references for being a nice guy, and I aim to keep that going with this project.

The best way to work with us is to [remix our Replit app] and [join our Discord].

Thanks for reading and checking us out! It's super early, but we're excited to work with you!

[Constitutional AI]: https://www.anthropic.com/research/constitutional-ai-harmlessness-from-ai-feedback
[Isograph]: https://isograph.dev
[Replit]: https://replit.com
[Sapling]: https://sapling-scm.com
[Show HN post in 2013]: https://news.ycombinator.com/item?id=6993981
[remix our Replit app]: https://replit.com/t/bolt-foundry/repls/Content-Foundry/view#README.md
[join our Discord]: https://discord.gg/TjQZfWjSQ7
Show HN: I Built a Telegraph Simulator
Show HN: Open-source, native audio turn detection model
Our goal with this project is to build a completely open source, state-of-the-art turn detection model that can be used in any voice AI application.

I've been experimenting with LLM voice conversations since GPT-4 was first released. (There's a previous front page Show HN about Pipecat, the open source voice AI orchestration framework I work on. [1])

It's been almost two years, and for most of that time, I've been expecting that someone would "solve" turn detection. We all built initial, pretty good 80/20 versions of turn detection on top of VAD (voice activity detection) models (a minimal sketch of that baseline approach appears after the goals list below). And then, as an ecosystem, we kind of got stuck.

A few production applications have recently started using Gemini 2.0 Flash to do context-aware turn detection. [2] But because latency is ~500ms, that's a more complicated approach than using a specialized model. The team at LiveKit released an open-weights model that does text-based turn detection. [3] I was really excited to see that, but I'm not super optimistic that a text-input model will ever be good enough for this task. (A good rule of thumb in deep learning is that you should bet on end-to-end.)

So ... I spent Christmas break training several little proof-of-concept models and experimenting with generating synthetic audio data. So, so, so much fun. The results were promising enough that I nerd-sniped a few friends and we started working in earnest on this.

The model now performs really well on a subset of turn detection tasks. Too well, really. We're overfitting on a not-terribly-broad initial data set of about 8,000 samples. Getting to this point was the initial bar we set for doing a public release and seeing if other people want to get involved in the project.

There are lots of ways to contribute. [4]

Medium-term goals for the project are:

- Support for a wide range of languages
- Inference time of <50ms on GPU and <500ms on CPU
- A much wider range of speech nuances captured in training data
- A completely synthetic training data pipeline (maybe?)
- Text conditioning of the model, to support "modes" like credit card, telephone number, and address entry
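To make that VAD-based baseline concrete, here is a minimal, self-contained Rust sketch. A naive RMS energy threshold stands in for a real VAD model, and a turn is declared over after a fixed run of trailing silent frames. The sample rate, frame size, and thresholds are illustrative assumptions; this is not the smart-turn model itself.

```rust
// "80/20" turn detection on top of a (here: naive, energy-based) VAD.
const SAMPLE_RATE: usize = 16_000;
const FRAME_MS: usize = 20; // 20 ms frames
const FRAME_SAMPLES: usize = SAMPLE_RATE * FRAME_MS / 1000;
const ENERGY_THRESHOLD: f32 = 0.01; // RMS level treated as "speech"
const SILENCE_FRAMES_TO_END: usize = 40; // 40 * 20 ms = 800 ms of silence

struct TurnDetector {
    silent_frames: usize,
    in_turn: bool,
}

impl TurnDetector {
    fn new() -> Self {
        Self { silent_frames: 0, in_turn: false }
    }

    /// Feed one frame of mono f32 samples; returns true when a turn ends.
    fn push_frame(&mut self, frame: &[f32]) -> bool {
        let rms = (frame.iter().map(|s| s * s).sum::<f32>() / frame.len() as f32).sqrt();
        let is_speech = rms > ENERGY_THRESHOLD;

        if is_speech {
            self.in_turn = true;
            self.silent_frames = 0;
        } else if self.in_turn {
            self.silent_frames += 1;
            if self.silent_frames >= SILENCE_FRAMES_TO_END {
                // Enough trailing silence: treat the turn as finished.
                self.in_turn = false;
                self.silent_frames = 0;
                return true;
            }
        }
        false
    }
}

fn main() {
    // Simulate 1 s of "speech" followed by 1 s of silence.
    let mut detector = TurnDetector::new();
    let speech = vec![0.1f32; FRAME_SAMPLES];
    let silence = vec![0.0f32; FRAME_SAMPLES];
    for _ in 0..50 {
        detector.push_frame(&speech);
    }
    for i in 0..50 {
        if detector.push_frame(&silence) {
            println!("end of turn detected after {} silent frames", i + 1);
            break;
        }
    }
}
```

The weakness of this approach is exactly what the post describes: a fixed silence timeout can't tell a thoughtful pause from a finished utterance, which is why a model that looks at the audio itself is needed.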
If you're interested in voice AI or in audio model ML engineering, please try the model out and see what you think. I'd love to hear your thoughts and ideas.

[1] https://news.ycombinator.com/item?id=40345696
[2] https://x.com/kwindla/status/1870974144831275410
[3] https://blog.livekit.io/using-a-transformer-to-improve-end-of-turn-detection/
[4] https://github.com/pipecat-ai/smart-turn#things-to-do
Show HN: Shelgon: A Framework for Building Interactive REPL Shells in Rust
I've been working on Shelgon, a framework that lets you build your own custom REPL shells and interactive CLI applications in Rust.

You can use Shelgon to:

- Create a custom shell with only a few lines of code
- Build interactive debugging tools with persistent state between commands
- Develop domain-specific language interpreters with shell-like interfaces
- Add REPL capabilities to existing applications

Getting started is straightforward: implement a single trait that handles your command execution logic, and Shelgon takes care of the terminal UI, input handling, and async runtime integration.

For example, a simple echo shell takes less than 50 lines of code, including a full implementation of command history, cursor movement, and tab completion.

Repository: https://github.com/nishantjoshi00/shelgon
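As a rough illustration of that single-trait pattern (a hypothetical trait sketched for this post, not Shelgon's actual API; see the repository for the real definitions), the shape is something like:

```rust
// Hypothetical sketch of "implement one trait, get a shell".
trait CommandExecutor {
    /// Mutable state that persists between commands.
    type State;

    /// Run one command line against the shared state, returning output lines.
    fn execute(&self, state: &mut Self::State, input: &str) -> Vec<String>;
}

/// An echo shell that also counts how many commands it has seen.
struct EchoShell;

impl CommandExecutor for EchoShell {
    type State = usize;

    fn execute(&self, count: &mut usize, input: &str) -> Vec<String> {
        *count += 1;
        vec![format!("[{count}] {input}")]
    }
}

fn main() {
    // A framework like Shelgon would own this loop (plus history, cursor
    // movement, and tab completion); here we drive it by hand on stdin.
    use std::io::{self, BufRead, Write};
    let shell = EchoShell;
    let mut state = 0usize;
    let stdin = io::stdin();
    print!("> ");
    io::stdout().flush().unwrap();
    for line in stdin.lock().lines() {
        let line = line.unwrap();
        for out in shell.execute(&mut state, &line) {
            println!("{out}");
        }
        print!("> ");
        io::stdout().flush().unwrap();
    }
}
```

In the framework, the hand-rolled loop above is the part you never write; the trait implementation is the whole program.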
Show HN: Rust Vector and Quaternion Lib
I made this library for vectors and quaternions, and I've open-sourced it in case anyone else would get use out of it. I use it in various personal projects, including quadcopter firmware, a graphics engine, a cosmology simulation, and several molecular dynamics applications. It's no_std compatible.
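For anyone who hasn't worked with quaternions: the core operation such a library packages up is rotating a vector by a unit quaternion, v' = q v q*. Here is a generic plain-Rust illustration of that formula (not the crate's actual API):

```rust
// Standard quaternion rotation of a vector, written out with plain structs.
#[derive(Clone, Copy, Debug)]
struct Vec3 { x: f64, y: f64, z: f64 }

#[derive(Clone, Copy, Debug)]
struct Quaternion { w: f64, x: f64, y: f64, z: f64 }

impl Quaternion {
    /// Unit quaternion for a rotation of `angle` radians about a unit `axis`.
    fn from_axis_angle(axis: Vec3, angle: f64) -> Self {
        let (s, c) = (angle / 2.0).sin_cos();
        Quaternion { w: c, x: axis.x * s, y: axis.y * s, z: axis.z * s }
    }

    /// Hamilton product.
    fn mul(self, o: Quaternion) -> Quaternion {
        Quaternion {
            w: self.w * o.w - self.x * o.x - self.y * o.y - self.z * o.z,
            x: self.w * o.x + self.x * o.w + self.y * o.z - self.z * o.y,
            y: self.w * o.y - self.x * o.z + self.y * o.w + self.z * o.x,
            z: self.w * o.z + self.x * o.y - self.y * o.x + self.z * o.w,
        }
    }

    /// Conjugate (the inverse, for unit quaternions).
    fn conjugate(self) -> Quaternion {
        Quaternion { w: self.w, x: -self.x, y: -self.y, z: -self.z }
    }

    /// Rotate a vector: v' = q * v * q^-1.
    fn rotate(self, v: Vec3) -> Vec3 {
        let p = Quaternion { w: 0.0, x: v.x, y: v.y, z: v.z };
        let r = self.mul(p).mul(self.conjugate());
        Vec3 { x: r.x, y: r.y, z: r.z }
    }
}

fn main() {
    // Rotate the x axis 90 degrees about z; expect roughly the y axis.
    let q = Quaternion::from_axis_angle(
        Vec3 { x: 0.0, y: 0.0, z: 1.0 },
        std::f64::consts::FRAC_PI_2,
    );
    let v = q.rotate(Vec3 { x: 1.0, y: 0.0, z: 0.0 });
    println!("{v:?}"); // ~ Vec3 { x: 0.0, y: 1.0, z: 0.0 }
}
```

(A no_std build would swap `sin_cos` for a libm equivalent; this sketch uses std for brevity.)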
Show HN: CodeTracer – A time-traveling debugger implemented in Nim and Rust
Hey!

We are presenting CodeTracer, a user-friendly time-traveling debugger designed to support a wide range of programming languages:

https://github.com/metacraft-labs/codetracer?tab=readme-ov-file#introduction

CodeTracer records the execution of a program into a shareable, self-contained trace file. You can load the produced trace files in a GUI environment that allows you to move forward and backward through the execution and to examine the history of all memory locations. They say a picture is worth a thousand words; well, a video is even better! Watch the demo below to see CodeTracer in action:

https://www.youtube.com/watch?v=xZsJ55JVqmU

The initial release is limited to the Noir programming language, but CodeTracer uses an open format for its trace files, and we've started community-driven projects which aim to add support for Ruby and Python.

We are also developing an alternative back-end, capable of working with RR recordings, which will make CodeTracer suitable for debugging large-scale programs in a variety of systems programming languages such as C/C++, Rust, Nim, D, Zig, Go, Fortran, and FreePascal.
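As a conceptual sketch of what a time-travel trace enables (this is not CodeTracer's actual trace format or API, which are documented in the repository), recording every write as a (step, location, value) event is already enough to answer "what value did x hold at step N?":

```rust
// Core idea behind a value-history trace: log writes, query any past step.
use std::collections::HashMap;

#[derive(Debug)]
struct WriteEvent {
    step: u64,
    value: i64,
}

#[derive(Default)]
struct Trace {
    step: u64,
    // Per-location write history; events are appended in step order.
    history: HashMap<String, Vec<WriteEvent>>,
}

impl Trace {
    /// Record one write; a real recorder would hook this into execution.
    fn record(&mut self, location: &str, value: i64) {
        self.step += 1;
        self.history
            .entry(location.to_string())
            .or_default()
            .push(WriteEvent { step: self.step, value });
    }

    /// "Time travel": what value did `location` hold at `step`?
    fn value_at(&self, location: &str, step: u64) -> Option<i64> {
        self.history
            .get(location)?
            .iter()
            .take_while(|e| e.step <= step)
            .last()
            .map(|e| e.value)
    }
}

fn main() {
    let mut trace = Trace::default();
    // Pretend we traced the program: x = 1; x = 5; y = 2; x = 9;
    trace.record("x", 1);
    trace.record("x", 5);
    trace.record("y", 2);
    trace.record("x", 9);

    // Move "backward in time" and inspect x as of step 3.
    assert_eq!(trace.value_at("x", 3), Some(5));
    println!("x at step 3 = {:?}", trace.value_at("x", 3));
}
```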