The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: I made a super-simple image CDN
Hi HN,

MageCDN is a simple and affordable image hosting service I have been working on for the past few weeks. The idea came out of my own frustration with hosting and optimizing images for my blog.

While platforms like Imgur make it really simple to upload images, they don't allow you to embed them. Services like Cloudinary and ImageKit exist, but I found them too complex for my needs. Plus, they get really expensive past their free tiers.

So, I started MageCDN with three simple goals:

- Pricing should be affordable and scale linearly.
- Basic image operations (resize, crop, optimize) should be doable within the app.
- Uploading and getting a link you can use should be fast and hassle-free.

Would love to hear what you think!
Show HN: Flyon UI – Tailwind Components Library
Show HN: Weird Books to Read
Try reading weird books for a change.
Show HN: A Demo for a Strategy Game
This is something that I've been working on, on and off, in my spare time for the last few years. I started in 2020 during Covid, trying to do a Mount and Blade style map in UE4, and it grew from there to where it is now - I would just add a feature, then another, until it came together. Since the start of the year I've really been trying to polish it some more.

I've got the full thing planned for a Q3 2025 release.

GitHub release if you don't have Steam: https://github.com/joe-gibbs/fall-of-an-empire-release
Show HN: Kameo – Fault-tolerant async actors built on Tokio
Hi HN,

I'm excited to share Kameo, a lightweight Rust library that helps you build fault-tolerant, distributed, and asynchronous actors. If you're working on distributed systems, microservices, or real-time applications, Kameo offers a simple yet powerful API for handling concurrency, panic recovery, and remote messaging between nodes.

Key features:

- Async Rust: Each actor runs as a separate Tokio task, making concurrency management simple.
- Remote messaging: Seamlessly send messages to actors across different nodes.
- Supervision and fault tolerance: Create self-healing systems with actor hierarchies.
- Backpressure support: Supports bounded and unbounded mpsc messaging.

I built Kameo because I wanted a more intuitive, scalable solution for distributed Rust applications. I'd love feedback from the HN community and contributions from anyone interested in Rust and actor-based systems.

Check out the project on GitHub: https://github.com/tqwewe/kameo

Looking forward to hearing your thoughts!
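The pattern described here maps closely onto plain Tokio primitives. Below is a rough, hedged sketch of that pattern only, not Kameo's actual API: one Tokio task per actor, a bounded mpsc channel as its mailbox (which is what gives you backpressure), and a oneshot channel per message for the reply. The names `Counter` and `Msg` are illustrative.

```rust
// Conceptual actor sketch (not Kameo's API): the actor owns its state,
// runs on its own Tokio task, and processes mailbox messages sequentially.
use tokio::sync::{mpsc, oneshot};

enum Msg {
    Increment { by: u64, reply: oneshot::Sender<u64> },
}

struct Counter {
    count: u64,
}

impl Counter {
    async fn run(mut self, mut mailbox: mpsc::Receiver<Msg>) {
        while let Some(msg) = mailbox.recv().await {
            match msg {
                Msg::Increment { by, reply } => {
                    self.count += by;
                    let _ = reply.send(self.count);
                }
            }
        }
    }
}

#[tokio::main]
async fn main() {
    // Bounded channel = backpressure: senders wait when the mailbox is full.
    let (tx, rx) = mpsc::channel(32);
    tokio::spawn(Counter { count: 0 }.run(rx));

    let (reply_tx, reply_rx) = oneshot::channel();
    tx.send(Msg::Increment { by: 3, reply: reply_tx }).await.unwrap();
    println!("count = {}", reply_rx.await.unwrap());
}
```

A library like Kameo layers supervision, panic recovery, and remote messaging on top of this basic task-plus-mailbox shape.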
I made a game you can play without anyone knowing (no visuals/sound)
Hello everyone! I just launched an iOS game called Tik! and it has no visuals or sound of any kind.
So the obvious question is... how do you play it?

The game uses your phone's haptics to play a rhythm of "Tiks" (haptic vibrations). The user then has to try to recreate the timing of the rhythm they just felt by tapping it out anywhere on the screen. It sounds easy, but getting the timing right is tricky, so it usually takes a couple of tries before you're able to get it right.

The inspiration for the game came from wanting something to do in a really boring presentation. It would have been disrespectful to look at my phone, but I also needed a distraction. I typically hold my phone in these kinds of scenarios and fiddle with the case, when it occurred to me: what if there was a game I could play while just holding the phone anywhere (under a desk, in my pocket, to the side, etc.)? Sometime later, Tik! was born :)

I would love your feedback on it. The game is paid, but if someone would like a promo code to try it, please let me know below.
Link: https://apps.apple.com/app/id6720712299
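To make the mechanic concrete, here is a small, purely illustrative sketch (not the app's actual code) of how a rhythm check like this might be scored: compare the gaps between the haptic "Tiks" with the gaps between the player's taps, and accept the attempt if every gap is within a tolerance. The numbers are made up.

```rust
// Illustrative scoring sketch: compare gap timings, not absolute timestamps,
// so it doesn't matter exactly when the player starts tapping.
fn rhythm_matches(target_ms: &[u64], taps_ms: &[u64], tolerance_ms: u64) -> bool {
    if target_ms.len() != taps_ms.len() {
        return false;
    }
    // Gaps between consecutive events.
    let gaps = |ts: &[u64]| ts.windows(2).map(|w| w[1] - w[0]).collect::<Vec<u64>>();
    gaps(target_ms)
        .iter()
        .zip(gaps(taps_ms).iter())
        .all(|(t, p)| t.abs_diff(*p) <= tolerance_ms)
}

fn main() {
    let target = [0u64, 300, 600, 1000]; // haptic "Tiks" at these offsets (ms)
    let taps = [0u64, 320, 590, 1015];   // the player's taps (ms)
    println!("{}", rhythm_matches(&target, &taps, 50)); // true
}
```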
Show HN: Instead of opentowork on LinkedIn, I'm desperate cos it's true
Just made a random fun tool over at LinkedIn, but why should they have all the fun
Show HN: 73,530 free book summaries in 40 languages (with paid audio)
Web scraping with your web browser: Why not?
Includes working code. First article in a planned series.
Show HN: Sourcebot, an open-source Sourcegraph alternative
Hi HN,

We're Brendan and Michael, the creators of Sourcebot (https://github.com/sourcebot-dev/sourcebot). Sourcebot is an open-source code search tool that allows you to quickly search across many large codebases. Check out our demo video here: https://youtu.be/mrIFYSB_1F4, or try it for yourself on our demo site here: https://demo.sourcebot.dev

In prior roles, we've both felt the pain of searching across hundreds of multi-million-line codebases. Local tools like grep were ill-suited since you often only had a handful of codebases checked out at a time. Sourcegraph (https://sourcegraph.com/) solves this issue by indexing a collection of codebases in the background and exposing a web-based search interface. It is the de facto search solution for medium to large orgs, but it is often cited as expensive ($49 per user / month) and recently went closed source (https://news.ycombinator.com/item?id=41296481). That's why we built Sourcebot.

We designed Sourcebot to be:

- Easily deployed: we provide a single, self-contained Docker image (https://github.com/sourcebot-dev/sourcebot/pkgs/container/so...).
- Fast & scalable: designed to minimize search times (current average is ~73 ms) across many large repositories.
- Cross code-host support: we currently support syncing public & private repositories in GitHub and GitLab.
- Quality UI: we like to think that a good-looking dev tool is more pleasant to use.
- Open source: Sourcebot is free to use by anyone.

Under the hood, we use Zoekt (https://github.com/sourcegraph/zoekt) as our code search engine, which was originally authored by Han-Wen Nienhuys and is now maintained by Sourcegraph (https://sourcegraph.com/blog/sourcegraph-accepting-zoekt-mai...). Zoekt works by building a trigram index from the source code, enabling extremely fast regular expression matching. Russ Cox has a great article on how trigram indexes work if you're interested: https://swtch.com/~rsc/regexp/regexp4.html

In the shorter term, there are several improvements we want to make, like:

- Improving how we communicate indexing progress (this is currently non-existent, so it's not obvious how long things will take).
- UX improvements like search history, query syntax highlighting & suggestions, etc.
- Small QOL improvements like bookmarking code snippets.
- Support for more code hosts (e.g., BitBucket, SourceForge, ADO, etc.).

In the longer term, we want to investigate how we could go beyond traditional code search by leveraging machine learning to enable experiences like semantic code search ("where is system X located?") and code explanations ("how does system X interact with system Y?"). You could think of this as a copilot embedded into Sourcebot.
Our hunch is that this will be useful to devs, especially when packaged with traditional code search, but let us know what you think.

Give it a try: https://github.com/sourcebot-dev/sourcebot. Cheers!
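For readers curious about the trigram-index idea mentioned above, here is a toy sketch of it (illustrative only, not Zoekt's implementation): index every 3-byte substring of each document, then narrow a query down to candidate documents by intersecting the posting lists of the query's trigrams before doing any full matching. The documents and query are made up.

```rust
use std::collections::{HashMap, HashSet};

// Byte-sliced trigrams; assumes ASCII source text for simplicity.
fn trigrams(text: &str) -> HashSet<&str> {
    (0..text.len().saturating_sub(2)).map(|i| &text[i..i + 3]).collect()
}

fn main() {
    let docs = ["fn main() {}", "let x = foo();", "foo(bar, baz)"];

    // Inverted index: trigram -> set of documents containing it.
    let mut index: HashMap<&str, HashSet<usize>> = HashMap::new();
    for (id, doc) in docs.iter().enumerate() {
        for tri in trigrams(doc) {
            index.entry(tri).or_default().insert(id);
        }
    }

    // A document can contain the literal "foo(" only if it contains every one
    // of its trigrams, so intersecting the posting lists leaves few candidates
    // that need a full scan (or regex match).
    let query = "foo(";
    let mut candidates: HashSet<usize> = (0..docs.len()).collect();
    for tri in trigrams(query) {
        match index.get(tri) {
            Some(postings) => {
                candidates = candidates.intersection(postings).copied().collect();
            }
            None => {
                candidates.clear();
                break;
            }
        }
    }
    println!("candidate docs: {:?}", candidates); // docs 1 and 2, in some order
}
```

Real engines extend this with query analysis for regular expressions (extracting required trigrams from the pattern), which is what the linked Russ Cox article covers.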
Show HN: A real time AI video agent with under 1 second of latency
Hey, it's Hassaan & Quinn – co-founders of Tavus, an AI research company and developer platform for video APIs. We've been building AI video models for 'digital twins' or 'avatars' since 2020.

We're sharing some of the challenges we faced building an AI video interface that has realistic conversations with a human, including getting it to under 1 second of latency.

To try it, talk to Hassaan's digital twin: https://www.hassaanraza.com, or to our "demo twin" Carter: https://www.tavus.io

We built this because, until now, we've had to adapt communication to the limits of technology. But what if we could interact naturally with a computer? Conversational video makes it possible – we think it'll eventually be a key human-computer interface.

To make conversational video effective, it has to have really low latency and conversational awareness. A fast-paced conversation between friends has ~250 ms between utterances, but if you're talking about something more complex or with someone new, there is additional "thinking" time. So, less than 1000 ms of latency makes the conversation feel pretty realistic, and that became our target.

Our architecture decisions had to balance three things: latency, scale, & cost. Getting all of these was a huge challenge.

The first lesson learned was that to make it low-latency, we had to build it from the ground up. We went from a team that cared about seconds to a team that counts every millisecond. We also had to support thousands of conversations happening all at once, without getting destroyed on compute costs.

For example, during early development, each conversation had to run on an individual H100 in order to fit all components and model weights into GPU memory just to run our Phoenix-1 model faster than 30 fps. This was unscalable & expensive.

We developed a new model, Phoenix-2, with a number of improvements, including inference speed. We switched from a NeRF-based backbone to Gaussian Splatting for a multitude of reasons, one being the requirement that we could generate frames faster than realtime, at 70+ fps on lower-end hardware.
We exceeded this and focused on optimizing memory and core usage on the GPU to allow lower-end hardware to run it all. We did other things to save on time and cost, like using streaming vs. batching, parallelizing processes, etc. But those are stories for another day.

We still had to lower the utterance-to-utterance time to hit our goal of under a second of latency. This meant each component (vision, ASR, LLM, TTS, video generation) had to be hyper-optimized.

The worst offender was the LLM. It didn't matter how fast the tokens per second (t/s) were; it was the time-to-first-token (ttft) that really made the difference. That meant services like Groq were actually too slow – they had high t/s, but slow ttft. Most providers were too slow.

The next worst offender was actually detecting when someone stopped speaking. This is hard. Basic solutions use time after silence to 'determine' when someone has stopped talking. But it adds latency. If you tune it to be too short, the AI agent will talk over you. Too long, and it'll take a while to respond. The model had to be dedicated to accurately detecting end-of-turn based on conversation signals, and speculating on inputs to get a head start.

We went from 3-5 seconds to under 1 second (& as fast as 600 ms) with these architectural optimizations, while running on lower-end hardware.

All this allowed us to ship with less than 1 second of latency, which we believe is the fastest out there. We have a bunch of customers, including Delphi, a professional coach and expert cloning platform. They have users whose conversations with digital twins span from minutes to one hour to even four hours (!) - which is mind-blowing, even to us.

Thanks for reading! Let us know what you think and what you would build. If you want to play around with our APIs after seeing the demo, you can sign up for free from our website: https://www.tavus.io.
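For contrast with the dedicated end-of-turn model described above, here is a minimal sketch of the "basic solution" the post mentions, a silence timeout. It makes the tradeoff obvious: whatever timeout you pick is added, unavoidable delay before the agent can even start responding. This is not Tavus's detector; all names and numbers are illustrative.

```rust
// Naive end-of-turn heuristic: declare the speaker done once audio energy has
// stayed below a threshold for `silence_timeout`. The timeout is pure latency.
use std::time::Duration;

struct EndOfTurnDetector {
    threshold: f32,            // energy level treated as silence
    silence_timeout: Duration, // how long silence must last before "end of turn"
    silent_for: Duration,      // running counter of continuous silence
}

impl EndOfTurnDetector {
    // Feed one audio frame's energy; returns true when the turn is judged over.
    fn on_frame(&mut self, energy: f32, frame_len: Duration) -> bool {
        if energy < self.threshold {
            self.silent_for += frame_len;
        } else {
            self.silent_for = Duration::ZERO; // speech resets the counter
        }
        self.silent_for >= self.silence_timeout
    }
}

fn main() {
    // A 700 ms timeout means every reply starts at least 700 ms after the user
    // actually finished speaking, before ASR/LLM/TTS/video even begin.
    let mut det = EndOfTurnDetector {
        threshold: 0.01,
        silence_timeout: Duration::from_millis(700),
        silent_for: Duration::ZERO,
    };
    let frame_energies = [0.3, 0.2, 0.005, 0.004, 0.003, 0.002, 0.001, 0.001];
    for (i, &energy) in frame_energies.iter().enumerate() {
        if det.on_frame(energy, Duration::from_millis(120)) {
            println!("end of turn detected at frame {i}");
            break;
        }
    }
}
```

A model that predicts end-of-turn from conversational signals (and speculates on partial input) avoids paying that fixed timeout on every exchange, which is the point the post is making.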