The best Show HN stories from Hacker News from the past week

Latest posts:

Show HN: I made a URL expander because short links are too mainstream

Show HN: Tenno – Markdown and JavaScript = a hybrid of Word and Excel

Tenno is a web app that lets you create Markdown documents that can include computational cells. You can think of it as a mix of Word and Excel, some sort of "literate programming" environment.

This is still an early version, but I wanted to get some feedback from HN on what could be nice features to add.

Check out the Docs and examples page; it has a ton of (in my humble opinion) cool stuff!

Why did I build this? I was building some estimates of cloud costs in Google Sheets and quickly ended up with a mess. I realized that if I wanted to analyze how a certain thing changes with respect to multiple variables by plotting it, I had to create a bunch of copies of the data and copy my formulas everywhere... a SWE nightmare!

Tenno simplifies this because you can define the function you are interested in once, and then analyze it using plots that explore various dimensions.

BTW, you can also use Tenno to build dashboards by pulling data from APIs; check out the weather data example.

Show HN: HTML for People

Show HN: I made an SSH tunnel manager to learn Go

Show HN: Winamp and other media players, rebuilt for the web with Web Components

Hey all, creator of Video.js and co-founder of Mux & Zencoder here. My team and I built this. I hope you like the themes we've built so far, and maybe even get inspired to build your own.

I know Web Components are in a bit of a drama cycle right now. I'm happy to see them get any attention, really. I've been pretty bullish on them since ~2013 when I started working with them, at least in the context of a *widget* like a video player. I've even given many related talks on them, like this one: https://www.youtube.com/watch?v=N6Mh84SRoDg

I would never push them for a large app or as a full replacement for React, but they've been incredible for making video players that are compatible across many contexts, and Player.style is a clear demonstration of that when you get to the step of embedding a theme. Web Components really shine for building bits of UI that can be shared between projects. They're also the best way to avoid the long-term JS framework thrash that's a challenge for any developer who works on the web for long enough. One of the best decisions I ever made for Video.js was to *not* build it with jQuery. Video.js is 15 years old now and still in use, while all the jQuery players are not.

For some added context on this project: when I was building Video.js back in 2010, I put a lot of thought into how other developers would customize the player controls. I was excited to use web technologies (instead of Flash) to build a player, and I *knew* other web devs would be too.

Fast forward 14 years — Video.js has been used on millions of websites, including Twitter, Instagram, Amazon, Dropbox, LinkedIn, and even in United Airlines headrests. In 99.99% of those cases the default Video.js controls were used with little to no customization. So... huge adoption success, utter failure in sparking creativity. In retrospect, asking people to learn a new UI framework just to style their player was too much.

Media Chrome and Player.style are my answer to that friction.

- Media Chrome: a suite of Web Components and React Components that let you easily build a media player UI from scratch, using components you're already familiar with.

- Player.style: themes built with Media Chrome, showing the cross-player and cross-framework flexibility of Media Chrome.

Media Chrome is already used on sites like TED.com, Syntax.fm, and anywhere the Mux Player is used. We've spent the last few months building some great themes for Player.style. I probably had the most fun recreating the YouTube icon animations from scratch using SVGs and CSS. (Whoever made the originals, nicely done!)

It's all free and open source, so don't hesitate to jump in if you're interested in the project. And of course I'm happy to answer any questions.
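
To make the "bits of UI shared between projects" point concrete, here is a minimal sketch of a framework-agnostic custom element in TypeScript. The tag, attribute, and event names are illustrative only, not Media Chrome's actual API:

```typescript
// A minimal custom element: defined once, usable from plain HTML, React,
// or any other framework. Names here are illustrative, not Media Chrome's API.
class PlayToggle extends HTMLElement {
  private button!: HTMLButtonElement;

  connectedCallback() {
    const root = this.attachShadow({ mode: "open" });
    root.innerHTML = `<button part="button">Play</button>`;
    this.button = root.querySelector("button")!;
    this.button.addEventListener("click", () => {
      const playing = this.toggleAttribute("playing");
      this.button.textContent = playing ? "Pause" : "Play";
      // Emit a plain DOM event; any framework can listen for it.
      this.dispatchEvent(
        new CustomEvent("playtoggle", { detail: { playing }, bubbles: true })
      );
    });
  }
}

customElements.define("play-toggle", PlayToggle);
// Usage from any page, no framework required: <play-toggle></play-toggle>
```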

Show HN: Kotlin Money

Manipulating monetary amounts is a common computing chore. However, no mainstream language has a first-class data type for representing money; it's up to programmers to code abstractions for it. This isn't an issue per se until you're dealing with rounding issues from operations like installment payments (e.g., buy now, pay later), foreign exchange, or even simple things like fee processing and tax collection.

Inspired by my days at N26 Brasil dealing with these challenges, I introduce Money: a Kotlin library that makes monetary calculations and allocations easy.
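
To see the rounding problem concretely: splitting $100.00 into three installments by naive division gives 33.33 + 33.33 + 33.33 = 99.99, losing a cent. Below is a language-agnostic sketch in TypeScript (the library itself is Kotlin) of the allocation technique such libraries use; it illustrates the idea, not this library's API:

```typescript
// Allocate an amount (in integer minor units, i.e. cents) across `parts`
// installments without losing a cent: distribute the remainder one cent
// at a time. Sketch of the general technique, not this library's API.
function allocate(amountCents: number, parts: number): number[] {
  const base = Math.floor(amountCents / parts);
  const remainder = amountCents - base * parts;
  return Array.from({ length: parts }, (_, i) => base + (i < remainder ? 1 : 0));
}

console.log(allocate(10000, 3)); // [3334, 3333, 3333], which sums to exactly 10000
```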

Show HN: I built an Iridium/LTE satellite GPS tracker and took it to the Arctic

Show HN: Visualization of website accessibility tree

When COVID-19 started, I needed something to keep me busy so I wouldn't go crazy. I happened to have been working on our app's WCAG compliance for a few months at the time and was frustrated by the state of accessibility-related tools for developers.

I spent two months building a tool that is easy to understand and helps catch accessibility issues in my apps. A few years later it's pretty popular despite being mostly abandoned.

I'd be happy to work on this further, but honestly I lost my enthusiasm some time ago. I'd love to get in touch with some company in the accessibility testing space and discuss how to improve it.

Show HN: Open source framework OpenAI uses for Advanced Voice

Hey HN, we've been working with OpenAI for the past few months on the new Realtime API.

The goal is to give everyone access to the same stack that underpins Advanced Voice in the ChatGPT app.

Under the hood it works like this:

- A user's speech is captured by a LiveKit client SDK in the ChatGPT app
- Their speech is streamed using WebRTC to OpenAI's voice agent
- The agent relays the speech prompt over websocket to GPT-4o
- GPT-4o runs inference and streams speech packets (over websocket) back to the agent
- The agent relays generated speech using WebRTC back to the user's device

The Realtime API that OpenAI launched is the websocket interface to GPT-4o. This backend framework covers the voice agent portion. Besides having additional logic like function calling, the agent fundamentally proxies WebRTC to websocket.

The reason for this is that websocket isn't the best choice for client-server communication. The vast majority of packet loss occurs between a server and a client device, and websocket doesn't provide programmatic control or intervention in lossy network environments like WiFi or cellular. Packet loss leads to higher latency and choppy or garbled audio.
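
A rough sketch of the agent's proxy role described above, with the WebRTC side abstracted away. The websocket endpoint, headers, and event names are assumptions based on OpenAI's published Realtime API, not this framework's actual code:

```typescript
// Rough sketch: PCM audio frames arrive from the user's device over WebRTC
// (abstracted below) and are forwarded to the model over websocket; generated
// speech flows back the other way. Endpoint and event names are assumptions.
import WebSocket from "ws";

const openai = new WebSocket(
  "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview",
  {
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "OpenAI-Beta": "realtime=v1",
    },
  }
);

// Placeholder for the WebRTC publisher (e.g., a LiveKit audio track).
function publishToWebRtc(pcm16: Buffer): void {
  // Hand the decoded audio to the WebRTC track going back to the device.
}

// Called by the WebRTC side for each decoded PCM16 frame from the user.
export function onWebRtcAudioFrame(pcm16: Buffer): void {
  if (openai.readyState !== WebSocket.OPEN) return;
  openai.send(
    JSON.stringify({ type: "input_audio_buffer.append", audio: pcm16.toString("base64") })
  );
}

openai.on("message", (raw) => {
  const event = JSON.parse(raw.toString());
  if (event.type === "response.audio.delta") {
    publishToWebRtc(Buffer.from(event.delta, "base64")); // generated speech
  }
});
```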

Show HN: Chebyshev approximation calculator

Hi everyone,

Here's a web app I made that generates code for efficiently approximating mathematical functions. This is useful when performance matters more than perfect accuracy, for example in embedded systems.

The app uses Chebyshev expansions, which despite their theoretical depth result in surprisingly compact and readable code in practice. This code is generated for you, and using it does not require any knowledge of the underlying theory.

Source code and more info: https://github.com/stuffmatic/chebyshev-calculator
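
As a taste of the technique, here is a hand-written TypeScript sketch of evaluating a truncated Chebyshev expansion with Clenshaw's recurrence; the code the app generates will differ in conventions and layout:

```typescript
// Evaluate f(x) ≈ c[0]·T0(t) + c[1]·T1(t) + ... on [a, b], where t maps x
// into [-1, 1] and Tk are Chebyshev polynomials, using Clenshaw's recurrence.
function chebyshevEval(c: number[], a: number, b: number, x: number): number {
  const t = (2 * x - a - b) / (b - a); // map [a, b] -> [-1, 1]
  let b1 = 0; // b_{k+1}
  let b2 = 0; // b_{k+2}
  for (let k = c.length - 1; k >= 1; k--) {
    const next = 2 * t * b1 - b2 + c[k];
    b2 = b1;
    b1 = next;
  }
  return t * b1 - b2 + c[0];
}

// Illustrative truncated coefficients for exp(x) on [-1, 1].
const c = [1.266066, 1.130318, 0.271495, 0.044337, 0.005474];
console.log(chebyshevEval(c, -1, 1, 0.5)); // ≈ 1.6484, vs Math.exp(0.5) ≈ 1.6487
```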

Show HN: One – A new React framework unifying web, native and local-first

Hey HN, I'm Nate, the creator of Tamagui.

One is a React framework that does two things differently in hopes of simplifying how we build websites and apps:

1. It unifies React Native and React web with typed file-system routing by making Vite able to serve RN. This lets you share (or diverge) your code in a simpler way for cross-platform apps.

2. We've partnered with Zero (https://zerosync.dev) to make local-first work well. We've been building a solution in One that makes Zero support server rendering, without waterfalls, and with seamless server/client handoff.

---

Honestly, I'm a bit hesitant to post One here.

HN has really soured on frontend/frameworks. And I get it. We've collectively complicated the hell out of things.

That's why I decided to build One. I loved Rails; as a young developer, it let me finally realize way more ambitious projects than I'd ever done before. I also liked the promise (not the implementation) of Meteor - it felt like the clear future, I guess just a bit too early (and a bit too scope-creeped).

I worked at Uniswap and built Tamagui, and so spent a lot of time building cross-platform apps that share code. Uniswap is built on Tamagui and I think proves you *can* make really high-quality UX while sharing a lot of code - but it's insanely hard and requires a huge team. My goal with One is to make what is now possible but hard dramatically easier.

And I think the path there goes through local-first, because it makes building super responsive apps much, much simpler, and Zero is the first library to actually pull it off in a way that doesn't bloat your bundle or have very limiting constraints.

I happened to live down the street from Aaron, one of the founders of Zero, in our tiny town in Hawaii. We talked a lot about Zero over the last couple of years, and I found it really admirable how he consistently chose the "harder but better" path in building it. It really shaped into something incredible, and that convinced me to actually launch One, which at the time was more of an experiment.

I can see a lot of potential criticism - do we need yet another framework, this is too shiny and vaporware-y, this is just more complexity and abstraction, etc. Happy to respond to those comments if they come.

I'm just building out something that I've been wanting for a long time: opinionated enough to let me move fast like Rails, but leaning on the great work of team Zero so that we don't end up with the scope creep of Meteor. And honestly, it's just really fun to hack on.
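
To picture the code sharing in point 1 above: a hypothetical route file using React Native primitives, which render natively on iOS/Android and as DOM elements on the web (via react-native-web). One's actual routing conventions may differ:

```typescript
// Hypothetical file-system route (e.g. app/profile.tsx) shared between web
// and native. View and Text render natively on iOS/Android and as DOM
// elements on the web, which is the kind of sharing being described.
import { View, Text } from "react-native";

export default function ProfilePage() {
  return (
    <View style={{ padding: 16 }}>
      <Text>This same component serves the web route and the native screen.</Text>
    </View>
  );
}
```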

Show HN: Flyon UI – Tailwind Components Library

I made a game you can play without anyone knowing (no visuals/sound)

Hello everyone! I just launched an iOS game called Tik! and it has no visuals or sound of any kind. So the obvious question is... how do you play it?

The game uses your phone's haptics to play a rhythm of "Tiks" (haptic vibrations). The user then has to try to recreate the timing of the rhythm they just felt by tapping it anywhere on the screen. It sounds easy, but getting the timing right is tricky, so it usually takes a couple of tries before you're able to get it right.

The inspiration for the game came from wanting something to do in a really boring presentation. It would have been disrespectful to look at my phone, but I also needed a distraction. I typically hold my phone in these kinds of scenarios and fiddle with the case, when it occurred to me: what if there was a game I could play just holding the phone anywhere (under a desk, in my pocket, to the side, etc.)? Some time later, Tik! was born :)

I would love your feedback on it. The game is paid, but if someone would like a promo code to try it, please let me know below. Link: https://apps.apple.com/app/id6720712299

Show HN: Sourcebot, an open-source Sourcegraph alternative

Hi HN,

We're Brendan and Michael, the creators of Sourcebot (https://github.com/sourcebot-dev/sourcebot). Sourcebot is an open-source code search tool that allows you to quickly search across many large codebases. Check out our demo video here: https://youtu.be/mrIFYSB_1F4, or try it for yourself on our demo site here: https://demo.sourcebot.dev

In prior roles, we've both felt the pain of searching across hundreds of multi-million-line codebases. Local tools like grep are ill-suited since you often only have a handful of codebases checked out at a time. Sourcegraph (https://sourcegraph.com/) solves this issue by indexing a collection of codebases in the background and exposing a web-based search interface. It is the de facto search solution for medium to large orgs, but is often cited as expensive ($49 per user / month) and recently went closed source (https://news.ycombinator.com/item?id=41296481). That's why we built Sourcebot.

We designed Sourcebot to be:

- Easily deployed: we provide a single, self-contained Docker image (https://github.com/sourcebot-dev/sourcebot/pkgs/container/sourcebot).

- Fast & scalable: designed to minimize search times (current average is ~73 ms) across many large repositories.

- Cross-code-host: we currently support syncing public & private repositories on GitHub and GitLab.

- Quality UI: we like to think that a good-looking dev tool is more pleasant to use.

- Open source: Sourcebot is free for anyone to use.

Under the hood, we use Zoekt (https://github.com/sourcegraph/zoekt) as our code search engine, which was originally authored by Han-Wen Nienhuys and is now maintained by Sourcegraph (https://sourcegraph.com/blog/sourcegraph-accepting-zoekt-maintainership). Zoekt works by building a trigram index from the source code, enabling extremely fast regular expression matching. Russ Cox has a great article on how trigram indexes work if you're interested: https://swtch.com/~rsc/regexp/regexp4.html

In the shorter term, there are several improvements we want to make, like:

- Improving how we communicate indexing progress (this is currently non-existent, so it's not obvious how long things will take)

- UX improvements like search history, query syntax highlighting & suggestions, etc.

- Small QOL improvements like bookmarking code snippets.

- Support for more code hosts (e.g., Bitbucket, SourceForge, ADO, etc.)

In the longer term, we want to investigate how we could go beyond traditional code search by leveraging machine learning to enable experiences like semantic code search ("where is system X located?") and code explanations ("how does system X interact with system Y?"). You could think of this as a copilot embedded into Sourcebot. Our hunch is that this will be useful to devs, especially when packaged with traditional code search, but let us know what you think.

Give it a try: https://github.com/sourcebot-dev/sourcebot. Cheers!
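
For a feel of how a trigram index buys that speed, here is a toy TypeScript sketch; Zoekt's real index is far more sophisticated (see Russ Cox's article above for the full story):

```typescript
// Toy trigram index: maps each 3-character substring to the set of documents
// containing it. A literal search intersects the posting lists of the query's
// trigrams, then verifies candidates, so most documents are never scanned.
function trigrams(s: string): Set<string> {
  const out = new Set<string>();
  for (let i = 0; i + 3 <= s.length; i++) out.add(s.slice(i, i + 3));
  return out;
}

function buildIndex(docs: string[]): Map<string, Set<number>> {
  const index = new Map<string, Set<number>>();
  docs.forEach((doc, id) => {
    for (const t of trigrams(doc)) {
      if (!index.has(t)) index.set(t, new Set());
      index.get(t)!.add(id);
    }
  });
  return index;
}

function search(docs: string[], index: Map<string, Set<number>>, query: string): number[] {
  // Candidates must contain every trigram of the query.
  let candidates: Set<number> | null = null;
  for (const t of trigrams(query)) {
    const posting = index.get(t) ?? new Set<number>();
    candidates = candidates
      ? new Set([...candidates].filter((id) => posting.has(id)))
      : posting;
  }
  // Verify: trigram containment is necessary but not sufficient.
  return [...(candidates ?? new Set<number>())].filter((id) => docs[id].includes(query));
}
```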

Show HN: A real time AI video agent with under 1 second of latency

Hey, it's Hassaan & Quinn, co-founders of Tavus, an AI research company and developer platform for video APIs. We've been building AI video models for 'digital twins' or 'avatars' since 2020.

We're sharing some of the challenges we faced building an AI video interface that has realistic conversations with a human, including getting it to under 1 second of latency.

To try it, talk to Hassaan's digital twin: https://www.hassaanraza.com, or to our "demo twin" Carter: https://www.tavus.io

We built this because, until now, we've had to adapt communication to the limits of technology. But what if we could interact naturally with a computer? Conversational video makes it possible, and we think it'll eventually be a key human-computer interface.

To make conversational video effective, it has to have really low latency and conversational awareness. A fast-paced conversation between friends has ~250 ms between utterances, but if you're talking about something more complex or with someone new, there is additional "thinking" time. So less than 1000 ms of latency makes the conversation feel pretty realistic, and that became our target.

Our architecture decisions had to balance three things: latency, scale, & cost. Getting all of these was a huge challenge.

The first lesson learned was that to make it low-latency, we had to build it from the ground up. We went from a team that cared about seconds to a team that counts every millisecond. We also had to support thousands of conversations happening all at once, without getting destroyed on compute costs.

For example, during early development, each conversation had to run on an individual H100 in order to fit all components and model weights into GPU memory just to run our Phoenix-1 model faster than 30 fps. This was unscalable & expensive.

We developed a new model, Phoenix-2, with a number of improvements, including inference speed. We switched from a NeRF-based backbone to Gaussian Splatting for a multitude of reasons, one being the requirement that we could generate frames faster than realtime, at 70+ fps, on lower-end hardware. We exceeded this and focused on optimizing memory and core usage on the GPU to allow lower-end hardware to run it all. We did other things to save on time and cost, like using streaming vs. batching, parallelizing processes, etc. But those are stories for another day.

We still had to lower the utterance-to-utterance time to hit our goal of under a second of latency. This meant each component (vision, ASR, LLM, TTS, video generation) had to be hyper-optimized.

The worst offender was the LLM. It didn't matter how fast the tokens per second (t/s) were; it was the time to first token (ttft) that really made the difference. That meant services like Groq were actually too slow: they had high t/s but slow ttft. Most providers were too slow.

The next worst offender was actually detecting when someone stopped speaking. This is hard. Basic solutions use time after silence to 'determine' when someone has stopped talking, but that adds latency. If you tune it to be too short, the AI agent will talk over you. Too long, and it'll take a while to respond. We had to build a model dedicated to accurately detecting end-of-turn based on conversation signals, and to speculating on inputs to get a head start.

We went from 3-5 seconds to under 1 second (& as fast as 600 ms) with these architectural optimizations, all while running on lower-end hardware.

All this allowed us to ship with less than 1 second of latency, which we believe is the fastest out there. We have a bunch of customers, including Delphi, a professional coaching and expert cloning platform. They have users whose conversations with digital twins span from minutes, to one hour, to even four hours (!), which is mind-blowing, even to us.

Thanks for reading! Let us know what you think and what you would build. If you want to play around with our APIs after seeing the demo, you can sign up for free on our website: https://www.tavus.io.
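
For reference, the naive baseline they are improving on looks something like this sketch: declare end-of-turn after a fixed silence window, which makes the timeout itself a latency floor. (A hedged illustration, not Tavus's actual detector.)

```typescript
// Naive end-of-turn baseline: declare the user done after `silenceMs` of no
// speech. The timeout is a direct latency cost: too short and the agent
// barges in, too long and every reply is delayed. A dedicated end-of-turn
// model replaces this fixed wait.
class SilenceEndpointer {
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(private silenceMs: number, private onEndOfTurn: () => void) {}

  // Call for each audio chunk with a voice-activity flag from a VAD.
  onAudioChunk(speechDetected: boolean) {
    if (speechDetected) {
      if (this.timer) clearTimeout(this.timer); // user is still talking
      this.timer = null;
    } else if (!this.timer) {
      // Silence started: end-of-turn fires only after the full window.
      this.timer = setTimeout(this.onEndOfTurn, this.silenceMs);
    }
  }
}
```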

Show HN: qrframe – generate beautiful qr codes with javascript code

I originally built a QR code generator as a resume project using Rust, and I realized a web interface would make customization way easier.

This still generates the "data" using that Rust library via wasm, but the rendering is all editable JavaScript that makes an SVG or paints on an HTML canvas.

I was especially inspired by https://qrbtf.com, which had some unique style options I'd never seen before, which I ended up copying, and then I made some more.
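
The render step being described boils down to something like this sketch: take the boolean module matrix produced by the Rust/wasm side and emit SVG. The app's actual editable render functions are far fancier (rounded shapes, gradients, etc.):

```typescript
// Sketch of a render function: given the QR "data" as a boolean module
// matrix, emit an SVG where each true module becomes a rect.
function renderSvg(matrix: boolean[][], moduleSize = 8): string {
  const n = matrix.length;
  const rects: string[] = [];
  for (let y = 0; y < n; y++) {
    for (let x = 0; x < n; x++) {
      if (matrix[y][x]) {
        rects.push(
          `<rect x="${x * moduleSize}" y="${y * moduleSize}" width="${moduleSize}" height="${moduleSize}"/>`
        );
      }
    }
  }
  const size = n * moduleSize;
  return `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 ${size} ${size}">${rects.join("")}</svg>`;
}
```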

Show HN: A macOS app to prevent sound quality degradation on AirPods

Right, here's the thing: if you are using AirPods (or any Bluetooth headphones with a mic, in fact) on a Mac and something activates the mic (e.g., you Shazam a song), the sound will be interrupted momentarily and will return in very low quality. This happens because Bluetooth can't handle high-quality streaming in both directions, so the bandwidth is decreased to make it work.

It's a known issue, and here's what Apple recommends to fix it: https://support.apple.com/en-hk/102217

Most of the time (unless you are on a Mac Mini/Studio/Pro), you have much higher quality microphones built in. So in most use cases you want to hear through your AirPods but be heard through your internal microphone. This means that if, every time you connect your AirPods, you go into the settings and set the default input device to the internal mic, you won't get sound quality degradation when the mic activates. And if you use your mic to talk to people or record something, you will get better sound quality too.

Based on this observation, I first tried to create a script or some automation that could do it for me, but found that it can be clunky or needlessly complex.

Here's someone who used this approach to fix the issue: https://www.dermitch.de/post/macos-force-microphone-when-using-airpods/

Anyway, I decided to take the "build your app for that" route and created this app, called CrystalClear Audio, which doesn't involve any technical setup to use. Making it was also not as easy as I hoped. I was expecting a half-hour project but ended up filing bug reports with Apple because some API wasn't behaving as expected, or mysterious things were happening when using it (like phantom device changes).

After spending that much time with all this, I decided to publish it on the Mac App Store, and after too many rejections (all my mistakes) I got it published: https://apps.apple.com/us/app/crystalclear-sound/id6695723746

The app is not free but comes with a free trial. I decided to go with a very cheap subscription model because I suspect further development might be needed as bugs emerge or API behavior changes. I know it's a hated business model, but IMHO it's better than ads or tracking of any sort to justify the work done. It's not free because supporting a free app is just as hard as supporting a paid one, and it's not a one-time payment because I don't know what the right price would be for supporting an app for years to come while still having people willing to pay for it.

I hope other people find this useful, and if you do, you can support it by upvoting on Product Hunt so even more people can find it: https://www.producthunt.com/posts/crystalclear-sound

PS: the app is also useful for quickly switching sound output between the laptop speakers and the headphones; I ended up using that quite often.
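
For the curious, the "script or automation" route mentioned above can look like this sketch, shelling out to the third-party SwitchAudioSource CLI (from Homebrew's switchaudio-osx package, unrelated to this app). The device name is an assumption; yours will differ:

```typescript
// Sketch of the script route: pin the default input device to the internal
// mic so Bluetooth headphones are used for output only and stay in the
// high-quality codec. Uses the third-party `SwitchAudioSource` CLI.
import { execFileSync } from "node:child_process";

const INTERNAL_MIC = "MacBook Pro Microphone"; // assumed; list yours with `SwitchAudioSource -a -t input`

execFileSync("SwitchAudioSource", ["-t", "input", "-s", INTERNAL_MIC]);
```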
