The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: Kitten TTS – 25MB CPU-Only, Open-Source TTS Model

Kitten TTS is an open-source series of tiny, expressive text-to-speech models for on-device applications. We are excited to launch a preview of our smallest model, which is less than 25 MB and has 15M parameters.

This release supports English text-to-speech in eight voices: four male and four female. The model is quantized to int8 + fp16 and uses ONNX for the runtime. It is designed to run literally anywhere, e.g., Raspberry Pi, low-end smartphones, wearables, browsers, etc. No GPU required!

We're releasing this to give early users a sense of the latency and voices that will be available in our next release (hopefully next week). We'd love your feedback! Just FYI, this model is an early checkpoint trained on less than 10% of our total data.

We started working on this because existing expressive OSS models require big GPUs to run on-device, and the cloud alternatives are too expensive for high-frequency use. We think there's a need for frontier open-source models that are tiny enough to run on edge devices!
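
Since the model ships as an ONNX file and runs CPU-only, loading and inspecting it with onnxruntime might look roughly like the sketch below. The filename, input names, and shapes here are assumptions for illustration, not details from the release.

    # Minimal sketch: load a CPU-only ONNX model and inspect its inputs/outputs.
    # "kitten_tts.onnx" is a hypothetical filename; the real input names and
    # shapes depend on how the model was exported.
    import onnxruntime as ort

    session = ort.InferenceSession("kitten_tts.onnx",
                                   providers=["CPUExecutionProvider"])

    for inp in session.get_inputs():
        print("input:", inp.name, inp.shape, inp.type)
    for out in session.get_outputs():
        print("output:", out.name, out.shape, out.type)

    # Inference would then be session.run(None, {...}) with the model's
    # expected inputs (e.g., token IDs and a voice/speaker embedding).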

Show HN: FFlags – Feature flags as code, served from the edge

Hi HN,

I'm the creator of FFlags. I built this because I wanted a feature-flagging system that gave me the performance and reliability of an enterprise-scale solution without the months of dev time or the vendor lock-in.

The core ideas are:

1. Feature flags as code: You define your flag logic in TypeScript. This lets you write complex rules, which, as a developer, felt more natural to me than building the logic in a complex UI.

2. Open standard: The platform is built on the OpenFeature standard (specifically the Remote Evaluation Protocol). The goal is to avoid vendor lock-in and the usual enterprise slop; you're not tied to my platform if you want to move.

3. Performance: It uses an edge network to serve the flags, which keeps wall-time latency low (sub-25ms) for globally distributed applications.

I was trying to avoid the heavy cost and complexity of existing enterprise tools while still getting better performance than a simple self-hosted solution.

There's a generous free tier ($39 per million requests after that, with no flag/user limits). I'm looking for feedback on the developer experience, the "flags-as-code" approach, and any technical questions you might have.

Thanks for taking a look.
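
As a loose illustration of the "flags as code" idea - FFlags itself uses TypeScript, and this Python sketch is only meant to show the shape of writing rules as ordinary code rather than in a UI:

    # Loose illustration of "feature flags as code": flag rules are ordinary
    # functions over an evaluation context, versioned alongside the rest of
    # the codebase. This is a concept sketch, not the FFlags or OpenFeature API.
    import hashlib
    from dataclasses import dataclass

    @dataclass
    class Context:
        user_id: str
        country: str
        plan: str

    def new_checkout_flow(ctx: Context) -> bool:
        """Enable for EU pro users, plus a stable 10% cohort of everyone else."""
        if ctx.country in {"DE", "FR", "NL"} and ctx.plan == "pro":
            return True
        # Stable hash so a given user always lands in the same bucket.
        bucket = int(hashlib.sha256(ctx.user_id.encode()).hexdigest(), 16) % 100
        return bucket < 10

    print(new_checkout_flow(Context(user_id="u_42", country="US", plan="free")))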

Show HN: I built a text-based birthday reminder app

I bought the birthdays.app domain a few years ago and decided to build a simple birthday app: just text-message reminders.

I had hacked together a duct-tape version for myself a couple of years prior (using Zapier + Google Sheets) and found it useful. It was a pain sifting through Facebook birthdays (90% irrelevant), and I found the text reminders simpler than a calendar.

The app has been slowly growing since 2023 and is now up to 739 paying users (mostly on the $9/yr plan, with a few paying $3/yr).

Earlier this year, I built a Google Calendar integration to automatically sync birthdays to the app. My goal is birthday reminders without the baggage a normal "social" app would have, such as ads and engagement bait.

I call it an app, but there's actually no app to download. Users log in via phone number + SMS login code, then input birthdays manually, via text, or via Google Calendar.

At the moment, US users receive SMS messages (which are cheaper to send in the US), while other countries receive WhatsApp reminders.

The app uses Twilio to text users (both SMS and WhatsApp) and Stripe for payments.

I'm hoping to build more features (iPhone contacts sync, easier birthday importing) while also making video content that celebrates strangers' birthdays (such as going to a farmers market with an "is it your birthday?" sign).

Thank you for reading. I'm open to any thoughts or ideas around the app!
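
For a rough idea of the sending side, a Twilio-based reminder (SMS for US numbers, WhatsApp otherwise) could look like the sketch below; the credentials and phone numbers are placeholders, not the app's actual setup.

    # Sketch of sending a birthday reminder via Twilio, roughly mirroring the
    # SMS-for-US / WhatsApp-elsewhere split described in the post.
    # Credentials and phone numbers below are placeholders.
    from twilio.rest import Client

    client = Client("ACCOUNT_SID", "AUTH_TOKEN")

    def send_reminder(to_number: str, name: str, is_us: bool) -> None:
        body = f"Reminder: {name}'s birthday is tomorrow!"
        if is_us:
            client.messages.create(body=body, from_="+15550001111", to=to_number)
        else:
            # WhatsApp messages use the "whatsapp:" prefix on both numbers.
            client.messages.create(body=body,
                                   from_="whatsapp:+15550001111",
                                   to=f"whatsapp:{to_number}")

    send_reminder("+15557654321", "Alex", is_us=True)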

Show HN: Stagewise (YC S25) – Front end coding agent for existing codebases

Hey HN, we're Julian and Glenn, and we're building stagewise (https://stagewise.io), a frontend coding agent that lives inside your browser on localhost and operates on local codebases.

You can spawn the agent into locally running web apps in dev mode with `npx stagewise` from the project root. The agent then lets you click on HTML elements in your app, enter prompts like "increase the height here," and it will implement the changes in your source code.

Before stagewise, we were building a vertical SaaS for logistics from scratch and loved using prototyping tools like v0 or Lovable to get to the first version. But when switching from v0/Lovable to Cursor for local development, we felt like the frontend magic was gone. So we decided to build stagewise to bring that same magic to local development.

The first version of stagewise just forwarded a prompt with browser context to existing IDEs and agents (Cursor, Cline, ...) and went viral on X after we open-sourced it. However, the APIs of existing coding agents were very limiting, so we figured that building our own agent would unlock the full potential of stagewise.

Here's how it works: when you run `npx stagewise`, our CLI proxies your running web application in dev mode and injects a toolbar containing the coding agent on top of it. Each prompt you send is enriched with browser context and sent to our CLI, which calls our backend and modifies the source code of your local codebase accordingly.

Here's a demo of our agent changing the login UI of Cal.com, a popular open-source meeting-scheduling app: https://www.youtube.com/watch?v=BkDcAozK9L4

So far, we've seen great adoption from non-technical users who wanted to continue building their Lovable prototype locally. We personally use the agent almost daily to make changes to our landing page and to build the UI of new features on our console (https://console.stagewise.io).

If you have an app running in dev mode, simply `cd` into the app directory and run `npx stagewise` - the agent should appear, ready to play with.

We're very excited to hear your feedback!
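
The proxy-and-inject step described above can be sketched in a few lines; everything here (ports, script URL, handler details) is made up for illustration and is not stagewise's actual implementation.

    # Minimal sketch of the mechanism described: proxy a local dev server and
    # inject a toolbar <script> tag into HTML responses before serving them.
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    UPSTREAM = "http://localhost:3000"  # the app's dev server (assumed port)
    SNIPPET = b'<script src="/toolbar.js"></script>'  # illustrative URL

    class InjectingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Fetch the page from the upstream dev server.
            with urllib.request.urlopen(UPSTREAM + self.path) as resp:
                body = resp.read()
                content_type = resp.headers.get("Content-Type", "")
            # Only rewrite HTML; pass other assets through untouched.
            if "text/html" in content_type:
                body = body.replace(b"</body>", SNIPPET + b"</body>")
            self.send_response(200)
            self.send_header("Content-Type", content_type)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("localhost", 8080), InjectingProxy).serve_forever()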

Show HN: Whittle – A shrinking word game

Whittle is a small word game I've been working on. Each phrase must be whittled down by one letter (or space) each turn. The remaining phrase must still consist of valid words. That's it! There's a daily puzzle, as well as an archive of old puzzles.

The idea for the game came to me in a dream (really) and I built the puzzle generator with my partner, who's also a software engineer. It's a labor of love! Any feedback or suggestions are welcome. Thanks for playing!
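
The core rule - remove exactly one character and every remaining word must still be valid - is easy to express in code; here's a tiny sketch with a stand-in word list (the real game's dictionary and puzzle generator are of course more involved).

    # Sketch of Whittle's core rule: deleting one character (letter or space)
    # must leave a phrase whose words are all still valid. The word list here
    # is a tiny stand-in for the game's real dictionary.
    VALID_WORDS = {"heart", "hear", "heat", "ear", "art", "he", "a"}

    def is_valid_phrase(phrase: str) -> bool:
        return all(word in VALID_WORDS for word in phrase.split())

    def legal_moves(phrase: str) -> list[str]:
        """All phrases reachable by deleting exactly one character."""
        candidates = (phrase[:i] + phrase[i + 1:] for i in range(len(phrase)))
        return [p for p in candidates if p and is_valid_phrase(p)]

    print(legal_moves("heart"))  # ['heat', 'hear'] with this tiny word list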

Show HN: I've been building an ERP for manufacturing for the last 3 years

Show HN: Spatial Web Browser Engine

Show HN: Schematra – Sinatra-inspired minimal web framework for Chicken Scheme

I started this project a couple of weeks ago because I was stuck on my side project and needed some motivation. For a very long time I'd wanted to get back to doing something useful in Lisp/Scheme; I did some quick research and settled on CHICKEN, mostly because it's relatively well maintained, fast enough, extremely easy to build/install, and very easy to write interop to pretty much any library.

Most of the projects I've written on the side have used some combination of Sinatra + Sequel + Postgres/Redis/something else + HTMX. I love the simplicity of Sinatra's API, so I decided to focus on providing a similar experience in Scheme, trying to make it ergonomic for a Scheme dev (that part might not be there yet, since I'm not an experienced Scheme dev).

The most fun part was the dev cycle: Emacs + nREPL + Aider (as a code reviewer & rubber ducky; for codegen it's mostly annoying, but it works great for documentation & refactoring).

I hope to add full SSE & WebSocket support some time this week. Anyway, hopefully this is interesting to some of you and might be a source of fun :)
