The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: Octofriend, a cute coding agent that can swap between GPT-5 and Claude

Hey HN! We're shipping Octofriend today, a cute coding assistant that can swap between GPT-5, Claude, and local or open-source LLMs mid-conversation as needed. It handles reasoning tokens well (including encrypted ones from OpenAI and Anthropic), and it includes a couple of custom-trained ML models, which we've also open-sourced, that fix minor diff-edit and JSON-encoding errors. Have fun!
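
Octofriend's internals aren't shown in the post, but the headline trick it describes, keeping one conversation history while switching which model answers each turn, is easy to illustrate. The sketch below uses the OpenAI-compatible chat completions API purely as an example; the model names, endpoints, and the assumption that both backends accept the same message format are illustrative, not Octofriend's actual code.

```python
# Illustrative sketch only: swap the serving model mid-conversation while
# keeping a single shared message history. Model names and endpoints are
# placeholders, not Octofriend's configuration.
from openai import OpenAI

backends = {
    "gpt": OpenAI(),  # uses OPENAI_API_KEY / api.openai.com by default
    "local": OpenAI(base_url="http://localhost:8000/v1", api_key="unused"),
}

history = [{"role": "system", "content": "You are a helpful coding assistant."}]

def ask(backend: str, model: str, user_msg: str) -> str:
    """Send the shared history to whichever backend/model is currently selected."""
    history.append({"role": "user", "content": user_msg})
    resp = backends[backend].chat.completions.create(model=model, messages=history)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# The same history flows through both models, turn by turn.
print(ask("gpt", "gpt-5", "Sketch a binary search in Python."))
print(ask("local", "qwen2.5-coder", "Now add a unit test for it."))
```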

Show HN: Sinkzone DNS – Forwarder that blocks everything except your allowlist

Most site blockers work by blacklisting distractions. That never worked for me: the internet is too big, and there's always something new to waste time on.

I wanted the opposite: allowlist-only browsing. Block everything by default, and explicitly allow only what I need.

So I built Sinkzone, a local DNS forwarder with two modes:

Monitor mode: lets all traffic through, but logs every domain so you can decide what to allow.

Focus mode: only allowlisted domains resolve; everything else is blocked (NXDOMAIN).

It's open source, written in Go, and runs locally on macOS, Linux, and Windows. It works a bit like Pi-hole, but instead of blocking ads, it blocks everything unless you say otherwise.

I'm curious whether this would be useful in your workflow. If you try it, please let me know what breaks, what works well, and what you'd improve.
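
Sinkzone itself is written in Go and its resolver isn't reproduced here; as a rough, language-agnostic sketch of the focus-mode decision described above (allowlisted domains get forwarded, everything else gets NXDOMAIN), here is a minimal Python illustration. The allowlist contents and the subdomain-matching behavior are assumptions made for the example.

```python
# Minimal sketch of an allowlist-only DNS decision (not Sinkzone's actual code).
# A query is forwarded only if it matches an allowlisted domain or a subdomain
# of one; everything else is answered with NXDOMAIN.
ALLOWLIST = {"example.com", "wikipedia.org"}  # hypothetical allowlist

def decide(qname: str) -> str:
    """Return 'FORWARD' or 'NXDOMAIN' for a queried domain name."""
    name = qname.rstrip(".").lower()
    labels = name.split(".")
    # Check the name itself and every parent suffix,
    # e.g. docs.example.com -> example.com -> com
    for i in range(len(labels)):
        if ".".join(labels[i:]) in ALLOWLIST:
            return "FORWARD"
    return "NXDOMAIN"

assert decide("en.wikipedia.org.") == "FORWARD"
assert decide("news.ycombinator.com") == "NXDOMAIN"
```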

Show HN: An open-source e-book reader for conversational reading with an LLM

Hi HN! I've been working on BookWith, an open-source e-book reader that integrates AI as your reading companion.

The problem: Traditional e-readers are passive. When you encounter something unclear, you have to context-switch to search for it. Your highlights and notes remain isolated, and you can't easily connect ideas across different books.

My solution: BookWith embeds an AI that maintains full context of what you're reading. It features:

- Context-aware AI chat: ask questions about the current page/chapter and get instant answers
- AI podcast generation: automatically converts book content into conversational podcasts using Google Cloud TTS
- Multi-layer memory system: short-term (last 5 conversations), mid-term (summarized every 20), and long-term (vector search) memory that maintains continuity across reading sessions (a rough sketch follows below)
- Smart annotations: a 5-color highlighting system that the AI can reference and analyze

Technical stack: built as a fork of Flow (an epub reader), with added LLM integration and a vector database for semantic search. Supports multiple LLMs and languages (EN/JA/ZH).
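
BookWith's actual memory code isn't included in the post; the sketch below is only a guess at how the described tiers could fit together: keep the last 5 exchanges verbatim, roll older ones into summaries every 20 turns, and use similarity search for long-term recall. The summarize() and embed() functions are placeholders (hypothetical), not BookWith's implementation, which would call an LLM and an embedding model instead.

```python
# Hypothetical sketch of the three memory tiers the post describes.
# summarize() and embed() are crude placeholders for an LLM summarizer
# and a real embedding model.
from collections import deque

def summarize(texts):
    return " / ".join(t[:40] for t in texts)      # placeholder summarizer

def embed(text):
    return set(text.lower().split())              # placeholder "embedding"

def similarity(a, b):
    return len(a & b) / (len(a | b) or 1)         # stand-in for cosine similarity

class TieredMemory:
    def __init__(self):
        self.short_term = deque(maxlen=5)   # last 5 exchanges, verbatim
        self.mid_term = []                  # rolling summaries
        self.long_term = []                 # (embedding, text) pairs
        self._since_summary = []

    def add(self, exchange: str):
        self.short_term.append(exchange)
        self.long_term.append((embed(exchange), exchange))
        self._since_summary.append(exchange)
        if len(self._since_summary) >= 20:  # summarize every 20 exchanges
            self.mid_term.append(summarize(self._since_summary))
            self._since_summary.clear()

    def recall(self, query: str, k: int = 3):
        """Context for the next prompt: recent turns, summaries, and the
        k long-term memories most similar to the query."""
        q = embed(query)
        ranked = sorted(self.long_term, key=lambda e: similarity(q, e[0]), reverse=True)
        return {
            "short_term": list(self.short_term),
            "mid_term": list(self.mid_term),
            "long_term": [text for _, text in ranked[:k]],
        }
```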

Show HN: Kitten TTS – 25MB CPU-Only, Open-Source TTS Model

Kitten TTS is an open-source series of tiny and expressive text-to-speech models for on-device applications. We are excited to launch a preview of our smallest model, which is less than 25 MB and has 15M parameters.

This release supports English text-to-speech in eight voices: four male and four female. The model is quantized to int8 + fp16 and uses ONNX for the runtime. It is designed to run literally anywhere, e.g. Raspberry Pi, low-end smartphones, wearables, browsers, etc. No GPU required!

We're releasing this to give early users a sense of the latency and voices that will be available in our next release (hopefully next week). We'd love your feedback! Just FYI, this model is an early checkpoint trained on less than 10% of our total data.

We started working on this because existing expressive OSS models require big GPUs to run on-device, and the cloud alternatives are too expensive for high-frequency use. We think there's a need for frontier open-source models that are tiny enough to run on edge devices!
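
The post doesn't include usage code, and Kitten TTS's own API isn't reproduced here. As a hedged sketch of what CPU-only ONNX inference typically looks like, the snippet below loads an ONNX model with onnxruntime on the CPU provider and writes the audio to a WAV file; the file name, input/output tensor names, tokenization, and 24 kHz sample rate are all assumptions, not the project's actual interface.

```python
# Rough sketch of CPU-only ONNX inference for a small TTS model.
# Model path, tensor names, tokenization, and sample rate are assumptions.
import numpy as np
import onnxruntime as ort
import soundfile as sf

sess = ort.InferenceSession("kitten_tts.onnx",            # hypothetical file
                            providers=["CPUExecutionProvider"])

# Inspect the exported graph to find the real input/output names.
print([i.name for i in sess.get_inputs()], [o.name for o in sess.get_outputs()])

# Placeholder "tokenization": a real model ships its own text frontend.
token_ids = np.array([[1, 5, 9, 2]], dtype=np.int64)
speaker = np.array([0], dtype=np.int64)                    # one of the 8 voices

(audio,) = sess.run(None, {"input_ids": token_ids, "speaker_id": speaker})
sf.write("hello.wav", np.squeeze(audio), 24000)            # assumed 24 kHz
```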

Show HN: FFlags – Feature flags as code, served from the edge

Hi HN,

I'm the creator of FFlags. I built this because I wanted a feature-flagging system that gave me the performance and reliability of an enterprise-scale solution without the months of dev time or the vendor lock-in.

The core ideas are:

1. Feature flags as code: you define your flag logic in TypeScript. This lets you write complex rules, which, as a developer, felt more natural to me than building logic in a complex UI (see the sketch below).

2. Open standard: the platform is built on the OpenFeature standard (specifically the Remote Evaluation Protocol). The goal is to avoid vendor lock-in and the usual enterprise slop. You're not tied to my platform if you want to move.

3. Performance: it uses an edge network to serve the flags, which keeps wall-time latency low (sub-25ms) for globally distributed applications.

I was trying to avoid the heavy cost and complexity of existing enterprise tools while still getting better performance than a simple self-hosted solution.

There's a generous free tier ($39 per million requests after that, with no flag/user limits). I'm looking for feedback on the developer experience, the "flags-as-code" approach, and any technical questions you might have.

Thanks for taking a look.
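
FFlags' real SDK is TypeScript and its API isn't shown in the post, so the snippet below is only a generic illustration of the "flags as code" idea, written in Python: each flag is an ordinary function evaluated against a request context, which is what makes complex, version-controlled rules easy to express. Flag names and rule details are made up for the example.

```python
# Generic illustration of "feature flags as code" (not FFlags' actual SDK).
# Each flag is a plain function of an evaluation context, so rules live in
# version control and can be arbitrarily complex.
from typing import Callable, Dict

Context = Dict[str, object]
FLAGS: Dict[str, Callable[[Context], bool]] = {}

def flag(name: str):
    """Register a flag rule under a name."""
    def register(rule: Callable[[Context], bool]):
        FLAGS[name] = rule
        return rule
    return register

@flag("new-checkout")
def new_checkout(ctx: Context) -> bool:
    # Roll out to internal users everywhere, plus ~10% of EU traffic.
    if ctx.get("employee"):
        return True
    return ctx.get("region") == "eu" and hash(ctx.get("user_id", "")) % 10 == 0

def evaluate(name: str, ctx: Context, default: bool = False) -> bool:
    rule = FLAGS.get(name)
    return rule(ctx) if rule else default

print(evaluate("new-checkout", {"user_id": "u42", "region": "eu"}))
```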

Show HN: I built a text-based birthday reminder app

I bought the birthdays.app domain a few years ago and decided to build a simple birthday app: just text message reminders.

I had hacked together a duct-tape version for myself a couple years prior (using Zapier + Google Sheets) and found it useful. It was a pain sifting through Facebook birthdays (90% irrelevant), and I found the text reminders simpler than a calendar.

The app has been slowly growing since 2023 and is now up to 739 paying users (mostly on the $9/yr plan, with a few paying $3/yr).

Earlier this year, I built a Google Calendar integration to automatically sync birthdays to the app. My goal is birthday reminders without the baggage a normal "social" app would have, such as ads, engagement bait, etc.

I call it an app, but there's actually no app to download. Users log in via phone number + SMS login code, then input birthdays manually, via text, or via Google Calendar.

At the moment, US users receive SMS messages (which are cheaper to send in the US), while users in other countries receive WhatsApp reminders.

The app uses Twilio to text users (both SMS + WhatsApp) and Stripe for payments.

I'm hoping to build more features (iPhone contacts sync, easier birthday importing) while also making video content that celebrates strangers' birthdays (such as going to a farmers market with an "is it your birthday?" sign).

Thank you for reading. I'm open to any thoughts or ideas around the app!
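
The post mentions Twilio for SMS and WhatsApp delivery but includes no code; the snippet below is a hedged sketch of how a reminder text could be sent with Twilio's Python SDK. Credentials, phone numbers, and message wording are placeholders, and this is not the app's actual implementation.

```python
# Illustrative only: sending a birthday reminder via Twilio's Python SDK.
# Credentials and numbers are placeholders, not the app's real configuration.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def send_reminder(to_number: str, friend_name: str, via_whatsapp: bool = False):
    """Send one reminder; WhatsApp uses the same API with a 'whatsapp:' prefix."""
    prefix = "whatsapp:" if via_whatsapp else ""
    client.messages.create(
        body=f"Reminder: it's {friend_name}'s birthday tomorrow!",
        from_=prefix + os.environ["TWILIO_FROM_NUMBER"],
        to=prefix + to_number,
    )

send_reminder("+15555550123", "Alex")                      # SMS (US)
send_reminder("+447700900123", "Sam", via_whatsapp=True)   # WhatsApp (non-US)
```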
