The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Create your own finetuned AI model using Google Sheets
Hello HN,

We built Promptrepo to make fine-tuning accessible to product teams, not just ML engineers. Last week, OpenAI's CPO shared how they use fine-tuning for everything from customer support to deep research, and called it the future for serious AI teams. Yet most teams I know still rely on prompting, because fine-tuning is too technical, while the people who have the training data (product managers and domain experts) are often non-technical. With Promptrepo, they can now:

- Add training examples in Google Sheets
- Click a button to train
- Deploy and test instantly
- Use OpenAI, Claude, Gemini or Llama models

We've used this internally for years to power AI workflows in our products (Formfacade, Formesign, Neartail), and we're now opening it up to others. Would love your feedback and happy to answer any questions!

Try it free: https://promptrepo.com/finetune
Demo video: https://www.youtube.com/watch?v=e1CTin1bD0w
Why we built it: https://guesswork.co/support/post/fine-tuning-is-the-future-and-now-its-within-every.anc-ddfd2598-5798-423d-b6ec-e7d84e98847a.html
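To make the "training examples in Google Sheets" idea concrete: each spreadsheet row is essentially one supervised example, and for OpenAI models such rows are typically converted into chat-format JSONL before fine-tuning. A minimal Python sketch of that conversion follows; it illustrates the general idea only, not Promptrepo's internals, and the file and column names are made up:

```python
import csv
import json

# Convert a sheet exported as CSV (hypothetical columns "input" and "output")
# into the chat-format JSONL that OpenAI's fine-tuning API expects.
with open("training_examples.csv", newline="") as src, open("train.jsonl", "w") as dst:
    for row in csv.DictReader(src):
        example = {
            "messages": [
                {"role": "system", "content": "You are a support assistant."},
                {"role": "user", "content": row["input"]},
                {"role": "assistant", "content": row["output"]},
            ]
        }
        dst.write(json.dumps(example) + "\n")
```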
Show HN: Memex is a Claude Code alternative built on Rust+Tauri for vibe coding
Hi HN,

TL;DR: Memex is a cross-platform desktop app for vibe coding. Think ChatGPT + Claude Code rolled into one.

Why we built it: We love chat tools like Perplexity and ChatGPT. We also love coding agents, like in Cursor and Windsurf. We don't like that web-based app builders are opinionated about tech stack and that we can't run them locally. So we built Memex to be a chat tool + coding agent that supports any tech stack.

What it can do today: Claude Code-like coding. Agentic web search / research. Pre-built templates (e.g. fullstack, iOS, Python + Modal, etc.). Inline data analysis + viz. Checkpointing (shadow git repo). Privacy mode.

How it works: Written in TS + Rust + Python, using Tauri for the cross-platform build (macOS, Windows, Linux). It has a bundled Python environment for data analysis. The agent uses a mix of Sonnet 3.7 + Haiku.

Status & roadmap: Free download with a free tier and paid plan: https://memex.tech. Up next: [1] Additional model support (e.g. Gemini 2.5). [2] MCP support. [3] Computer use.

Ask: Kick the tires. Give us feedback on product + roadmap. If you love it, spread the word!

Thanks! David
Show HN: Heart Rate Zones Plus – The first iOS app I developed
I built this iOS app because I wanted an overview of my time in heart rate zones per week without manually checking zones after every workout. Now I'm looking for feedback.

Description: Track time in heart rate zones. See how much time you spend in each zone per day, week, month, or over the last 7 or 30 days. Set goals and visualize progress. Get details about the heart rate zones of your workouts.

Features: Custom time periods. Workout-to-zone attribution, to get a feel for which sport contributed most to each zone. Multiple zone calculation methods. Personal time goals for any zone. Workout breakdown.

Pricing: Free

Privacy: Nothing is tracked or sent anywhere. Data stays on your device.

Any feedback and feature requests are appreciated.

Download: https://apps.apple.com/us/app/heart-rate-zones-plus/id6744743232

Video of the app in action: https://www.youtube.com/shorts/-qtHxEdMEv0
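For readers unfamiliar with zone math, one common calculation method derives each zone as a percentage band of an estimated maximum heart rate. The Python sketch below shows that scheme only as an illustration; the app supports multiple calculation methods and may not use this one:

```python
def heart_rate_zones(age: int) -> dict[str, range]:
    """Common percentage-of-max scheme, using the rough estimate max HR = 220 - age.
    Illustrative only; not necessarily the method the app uses."""
    max_hr = 220 - age
    bounds = {
        "Zone 1 (very light)": (0.50, 0.60),
        "Zone 2 (light)":      (0.60, 0.70),
        "Zone 3 (moderate)":   (0.70, 0.80),
        "Zone 4 (hard)":       (0.80, 0.90),
        "Zone 5 (maximum)":    (0.90, 1.00),
    }
    return {name: range(round(lo * max_hr), round(hi * max_hr))
            for name, (lo, hi) in bounds.items()}

print(heart_rate_zones(35))  # e.g. Zone 2 is roughly 111 to 130 bpm for a 35-year-old
```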
Show HN: Sim Studio – Open-Source Agent Workflow GUI
Hi HN! We're Emir and Waleed, and we're building Sim Studio (https://simstudio.ai), an open-source drag-and-drop UI for building and managing multi-agent workflows as a directed graph. You can define how agents interact with each other, use tools, and handle complex logic like branching, loops, transformations, and conditional execution.

Our repo is https://github.com/simstudioai/sim, docs are at https://docs.simstudio.ai/introduction, and we have a demo here: https://youtu.be/JlCktXTY8sE?si=uBAf0x-EKxZmT9w4

Building reliable, multi-step agent systems with current frameworks often gets complicated fast. In OpenAI's 'practical guide to building agents', they claim that the non-declarative approach and single multi-step agents are the best path forward, but from experience and experimentation, we disagree. Debugging these implicit flows across multiple agent calls and tool uses is painful, and iterating on the logic or prompts becomes slow.

We built Sim Studio because we believe defining the workflow explicitly and visually is the key to building more reliable and maintainable agentic applications. In Sim Studio, you design the entire architecture out of agent blocks that have system prompts, a variety of models (hosted and local via Ollama), tools with granular tool-use control, and structured output.

We have plenty of pre-built integrations that you can use as standalone blocks or as tools for your agents. The nodes are connected with if/else conditional blocks, LLM-based routing, loops, and branching logic for specialized agents.

The visual graph isn't just for prototyping; it is actually executable. You can run simulations of a workflow 1, 10, or 100 times to see how modifying a small system prompt detail, the underlying model, or a tool call impacts the overall performance of the workflow.

You can trigger workflows manually, deploy them as an API and interact via HTTP (see the sketch below), or schedule them to run periodically. They can also be set up to trigger on incoming webhooks and deployed as standalone chat instances that can be password- or domain-protected.

We have granular trace spans, logs, and observability built in, so you can easily compare and contrast performance across different model providers and tools. All of this enables a tighter feedback loop and significantly faster iteration.

So far, users have built deep research agents to detect application fraud, chatbots to interface with their internal HR documentation, and agents to automate communication between manufacturing facilities.

Sim Studio is Apache 2.0 licensed and fully open source.

We're excited about bringing a visual, workflow-centric approach to agent development. We think it makes building robust, complex agentic workflows far more accessible and reliable. We'd love to hear the HN community's thoughts!
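Triggering a deployed workflow over HTTP generally looks like an ordinary authenticated POST. The endpoint, header, and payload shape below are hypothetical and only illustrate what such a call tends to look like; they are not Sim Studio's documented API:

```python
import requests

# Hypothetical endpoint and payload, shown only to illustrate invoking a
# deployed workflow over HTTP; consult the Sim Studio docs for the real API.
WORKFLOW_URL = "https://example.invalid/api/workflows/<workflow-id>/run"

response = requests.post(
    WORKFLOW_URL,
    headers={"Authorization": "Bearer <api-key>"},
    json={"input": {"question": "Summarize yesterday's support tickets"}},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```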
Show HN: Flowcode – Turing-complete visual programming platform
Hey HN! I’m Gabriel, and I’m excited to share a project I’ve been working on for the last few years. Flowcode is a visual programming platform that tries to combine the best of both worlds (code and visual). Over the years I found myself repeatedly drawing architectures and logic. It was always my dream to just press “run” instead of having to write them in code afterwards. But none of the visual tools I found were flexible and transparent enough for building real products.

I think that visual programming fits perfectly with modern backend dev tasks that revolve around connecting different services with basic logic. Flowcode is meant to speed up and simplify those tasks, leaving more time to think about design and solve design problems. Visual programming also works really well for developing workflows involving LLM calls that are non-deterministic and require a lot of debugging and prompt tweaking.

There are many other visual/low-code tools, but they all offer limited control and flexibility (no concurrency, loops, or transparency) and most suffer from the same problems (vendor lock-in, hard to integrate with existing code, etc.).

Flowcode is built on an open-source visual programming language (Flyde, https://github.com/flydelabs/flyde, which I launched here on HN last year: https://news.ycombinator.com/item?id=39628285). This means Flowcode has true concurrency, no vendor lock-in (you can export flows as .flyde files), is Turing-complete (loops, recursion, control flow, multiple IOs, etc.), lets you fork any node, integrates with code via an SDK, and more.

I’d love to hear your thoughts and feedback.
Show HN: A pure WebGL image editor with filters, crop and perspective correction
I'm working on a pure JS WebGL image editor with effects, filters, crop and perspective correction, etc.

My goal is to give the community an open-source solution, as unfortunately most comparable apps are closed source.

Try it out at https://mini2-photo-editor.netlify.app (source: https://github.com/xdadda/mini-photo-editor)
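For readers curious how perspective correction works, it generally comes down to estimating a 3x3 homography from four point correspondences and resampling the image through it. Below is a minimal NumPy sketch of the estimation step; it is a generic illustration, not code from this editor, which does its work in WebGL shaders:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst from 4 point pairs.

    src, dst: sequences of four (x, y) pairs. Solves the standard 8x8 linear
    system with h33 fixed to 1 (direct linear transform, 4-point case).
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Map a skewed quadrilateral (e.g. a photographed document) back to a rectangle.
src = [(30, 40), (610, 60), (590, 470), (50, 450)]
dst = [(0, 0), (600, 0), (600, 450), (0, 450)]
print(homography_from_points(src, dst))
```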
Show HN: A Chrome extension that will auto-reject non-essential cookies
A FOSS Chrome extension that attempts to remove the annoyance of cookie pop-ups and banners.

There are some extensions out there that auto-accept cookies, but I didn't find one that auto-rejected cookies without either chaining some extensions together or setting up custom rules in tools like uBlock Origin. So with this extension, you just need to add it for non-essential cookies to be rejected.

GitHub: https://github.com/mitch292/reject-cookies
Extension link: https://chromewebstore.google.com/detail/bnbodofigkfjljnopfggfoecokhmhamc?utm_source=item-share-cb

It's still very early days for the extension. I want it to keep improving and working on more and more sites. Feedback welcome. Thanks!
Show HN: Beatsync – perfect audio sync across multiple devices
Hi HN! I made Beatsync, an open-source browser-based audio player that syncs audio with millisecond-level accuracy across many devices.

Try it live right now: https://www.beatsync.gg/

The idea is that with no additional hardware, you can turn any group of devices into a full surround sound system. MacBook speakers are particularly good.

Inspired by Network Time Protocol (NTP), I do clock synchronization over WebSockets and use the Web Audio API to keep audio latency under a few ms.

You can also drag devices around a virtual grid to simulate spatial audio: it changes the volume of each device depending on its distance to a virtual listening source!

I've been working on this project for the past couple of weeks. Would love to hear your thoughts and ideas!
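For context on the NTP-inspired part: the classic trick is to timestamp one request/response exchange on both ends and estimate clock offset and round-trip delay from the four timestamps. A minimal Python sketch of that calculation (the project itself does this over WebSockets in the browser):

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """Classic NTP-style estimate from one request/response exchange.

    t0: client sends request   (client clock)
    t1: server receives request (server clock)
    t2: server sends response   (server clock)
    t3: client receives response (client clock)
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2   # how far the client clock lags the server
    delay = (t3 - t0) - (t2 - t1)          # network round trip, excluding server processing
    return offset, delay

# Example: client clock runs 100 ms behind the server, 40 ms round trip.
print(ntp_offset_and_delay(t0=1000, t1=1120, t2=1125, t3=1045))  # -> (100.0, 40)
```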
Show HN: I built a hardware processor that runs Python
Hi everyone,
I built PyXL, a hardware processor that executes custom assembly generated from Python programs, without using a traditional interpreter or virtual machine. It compiles Python -> CPython bytecode -> an instruction set designed for direct hardware execution.

I'm sharing an early benchmark: a GPIO test where PyXL achieves a 480 ns round-trip toggle, compared to 14-25 microseconds on a MicroPython Pyboard, even though PyXL runs at a lower clock (100 MHz vs. 168 MHz).

The design is stack-based, fully pipelined, and preserves Python's dynamic typing without static type restrictions.

I independently developed the full stack, both the toolchain (compiler, linker, codegen) and the hardware, to validate the core idea. Full technical details will be presented at PyCon 2025.

Demo and explanation here: https://runpyxl.com/gpio
Happy to answer any questions
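For a sense of what a GPIO round-trip test looks like in source form, the sketch below is the kind of MicroPython code typically run on a Pyboard for such measurements: watch an input pin and echo it to an output, with the input-edge-to-output-edge latency being the round trip. The pin names are illustrative and this is not the PyXL benchmark code:

```python
# MicroPython-style sketch of a GPIO round trip; pin names are illustrative,
# not taken from the PyXL benchmark.
from pyb import Pin

out_pin = Pin("X1", Pin.OUT_PP)  # push-pull output that gets toggled
in_pin = Pin("X2", Pin.IN)       # input edge being responded to

# Busy-wait and echo the input to the output; the delay from input edge to
# output edge is the round-trip latency such benchmarks measure.
while True:
    out_pin.value(in_pin.value())
```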
Show HN: I made a web-based, free alternative to Screen Studio
Show HN: I486SX_soft_FPU – Software FPU Emulator for NetBSD 10 on 486SX
First release is here!

I'm excited to announce the first release of i486SX_soft_FPU, a software FPU emulator for the classic Intel 486SX CPU, running on NetBSD 10!

This project brings floating-point support back to life for 486SX machines, even though modern NetBSD versions no longer natively support processors without a hardware FPU.

If you're into retrocomputing, operating system hacking, or just love old-school hardware, check it out!

Project page: https://github.com/mezantrop/i486SX_soft_FPU

Contributions, feedback, and testing are all very welcome!

Let's keep these vintage machines alive!

#retrocomputing #NetBSD #486SX #opensource