The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: I replaced vector databases with Git for AI memory (PoC)
Hey HN! I built a proof-of-concept for AI memory using Git instead of vector databases.

The insight: Git already solved versioned document management. Why are we building complex vector stores when we could just use markdown files with Git's built-in diff/blame/history?

How it works:

- Memories stored as markdown files in a Git repo
- Each conversation = one commit
- git diff shows how understanding evolves over time
- BM25 for search (no embeddings needed)
- LLMs generate search queries from conversation context
Example: Ask "how has my project evolved?" and it uses git diff to show actual changes in understanding, not just similarity scores.

This is very much a PoC - rough edges everywhere, not production ready. But it's been working surprisingly well for personal use. The entire index for a year of conversations fits in ~100MB RAM with sub-second retrieval.

The cool part: you can git checkout to any point in time and see exactly what the AI knew then. Perfect reproducibility, human-readable storage, and you can manually edit memories if needed.

GitHub: https://github.com/Growth-Kinetics/DiffMem

Stack: Python, GitPython, rank-bm25, OpenRouter for LLM orchestration. MIT licensed.

Would love feedback on the approach. Is this crazy or clever? What am I missing that will bite me later?
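A minimal sketch of the search-and-history idea under the stated stack (GitPython + rank-bm25), assuming memories are markdown files in a Git working tree; the repo path and helper names are illustrative, not DiffMem's actual API:

```python
# Sketch only: BM25 over markdown memories, plus per-file Git history.
from pathlib import Path

from git import Repo              # GitPython
from rank_bm25 import BM25Okapi   # rank-bm25

REPO_PATH = Path("memories")      # hypothetical repo of markdown memories

def load_corpus(repo_path: Path) -> tuple[list[Path], list[list[str]]]:
    """Read every markdown memory and tokenize it naively for BM25."""
    paths = sorted(repo_path.rglob("*.md"))
    tokens = [p.read_text(encoding="utf-8").lower().split() for p in paths]
    return paths, tokens

def search(query: str, top_k: int = 3) -> list[Path]:
    """Rank memory files against a query string with BM25."""
    paths, tokens = load_corpus(REPO_PATH)
    bm25 = BM25Okapi(tokens)
    scores = bm25.get_scores(query.lower().split())
    ranked = sorted(zip(scores, paths), key=lambda pair: pair[0], reverse=True)
    return [p for _, p in ranked[:top_k]]

def history(path: Path) -> str:
    """Show how a single memory evolved, via the Git log/diffs for that file."""
    repo = Repo(REPO_PATH)
    return repo.git.log("--follow", "-p", "--", str(path.relative_to(REPO_PATH)))

if __name__ == "__main__":
    for hit in search("how has my project evolved"):
        print(hit)
        print(history(hit)[:500])  # first few hundred chars of the diff history
```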
Show HN: Project management system for Claude Code
I built a lightweight project management workflow to keep AI-driven development organized.

The problem was that context kept disappearing between tasks. With multiple Claude agents running in parallel, I’d lose track of specs, dependencies, and history. External PM tools didn’t help because syncing them with repos always created friction.

The solution was to treat GitHub Issues as the database. The "system" is ~50 bash scripts and markdown configs that:

- Brainstorm with you to create a markdown PRD, spin up an epic, decompose it into tasks, and sync them with GitHub Issues
- Track progress across parallel streams
- Keep everything traceable back to the original spec
- Run fast from the CLI (commands finish in seconds)

We’ve been using it internally for a few months and it’s cut our shipping time roughly in half. Repo: https://github.com/automazeio/ccpm

It’s still early and rough around the edges, but has worked well for us. I’d love feedback from others experimenting with GitHub-centric project management or AI-driven workflows.
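As a rough illustration of the "GitHub Issues as the database" idea (not ccpm's actual scripts, which are bash), here is a hedged Python sketch that files decomposed tasks as issues via the gh CLI; the task titles and epic label are invented:

```python
# Illustrative only: push a decomposed task list into GitHub Issues with `gh`.
import subprocess

EPIC_LABEL = "epic:memory-search"   # hypothetical label tying tasks to an epic

tasks = [
    ("Write PRD summary", "Summarize the brainstormed PRD in docs/prd.md"),
    ("Implement indexer", "Build the first-pass indexer described in the PRD"),
]

for title, body in tasks:
    # `gh issue create` prints the URL of the new issue on stdout.
    result = subprocess.run(
        ["gh", "issue", "create", "--title", title, "--body", body,
         "--label", EPIC_LABEL],
        check=True, capture_output=True, text=True,
    )
    print("created", result.stdout.strip())
```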
Show HN: OS X Mavericks Forever
Show HN: I was curious about spherical helix, ended up making this visualization
I was wondering how I could arrange objects along a spherical helix path, and read some articles on it.

I ended up learning about parametric equations again, and made this visualization to document what I learned:

https://visualrambling.space/moving-objects-in-3d/

Feel free to visit and let me know what you think!
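For reference, here is one common spherical-helix parametrization in Python; it is my guess at the kind of curve involved, not necessarily the exact equations used in the visualization:

```python
# Sweep the polar angle from 0 to pi while the azimuth winds around `turns` times.
import numpy as np

def spherical_helix(radius: float = 1.0, turns: int = 8, samples: int = 400) -> np.ndarray:
    """Return a (samples, 3) array of points spiraling from pole to pole."""
    theta = np.linspace(0.0, np.pi, samples)      # polar angle: pole to pole
    phi = turns * 2.0 * np.pi * theta / np.pi     # azimuth winds `turns` times
    x = radius * np.sin(theta) * np.cos(phi)
    y = radius * np.sin(theta) * np.sin(phi)
    z = radius * np.cos(theta)
    return np.column_stack((x, y, z))

points = spherical_helix()
print(points[:3])  # sample positions along which objects could be placed
```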
Show HN: Strix - Open-source AI hackers for your apps
Show HN: OpenAI/reflect – Physical AI Assistant that illuminates your life
I have been working on making WebRTC + embedded devices easier for a few years. This is a hackathon project that pulled some of that together. I hope others build on it, or that it inspires them to play with hardware. I worked on it with two other people and had a lot of fun with some of the ideas that came out of it.

* Extendable/hackable - I tried to keep the code as simple as possible so others can fork/modify easily.

* Communicate with light - with function calling it changes the light bulb, so it can match your mood or feelings.

* Populate info from clients you control - I wanted to experiment with having it guide you through yesterday/today.

* Phone as control - setting up new devices can be frustrating. I liked that this didn't require any WiFi setup; it just routed everything through your phone. It's also cool that the device doesn't actually hold any sensitive data.
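To make the "communicate with light" bullet concrete, here is a hedged sketch of an OpenAI-style function-calling tool that sets the bulb color; the tool name, parameters, and handler are hypothetical, not the project's actual interface:

```python
# Sketch of a function-calling tool definition plus a local dispatcher.
import json

LIGHT_TOOL = {
    "type": "function",
    "function": {
        "name": "set_light",
        "description": "Set the smart bulb to a color that matches the mood of the conversation.",
        "parameters": {
            "type": "object",
            "properties": {
                "color": {"type": "string", "description": "Color name or hex, e.g. '#ff8800'"},
                "brightness": {"type": "integer", "minimum": 0, "maximum": 100},
            },
            "required": ["color"],
        },
    },
}

def set_bulb_color(color: str, brightness: int = 80) -> None:
    # On the real device this would drive the LED; here we just log it.
    print(f"bulb -> {color} at {brightness}%")

def handle_tool_call(name: str, arguments: str) -> None:
    """Dispatch a tool call returned by the model."""
    if name == "set_light":
        set_bulb_color(**json.loads(arguments))

handle_tool_call("set_light", json.dumps({"color": "#ff8800", "brightness": 60}))
```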
Show HN: Chroma Cloud – serverless search database for AI
Hey HN - I’m Jeff, co-founder of Chroma.

In December of 2022, I was scrolling Twitter in the wee hours of the morning holding my then-newborn daughter. ChatGPT had launched, and we were all figuring out what this technology was and how to make it useful. Developers were using retrieval to bring their data to the models - and so I DM’d every person who had tweeted about “embeddings” in the entire month of December (it was only 120 people!). I saw then how AI was going to need to search all the world’s information to build useful and reliable applications.

Anton Troynikov and I started Chroma with the beliefs that:

1. AI-based systems were way too difficult to productionize

2. Latent space was incredibly important to improving AI-based systems (no one understood this at the time)

On Valentine’s Day 2023, we launched the first version of Chroma and it immediately took off. Chroma made retrieval just work. Chroma is now a large open-source project with 21k+ stars and 5M monthly downloads, used at companies like Apple, Amazon, Salesforce, and Microsoft.

Today we’re excited to launch Chroma Cloud - our fully managed offering backed by an Apache 2.0 serverless database called Chroma Distributed. Chroma Distributed is written in Rust and uses object storage for extreme scalability and reliability. Chroma Cloud is fast and cheap. Leading AI companies such as Factory, Weights & Biases, Propel, and Foam already use Chroma Cloud in production to power their agents. It brings the “it just works” developer experience developers have come to know Chroma for to the cloud.

Try it out and let me know what you think!

— Jeff
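For anyone new to Chroma, this is the basic collection workflow with the open-source Python client (pip install chromadb); the documents are placeholders, and the hosted Chroma Cloud client needs its own connection setup not shown here:

```python
# Add a couple of documents to a collection and run a semantic query.
import chromadb

client = chromadb.Client()  # in-memory local client
collection = client.create_collection(name="notes")

collection.add(
    ids=["n1", "n2"],
    documents=[
        "Retrieval brings your own data to the model.",
        "Latent space structure matters for AI systems.",
    ],
)

results = collection.query(query_texts=["how do I ground a model in my data?"], n_results=1)
print(results["documents"][0])
```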
Show HN: I built a toy TPU that can do inference and training on the XOR problem
We wanted to do something very challenging to prove to ourselves that we can do anything we put our minds to. The reasoning for why we chose to build a toy TPU specifically is fairly simple:

- Building a chip for ML workloads seemed cool
- There was no well-documented open source repo for an ML accelerator that performed both inference and training

None of us have real professional experience in hardware design, which, in a way, made the TPU even more appealing since we weren't able to estimate exactly how difficult it would be. As we worked on the initial stages of this project, we established a strict design philosophy: TO ALWAYS TRY THE HACKY WAY. This meant trying out the "dumb" ideas that came to mind first BEFORE consulting external sources. This philosophy helped us make sure we weren't reverse engineering the TPU, but rather re-inventing it, which helped us derive many of the key mechanisms used in the TPU ourselves.

We also wanted to treat this project as an exercise in coding without relying on AI to write for us, since we felt that our recent instinct has been to reach for LLMs whenever we faced a slight struggle. We wanted to cultivate a certain style of thinking that we could take forward with us and use to think through difficult problems in any future endeavours.

Throughout this project we tried to learn as much as we could about the fundamentals of deep learning, hardware design, and creating algorithms, and we found that the best way to learn this stuff is by drawing everything out and making that our first instinct. On tinytpu.com, you will see how our explanations were inspired by this philosophy.

Note that this is NOT a 1-to-1 replica of the TPU - it is our attempt at re-inventing a toy version of it ourselves.
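As a point of reference for the workload itself (not the authors' hardware), here is a tiny NumPy baseline of XOR inference and training with a 2-4-1 sigmoid MLP; the hyperparameters are arbitrary:

```python
# Plain gradient descent on XOR: the same computation a toy accelerator must run.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: squared-error loss, gradients by the chain rule
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3).ravel())  # should approach [0, 1, 1, 0]
```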
Show HN: We started building an AI dev tool but it turned into a Sims-style game
Hi HN! We’re Max and Peyton from The Interface (https://www.theinterface.com/).

We started out building an AI agent dev tool, but somewhere along the way it turned into Sims for AI agents.

Demo video: https://www.youtube.com/watch?v=sRPnX_f2V_c

The original idea was simple: make it easy to create AI agents. We started with Jupyter Notebooks, where each cell could be callable by MCP—so agents could turn them into tools for themselves. It worked well enough that the system became self-improving, churning out content, and acting like a co-pilot that helped you build new agents.

But when we stepped back, what we had was these endless walls of text. And even though it worked, honestly, it was just boring. We were also convinced that it would be swallowed up by the next model’s capabilities. We wanted to build something else—something that made AI less of a black box and more engaging. Why type into a chat box all day if you could look your agents in the face, see their confusion, and watch when and how they interact?

Both of us grew up on simulation games—RollerCoaster Tycoon 3, Age of Empires, SimCity—so we started experimenting with running LLM agents inside a 3D world. At first it was pure curiosity, but right away, watching agents interact in real time was much more interesting than anything we’d done before.

The very first version was small: a single Unity room, an MCP server, and a chat box. Even getting two agents to take turns took weeks. Every run surfaced quirks—agents refusing to talk at all, or only “speaking” by dancing or pulling facial expressions to show emotion. That unpredictability kept us building.

Now it’s a desktop app (Tauri + Unity via WebGL) where humans and agents share 3D tile-based rooms. Agents receive structured observations every tick and can take actions that change the world. You can edit the rules between runs—prompts, decision logic, even how they see chat history—without rebuilding.

On the technical side, we built a Unity bridge with MCP and multi-provider routing via LiteLLM, with local model support via Mistral.rs coming next. All system prompts are editable, so you can directly experiment with coordination strategies—tuning how “chatty” agents are versus how much they move or manipulate the environment.

We then added a tilemap editor so you can design custom rooms, set tile-based events with conditions and actions, and turn them into puzzles or hazards. There’s community sharing built in, so you can post rooms you make.

Watching agents collude or negotiate through falling tiles, teleports, landmines, fire, “win” and “lose” tiles, and tool calls for things like lethal fires or disco floors is a much more fun way to spend our days.

Under the hood, Unity’s ECS drives a whole state machine and event system. And because humans and AI share the same space in real time, every negotiation, success, or failure also becomes useful multi-agent, multimodal data for post-training or world models.

Our early users are already using it for prompt-injection testing, social engineering scenarios, cooperative games, and model comparisons.

The bigger vision is to build an open-ended, AI-native sim game where you can build and interact with anything or anyone. You can design puzzles, levels, and environments, have agents compete or collaborate, set up games, or even replay your favorite TV shows.

The fun part is that no two interactions are ever the same. Everything is emergent, not hard-coded, so the same level played six times will play out differently each time.

The plan is to keep expanding—bigger rooms, more in-world tools for agents, and then multiplayer hosting. It’s live now, no waitlist. Free to play. You can bring your own API keys, or start with $10 in credits and run agents right away: www.TheInterface.com.

We’d love feedback on scenarios worth testing and what to build next. Tell us the weird stuff you’d throw at this—we’ll be in the comments.
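A hedged sketch of the observation/action tick loop described above; the observation fields, action format, and call_agent() stub are invented for illustration and are not The Interface's actual schema:

```python
# Toy tick loop: structured observations go to each agent, actions come back.
import json
from dataclasses import dataclass, asdict

@dataclass
class Observation:
    tick: int
    position: tuple            # agent's tile in the room, e.g. (2, 2)
    visible_agents: list
    recent_chat: list
    tile_events: list          # e.g. "tile (3,4) is on fire"

def call_agent(system_prompt: str, observation: Observation) -> dict:
    """Stand-in for an LLM call (e.g. routed through LiteLLM); returns an action."""
    _ = (system_prompt, json.dumps(asdict(observation)))
    return {"action": "move", "target": [3, 5], "say": "heading for the win tile"}

def run_tick(world_state: dict, tick: int) -> None:
    for name, agent in world_state["agents"].items():
        obs = Observation(
            tick=tick,
            position=agent["position"],
            visible_agents=[n for n in world_state["agents"] if n != name],
            recent_chat=world_state["chat"][-5:],
            tile_events=world_state["events"],
        )
        action = call_agent(agent["system_prompt"], obs)
        # A real engine would validate the action against world state before applying it.
        agent["last_action"] = action
        if action.get("say"):
            world_state["chat"].append(f"{name}: {action['say']}")

world = {
    "agents": {"ada": {"position": (2, 2), "system_prompt": "Be chatty.", "last_action": None}},
    "chat": [],
    "events": ["tile (3,4) is on fire"],
}
run_tick(world, tick=1)
print(world["chat"])
```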
Show HN: Fractional jobs – part-time roles for engineers
I'm Taylor. I spent about a year as a Fractional Head of Product. It was my first time not in a full-time W2 role, and I quickly learned that the hardest part of the job wasn't doing the product work (I was a PM for 10+ years); it was finding good clients to work with.

So I built Fractional Jobs.

The goal is to help more people break out of W2 life and into their own independent careers by helping them find great clients to work with.

We find and vet the clients, and then engineers can request intros to any that seem like a good fit. We'll make the intro assuming the client opts in after seeing your profile.

We have 9 open engineering roles right now:
- 2x Fractional CTO
- 2x AI engineers
- 3x full-stack
- 1x staff frontend
- 1x mobile
Show HN: Whispering – Open-source, local-first dictation you can trust
Hey HN! Braden here, creator of Whispering, an open-source speech-to-text app.

I really like dictation. For years, I relied on transcription tools that were *almost* good, but they were all closed-source. Even a lot of them that claimed to be “local” or “on-device” were still black boxes that left me wondering where my audio really went.

So I built Whispering. It’s open-source, local-first, and most importantly, transparent with your data. Your data is stored locally on your device, and your audio goes directly from your machine to a local provider (Whisper C++, Speaches, etc.) or your chosen cloud provider (Groq, OpenAI, ElevenLabs, etc.). For me, the features were good enough that I left my paid tools behind (I used Superwhisper and Wispr Flow before).

Productivity apps should be open-source and transparent with your data, but they also need to match the UX of paid, closed-software alternatives. I hope Whispering is near that point. I use it for several hours a day, from coding to thinking out loud while carrying pizza boxes back from the office.

Here’s an overview: https://www.youtube.com/watch?v=1jYgBMrfVZs, and here’s how I personally am using it with Claude Code these days: https://www.youtube.com/watch?v=tpix588SeiQ.

There are plenty of transcription apps out there, but I hope Whispering adds some extra competition from the OSS ecosystem (one of my other OSS favorites is Handy: https://github.com/cjpais/Handy). Whispering has a few tricks up its sleeve, like a voice-activated mode for hands-free operation (no button holding), and customizable AI transformations with any prompt/model.

Whispering used to be in my personal GH repo, but I recently moved it as part of a larger project called Epicenter (https://github.com/epicenter-so/epicenter), which I should explain a bit...

I’m basically obsessed with local-first open-source software. I think there should be an open-source, local-first version of every app, and I would like them all to work together. The idea of Epicenter is to store your data in a folder of plaintext and SQLite, and build a suite of interoperable, local-first tools on top of this shared memory. Everything is totally transparent, so you can trust it.

Whispering is the first app in this effort. It’s not there yet regarding memory, but it’s getting there. I’ll probably write more about the bigger picture soon, but mainly I just want to make software and let it speak for itself (no pun intended in this case!), so this is my Show HN for now.

I just finished college and was about to move back with my parents and work on this instead of getting a job…and then I somehow got into YC. So my current plan is to cover my living expenses and use the YC funding to support maintainers, our dependencies, and people working on their own open-source local-first projects. More on that soon.

Would love your feedback, ideas, and roasts. If you would like to support the project, star it on GitHub here (https://github.com/epicenter-so/epicenter) and join the Discord here (https://go.epicenter.so/discord).
Everything’s MIT licensed, so fork it, break it, ship your own version, copy whatever you want!
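For a sense of the provider model described above, here is a rough sketch of the cloud path only, using OpenAI's transcription API (one of the providers the post lists); the file name is a placeholder and this is not Whispering's actual code:

```python
# Send a locally recorded clip to a cloud transcription provider.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("dictation.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
    )

print(transcript.text)  # paste-ready text for the active window
```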