The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: Miditui – A terminal app/UI for MIDI composing, mixing, and playback

Show HN: Various shape regularization algorithms

I deal with a lot of geometry at work in computer vision and photogrammetry, and it usually comes from the real world. It's seldom clean and neat, and I'm often trying to find a way to "make it nice" or "make it pretty". I've always struggled with what that really means formally.

That led me to shape regularization, a technique used in computational geometry to clean up geometric data. CGAL has implemented a few methods for it, but there are more ways to do it that I thought were nice. Also, I typically work in Python, so it was nice to have a pure Python library that could handle this.

I struggled to get the first version working as a QP. At a high level, most of these boil down to minimizing a cost A + B, where A is the cost associated with the geometry (it goes up the more you move it) and B is the cost associated with "niceness", or rather the constraints you impose (it goes down the more you impose them). Then you minimize A + B, or rather H*A + (1-H)*B, where H is a hyper-parameter that controls the relative importance of A and B.

I needed a Python implementation, so I started with the examples implemented in CGAL, then added a couple more for snap and joint regularization and metric regularization.
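To make the trade-off concrete, here is a minimal sketch of that weighted objective as a QP in cvxpy. The toy "niceness" term (pushing three points towards collinearity) is an illustration, not the library's actual formulation:

    import cvxpy as cp
    import numpy as np

    points = np.array([[0.0, 0.0], [1.02, 0.01], [1.0, 0.98]])  # noisy input
    x = cp.Variable(points.shape)  # regularized output positions

    H = 0.7  # relative importance of fidelity vs. "niceness"
    A = cp.sum_squares(x - points)                # grows the more points move
    B = cp.sum_squares(x[1] - (x[0] + x[2]) / 2)  # toy collinearity cost

    cp.Problem(cp.Minimize(H * A + (1 - H) * B)).solve()
    print(x.value)  # regularized points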

Show HN: I built a tool to create AI agents that live in iMessage

Hey everyone, I made this thing: https://tryflux.ai/. Here's a demo video: https://screen.studio/share/1y2EnC26

Context: I've tried probably 15 different AI apps over the past year. ChatGPT, note-taking apps, productivity apps, all of it. But most of them are just clutter on my iPhone.

They live in some app I have to deliberately open. And I just... don't. But you know what I open 50 times a day without thinking? iMessage. So out of mild frustration with the "AI app graveyard" on my phone, I built Flux.

What it does:

- You describe a personality and what you want the agent to do
- In about 2 minutes, you have a live AI agent in iMessage
- Blue bubbles. Native. No app download for whoever texts it.

The thesis that got us here: AI is already smart enough. The bottleneck is interaction. Dashboards get forgotten. Texts get answered.

This was also my first time hitting #1 on Product Hunt, which was surreal.

It's still rough and probably has things broken. If you try it, feedback is super welcome: weird edge cases, "this doesn't work", or "why would anyone use this" comments all help.

That's all. Happy to answer questions.

Show HN: Play poker with LLMs, or watch them play against each other

I was curious to see how some of the latest models behave when playing no-limit Texas hold'em.

I built this website, which lets you:

Spectate: watch different models play against each other.

Play: create your own table and play hands against the agents directly.

Show HN: I used Claude Code to discover connections between 100 books

I think LLMs are overused to summarise and underused to help us read deeper.

I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them.

I started out with a pipeline in stages, chaining together LLM calls to build up a context of the library. I was mainly getting back the insight that I was baking into the prompts, and the results weren't particularly surprising.

On a whim, I gave CC access to my debug CLI tools and found that it wiped the floor with that approach. It gave actually interesting results and required very little orchestration in comparison.

One of my favourite trails of excerpts goes from Jobs’ reality distortion field to Theranos’ fake demos, to Thiel on startup cults, to Hoffer on mass-movement charlatans (https://trails.pieterma.es/trail/useful-lies/). A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems, as if the task itself summoned a Foucault’s Pendulum mindset.

Details:

- The books are picked from HN’s favourites (which I collected before: https://hnbooks.pieterma.es/).
- Chunks are indexed by topic using Gemini Flash Lite. The whole library cost about £10.
- Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels. This gives a high-level sense of the themes.
- There are several ways to browse. The most useful are embedding similarity (sketched below), topic-tree siblings, and topics co-occurring within a chunk window.
- Everything is stored in SQLite and manipulated using a set of CLI tools.

I wrote more about the process here: https://pieterma.es/syntopic-reading-claude/

I’m curious whether this way of reading resonates for anyone else, LLM-mediated or not.
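As an illustration of the embedding-similarity browse, here is a minimal sketch of ranking cross-book chunks out of SQLite. The `chunks` table and its columns are assumptions, since the post doesn't show the actual schema:

    import sqlite3
    import numpy as np

    def top_similar(db_path: str, chunk_id: int, k: int = 5):
        con = sqlite3.connect(db_path)
        rows = con.execute("SELECT id, book, embedding FROM chunks").fetchall()
        vecs = {cid: np.frombuffer(blob, dtype=np.float32) for cid, _, blob in rows}
        books = {cid: book for cid, book, _ in rows}

        q = vecs[chunk_id] / np.linalg.norm(vecs[chunk_id])
        scores = [
            (cid, float(v @ q / np.linalg.norm(v)))
            for cid, v in vecs.items()
            # only surface connections across different books
            if cid != chunk_id and books[cid] != books[chunk_id]
        ]
        return sorted(scores, key=lambda s: s[1], reverse=True)[:k]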

Show HN: EuConform – Offline-first EU AI Act compliance tool (open source)

I built this as a personal open-source project to explore how EU AI Act requirements can be translated into concrete, inspectable technical checks.

The core idea is local-first compliance:

- risk classification (Articles 5–15, incl. prohibited use cases)
- bias evaluation using CrowS-Pairs (see the sketch below)
- automatic Annex IV–oriented PDF reports
- no cloud services or external APIs (browser-based + Ollama)

I’m especially interested in feedback on whether this kind of technical framing of AI regulation makes sense in real-world projects.
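For context, CrowS-Pairs scores bias by comparing how much more likely a model finds a stereotypical sentence than its anti-stereotypical counterpart. A minimal sketch of that comparison, using a local Hugging Face causal LM as a stand-in (the project itself evaluates via Ollama, whose API this does not reproduce):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def sentence_logprob(text: str) -> float:
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)
        # loss is mean NLL per predicted token; scale back to a total log-prob
        return -out.loss.item() * (ids.shape[1] - 1)

    stereo = "The nurse said she would be late."
    anti = "The nurse said he would be late."
    gap = sentence_logprob(stereo) - sentence_logprob(anti)
    print(f"log-likelihood gap (positive favours the stereotype): {gap:.3f}")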

Show HN: Rocket Launch and Orbit Simulator

I (17 y/o) have been developing a rocket launch simulation that lets the user explore what it's like to launch a rocket from Earth and put it into orbit. This idea originally started as an educational simulation, but the further I've gone down the rabbit hole, the more I've wanted to make it realistic. The problem is that I've never had a formal orbital mechanics class or anything like that, so I don't know what I'm missing. What I currently have implemented (force model sketched below):

- Variable gravity
- Variable atmospheric drag (US Standard Atmosphere 1976)
- Multi-stage rockets
- Closed-loop guidance / pitch programs (works well for target altitudes of 350 km to 600 km)
- Orbital prediction and thrusting options to change your orbit

The feedback I'm looking for: UI improvements and possible future physics implementations that I can work on.

Current code and physics can be found at: https://github.com/donutTheJedi/Rocket-Launch-Simulation
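For readers curious about the force model, a minimal sketch of inverse-square gravity plus quadratic drag; the exponential atmosphere here is a crude stand-in for the US Standard Atmosphere 1976 tables the project uses:

    import numpy as np

    MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6_371_000.0  # mean radius, m

    def density(alt_m: float) -> float:
        return 1.225 * np.exp(-alt_m / 8500.0)  # crude exponential atmosphere

    def acceleration(pos, vel, mass, cd_area):
        r = np.linalg.norm(pos)
        gravity = -MU / r**3 * pos  # inverse-square, varies with altitude
        drag = np.zeros(3)
        v = np.linalg.norm(vel)
        if v > 0:
            rho = density(r - R_EARTH)
            drag = -0.5 * rho * cd_area * v * vel / mass  # opposes velocity
        return gravity + drag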

Show HN: Similarity = cosine(your_GitHub_stars, Karpathy) Client-side

GitHub profile analysis:

- Build your embedding from your stars
- Compare and discover popular people with similar interests, and share yours
- Generate a skill radar
- Get recommendations for repositories you might like
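The title's formula in a few lines: represent each user as a vector over starred repos and take the cosine. The star sets here are made up for illustration; the real tool builds its embedding client-side:

    import numpy as np

    def star_vector(stars: set[str], vocab: list[str]) -> np.ndarray:
        return np.array([1.0 if repo in stars else 0.0 for repo in vocab])

    mine = {"karpathy/nanoGPT", "pytorch/pytorch", "ggml-org/llama.cpp"}
    theirs = {"karpathy/nanoGPT", "pytorch/pytorch", "openai/whisper"}

    vocab = sorted(mine | theirs)
    a, b = star_vector(mine, vocab), star_vector(theirs, vocab)
    print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))  # ~0.67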

Show HN: Executable Markdown files with Unix pipes

I wanted to run markdown files like shell scripts. So I built an open source tool that lets you use a shebang to pipe them through Claude Code with full stdin/stdout support.

task.md:

    #!/usr/bin/env claude-run
    Analyze this codebase and summarize the architecture.

Then:

    chmod +x task.md
    ./task.md

These aren't just prompts. Claude Code has tool use, so a markdown file can run shell commands, write scripts, read files, make API calls. The prompt orchestrates everything.

A script that runs your tests and reports results (run_tests.md):

    #!/usr/bin/env claude-run --permission-mode bypassPermissions
    Run ./test/run_tests.sh and summarize what passed and failed.

Because stdin/stdout work like any Unix program, you can chain them:

    cat data.json | ./analyze.md > results.txt
    git log -10 | ./summarize.md
    ./generate.md | ./review.md > final.txt

Or mix them with traditional shell scripts:

    for f in logs/*.txt; do
      cat "$f" | ./analyze.md >> summary.txt
    done

This replaced a lot of Python glue code for us. Tasks that needed LLM orchestration libraries are now markdown files composed with standard Unix tools. Composable as building blocks, runnable as cron jobs, etc.

One thing we didn't expect is that these are more auditable (and shareable) than shell scripts. Install scripts like `curl -fsSL https://bun.com/install | bash` could become:

    curl -fsSL https://bun.com/install.md | claude-run

Where install.md says something like "Detect my OS and architecture, download the right binary from GitHub releases, extract to ~/.local/bin, update my shell config." A normal human can actually read and verify that.

The (really cool) executable markdown idea and auditability examples are from Pete Koomen (@koomen on X). As Pete says: "Markdown feels increasingly important in a way I'm not sure most people have wrapped their heads around yet."

We implemented it and added Unix pipe semantics. Currently it works with Claude Code; we're hoping to support other AI coding tools too. You can also route scripts through different cloud providers (AWS Bedrock, etc.) if you want separate billing for automated jobs.

GitHub: https://github.com/andisearch/claude-switcher

What workflows would you use this for?
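For intuition, here is a hypothetical minimal version of such a shim in Python. This is not the actual claude-switcher implementation; it assumes Claude Code's non-interactive print mode (`claude -p`) and that the shebang mechanism passes the script path as the last argument:

    #!/usr/bin/env python3
    import subprocess
    import sys
    from pathlib import Path

    def main() -> int:
        *flags, script_path = sys.argv[1:]  # e.g. --permission-mode ...
        script = Path(script_path).read_text()
        if script.startswith("#!"):         # drop the shebang line
            script = script.split("\n", 1)[1]
        # stdin/stdout are inherited, so Unix pipes flow straight through
        return subprocess.run(["claude", "-p", script, *flags]).returncode

    if __name__ == "__main__":
        raise SystemExit(main())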

Show HN: Scroll Wikipedia like TikTok

Hey - I've been playing with LLMs since GPT-2 and recently experimented with fully generative UIs, where the HTML/Canvas is generated just in time.

Every post you see on the feed (on slop/duck/storytime) is streamed and generated just in time as HTML into a Canvas with Gemini 3 Flash.

Comments and DMs are bidirectionally linked with a Cloudflare Workers Durable Object, which is why they feel so fast. Every generated post is saved into a DO SQLite database, which then backs the "Following" feed so it can be served more quickly.

This was inspired by Wikitok, a VS Code extension I made around brainrot, and another fully generative UI site I made.

Show HN: I made a memory game to teach you to play piano by ear

Show HN: Comet MCP – Give Claude Code a browser that can click

Hey HN,

Claude Code is pretty agentic now. It writes scripts, calls APIs, uses CLIs. But when something requires actually clicking through a website, it stops and asks me to do it.

Problem is, I'm often unfamiliar with these platforms myself. "Go to App Store Connect and generate a P8 key", okay, but where? I end up spending 10 minutes navigating menus I've never seen before.

I started delegating these tasks to Perplexity's Comet browser. It handles the clicking and returns what I need. But copy-pasting between Claude and Comet got old fast.

So I built this MCP server to connect them directly. Now when Claude needs to interact with a website that has no API, it can just ask Comet to handle it. Examples:

- Grab my app ID from the RevenueCat dashboard
- Generate a P8 key in App Store Connect
- Navigate admin panels behind login walls

I tried Playwright MCP, but having Claude do the clicking itself overwhelms the context window. Comet's agentic browsing just works better in my experience.

Comet doesn't have an API, so this uses CDP to communicate with it directly.
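To show the shape of the glue, a hypothetical sketch using the official MCP Python SDK. The tool name and the delegate_to_comet() stub are illustrative; the real project drives Comet over CDP:

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("comet-browser")

    def delegate_to_comet(task: str) -> str:
        # Placeholder: the real implementation talks to Comet via the
        # Chrome DevTools Protocol, since Comet has no public API.
        raise NotImplementedError

    @mcp.tool()
    def browse(task: str) -> str:
        """Hand a natural-language browsing task to Comet, return the result."""
        return delegate_to_comet(task)

    if __name__ == "__main__":
        mcp.run()  # stdio transport, so Claude Code can attach it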

Show HN: Free and local browser tool for designing gear models for 3D printing

Just built a local tool for designing gears that looks and works pretty nicely.

Show HN: DeepDream for Video with Temporal Consistency

I forked a PyTorch DeepDream implementation and added video support with temporal consistency. It produces smooth DeepDream videos with minimal flickering, is highly flexible with many parameters, and supports multiple pretrained image classifiers, including GoogLeNet. Check out the repo for sample videos! Features:

- Optical flow warps previous hallucinations into the current frame (sketched below)
- Occlusion masking prevents ghosting and hallucination transfer when objects move
- Advanced parameters (layers, octaves, iterations) still work
- Works on GPU, CPU, and Apple Silicon
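The flow-warp step in miniature, as an illustration of the technique rather than the repo's actual code: estimate dense flow from the current frame back to the previous one, then backward-warp the previous hallucination into place:

    import cv2
    import numpy as np

    def warp_previous(prev_gray, curr_gray, prev_dream):
        # Flow from current to previous frame, suited to backward warping.
        flow = cv2.calcOpticalFlowFarneback(
            curr_gray, prev_gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        h, w = curr_gray.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        # Sample the previous dream at each pixel's source location.
        return cv2.remap(prev_dream, map_x, map_y, cv2.INTER_LINEAR)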

Show HN: A geofence-based social network app 6 years in development

My name is Adrian. I'm a software engineer, and I spent 6 years developing a perimeter-based, geofence-based social media app.

What it does:

- Lets you load a custom perimeter anywhere on the geographic map (180° E/W longitude, 90° N/S latitude) to cover any area of interest
- Chat rooms get loaded within the perimeter
- You can chat with people within the perimeter

I developed a mobile app that uses an advanced geofence-based networking system from 2013 to 2019. My goal was to connect users within polygon geofences anywhere in the world. The app is capable of loading millions of polygon geofences anywhere in the world.

https://enterpriseandroidfoundation.com/assets/images/other/manhattan-polygon-geofences.png

But people didn't really have a need for this. So after failing, I spent the next 6 years trying new ideas to use FencedIn for. I tried a location-based video app and a place-based app with multiple features. Nothing worked, but now I'm almost finished developing ChatLocal, an app that lets you load a perimeter anywhere on the geographic map, which then loads chat rooms.

The tech stack is 100% Java (mostly low-level). I have a backend, a commons library, and an Android app. Java was the natural choice back in 2013, but I still wouldn't choose anything else today: Java is the best for long-term, large-scale projects. (I'm also using WildFly, PostgreSQL, and a Linux server.)

This app is still not fully finished, but I think the impact on society might be tremendous.

The previous app to ChatLocal, LocalVideo, is fully up on the Google Play store and can be tested. It has 88% of the features of ChatLocal, including especially the perimeter-based loading system.

The feedback I'm mostly looking for: new ideas and concepts to add to this location-based social media app, and how strong a value proposition the app has for society.
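The core membership check behind "chat rooms within the perimeter" is a point-in-polygon test. A minimal ray-casting sketch (Python for brevity; the project itself is pure Java), ignoring spatial indexing and antimeridian edge cases that a real geofencing system needs:

    def in_polygon(lon, lat, polygon):
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            # Toggle on each edge that a ray cast east from the point crosses.
            if (y1 > lat) != (y2 > lat):
                x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
                if lon < x_cross:
                    inside = not inside
        return inside

    manhattan_ish = [(-74.02, 40.70), (-73.97, 40.71),
                     (-73.93, 40.80), (-74.01, 40.76)]
    print(in_polygon(-73.98, 40.75, manhattan_ish))  # True: inside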
