The best Hacker News stories from Show from the past day


Latest posts:

Show HN: I created a small 2D game about an ant

Hello everyone! I created a short game in just a few days, just for fun, where you play as an ant and feed it apples.

This game also features random landscape generation, with clouds and trees arranged in a chaotic pattern across all coordinates. This is what took me the longest time :)

I would appreciate your feedback ^ ^
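Random landscape scattering of this flavor is often just a seeded RNG dropping sprites into bands; here is a minimal, purely illustrative sketch (nothing to do with the game's actual code):

```python
import random

def scatter(kind, count, width, height, seed="world-1"):
    """Drop `count` decorations of one kind at pseudorandom coordinates.

    Seeding the RNG per (seed, kind) makes the chaotic layout
    reproducible: the same world seed always yields the same map.
    """
    rng = random.Random(f"{seed}/{kind}")
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(count)]

# Clouds go in a tall sky band, trees hug a low ground band.
clouds = scatter("cloud", 20, 5000, 300)
trees = scatter("tree", 50, 5000, 40)
```

Because each kind gets its own deterministic stream, regenerating the level reproduces the exact same "chaotic" arrangement.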

Show HN: One prompt generates an app with its own database

Hey HN, manyminiapps is the world's first massively multiplayer online mini app builder (MMOMA).

*Here's what it does:*

You load the page, write one prompt, and get a mini app back in under 2 minutes. There's no sign up, and you can see what everyone's creating in real time!

Each mini app comes with its own database and backend, so you can build shareable apps that save data.

*What's different*

There are a lot of app builders that promise you'll build production software for others. But we think true production software can take a long time to get right; even if you don't need to program, there's a lot of work involved.

What if we turned the promise around? Instead of "you vibe code software companies", it's "you build fun software for yourself".

If you cut the problem right, LLMs as they are today can already deliver personal software. manyminiapps is meant to be an experiment to demonstrate this.

You may wonder: do you really need personal software? We're not 100% sure, but it's definitely an interesting question. Using manyminiapps so far has been surprising! We thought our friends would just try to build the common todo app, but instead we found them building wedding planners, chord progression helpers, inspiration lists, and retro games.

*How it works*

Instead of spinning up VMs or separate instances per app, we built a multi-tenant graph database on top of one large Postgres instance.

All databases live in one EAV table (entity, attribute, value). This makes creating an "app" as light as creating a new row.

If you have heard of EAV tables before, you may know that most Postgres experts will tell you *not* to use them. Postgres needs statistics to make efficient query plans, and with EAV tables it can no longer gather good ones. This is usually a bad idea.

But we thought it was worth solving to get a multi-tenant relational database. To solve this problem we started saving our own statistics in a custom table. We use count-min sketches to keep stats about each app's columns. When a user writes a query, we figure out which indexes to use and get pg_hint_plan to tell Postgres what to do.

*What we've learned so far*

We've tried GPT-5, Claude Opus, and Claude Sonnet as LLM providers.

GPT-5 followed instructions best among the models. Even given a completely nonsensical prompt (like "absda"), it would follow the system prompt and make an app for you. But GPT-5 was also the laziest: the apps that came out tended to feel too simple, with little UI detail.

Both Claude Opus and Sonnet were worse at following instructions. Even when we told them to return just the code, they wanted to return markdown blocks. But, after parsing those blocks, the resulting apps felt much better.

To our surprise, we didn't notice a quality difference between Opus and Sonnet. Both models did well, with perhaps Sonnet following instructions more closely.

To get good results we iterated on prompts. We initially tried giving point-by-point instructions, but found that a prompt with a full example tended to do better. Here's what we landed on: https://gist.github.com/stopachka/a6b07e1e6daeb85fa7c9555d8f665bf5

Let us know what you think, and hope you have fun : )
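The count-min sketch used for per-column statistics works like this: a small fixed-size table of counters indexed by several hash functions, where the minimum over rows bounds the true count. A toy illustration (not the manyminiapps implementation; names and parameters are invented):

```python
import hashlib

class CountMinSketch:
    """Approximate value frequencies in fixed memory.

    Hash collisions can only inflate a count, never shrink it, so the
    minimum over all rows is an upper-bounded estimate of the truth.
    """

    def __init__(self, width=256, depth=4):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, value):
        # One independent hash per row, derived by salting blake2b.
        for row in range(self.depth):
            digest = hashlib.blake2b(
                repr(value).encode(), salt=row.to_bytes(8, "little")
            ).digest()
            yield row, int.from_bytes(digest[:8], "little") % self.width

    def add(self, value, count=1):
        for row, col in self._buckets(value):
            self.table[row][col] += count

    def estimate(self, value):
        return min(self.table[row][col] for row, col in self._buckets(value))

# Track the value distribution of one hypothetical app column:
sketch = CountMinSketch()
for color in ["red"] * 100 + ["blue"] * 5:
    sketch.add(color)
print(sketch.estimate("red"))  # at least 100 (overestimates only)
```

A planner can then compare such estimates across columns to guess selectivity and pick an index, which is presumably the role the sketches play alongside pg_hint_plan here.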

Show HN: Asxiv.org – Ask ArXiv papers questions through chat

I built this yesterday to help understand papers I'm interested in. It's using the Gemini 2.5 Flash Lite model, but you can run it yourself[1] and switch to 2.5 Pro for better results.

Happy to answer any questions or take suggestions on how I can improve it!

1. https://github.com/montanaflynn/asxiv

Show HN: The text disappears when you screenshot it

Show HN: Pgmcp, an MCP server to query any Postgres database in natural language

Show HN: I built a platform for long-form media recs (books, articles, etc.)

Would love any feedback

Show HN: A PSX/DOS style 3D game written in Rust with a custom software renderer

So, after years of abandoning Rust at the hello world stage, I finally decided to do something substantial. It started with simple line rendering, but I liked how it was progressing, so I figured I could make a reasonably complete PSX-style renderer and a game with it.

My only dependency is SDL2; I treat it as my "platform", so it handles windowing, input, and audio. This means my Cargo.toml is as simple as:

```
[dependencies.sdl2]
version = "0.35"
default-features = false
features = ["mixer"]
```

This pulls in around 6-7 other dependencies.

I am doing actual true-color 3D rendering (with a Z buffer, transforming, lighting, and rasterizing each triangle, and so on; no special techniques or raycasting). The framebuffer is 320x180 (a widescreen take on 320x240). SDL handles the hardware-accelerated final scaling to the display resolution (when available; in VMs, for example, it sometimes isn't, so it's pure software). I do my own physics, quaternion/matrix/vector math, and TGA and OBJ loading.

Performance: I have not spent a lot of time on this really, but I am fairly satisfied. FPS ranges from 200-500 on a 2011 i5 ThinkPad, to 70-80 on a 2005 Pentium laptop (which could barely run rustc... I had to jump through some hoops to make it work on 32-bit Linux), to 40-50 on a Raspberry Pi 3B+. I don't have more modern hardware to test.

All of this is single-threaded, with no SIMD and no inline asm. Also, implementing interlaced rendering provided a +50% perf boost (and a nice effect).

The Pentium laptop has an ATI (yes) chip which is, maybe not surprisingly, supported perfectly by SDL.

Regarding Rust: I've barely touched the language. I am using it more as "C with vec!s, a borrow checker, pattern matching, error propagation, and traits". I love the syntax of the subset that I use; it's crystal clear, readable, ergonomic. Things like matches/ifs returning values are extremely useful for concise and productive code. However, the pro/idiomatic code that I see around looks unreadable to me. I've written all of the code from scratch on my own terms, so this was not a problem, but still... In any case, the ecosystem and tooling are amazing. All in all, an amazing development experience. I am a bit afraid to switch back to C++ for my next project.

Also, rustup/cargo made things a walk in the park while creating a deployment script that automates the whole process: after a commit, it scans source files for used assets and packages only those, copies dependencies (DLLs for Windows), sets up build dependencies depending on the target, builds all 3 targets (Win10_64, Linux32, Linux64), bundles everything into separate zips, and uploads them to my local server. I am doing this from a 64-bit Lubuntu 18.04 virtual machine.

You can try the game and read all about it on the linked itch.io page: https://totenarctanz.itch.io/a-scavenging-trip

All assets (audio/images/fonts) were also made by me for this project (as you could guess from the low quality).

Development tools: Geany (on Linux) and Notepad++ (on Windows), both vanilla with no plugins; Blender, GIMP, REAPER.
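Interlaced rendering of the kind described (updating only every other scanline per frame, so each frame rasterizes half the rows) can be sketched like this; a generic illustration, not the game's actual renderer:

```python
WIDTH, HEIGHT = 320, 180

def render_interlaced(framebuffer, frame_number, draw_scanline):
    """Redraw only even or odd scanlines this frame.

    Roughly halves per-frame raster work; untouched rows keep last
    frame's pixels, which produces the classic interlace shimmer.
    """
    start = frame_number % 2  # even rows one frame, odd rows the next
    for y in range(start, HEIGHT, 2):
        framebuffer[y] = draw_scanline(y)
    return framebuffer

# Toy usage: "draw" each scanline as a solid value per frame.
fb = [[0] * WIDTH for _ in range(HEIGHT)]
fb = render_interlaced(fb, 0, lambda y: [1] * WIDTH)  # even rows -> 1
fb = render_interlaced(fb, 1, lambda y: [2] * WIDTH)  # odd rows  -> 2
print(fb[0][0], fb[1][0])  # 1 2
```

Since only half the rows are touched each frame, the ~+50% figure the author reports is in the range you'd expect when rasterization dominates frame time.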

Show HN: Pyproc – Call Python from Go Without CGO or Microservices

Hi HN! I built *pyproc* to let Go services call Python like a local function, with no CGO and no separate microservice. It runs a pool of Python worker processes and talks over Unix Domain Sockets on the same host/pod, so you get low overhead, process isolation, and parallelism beyond the GIL.

*Why this exists*

* Keep your Go service; reuse Python/NumPy/pandas/PyTorch/scikit-learn.
* Avoid the network hops, service discovery, and ops burden of a separate Python service.

*Quick try (~5 minutes)*

Go (app):

```
go get github.com/YuminosukeSato/pyproc@latest
```

Python (worker):

```
pip install pyproc-worker
```

Minimal worker (Python):

```
from pyproc_worker import expose, run_worker

@expose
def predict(req):
    return {"result": req["value"] * 2}

if __name__ == "__main__":
    run_worker()
```

Call from Go:

```
import (
    "context"
    "fmt"

    "github.com/YuminosukeSato/pyproc/pkg/pyproc"
)

func main() {
    pool, _ := pyproc.NewPool(pyproc.PoolOptions{
        Config: pyproc.PoolConfig{Workers: 4, MaxInFlight: 10},
        WorkerConfig: pyproc.WorkerConfig{
            SocketPath:   "/tmp/pyproc.sock",
            PythonExec:   "python3",
            WorkerScript: "worker.py",
        },
    }, nil)
    _ = pool.Start(context.Background())
    defer pool.Shutdown(context.Background())

    var out map[string]any
    _ = pool.Call(context.Background(), "predict", map[string]any{"value": 42}, &out)
    fmt.Println(out["result"]) // 84
}
```

*Scope / limits*

* Same-host/pod only (UDS). Linux/macOS supported; Windows named pipes not yet.
* Best for request/response payloads up to ~100 KB JSON; GPU orchestration and cross-host serving are out of scope.

*Benchmarks (indicative)*

* Local M1, simple JSON: ~45µs p50 and ~200k req/s with 8 workers. Your numbers will vary.

*What's included*

* Pure Go client (no CGO), Python worker lib, pool, health checks, graceful restarts, and examples.

*Docs & code*

* README, design/ops/security docs, pkg.go.dev: https://github.com/YuminosukeSato/pyproc

*License*

* Apache-2.0. Current release: v0.2.x.

*Feedback welcome*

* API ergonomics, failure modes under load, and priorities for codecs/transports (e.g., Arrow IPC, gRPC-over-UDS).

---

*Source for details: project README and docs:* https://github.com/YuminosukeSato/pyproc
