The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: The text disappears when you screenshot it
Show HN: Pgmcp, an MCP server to query any Postgres database in natural language
Show HN: I built a platform for long-form media recs (books, articles, etc.)
Would love any feedback
Show HN: A PSX/DOS style 3D game written in Rust with a custom software renderer
So, after years of abandoning Rust after the hello world stage, I finally decided to do something substantial. It started with simple line rendering, but I liked how it was progressing, so I figured I could make a reasonably complete PSX-style renderer and a game with it.

My only dependency is SDL2; I treat it as my "platform", so it handles windowing, input and audio. This means my Cargo.toml is as simple as:

[dependencies.sdl2]
version = "0.35"
default-features = false
features = ["mixer"]

This pulls in around 6-7 other dependencies.

I am doing actual true-color 3D rendering (with a Z buffer, transforming, lighting and rasterizing each triangle and so on; no special techniques or raycasting). The framebuffer is 320x180 (a widescreen take on the classic 320x240). SDL handles the hardware-accelerated final scaling to the display resolution (if available; in VMs, for example, it sometimes isn't, so it's pure software). I do my own physics, quaternion/matrix/vector math, and TGA and OBJ loading.

Performance: I have not spent a lot of time on this really, but I am kind of satisfied. FPS ranges from 200-500 on a 2011 i5 ThinkPad, to 70-80 on a 2005 Pentium laptop (which could barely run rustc... I had to jump through some hoops to make it work on 32-bit Linux), to 40-50 on a Raspberry Pi 3B+. I don't have more modern hardware to test on.

All of this is single-threaded, no SIMD, no inline asm. Also, implementing interlaced rendering provided a +50% perf boost (and a nice effect).
<a href="https://totenarctanz.itch.io/a-scavenging-trip" rel="nofollow">https://totenarctanz.itch.io/a-scavenging-trip</a><p>All assets (audio/images/fonts) where also made by me for this project (you could guess from the low quality).<p>Development tools: Geany (on Linux), notepad++ (on Windows), both vanilla with no plugins, Blender, Gimp, REAPER.
Show HN: Pyproc – Call Python from Go Without CGO or Microservices
Hi HN! I built *pyproc* to let Go services call Python like a local function — *no CGO and no separate microservice*. It runs a pool of Python worker processes and talks over *Unix Domain Sockets* on the same host/pod, so you get low overhead, process isolation, and parallelism beyond the GIL.

*Why this exists*

* Keep your Go service; reuse Python/NumPy/pandas/PyTorch/scikit-learn.
* Avoid the network hops, service discovery, and ops burden of a separate Python service.

*Quick try (~5 minutes)*

Go (app):

```
go get github.com/YuminosukeSato/pyproc@latest
```

Python (worker):

```
pip install pyproc-worker
```

Minimal worker (Python):

```
from pyproc_worker import expose, run_worker
@expose
def predict(req):
    return {"result": req["value"] * 2}

if __name__ == "__main__":
    run_worker()
```

Call from Go:

```
package main

import (
    "context"
    "fmt"

    "github.com/YuminosukeSato/pyproc/pkg/pyproc"
)

func main() {
    pool, _ := pyproc.NewPool(pyproc.PoolOptions{
        Config:       pyproc.PoolConfig{Workers: 4, MaxInFlight: 10},
        WorkerConfig: pyproc.WorkerConfig{SocketPath: "/tmp/pyproc.sock", PythonExec: "python3", WorkerScript: "worker.py"},
    }, nil)
    _ = pool.Start(context.Background())
    defer pool.Shutdown(context.Background())

    var out map[string]any
    _ = pool.Call(context.Background(), "predict", map[string]any{"value": 42}, &out)
    fmt.Println(out["result"]) // 84
}
```

*Scope / limits*

* Same-host/pod only (UDS). Linux/macOS supported; Windows named pipes not yet.
* Best for request/response payloads ≲ ~100 KB JSON; GPU orchestration and cross-host serving are out of scope.

*Benchmarks (indicative)*

* Local M1, simple JSON: ~*45µs p50* and ~*200k req/s* with 8 workers. Your numbers will vary.

*What's included*

* Pure Go client (no CGO), Python worker lib, pool, health checks, graceful restarts, and examples.

*Docs & code*

* README, design/ops/security docs, and pkg.go.dev: https://github.com/YuminosukeSato/pyproc

*License*

* Apache-2.0. Current release: v0.2.x.

*Feedback welcome*

* API ergonomics, failure modes under load, and priorities for codecs/transports (e.g., Arrow IPC, gRPC-over-UDS).

Source for details: project README and docs (https://github.com/YuminosukeSato/pyproc).
Show HN: AI-powered web service combining FastAPI, Pydantic-AI, and MCP servers
Hey all!
I recently gave a workshop talk at PyCon Greece 2025 about building production-ready agent systems.

To go along with the workshop, I put together a demo repo (I will also add the slides to my blog soon: https://www.petrostechchronicles.com/):

https://github.com/Aherontas/Pycon_Greece_2025_Presentation_Agents

The idea was to show how multiple AI agents can collaborate using FastAPI + Pydantic-AI, with protocols like MCP (Model Context Protocol) and A2A (Agent-to-Agent) for safe communication and orchestration.

Features:

- Multiple agents running in containers
- MCP servers (Brave Search, GitHub, filesystem, etc.) as tools
- A2A communication between services (sketched below)
- Minimal UI for experimentation with tech-trend / repo analysis

I built this repo because most agent frameworks look great in isolated demos but fall apart when you try to glue agents together into a real application. My goal was to help people experiment with these patterns and move closer to real-world use cases.

It's not production-grade, but I would love feedback, criticism, or war stories from anyone who's tried building actual multi-agent systems.
Big questions:

Do you think agent-to-agent protocols like MCP/A2A will stick? Or will the future be mostly single powerful LLMs with plugin stacks?

Thanks — excited to hear what the HN crowd thinks!
Show HN: Semlib – Semantic Data Processing
Show HN: Daffodil – Open-Source Ecommerce Framework to connect to any platform
Hello everyone!

I've been building an open-source ecommerce framework for Angular called Daffodil. I think Daffodil is really cool because it allows you to connect to any arbitrary ecommerce platform. I've been hacking away at it slowly (for 7 years now) as I've had time, and it's finally feeling "ready". I would love feedback from anyone who's spent any time in ecommerce (especially as a frontend developer).

For those who are not JavaScript-ecosystem devs, here's a demo of the concept: https://demo.daff.io/

For those who are familiar with Angular, you can just run the following from a new Angular app (use Angular 19; we're working on support for Angular 20!) to get the exact same result as the demo above:

```bash
ng add @daffodil/commerce
```

I'm trying to solve two distinct challenges:

First, I absolutely hate having to learn a new ecommerce platform. We have drivers for printers, mice, keyboards, microphones, and many other physical widgets in the operating system, so why not have them for ecommerce software? It's not that I hate the existing platforms, their UIs, or their APIs; it's that every platform repeats the same concepts and I always have to learn some newfangled way of doing the same thing. I've long wished these platforms acted more like operating systems on the Web than like custom-built software. Ideally, I would like to call them through a standard interface and forget about their existence beyond that.

Second, I'd like to keep it simple to start. I'd like to (on day 1) not have to set up any additional software beyond the core frontend stack (essentially yarn/npm + Angular). All too often, I'm forced to set up docker-compose, Kubernetes, pay for a SaaS, wait for IT at the merchant to get me access, or run a VM somewhere just to build some UI for an ecommerce platform that a company uses. More often than not, I just want to start up a little local HTTP server and start writing.

I currently have support for Magento/MageOS/Adobe Commerce, partial support for Shopify, and I recently wrote a product driver for Medusa: https://github.com/graycoreio/daffodil/pull/3939

Finally, if you're thinking "this isn't performant, can't you just do all of this with GraphQL on the server?", you're exactly correct! That's where I'd like to get to eventually, but that's a "yet another tool" barrier to getting started that I'd like to let developers do without for as long as I can in the development cycle. I'm shooting to eventually ship the same "driver" code that we run in the browser in a GraphQL server once all is said and done, with just another driver (albeit much simpler than all the others) that uses the native GraphQL format.

Any suggestions for drivers and platforms are welcome, though I can't promise I will implement them. :)
Show HN: AI Code Detector – detect AI-generated code with 95% accuracy
Hey HN,

I'm Henry, cofounder and CTO at Span (https://span.app/). Today we're launching AI Code Detector, an AI code detection tool you can try in your browser.

The explosion of AI-generated code has created some weird problems for engineering orgs. Tools like Cursor and Copilot are used by virtually every org on the planet – but each codegen tool has its own idiosyncratic way of reporting usage. Some don't report usage at all.

Our view is that token spend will start competing with payroll spend as AI becomes more deeply ingrained in how we build software, so understanding how to drive proficiency, improve ROI, and allocate resources relating to AI tools will become at least as important as parallel processes on the talent side.

Getting true visibility into AI-generated code is incredibly difficult. And yet it's the number one thing customers ask us for.

So we built a new approach from the ground up.

Our AI Code Detector is powered by span-detect-1, a state-of-the-art model trained on millions of AI- and human-written code samples. It detects AI-generated code with 95% accuracy, and ties it to specific lines shipped into production. Within the Span platform, it'll give teams a clear view into AI's real impact on velocity, quality, and ROI.

It does have some limitations. Most notably, it only works for TypeScript and Python code. We are adding support for more languages: Java, Ruby, and C# are next. Its accuracy is around 95% today, and we're working on improving that, too.

If you'd like to take it for a spin, you can run a code snippet here (https://code-detector.ai/) and get results in about five seconds. We also have a more narrative-driven microsite (https://www.span.app/detector) that my marketing team says I have to share.

Would love your thoughts, both on the tool itself and your own experiences. I'll be hanging out in the comments to answer questions, too.
Show HN: Pooshit – Sync local code to remote Docker containers
Pronounced Push-It....

I'm a lazy developer for the most part, so this is for people like me. Sometimes I just want my local code running in live remote containers quickly, without building images and syncing to cloud Docker repos, setting up git workflows, or any of the other draining ways to get your code running remotely.

With pooshit (and a simple config file), you can push your local dev files to a remote folder on a VM, automatically remove the relevant running containers, then build and run an updated container, all with one command-line call.

It works well with reverse proxies like nginx or Caddy, since you can specify the docker run arguments in the pooshit_config files.

https://github.com/marktolson/pooshit
Show HN: Omarchy on CachyOS
An install script to create a strong and stable blend of Omarchy on top of CachyOS. You must install CachyOS first (please read the README file).

Feedback and contributions welcome!