The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: My LLM CLI tool can run tools now, from Python code or plugins
Show HN: Photoshop Clone Built in React
I built React Photo Studio, a free image editor that runs entirely in the browser with React and WebGL. Right now only the brush tool is functional for drawing, while the rest of the interface is a prototype awaiting development. Everything is client side, no backend.

I would appreciate feedback on usability and code structure, and I welcome pull requests.

Live demo: https://reactphotostudio.app
Code: https://github.com/chase-manning/react-photo-studio
Show HN: We built an AI to review your pull requests
Hey HN,

We’re two developers (co-founders) with a team of 20 who got tired of spending hours reviewing PRs, so we built Infinitcode.ai, an AI-powered code reviewer that:

- *Summarizes PRs in plain English*: no more deciphering 1,000-line diff jungles
- *Catches more than bugs*: security holes, performance pitfalls, code smells, even typos (yes, we’ll flag “vurnerabilities” and vulnerabilities)
- *Zero onboarding*: works instantly, no “let me learn your codebase for weeks” nonsense

Why we’re posting: we’re in alpha and need brutal honesty. Roast our tool, mock our UI, or tell us why AI will never replace your team’s Senior Engineer.

Free alpha access for 100 teams: all we ask is feedback.

Give it a try: https://infinitcode.ai
Demo repo: https://github.com/infinitcodecom/infinitcode-ai-demo
Show HN: Zli – A Batteries-Included CLI Framework for Zig
I built zli, a batteries-included CLI framework for Zig with a focus on DX and composability.

Key features:

- Typed flags with default values and help output
- Rich formatting and layout support
- Command trees with isolated execution logic

It’s designed to feel good to use, not just to work, and it’s built for real-world CLI apps, not toy examples.

Would love feedback, feature ideas, or thoughts from other Zig devs.

Repo: https://github.com/xcaeser/zli
Show HN: A minimalist web timer for focus and time tracking
Started as a timer to track my freelance coding hours and stay focused. It’s local-only, zero-login, minimalist. Not monetizing it (yet), just seeing if others find it helpful. Thoughts welcome.
Show HN: PgDog – Shard Postgres without extensions
Hey HN! Lev here, author of PgDog (https://github.com/pgdogdev/pgdog). I’m scaling our favorite database, PostgreSQL. PgDog is a new open source proxy, written in Rust, with first-class support for sharding, without changes to your app or the need for database extensions.

Here’s a walkthrough of how it works: https://www.youtube.com/watch?v=y6sebczWZ-c

Running Postgres at scale is hard. Eventually, one primary isn’t enough, at which point you need to split it up. Since there is currently no good tooling out there to do this, teams end up breaking their apps apart instead.

If you’re familiar with PgCat, my previous project, PgDog is its spiritual successor, but with a fresh codebase and new goals. If not, PgCat is a pooler for Postgres, also written in Rust.

So, what’s changed, and why a new project? Cross-shard queries are supported out of the box. The new architecture is more flexible, completely asynchronous, and supports manipulating the Postgres protocol at any stage of query execution. (Oh, and you guessed it: I adopted a dog. Still a cat person, though!)

Not everything is working yet, but simple aggregates like max(), min(), count(*) and sum() are in. More complex functions like percentiles and average will require a bit more work. Sorting (i.e. ORDER BY) works, as long as the values are part of the result set, e.g.:

    SELECT id, email FROM users
    WHERE admin = true
    ORDER BY 1 DESC;
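
As the next paragraphs explain, the application talks to PgDog exactly as it would to Postgres, and queries that include the sharding key are routed to a single shard. Here is a minimal client-side sketch in Python with psycopg2; the connection string, port 6432, and the customer_id column are assumptions for illustration, not part of the project:

    import psycopg2

    # Assumption: PgDog, not Postgres itself, is listening on port 6432.
    conn = psycopg2.connect("host=127.0.0.1 port=6432 dbname=app user=app password=app")

    with conn, conn.cursor() as cur:
        # Cross-shard: no sharding key, so PgDog fans out and merges/sorts the results.
        cur.execute("SELECT id, email FROM users WHERE admin = true ORDER BY 1 DESC")
        all_admins = cur.fetchall()

        # Direct-to-shard: customer_id is the sharding key, so only one shard is hit.
        cur.execute(
            "SELECT id, email FROM users WHERE customer_id = %s AND admin = true",
            (1234,),
        )
        tenant_admins = cur.fetchall()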
PgDog buffers and sorts the rows in memory before sending them to the client. Most of the time the working set is small, so this is fine. For larger results, we will need to build spill-to-disk, just like Postgres does, but for OLTP workloads, which PgDog is targeting, we want to keep things fast. Sorting currently works for bigint, integer, and text/varchar. It’s pretty straightforward to add all the other data types; I just need to find the time and make sure to handle binary encoding correctly.

All standard Postgres features work as normal for unsharded and direct-to-shard queries. As long as you include the sharding key (a column like customer_id, for example) in your query, you won’t notice a difference.

How does this compare to Citus? In case you’re not familiar, Citus is an open source extension for sharding Postgres. It runs inside a single Postgres node (a coordinator) and distributes queries between worker databases.

PgDog’s architecture is fundamentally different. It runs outside the DB: it’s a proxy, so you can deploy it anywhere, including managed Postgres like RDS, Cloud SQL and others where Citus isn’t available. It’s multi-threaded and asynchronous, so it can handle thousands, if not millions, of concurrent connections. Its focus is OLTP, not OLAP. Meanwhile, Citus is more mature and has good support for cross-shard queries and aggregates. It will take PgDog a while to catch up.

My Rust has improved since my last attempt at this, and I learned how to use the bytes crate correctly. PgDog does almost zero memory allocations per request. That results in a 3-5% performance increase over PgCat and a much more consistent p95. If you’re obsessed with performance like me, you know that small percentage is nothing to sneeze at. Like before, multi-threaded Tokio-powered PgDog leaves the single-threaded PgBouncer in the dust (https://pgdog.dev/blog/pgbouncer-vs-pgdog).

Since we’re using pg_query (which itself bundles the Postgres parser), PgDog can understand all Postgres queries. This is important because we can not only correctly extract the WHERE clause and INSERT parameters for automatic routing, but also rewrite queries. This will be pretty useful when we add support for more complex aggregates, like avg(), and cross-shard joins!

Read/write traffic split is supported out of the box, so you can put PgDog in front of the whole cluster and ditch the code annotations. It’s also a load balancer, so you can deploy it in front of multiple replicas to get four 9s of uptime.

One of the coolest features so far, in my opinion, is distributed COPY. This works by hacking the Postgres network protocol and sending individual rows to different shards (https://pgdog.dev/blog/hacking-postgres-wire-protocol). You can just use it without thinking about cluster topology, e.g.:

    COPY temperature_records (sensor_uuid, created_at, value)
    FROM STDIN CSV;
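
To make the distributed COPY concrete from the client’s side, here is a minimal sketch with psycopg2. The connection details and sample rows are assumptions for illustration; as far as the client is concerned this is an ordinary COPY, and PgDog splits the rows between shards:

    import io
    import psycopg2

    # Assumption: PgDog is listening locally on port 6432 like a regular Postgres server.
    conn = psycopg2.connect("host=127.0.0.1 port=6432 dbname=app user=app password=app")

    # Two sample CSV rows; sensor_uuid is the sharding key in this illustration.
    rows = io.StringIO(
        "9f3b2c5e-8a1d-4f6b-9c2e-1a2b3c4d5e6f,2025-01-01 00:00:00,21.5\n"
        "0c1d2e3f-4a5b-6c7d-8e9f-0a1b2c3d4e5f,2025-01-01 00:05:00,21.7\n"
    )

    with conn, conn.cursor() as cur:
        # Same statement as above; PgDog routes each row to the right shard.
        cur.copy_expert(
            "COPY temperature_records (sensor_uuid, created_at, value) FROM STDIN CSV",
            rows,
        )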
The sharding function is straight out of Postgres partitions and supports uuid v4 and bigint. Technically, it works with any data type; I just haven’t added all the wrappers yet. Let me know if you need one.

What else? Since we have the Postgres parser handy, we can inspect, block and rewrite queries. One feature I was playing with is ensuring that the app passes customer_id in all queries, to avoid data leaks between tenants. Brain dump of that on my blog here: https://pgdog.dev/blog/multi-tenant-pg-can-be-easy.

What’s on the roadmap: (re)sharding Postgres using logical replication, so we can scale DBs without taking downtime. There is a neat trick for doing this quickly on copy-on-write filesystems (like EBS used by RDS, Google Cloud volumes, ZFS, etc.). I’ll publish a blog post on this soon. Also more at-scale features like blocking bad queries and just general “I wish my Postgres proxy could do this” stuff. Speaking of which, if you can think of any more features you’d want, get in touch. Your wishlist can become my roadmap.

PgDog is being built in the open. If you have thoughts or suggestions about this topic, I would love to hear them. Happy to listen to your battle stories with Postgres as well.

Happy hacking!

Lev
Show HN: AI Baby Monitor – local Video-LLM that beeps when safety rules break
Hi HN! I built AI Baby Monitor, a tiny stack (Redis + vLLM + Streamlit) that watches a video stream and a YAML list of safety rules. If the model spots a rule being broken, it plays a beep sound, so you can quickly glance over and check on your baby.

Why?

When we bought a crib for our daughter, the first thing she tried was climbing over the rail :/ I got a bit paranoid about constantly watching over her, so I thought of a helper that can *actively* watch the baby, while parents stay *semi-actively* alert. It’s meant to be an additional set of eyes, and *not* a replacement for the adult. Thus, just a beep sound and not phone notifications.

How it works:

* stream_to_redis.py captures video stream frames → Redis streams
* run_watcher.py pulls the latest N frames, injects them plus the rules into a prompt, and hits a local vLLM server running Qwen 2.5 VL
* The model returns structured JSON (should_alert, reasoning, awareness_level)
* If should_alert=True → playsound beep
* A Streamlit page displays both the camera feed and the LLM logs
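
To make the pipeline concrete, here is a rough sketch in Python of what a watcher loop like run_watcher.py does. The stream key, field name, model name, rule text, and beep file are assumptions for illustration (the actual project’s code will differ), and it assumes vLLM is serving Qwen 2.5 VL through its OpenAI-compatible API on localhost:8000:

    import base64
    import json
    import time

    import redis
    from openai import OpenAI
    from playsound import playsound

    r = redis.Redis()
    llm = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

    RULES = ["The baby must not climb over the crib rail."]  # hypothetical rule
    N_FRAMES = 4

    def latest_frames(stream="baby_monitor:frames", count=N_FRAMES):
        """Newest JPEG frames pushed to the Redis stream, returned oldest first."""
        entries = r.xrevrange(stream, count=count)
        return [fields[b"jpeg"] for _, fields in reversed(entries)]

    def check_once():
        prompt = ("Safety rules:\n" + "\n".join(RULES) +
                  "\nReply with JSON: should_alert (bool), reasoning, awareness_level.")
        content = [{"type": "text", "text": prompt}]
        for jpeg in latest_frames():
            b64 = base64.b64encode(jpeg).decode()
            content.append({"type": "image_url",
                            "image_url": {"url": f"data:image/jpeg;base64,{b64}"}})
        resp = llm.chat.completions.create(
            model="Qwen/Qwen2.5-VL-7B-Instruct",  # assumed model name
            messages=[{"role": "user", "content": content}],
        )
        verdict = json.loads(resp.choices[0].message.content)
        if verdict.get("should_alert"):
            playsound("beep.wav")  # just a beep, no phone notifications

    while True:
        check_once()
        time.sleep(1)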
Show HN: DaedalOS – Desktop Environment in the Browser
Demo: https://dustinbrett.com

Hey HN!

I've been building my passion project daedalOS for over 4 years now.

The original idea was to give visitors to my website the experience of having remotely connected to my personal machine. To do this, I decided I would attempt to recreate as much of the functionality as possible.

My hope is to keep working on this project for the rest of my life and continue to evolve its capabilities as technologies progress.

Thanks for checking it out!
Show HN: SVG Animation Software
Expressive Animator is an SVG vector animation tool that helps users create and export animated icons, logos, and illustrations. Users can import SVG, PDF, Adobe Illustrator files, and Figma designs, and animate them using easing controls, motion paths, masking, and other techniques. Expressive Animator also allows users to export their animations in other formats, such as Lottie, GIF, PNG, and video.