The best Hacker News stories from Show from the past day
Latest posts:
Show HN: Repogather – copy relevant files to clipboard for LLM coding workflows
Hey HN, I wanted to share a simple command-line tool I made that has sped up and simplified my LLM-assisted coding workflow. Whenever possible, I've been trying to use Claude as a first pass when implementing new features or changes. But I found that, depending on the type of change I was making, I was spending a lot of thought finding and deciding which source files should be included in the prompt. The need to copy/paste each file individually also becomes a mild annoyance.

First, I implemented `repogather --all`, which unintelligently copies *all* source files in your repository to the clipboard (delimited by their relative filepaths). To my surprise, for less complex repositories, this alone is often completely workable for Claude, and much better than pasting in just the few files you are looking to update. But I never would have done it if I had to copy/paste everything individually. 200k is quite a lot of tokens!

But as soon as the repository grows to a certain complexity level (even if it is under the input token limit), I've found that Claude can get confused by unrelated parts and concepts across the code. It performs much better if you make an attempt to exclude logic that is irrelevant to your current change. So I implemented `repogather "<query here>"`, e.g. `repogather "only files related to authentication"`. This uses gpt-4o-mini with structured outputs to provide a relevance score for each source file (with automatic exclusions for .gitignore patterns, tests, and configuration, plus manual exclusions with `--exclude <pattern>`).

gpt-4o-mini is so cheap and fast that for my ~8-dev startup's repo, it takes under 5 seconds and costs 3-4 cents (with appropriate exclusions). Plus, you get to watch the output stream while you wait, which always feels fun.

The retrieval isn't always perfect the first time, but it is fast, which allows you to see what files it returned and iterate quickly on your command.
I've found this to be much more satisfying than embedding-search-based solutions I've used, which seem to fail in pretty opaque ways.

https://github.com/gr-b/repogather

Let me know if it is useful to you! Always love to talk about how to better integrate LLMs into coding workflows.
Show HN: Konty – A Balsamiq-alternative lo-fi wireframe tool for modern apps
Show HN: iFixit created a new USB-C, repairable soldering system
After years of making screwdrivers and teaching people to repair electronics, we just made our first electronic tool. It's been a journey for us to build it while hewing to our repairability principles. We're really excited about it.

It's a USB-C-powered soldering iron and smart battery power hub. Super repairable, of course. Our goal is to make soldering so easy everyone can do it:

https://www.ifixit.com/fixhub

We didn't want to make just another iron, so we spent years sweating the details and crafting something that met our exacting standards. This is a high-performance iron: it can output 100W of heat, gets to soldering temperature in under 5 seconds, and automatically cools off when you set it down. The accelerometer detects when you pick it up and heats it back up. Keeping the iron at a lower temperature while you're not soldering should prolong the life of the tip.

What's the difference between this iron and other USB-C irons on the market? Here's a quick list:

- Higher power (our Smart Iron is 100W; competitors max out at 60W over USB-C, 88W over a DC supply)
- Heat-resistant storage cap (you just have to try this out; it's a real game changer in day-to-day use)
- Polished user experience
- A warranty and a local company to talk to (I can't find any contact information for Miniware)
- Comfier, more natural grip
- Shorter soldering tip length
- No-tangle, heat-resistant cable
- Locking ring on the cable, so it can't snag and get disconnected (this happens to me all the time on other irons)
- More intuitive settings, either on the Power Station or on the computer

We used Web Serial (https://caniuse.com/web-serial) for the interface, which is only supported in Chromium browsers. The biggest bummer with that is that no mobile browsers support it yet. Hopefully that changes soon.

Hardware is hard! It's been a journey for us. Happy to answer any questions about how we made it.

Schematics and repair information are online here: https://www.ifixit.com/Device/FixHub_Portable_Soldering_Station
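The pick-up/set-down behavior described above amounts to a small state machine driven by the accelerometer. Purely as an illustration of the idea (none of this reflects the actual firmware, and all temperatures and timeouts here are made-up values):

```python
SOLDER_TEMP_C = 350   # active setpoint (illustrative value)
STANDBY_TEMP_C = 150  # reduced setpoint to prolong tip life (illustrative)
IDLE_TIMEOUT_S = 30   # time without motion before dropping to standby (illustrative)

def target_temperature(seconds_since_motion):
    """Return the temperature setpoint based on accelerometer activity.

    While the iron is being handled, hold soldering temperature; once it
    has sat still past the timeout, drop to a cooler standby setpoint.
    """
    if seconds_since_motion >= IDLE_TIMEOUT_S:
        return STANDBY_TEMP_C  # iron is resting: cool off
    return SOLDER_TEMP_C       # recently moved: keep at soldering temp
```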
Show HN: Clace – Application Server with support for scaling down to zero
I have been building the open source project https://github.com/claceio/clace. Clace is an application server that builds and deploys containers, allowing it to manage webapps in any language/framework.

Compared to application servers like Nginx Unit, Clace has the advantage of working with any application, without requiring any dependency or packaging changes. Clace provides a blue-green staged deployment model for apps: not just code changes, even configuration changes are staged and can be verified before being made live.

Clace is not a PaaS solution; it does not support deploying databases and other auxiliary services. It does share with PaaS solutions the fact that it manages containers. Clace is different in that it builds its own reverse proxy instead of depending on Traefik/Nginx. This allows Clace to implement features like shutting down idle apps and adding app-level OAuth authentication. Clace runs natively on Windows/macOS in addition to Linux, and works with Docker/Podman/OrbStack.

Clace allows you to run hundreds of apps on a single machine. Since app containers are shut down when not in use, there is no CPU/memory usage when the apps are idle. It provides a Google Cloud Run-type interface on your own hardware.

https://clace.io/ has a demo video and docs. Do let me know any feedback.
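The scale-to-zero behavior described above needs one key piece of bookkeeping in the proxy: the last request time per app, so containers idle past a threshold can be stopped (and restarted on the next request). A toy sketch of that bookkeeping, for illustration only and not Clace's actual implementation:

```python
import time

class IdleTracker:
    """Track per-app last-request times; report which apps to stop."""

    def __init__(self, idle_timeout_s):
        self.idle_timeout_s = idle_timeout_s
        self.last_seen = {}

    def record_request(self, app, now=None):
        """Called by the proxy on every request routed to `app`."""
        self.last_seen[app] = now if now is not None else time.time()

    def apps_to_stop(self, now=None):
        """Apps whose containers have been idle past the timeout."""
        now = now if now is not None else time.time()
        return [app for app, t in self.last_seen.items()
                if now - t >= self.idle_timeout_s]
```

In a real proxy this runs alongside request routing; a background sweep calls `apps_to_stop` periodically and asks the container runtime to stop those containers.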
Show HN: How much is 13B euros?
Hi,

I made this page to contextualize 13 billion euros (or 14): an amount due to Ireland in an EU Apple tax case and all over the airwaves here this week. I use some pretty silly back-of-the-envelope-type calculations (the same ones also repeated a lot in Ireland this week!).

These calculations aren't especially interesting, but at least they are present: you can see them, and you can change them. If you do (change the 13 billion to 14 billion, for example), related numbers will flash with updates.

It's an example using calculang [1]: a language for calculations, and an example that focuses on a close connection between numbers that we read or share and the formulas/workings behind those numbers.

I plan to do a separate Show HN about calculang, perhaps when I have more docs and a newer playground and gallery together, but I'm showing this page in case it's interesting, and happy if there is feedback!

Declan

[0] https://HowMuchIs13BillionEuros.com (repo: https://github.com/declann/HowMuchIs13BillionEuros.com)

[1] https://calculang.dev
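The kind of back-of-the-envelope calculation the page encodes can be sketched in a few lines. The population figure below is a rough assumption for illustration, not a number taken from the page:

```python
AMOUNT_EUR = 13e9            # the headline amount: 13 billion euros
IRELAND_POPULATION = 5_100_000  # assumption: rough figure for illustration

def per_person(amount_eur, population):
    """How much the amount works out to per resident."""
    return amount_eur / population
```

The point of the page is that when the headline input changes (13 billion to 14 billion, say), every derived number recomputes and flashes, because the formulas are live rather than baked-in text.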
Show HN: Tune LLaMa3.1 on Google Cloud TPUs
Hey HN, we wanted to share our repo where we fine-tuned Llama 3.1 on Google TPUs. We're building AI infra to fine-tune and serve LLMs on non-NVIDIA hardware (TPUs, Trainium, AMD GPUs).

The problem: right now, 90% of LLM workloads run on NVIDIA GPUs, but there are equally powerful and more cost-effective alternatives out there. For example, training and serving Llama 3.1 on Google TPUs is about 30% cheaper than on NVIDIA GPUs.

But developer tooling for non-NVIDIA chipsets is lacking. We felt this pain ourselves. We initially tried using PyTorch XLA to train Llama 3.1 on TPUs, but it was rough: XLA's integration with PyTorch is clunky, libraries are missing (bitsandbytes didn't work), and the HuggingFace errors were cryptic.

We then took a different route and translated Llama 3.1 from PyTorch to JAX. Now it's running smoothly on TPUs! We still have challenges ahead (there is no good LoRA library in JAX), but this feels like the right path forward.

Here's a demo (https://dub.sh/felafax-demo) of our managed solution.

Would love your thoughts on our repo and vision as we keep chugging along!
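As an illustration of what such a port involves, here is the RMSNorm layer used by Llama-family models written against plain NumPy; the same arithmetic translates line-for-line to `jax.numpy`. This is a generic sketch of the standard operation, not code taken from the linked repo:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    """Llama-style RMSNorm: scale activations by their reciprocal
    root-mean-square over the last axis, then apply a learned gain."""
    # compute mean of squares in float32 for numerical stability
    variance = np.mean(x.astype(np.float32) ** 2, axis=-1, keepdims=True)
    return weight * (x / np.sqrt(variance + eps))
```

Simple, pure-function layers like this are exactly why a PyTorch-to-JAX translation is tractable: most of the model is stateless math, and the hard parts are the training loop, sharding, and checkpoint plumbing.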
Show HN: GitOps Template for Kubernetes
Hello HN, we're Philip and Louis from Glasskube (https://github.com/glasskube/glasskube). We are working on a package manager for Kubernetes to simplify the packaging of complex applications with multiple dependencies, ensuring they are installed and kept up to date across multiple Kubernetes clusters.

Nowadays, it is best practice to use Git as a revision control system for your Kubernetes configurations. Update automation workflows like Renovate or Dependabot can create pull requests for new versions of Docker images and Helm charts, but ensuring these new package versions work is still a manual task. By using the central (or a private) Glasskube repository (https://github.com/glasskube/packages) together with our Renovate integration (https://docs.renovatebot.com/modules/manager/glasskube/), you can ensure that new package versions run through our Minikube-based CI workflows before they get published, similar to how the Homebrew core tap works. We've just introduced readiness checks for manifest-based deployments and utilize the flux-helm-controller to wait for a Helm release to succeed.

Dependencies are resolved by our package controller. These dependencies can either be cluster-scoped (installed in the recommended namespace, e.g., operators with CRDs) or namespace-scoped components of a package (e.g., a database or Redis cache). In such cases, we prefix resources with the dependent package name to ensure multiple packages can use the same dependencies without naming conflicts (we use Kustomize on a virtual filesystem for this).

Glasskube packages can currently be Helm charts (from an OCI or Helm repository) or manifests, which are mostly built using Kustomize's overlay approach.

Since neither the overlay approach (using Kustomize) nor Helm's limited templating functionality will help us and other Kubernetes users scale to more complex packages, we are considering a more programmatic approach to package creation, similar to Timoni. Currently, KCL is our frontrunner (https://github.com/glasskube/glasskube/discussions/1018), as it already integrates well with the Kubernetes ecosystem.

We would appreciate it if you gave our GitOps template a try. It also works with existing Kubernetes clusters if you just want to use GitOps for some applications. Just make sure that the argocd and glasskube-system namespaces are not yet in use. See: https://github.com/glasskube/gitops-template/
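The resource prefixing described above is what Kustomize's `namePrefix` transformer does. As a hypothetical illustration (the package and path names here are made up, and this is not Glasskube's actual configuration), an overlay for a package `myapp` that pulls in a shared Redis component might look like:

```yaml
# kustomization.yaml (illustrative sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: myapp-          # dependency resources become myapp-redis, etc.
resources:
  - ../components/redis     # hypothetical shared dependency manifests
```

With the prefix applied, a second package depending on the same Redis component gets its own `otherapp-redis` resources, so both can coexist in one cluster without name collisions.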