The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: Run globally distributed full-stack apps on high-performance MicroVMs

Hi HN! We’re Yann, Edouard, and Bastien from Koyeb (https://www.koyeb.com/). We’re building a platform to let you deploy full-stack apps on high-performance hardware around the world, with zero configuration. We provide a “global serverless feeling” without the hassle of rewriting all your apps or managing k8s complexity [1].

We built Scaleway, a cloud service provider where we designed ARM servers and offered them as cloud servers. During our time there, we saw customers struggle with the same issues while trying to deploy full-stack applications and APIs resiliently. As it turns out, deploying applications and managing networking across a multi-data-center fleet of machines (virtual or physical) requires an overwhelming amount of orchestration and configuration. At the time, that complexity meant multi-region deployments were simply out of reach for most businesses.

When thinking about how to solve those problems, we tried several approaches. We briefly explored offering a FaaS experience [2], but early user feedback made us reconsider whether it was the right abstraction. In most cases, functions simply added complexity and required learning to engineer with provider-specific primitives. In many ways, developing with functions felt like abandoning all the benefits of frameworks.

Another popular option these days is Kubernetes. From an engineering perspective, Kubernetes is extremely powerful, but it also involves massive overhead. Building software, managing networking, and deploying across regions means integrating many different components and maintaining them over time. It can be tough to justify that level of effort and investment rather than working on building out your product.

We believe you should be able to write your apps and run them without modification, with simple scaling, global distribution transparently managed by the provider, and no infrastructure or orchestration management.

Koyeb is a cloud platform where you come with a git repository or a Docker image, we build the code into a container (when needed), run the container inside Firecracker microVMs, and deploy it to multiple regions on top of bare-metal servers. An edge network in front accelerates delivery, and a global networking layer handles inter-service communication (service mesh/discovery) [3].

We took a few steps to get the Koyeb platform to where it is today: we built our own serverless engine [4]. We use Nomad and Firecracker for orchestration, and Kuma for the networking layer. In the last year, we launched six regions (Washington, DC; San Francisco; Singapore; Paris; Frankfurt; and Tokyo) and added support for native workers, gRPC, HTTP/2 [5], WebSockets, and custom health checks. We are working next on autoscaling, databases, and preview environments.

We’re super excited to show you Koyeb today, and we’d love to hear your thoughts on the platform and what we are building in the comments. To make getting started easy, we provide $5.50 in free credits every month so you can run up to two services for free.

P.S. A payment method is required to access the platform to prevent abuse (we had some hard months last year dealing with that). If you’d like to try the platform without adding a card, reach out at support@koyeb.com or @gokoyeb on Twitter.

[1] https://www.koyeb.com/blog/the-true-cost-of-kubernetes-people-time-and-productivity
[2] https://www.koyeb.com/blog/the-koyeb-serverless-engine-docker-containers-and-continuous-deployment-of-functions
[3] https://www.koyeb.com/blog/building-a-multi-region-service-mesh-with-kuma-envoy-anycast-bgp-and-mtls
[4] https://www.koyeb.com/blog/the-koyeb-serverless-engine-from-kubernetes-to-nomad-firecracker-and-kuma
[5] https://www.koyeb.com/blog/enabling-grpc-and-http2-support-at-edge-with-kuma-and-envoy

Show HN: Poozle – open-source Plaid for LLMs

Hi HN, we’re Harshith, Manoj, and Manik.

Poozle (https://github.com/poozlehq/poozle) provides a single API that helps businesses get accurate LLM responses by supplying real-time customer data from different SaaS tools (e.g. Notion, Salesforce, Jira, Shopify, Google Ads).

Why we built Poozle: as we talked to more AI companies that need to integrate with their customers’ data, we realised that keeping data from all those SaaS tools up to date means building a huge amount of infrastructure (ETL, auth management, webhooks, and much more) before you can take it to production. It struck us: why not streamline this process and let companies prioritise their core product?

How it works: Poozle makes user authorization seamless with our drop-in component (Poozle Link), which handles both API keys and the OAuth dance. After authentication, developers can use our unified model to fetch data for their LLMs (no need to sync data separately and then normalise it on your end). Poozle keeps data updated in real time while letting you choose sync intervals. Even if the source doesn’t support webhooks, we’ve got you covered.

Currently, we support a unified API for three categories: Ticketing, Documentation, and Email. You can watch a demo of Poozle at https://www.loom.com/share/30650e4d1fac41e3a7debc212b1c7c2d

We just got started a month ago and we’re eager to get feedback and keep building. Let us know what you think in the comments :)

Show HN: Not My Cows – Save your cows. Blast the meteors. Giddy up

I made this straight vanilla JS game for a game jam a few years ago. I'm considering coming back to it to fix the bugs and gameplay.

https://github.com/jonfranco224/not-my-cows if anyone wants to check out the source.

Edit: y'all seem to be enjoying this! I spun up a quick Twitter/X account for game updates if anyone is interested: https://twitter.com/notmycowsgame

Show HN: Strich – Barcode scanning for web apps

Hi, I'm Alex, the creator of STRICH (https://strich.io), a barcode scanning library for web apps. Barcode scanning in web apps is nothing new. In my previous work, I had the opportunity to use both high-end commercial offerings (e.g. Scandit) and OSS libraries like QuaggaJS or ZXing-JS in a wide range of customer projects, mainly in logistics.

I became dissatisfied with both. The established commercial offerings had five- to six-figure license fees, and the developer experience was not always optimal; the web browser as a platform also seemed not to be a main priority for these players. The open-source libraries are essentially unmaintained and not suitable for commercial use due to the lack of support, and their recognition performance is not good enough for some cases. For a detailed comparison, see https://strich.io/comparison-with-oss.html

Having dabbled a bit in computer vision before, and armed with an understanding of the market situation, I set out to build an alternative to fill the gap between the two worlds. After almost two years of on-and-off development and six months of piloting with a key customer, STRICH launched at the beginning of this year.

STRICH is built exclusively for web browsers running on smartphones. I believe the vast majority of barcode scanning apps are in-house line-of-business apps that benefit from distribution outside of app stores and a single codebase with abundant developer resources. Barcode scanning in web apps is efficient and avoids the platform risk and unnecessary costs associated with developing and publishing native apps.

Show HN: Repo with a list of 80 decent companies hiring remotely in Europe

Tech-stack included

Show HN: LlamaGPT – Self-hosted, offline, private AI chatbot, powered by Llama 2

Show HN: Layerform – Open-source development environments using Terraform files

Hi HN, we're Lucas and Lucas, the authors of Layerform (https://github.com/ergomake/layerform). Layerform is an open-source tool for setting up development environments using plain .tf files. We allow each engineer to create their own "staging" environment and reuse infrastructure.

Whenever engineers run "layerform spawn", we use plain .tf files to give them their own "staging" environment that looks just like production.

Many teams have a single (or too few) staging environments, which developers have to queue to use. This is particularly a problem when a system is large, because then engineers can't run it on their machines and can't easily test their changes in a production-like environment. Often they end up with a cluttered Slack channel in which engineers wait for their turn to use staging. Sometimes they don't even have that clunky channel and end up merging broken code or shipping bugs to production. Lucas and I decided to solve this because we previously suffered with shared staging environments.

Layerform gives each developer their own production-like environment. This eliminates the bottleneck, increasing the number of deploys engineers make. It also reduces bugs and rework, because developers have a production-like environment to develop and test against. They can just run "layerform spawn" and get their own staging.

We wrap the MPL-licensed Terraform and allow engineers to encapsulate each part of their infrastructure into layers. They can then create multiple instances of a particular layer to create a development environment. The benefit of using layers instead of raw Terraform modules is that they're much easier to write and reuse, meaning multiple development environments can run on top of the same infrastructure.

Layerform's environments are quick and cheap to spin up because they share core pieces of infrastructure. Additionally, Layerform can automatically tag components in each layer, making it easier for FinOps teams to manage costs and do chargebacks.

For example: with Layerform, a product developer can spin up their own lambdas and pods for staging while still using a shared Kubernetes cluster and Kafka instance. That way, development environments are quicker to spin up and cheaper to maintain. Each developer's layer also gets a tag, so FinOps teams know how much each team's environments cost.

For the sake of transparency, the way we intend to make money is by providing a managed service with governance, management, and cost-control features, including turning environments off automatically on inactivity or after business hours. The Layerform CLI itself will remain free and open (GPL).

You can download the Layerform CLI right now and use it for free. Currently, all the state, permissions, and layer definitions stay in your cloud, under your control.

After the whole license change thing, it's also worth mentioning that we'll be building on top of the community's fork and will consider adding support for Pulumi too.

We'd love your feedback on our solution to eliminate "the staging bottleneck". What do you think?
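To make the layers idea concrete, here is a hypothetical sketch in plain Terraform (this is illustrative only, not Layerform's actual configuration syntax; all resource names, variables, and the tag key are made up). A shared "base" layer holds the expensive, long-lived cluster, while a cheap per-developer layer holds only what each engineer spawns on top of it:

```hcl
# Hypothetical base layer: shared, long-lived infrastructure.
# One instance of this layer serves every developer's environment.
resource "aws_eks_cluster" "shared" {
  name     = "staging-shared"
  role_arn = var.cluster_role_arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}

# Hypothetical per-developer layer: spawned fresh for each engineer
# on top of the shared cluster, and labeled with the developer's name
# so costs can be attributed per environment.
resource "kubernetes_namespace" "dev_env" {
  metadata {
    name = "staging-${var.developer}"

    labels = {
      "environment-owner" = var.developer # illustrative tag key for chargebacks
    }
  }
}
```

Under this sketch, running "layerform spawn" would create a new instance of the per-developer layer (a namespace plus the developer's services) while reusing the single shared cluster, which is why each extra environment is cheap.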

Show HN: Layerform – Open-source development environments using Terraform files

Hi HN, we're Lucas and Lucas, the authors of Layerform (https://github.com/ergomake/layerform). Layerform is an open-source tool for setting up development environments using plain .tf files. We allow each engineer to create their own "staging" environment and reuse infrastructure.<p>Whenever engineers run layerform spawn, we use plain .tf files to give them their own "staging" environment that looks just like production.<p>Many teams have a single (or too few) staging environments, which developers have to queue to use. This is particularly a problem when a system is large, because then engineers can't run it on their machines and cannot easily test their changes in a production-like environment. Often they end up with a cluttered Slack channel in which engineers wait for their turn to use staging. Sometimes, they don't even have that clunky channel and end up merging broken code or shipping bugs to production. Lucas and I decided to solve this because we previously suffered with shared staging environments.<p>Layerform gives each developer their own production-like environment.This eliminates the bottleneck, increasing the number of deploys engineers make. Additionally, it reduces the amount of bugs and rework because developers have a production-like environment to develop and test against. They can just run "layerform spawn" and get their own staging.<p>We wrap the MPL-licensed Terraform and allow engineers to encapsulate each part of their infrastructure into layers. They can then create multiple instances of a particular layer to create a development environment.The benefit of using layers instead of raw Terraform modules is that they're much easier to write and reuse, meaning multiple development environments can run on top of the same infrastructure.<p>Layerform's environments are quick and cheap to spin up because they share core pieces of infrastructure. 
Additionally, Layerform can automatically tag components in each layer, making it easier for FinOps teams to manage costs and do chargebacks.<p>For example: with Layerform, a product developer can spin up their own lambdas and pods for staging while still using a shared Kubernetes cluster and Kafka instance. That way, development environments are quicker to spin up and cheaper to maintain. Each developer's layer also gets a tag, so FinOps teams know how much each team's environments cost.<p>For the sake of transparency, the way we intend to make money is by providing a managed service with governance, management, and cost-control features, including turning off environments automatically on inactivity or after business hours. The Layerform CLI itself will remain free and open (GPL).<p>You can download the Layerform CLI right now and use it for free. Currently, all the state, permissions, and layer definitions stay in your cloud, under your control.<p>After the whole license change thing, I think it's also worth mentioning that we'll be building on top of the community's fork and will consider adding support for Pulumi too.<p>We'd love your feedback on our solution to eliminate "the staging bottleneck". What do you think?
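To make the shared-base idea concrete, here is a minimal sketch in plain Terraform. This is not Layerform's actual layer syntax (which I haven't shown here), just an illustration of the split it automates: an expensive base layer applied once per team, and a cheap per-developer layer instantiated on top of it. All resource names and variables are hypothetical.

```
# Illustrative sketch only -- plain Terraform, not Layerform's layer format.

# base/main.tf -- shared layer, provisioned once and reused by everyone
resource "aws_eks_cluster" "shared" {
  name     = "staging-shared"
  role_arn = var.cluster_role_arn
  vpc_config {
    subnet_ids = var.subnet_ids
  }
}

# env/main.tf -- per-developer layer, one instance per engineer
variable "developer" {
  type = string
}

resource "kubernetes_namespace" "dev_env" {
  metadata {
    name = "staging-${var.developer}"
    labels = {
      owner = var.developer # per-layer tag of the kind FinOps chargebacks rely on
    }
  }
}
```

The point of the split is that spawning a new environment only applies the second file; the EKS cluster (and anything else in the base layer) is already there.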

Show HN: Servicer, pm2 alternative built on Rust and systemd

Servicer is a CLI to create and manage services on systemd. I have used pm2 in production and find it easy to use. However, a lot of its functionality is specific to node.js, and I would prefer not to run my Rust server as a fork of a node process. Systemd, on the other hand, has most of the things I need, but I found it cumbersome to use. There are a bunch of different commands and configurations: the .service file, systemctl to view status, and journald to view logs, which make systemd more complex to set up. I had to google for a template and the commands every time.<p>Servicer abstracts this setup behind an easy-to-use CLI. For instance, you can use `ser create index.js --interpreter node --enable --start` to create a `.service` file, enable it on boot, and start it. Servicer will also help if you wish to write your own custom `.service` files. Run `ser edit foo --editor vi` to create a service file in Vim. Servicer will provide a starting template so you don't need to google it. There are additional utilities like `ser which index.js` to view the path of the service and unit file.<p>```
Paths for index.js.ser.service:
+--------------+-----------------------------------------------------------+
| name         | path                                                      |
+--------------+-----------------------------------------------------------+
| Service file | /etc/systemd/system/index.js.ser.service                  |
+--------------+-----------------------------------------------------------+
| Unit file    | /org/freedesktop/systemd1/unit/index_2ejs_2eser_2eservice |
+--------------+-----------------------------------------------------------+
```<p>Servicer is daemonless and does not run in the background. It simply sets up systemd and gets out of the way. There are no forked processes; everything is natively set up on systemd. You don't need to worry about servicer's resource consumption, or about servicer going down and taking your app down with it.<p>Do give it a spin and review the codebase. 
The code is open source and MIT licensed- <a href="https://github.com/servicer-labs/servicer">https://github.com/servicer-labs/servicer</a>
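For readers unfamiliar with systemd units, here is a minimal example of the kind of `.service` file such a tool would generate. This is a sketch, not servicer's actual template (the paths and the `app` user are assumptions); the directives themselves are standard systemd.

```
# /etc/systemd/system/index.js.ser.service -- hypothetical generated unit

[Unit]
Description=index.js managed by servicer
After=network.target

[Service]
# Absolute paths are assumptions for illustration
ExecStart=/usr/bin/node /home/app/index.js
WorkingDirectory=/home/app
User=app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Flags like `--enable --start` would then correspond to `systemctl enable index.js.ser.service` and `systemctl start index.js.ser.service`, with logs available through `journalctl -u index.js.ser.service`.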
