The best Hacker News stories from Show from the past day
Latest posts:
Show HN: Online database diagram editor
Hey all! I released drawDB about a month ago and now it's open source. I hope you find it useful.<p>If you want to check out the app you can go to <a href="https://drawdb.vercel.app/" rel="nofollow">https://drawdb.vercel.app/</a> .<p>Thank you:)
Show HN: Fancy-ANSI – Small JavaScript library for converting ANSI to HTML
I made this tool to add support for custom ANSI palettes to kubetail (<a href="https://github.com/kubetail-org/kubetail">https://github.com/kubetail-org/kubetail</a>). Maybe you'll find it useful too.<p>Let me know if you have any suggestions!
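For readers curious what ANSI-to-HTML conversion involves, here is a minimal sketch in plain Python — not fancy-ansi's actual API, and the palette values below are made up — that maps the basic SGR foreground-color codes (30–37) to inline-styled `<span>` tags while HTML-escaping everything else:

```python
import html
import re

# Hypothetical 8-color palette; real converters like fancy-ansi
# support full configurable palettes, bold, background colors, etc.
PALETTE = {30: "#000", 31: "#c00", 32: "#0c0", 33: "#cc0",
           34: "#00c", 35: "#c0c", 36: "#0cc", 37: "#ccc"}

ANSI_RE = re.compile(r"\x1b\[([0-9;]*)m")  # SGR escape sequences only

def ansi_to_html(text: str) -> str:
    """Convert SGR foreground-color/reset codes in `text` to <span> tags."""
    out, open_span, pos = [], False, 0
    for m in ANSI_RE.finditer(text):
        out.append(html.escape(text[pos:m.start()]))
        pos = m.end()
        if open_span:            # any new SGR sequence closes the current span
            out.append("</span>")
            open_span = False
        for code in (int(c) for c in m.group(1).split(";") if c):
            if code in PALETTE:
                out.append(f'<span style="color:{PALETTE[code]}">')
                open_span = True
    out.append(html.escape(text[pos:]))
    if open_span:
        out.append("</span>")
    return "".join(out)
```

A real converter also has to track nested attributes (bold, underline, 256-color and truecolor codes), which is exactly the bookkeeping a library like this takes off your hands.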
Show HN: I made a daily puzzle game about mixing colors
While on paternity leave, I wanted to create a game my newborn daughter could eventually play and learn something from. I was heavily inspired by the daily puzzle games I consistently play, such as Wordle and Strands.<p>Refractor is a game about connecting colored tiles together to reach all the goals. Use "refractor" tiles to combine two colors together, but be careful not to trap yourself in! The game is won when all goals are fulfilled.<p>The tech stack is: Laravel + Inertia, React + TypeScript, and DaisyUI + TailwindCSS.<p>Open to any questions or feedback. Hope you might find it interesting!
Show HN: CloudTabs Web Browser – a streaming web browser on every website
Show HN: Kyoo – Self-hosted media browser (Jellyfin/Plex alternative)
I started working on Kyoo almost 5 years ago because I did not like the options at the time. It started as a "sandbox" project where I could learn about tech I was interested in, and slowly became more than that.
Show HN: DotLottie Player – A New Universal Lottie Player Built with Rust
Hi HN,<p>For the past few months we’ve been building dotlottie-rs, a new Lottie and dotLottie player that aims to run everywhere with smooth, high frame rate rendering and guarantee visual and feature support consistency across a large number of platforms and device types. It is lightweight, has low resource requirements and is performant.<p>It is MIT-licensed and is available at: <a href="https://github.com/LottieFiles/dotlottie-rs">https://github.com/LottieFiles/dotlottie-rs</a><p>The player is written in Rust and uses a minimal number of external dependencies. We utilize uniffi-rs to generate FFI bindings for Kotlin, Swift, and WASM, which are then used in our platform native distributions for Android, iOS and Web while maintaining a consistent API and experience across them. We also provide distributions for React and Vue to make it easy to adopt in many existing web projects. The player is also ideal for use in backend systems and pipelines for high performance server-side rendering of Lottie/dotLottie, and can be used easily in NodeJS projects.<p>The player is named dotlottie-rs because, apart from the first class Lottie support, we aim to have first class support for dotLottie (<a href="https://www.dotlottie.io" rel="nofollow">https://www.dotlottie.io</a>), a superset of Lottie we developed that builds on Lottie to add enhanced features like multi-animation support, improved resource bundling, theming, state machines and interactivity (latter two are currently in development).<p>Under the hood, the player uses the open-source, lightweight, high performance ThorVG library (<a href="https://www.thorvg.org/" rel="nofollow">https://www.thorvg.org/</a>) for vector graphics and Lottie rendering, supporting software, OpenGL, and WebGPU (currently in beta) rasterization backends. 
We are working towards landing complete support of the Lottie format spec (<a href="https://lottie.github.io/" rel="nofollow">https://lottie.github.io/</a>) as soon as possible.<p>We are starting to test and deploy it across our platform, and we hope it delivers the same improvements in performance and format support that we are already seeing!<p>Here are a few demos:
Rust project: <a href="https://github.com/LottieFiles/dotlottie-rs/tree/main/demo-player">https://github.com/LottieFiles/dotlottie-rs/tree/main/demo-p...</a>
Web: <a href="https://github.com/LottieFiles/dotlottie-web?tab=readme-ov-file#live-examples">https://github.com/LottieFiles/dotlottie-web?tab=readme-ov-f...</a><p>Would love to hear your thoughts and feedback :)
Show HN: A universal Helm Chart for deploying applications into K8s/OpenShift
Hey HN! We wanted to share with you nxs-universal-chart - our open-sourced universal Helm chart. You can use it to deploy any of your applications into Kubernetes/OpenShift and other orchestrators compatible with the native Kubernetes API.<p>Our team regularly faced the need to create almost identical charts, so when we had 14 identical microservices in one project, we came up with a chart format that essentially became the prototype of nxs-universal-chart. It turned out to be even more useful than we thought: when we needed to prepare CI/CD for 60 almost identical projects for a customer, it reduced release preparation time from 6 hours to 1. Basically, that's how nxs-universal-chart became a real thing that everyone can use now!<p>The main advantages of the chart that we would like to highlight:
- It reduces the time needed to prepare a deployment
- You can generate any manifests you may need
- It is compatible with multiple versions of Kubernetes
- You can use Go templates as your values<p>In the latest release we've added a few features, such as support for cert-manager custom resources! You can find more information and details on GitHub: <a href="https://github.com/nixys/nxs-universal-chart">https://github.com/nixys/nxs-universal-chart</a>
We're really looking forward to improving our universal chart, so we'd love to see any feedback, contributions, or reports of any issues you encounter!
Please join our Telegram chat if you want to discuss anything about this repo or ask any questions: <a href="https://t.me/nxs_universal_chart_chat" rel="nofollow">https://t.me/nxs_universal_chart_chat</a>
Show HN: FizzBee – Formal methods in Python
GitHub: <a href="https://www.github.com/fizzbee-io/FizzBee">https://www.github.com/fizzbee-io/FizzBee</a><p>Traditionally, formal methods have been used only for highly mission-critical systems, to validate that the software will work as expected before it's built. More recently, major cloud vendors like AWS, Azure, MongoDB, Confluent, Elastic and so on have used formal methods to validate that their designs, such as replication algorithms and various protocols, don't have design bugs. I used TLA+ for billing and usage-based metering applications.<p>However, the current formal methods tools like TLA+, Alloy, or P are incredibly complex to learn and use, so even in these companies only a few engineers actually use them.<p>So, instead of using an unfamiliar, complicated language, I built a formal methods model checker that just uses Python. That way, any software engineer can quickly get started and use it.<p>I've also created an online playground so you can try it without having to install anything on your machine.<p>In addition to model checking like TLA+/PlusCal, Alloy, etc., FizzBee also offers performance and probabilistic model checking, which few other formal methods tools do. (PRISM does, for example, but its language is even more complicated to use.)<p>Please let me know your feedback.
Url: <a href="https://FizzBee.io" rel="nofollow">https://FizzBee.io</a>
Git: <a href="https://www.github.com/fizzbee-io/FizzBee">https://www.github.com/fizzbee-io/FizzBee</a>
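FizzBee's own spec language isn't shown in the post, so as a generic illustration of what an explicit-state model checker does (this is a toy sketch in plain Python, not FizzBee's API): it explores every interleaving of two non-atomic counter increments via breadth-first search and finds the classic lost-update bug that violates the invariant.

```python
from collections import deque

# Toy model: two processes each perform a non-atomic increment
# (read the shared counter into a local, then write local + 1 back).
# State tuple: (counter, pc1, local1, pc2, local2); pc: 0=read, 1=write, 2=done.

def steps(state):
    """All successor states reachable in one step of either process."""
    counter, pc1, l1, pc2, l2 = state
    succs = []
    if pc1 == 0:
        succs.append((counter, 1, counter, pc2, l2))   # p1 reads
    elif pc1 == 1:
        succs.append((l1 + 1, 2, l1, pc2, l2))         # p1 writes
    if pc2 == 0:
        succs.append((counter, pc1, l1, 1, counter))   # p2 reads
    elif pc2 == 1:
        succs.append((l2 + 1, pc1, l1, 2, l2))         # p2 writes
    return succs

def check(init, invariant):
    """BFS over all interleavings; return a violating state or None."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return s
        for n in steps(s):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return None

# Invariant: once both processes are done, the counter must equal 2.
bad = check((0, 0, 0, 0, 0),
            lambda s: not (s[1] == 2 and s[3] == 2) or s[0] == 2)
```

The checker finds the interleaving where both processes read 0 before either writes, so the final counter is 1. Real tools like FizzBee or TLA+ do this over much richer models, with fairness, liveness, and counterexample traces.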
Show HN: Managed GitHub Actions Runners for AWS
Hey HN! I'm Jacob, one of the founders of Depot (<a href="https://depot.dev">https://depot.dev</a>), a build service for Docker images, and I'm excited to show what we’ve been working on for the past few months: run GitHub Actions jobs in AWS, orchestrated by Depot!<p>Here's a video demo: <a href="https://www.youtube.com/watch?v=VX5Z-k1mGc8" rel="nofollow">https://www.youtube.com/watch?v=VX5Z-k1mGc8</a>, and here’s our blog post: <a href="https://depot.dev/blog/depot-github-actions-runners">https://depot.dev/blog/depot-github-actions-runners</a>.<p>While GitHub Actions is one of the most prevalent CI providers, Actions is slow, for a few reasons: GitHub uses underpowered CPUs, network throughput for cache and the internet at large is capped at 1 Gbps, and total cache storage is limited to 10GB per repo. It is also rather expensive for runners with more than 2 CPUs, and larger runners frequently take a long time to start running jobs.<p>Depot-managed runners solve this! Rather than your CI jobs running on GitHub's slow compute, Depot routes those same jobs to fast EC2 instances. And not only is this faster, it’s also 1/2 the cost of GitHub Actions!<p>We do this by launching a dedicated instance for each job, registering that instance as a self-hosted Actions runner in your GitHub organization, then terminating the instance when the job is finished. 
Using AWS as the compute provider has a few advantages:<p>- CPUs are typically 30%+ more performant than alternatives (the m7a instance type).<p>- Each instance has high-throughput networking of up to 12.5 Gbps, hosted in us-east-1, so interacting with artifacts, cache, container registries, or the internet at large is quick.<p>- Each instance has a public IPv4 address, so it does not share rate limits with anyone else.<p>We integrated the runners with the distributed cache system (backed by S3 and Ceph) that we use for Docker build cache, so jobs automatically save/restore cache from this cache system, with speeds of up to 1 GB/s, and without the default 10 GB per repo limit.<p>Building this was a fun challenge; some matrix workflows start 40+ jobs at once, requiring 40 EC2 instances to launch simultaneously.<p>We’ve gotten very good at starting EC2 instances with a "warm pool" system that allows us to prepare many EC2 instances to run a job, stop them, then resize and start them when an actual job request arrives, keeping job queue times around 5 seconds. We're using a homegrown orchestration system, as alternatives like autoscaling groups or Kubernetes weren't fast or secure enough.<p>There are three alternatives to our managed runners currently:<p>1. GitHub offers larger runners: these have more CPUs, but still have slow network and cache. Depot runners are also 1/2 the cost per minute of GitHub's runners.<p>2. You can self-host the Actions runner on your own compute: this requires ongoing maintenance, and it can be difficult to ensure that the runner image or container matches GitHub's.<p>3. There are other companies offering hosted GitHub Actions runners, though they frequently use cheaper compute hosting providers that are bottlenecked on network throughput or geography.<p>Any feedback is very welcome! 
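The warm-pool idea described above — prepare instances ahead of time, stop them, then resize and start one when a job arrives — can be sketched roughly like this. All names and the provider API here are hypothetical, not Depot's actual code:

```python
import itertools
from collections import deque

class FakeCloud:
    """Stand-in for a cloud provider API, recording calls for inspection."""
    def __init__(self):
        self._ids = itertools.count()
        self.log = []
    def launch(self):
        i = next(self._ids)
        self.log.append(("launch", i))
        return i
    def stop(self, i):      self.log.append(("stop", i))
    def start(self, i):     self.log.append(("start", i))
    def resize(self, i, cpus, mem): self.log.append(("resize", i, cpus, mem))
    def terminate(self, i): self.log.append(("terminate", i))

class WarmPool:
    """Toy warm pool: pre-boot instances and keep them stopped,
    then resize and start one when a job request arrives."""
    def __init__(self, cloud, target=5):
        self.cloud = cloud
        self.stopped = deque()
        self.target = target

    def replenish(self):
        # Would run in the background: keep `target` stopped instances ready.
        while len(self.stopped) < self.target:
            inst = self.cloud.launch()   # boot once so the image is warm
            self.cloud.stop(inst)
            self.stopped.append(inst)

    def acquire(self, cpus, memory_gb):
        # On a job webhook: resize a pre-warmed instance and start it.
        if self.stopped:
            inst = self.stopped.popleft()
            self.cloud.resize(inst, cpus, memory_gb)
            self.cloud.start(inst)
        else:
            inst = self.cloud.launch()   # cold-path fallback
        return inst

    def release(self, inst):
        # Each job gets a fresh instance; terminate rather than reuse.
        self.cloud.terminate(inst)
```

The interesting property is that the expensive step (boot) happens before any job exists, so the job-visible latency is just resize + start, which is how queue times stay in the seconds range.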
You can sign up at <a href="https://depot.dev/sign-up">https://depot.dev/sign-up</a> for a free trial if you'd like to try it out on your own workflows. We aren't able to offer a trial without a signup gate, both because using it requires installing a GitHub app, and we're offering build compute, so we need some way to keep out the cryptominers :)