The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: Kyoo – Self-hosted media browser (Jellyfin/Plex alternative)

I started working on Kyoo almost 5 years ago because I did not like the options at the time. It started as a "sandbox" project where I could learn about tech I was interested in, and slowly became more than that.

Show HN: DotLottie Player – A New Universal Lottie Player Built with Rust

Hi HN,

For the past few months we’ve been building dotlottie-rs, a new Lottie and dotLottie player that aims to run everywhere with smooth, high-frame-rate rendering and consistent visual and feature support across a large number of platforms and device types. It is lightweight, has low resource requirements, and is performant.

It is MIT-licensed and is available at: https://github.com/LottieFiles/dotlottie-rs

The player is written in Rust and uses a minimal number of external dependencies. We use uniffi-rs to generate FFI bindings for Kotlin, Swift, and WASM, which are then used in our platform-native distributions for Android, iOS, and Web while maintaining a consistent API and experience across them. We also provide distributions for React and Vue to make it easy to adopt in many existing web projects. The player is also well suited to backend systems and pipelines for high-performance server-side rendering of Lottie/dotLottie, and can easily be used in Node.js projects.

The player is named dotlottie-rs because, apart from first-class Lottie support, we aim to have first-class support for dotLottie (https://www.dotlottie.io), a superset of Lottie we developed that builds on Lottie to add enhanced features like multi-animation support, improved resource bundling, theming, state machines, and interactivity (the latter two are currently in development).

Under the hood, the player uses the open-source, lightweight, high-performance ThorVG library (https://www.thorvg.org/) for vector graphics and Lottie rendering, supporting software, OpenGL, and WebGPU (currently in beta) rasterization backends. We are working towards complete support of the Lottie format spec (https://lottie.github.io/) as soon as possible.

We are starting to test and deploy it across our platform and hope it delivers improvements in performance and support similar to what we are already seeing.

There are a few demos:
Rust project: https://github.com/LottieFiles/dotlottie-rs/tree/main/demo-player
Web: https://github.com/LottieFiles/dotlottie-web?tab=readme-ov-file#live-examples

Would love to hear your thoughts and feedback :)

Show HN: A universal Helm Chart for deploying applications into K8s/OpenShift

Hey HN! We wanted to share nxs-universal-chart, our open-sourced universal Helm chart. You can use it to deploy any of your applications into Kubernetes/OpenShift and other orchestrators compatible with the native Kubernetes API.

Our team regularly faced the need to create almost identical charts, so when we had 14 identical microservices in one project, we came up with a chart format that essentially became the prototype of nxs-universal-chart. It turned out to be even more useful than we expected: when we needed to prepare CI/CD for 60 almost identical projects for a customer, it reduced release preparation time from 6 hours to 1. That's how the idea of nxs-universal-chart became a real thing that everyone can use now!

The main advantages we would like to highlight:

- Reduced time to prepare a deployment
- You can generate any manifests you may need
- It's compatible with multiple Kubernetes versions
- You can use Go templates in your values

In the latest release we've added a few features, like support for cert-manager custom resources! Any other information and details can be found on GitHub: https://github.com/nixys/nxs-universal-chart We're really looking forward to improving the universal chart, so we'd love to see any feedback or contributions, and please report any issues you encounter! Join our Telegram chat if you want to discuss the repo or ask any questions: https://t.me/nxs_universal_chart_chat

Show HN: FizzBee – Formal methods in Python

GitHub: https://www.github.com/fizzbee-io/FizzBee

Traditionally, formal methods are used only for highly mission-critical systems, to validate that the software will work as expected before it is built. These days every major cloud vendor (AWS, Azure, MongoDB, Confluent, Elastic, and so on) uses formal methods to validate that designs such as replication algorithms or various protocols don't have design bugs. I used TLA+ for billing and usage-based metering applications.

However, the current formal methods solutions like TLA+, Alloy, or P are incredibly complex to learn and use, so even in these companies only a few people actually use them.

So instead of an unfamiliar, complicated language, I built a formal methods model checker that just uses Python. That way, any software engineer can quickly get started and use it.

I've also created an online playground so you can try it without having to install anything on your machine.

In addition to model checking like TLA+/PlusCal and Alloy, FizzBee also offers performance and probabilistic model checking, which few other formal methods tools do (PRISM is one example, but its language is even more complicated to use).

Please let me know your feedback. URL: https://FizzBee.io Git: https://www.github.com/fizzbee-io/FizzBee
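
To make the model-checking idea concrete, here is a toy, plain-Python illustration (this is not FizzBee syntax or its API): it exhaustively explores every interleaving of two processes using a naive check-then-set lock and verifies a mutual-exclusion invariant, the kind of design bug a model checker catches before any code is written.

    # Toy exhaustive model checker: breadth-first search over reachable states.
    from collections import deque

    def step(state, i, new_pc, new_lock):
        pcs = list(state[0]); pcs[i] = new_pc
        return (tuple(pcs), new_lock)

    def successors(state):
        pcs, lock = state
        for i in (0, 1):
            if pcs[i] == "idle" and lock == 0:   # read the lock (not atomic with the write!)
                yield step(state, i, "checked", lock)
            elif pcs[i] == "checked":            # take the lock and enter the critical section
                yield step(state, i, "critical", 1)
            elif pcs[i] == "critical":           # leave and release the lock
                yield step(state, i, "idle", 0)

    def mutual_exclusion(state):
        pcs, _ = state
        return not (pcs[0] == "critical" and pcs[1] == "critical")

    def check(init):
        seen, queue = {init}, deque([init])
        while queue:
            s = queue.popleft()
            if not mutual_exclusion(s):
                return s                         # counterexample found
            for nxt in successors(s):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return None

    print(check((("idle", "idle"), 0)))          # prints (('critical', 'critical'), 1)

FizzBee, TLA+, and similar tools do essentially this, but over much richer specifications, with liveness, fairness, and (in FizzBee's case) probabilistic and performance checking on top.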

Show HN: Managed GitHub Actions Runners for AWS

Hey HN! I'm Jacob, one of the founders of Depot (https://depot.dev), a build service for Docker images, and I'm excited to show what we’ve been working on for the past few months: running GitHub Actions jobs in AWS, orchestrated by Depot!

Here's a video demo: https://www.youtube.com/watch?v=VX5Z-k1mGc8, and here’s our blog post: https://depot.dev/blog/depot-github-actions-runners.

While GitHub Actions is one of the most prevalent CI providers, Actions is slow, for a few reasons: GitHub uses underpowered CPUs, network throughput for cache and the internet at large is capped at 1 Gbps, and total cache storage is limited to 10 GB per repo. It is also rather expensive for runners with more than 2 CPUs, and larger runners frequently take a long time to start running jobs.

Depot-managed runners solve this! Rather than your CI jobs running on GitHub's slow compute, Depot routes those same jobs to fast EC2 instances. Not only is this faster, it’s also half the cost of GitHub Actions!

We do this by launching a dedicated instance for each job, registering that instance as a self-hosted Actions runner in your GitHub organization, then terminating the instance when the job is finished. Using AWS as the compute provider has a few advantages:

- CPUs are typically 30%+ more performant than alternatives (the m7a instance type).
- Each instance has high-throughput networking of up to 12.5 Gbps, hosted in us-east-1, so interacting with artifacts, cache, container registries, or the internet at large is quick.
- Each instance has a public IPv4 address, so it does not share rate limits with anyone else.

We integrated the runners with the distributed cache system (backed by S3 and Ceph) that we use for Docker build cache, so jobs automatically save and restore cache from it, with speeds of up to 1 GB/s and without the default 10 GB per-repo limit.

Building this was a fun challenge; some matrix workflows start 40+ jobs at once, requiring 40 EC2 instances to launch at the same time.

We’ve gotten very good at starting EC2 instances with a "warm pool" system: we prepare many EC2 instances to run a job, stop them, then resize and start them when an actual job request arrives, keeping job queue times around 5 seconds (a rough sketch of this flow appears at the end of this post). We're using a homegrown orchestration system, as alternatives like autoscaling groups or Kubernetes weren't fast or secure enough.

There are currently three alternatives to our managed runners:

1. GitHub offers larger runners: these have more CPUs, but still have slow network and cache. Depot runners are also half the cost per minute of GitHub's runners.

2. You can self-host the Actions runner on your own compute: this requires ongoing maintenance, and it can be difficult to ensure that the runner image or container matches GitHub's.

3. There are other companies offering hosted GitHub Actions runners, though they frequently use cheaper compute hosting providers that are bottlenecked on network throughput or geography.

Any feedback is very welcome! You can sign up at https://depot.dev/sign-up for a free trial if you'd like to try it out on your own workflows. We aren't able to offer a trial without a signup gate, both because using it requires installing a GitHub app and because we're offering build compute, so we need some way to keep out the cryptominers :)
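
The warm-pool flow described above (prepare instances, stop them, then resize and start one when a job arrives) can be sketched with boto3. This is a simplified, hypothetical illustration, not Depot's orchestrator; the AMI ID and instance types are placeholders.

    # Minimal sketch of a "warm pool": pre-launch runners, stop them, then
    # resize and start one on demand. Not production code.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    RUNNER_AMI = "ami-0123456789abcdef0"  # placeholder pre-baked runner image

    def prewarm(count):
        """Launch small instances, wait for them to boot, then stop them."""
        resp = ec2.run_instances(ImageId=RUNNER_AMI, InstanceType="m7a.medium",
                                 MinCount=count, MaxCount=count)
        ids = [i["InstanceId"] for i in resp["Instances"]]
        ec2.get_waiter("instance_running").wait(InstanceIds=ids)
        ec2.stop_instances(InstanceIds=ids)
        ec2.get_waiter("instance_stopped").wait(InstanceIds=ids)
        return ids

    def assign_job(instance_id, instance_type="m7a.2xlarge"):
        """Resize a stopped warm instance to the job's requested size and start it."""
        ec2.modify_instance_attribute(InstanceId=instance_id,
                                      InstanceType={"Value": instance_type})
        ec2.start_instances(InstanceIds=[instance_id])
        ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
        # On boot, the instance would register itself as a self-hosted GitHub
        # Actions runner (e.g. via user data) and pick up the queued job.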

Show HN: Goralim - a rate limiting pkg for Go to handle distributed workloads

Show HN: Burr – A framework for building and debugging GenAI apps faster

Hey HN, we're developing Burr (github.com/dagworks-inc/burr), an open-source Python framework that makes it easier to build and debug GenAI applications.

Burr is a lightweight library that can integrate with your favorite tools and comes with a debugging UI. If you prefer a video introduction, you can watch me build a chatbot here: https://www.youtube.com/watch?v=rEZ4oDN0GdU.

Common friction points we’ve seen with GenAI applications include logically modeling application flow, debugging and recreating error cases, and curating data for testing/evaluation (see https://hamel.dev/blog/posts/evals/). Burr aims to make these easier. You can run Burr locally; see the instructions in the repo.

We talked to many companies about the pains they felt in building applications on top of LLMs and were surprised how many built bespoke state management layers and used print statements to debug.

We found that everyone wanted the ability to pull up the state of an application at a given point, poke at it to debug/tweak code, and use it for later testing/evaluation. People integrating with LLMOps tools fared slightly better, but those tools tend to focus solely on API calls to test and evaluate prompts, leaving the problem of logical modeling/checkpointing unsolved.

Having platform tooling backgrounds, we felt that a good abstraction would improve the experience. These problems all got easier to think about when we modeled applications as state machines composed of “actions” designed for introspection (for more, read https://blog.dagworks.io/p/burr-develop-stateful-ai-applications; a rough conceptual sketch appears at the end of this post). We don’t want to limit what people can write, but we do want to constrain it just enough that the framework provides value and doesn’t get in the way. This led us to design Burr with the following core functionalities:

1. BYOF. Burr allows you to bring your own frameworks/delegate to any Python code, like LangChain, LlamaIndex, Hamilton, etc., inside of “actions”. This gives you the flexibility to mix and match so you’re not limited.

2. Pluggability. Burr comes with APIs that let you save/load (i.e. checkpoint) application state, run custom code before/after action execution, and add your own telemetry provider (e.g. Langfuse, Datadog, DAGWorks, etc.).

3. UI. Burr comes with its own UI (following the Python batteries-included ethos) that you can run locally, with the intent to connect with your development/debugging workflow. You can see your application as it progresses and inspect its state at any given point.

These functionalities lend themselves well to building many types of applications quickly and flexibly using the tools you want: conversational RAG bots, text-based games, human-in-the-loop workflows, text-to-SQL bots, etc. Start with LangChain and then easily transition to your custom code or another framework without having to rewrite much of your application. Side note: we also see Burr as useful outside of interactive GenAI/LLM applications, e.g. building hyper-parameter optimization routines for chunking and embeddings, or orchestrating simulations.

We have a swath of improvements planned: TypeScript support; more ergonomic UX and APIs for annotation and test/eval curation; integrations with common telemetry frameworks; and capture of finer-grained information from frameworks like LangChain, LlamaIndex, Hamilton, etc. We would love feedback, contributions, and help prioritizing.

Re: the name Burr, you may recognize us as the authors of Hamilton (github.com/dagworks-inc/hamilton), named after Alexander Hamilton (the first U.S. Secretary of the Treasury). While Aaron Burr killed him in a duel, we see Burr as a complement to Hamilton, not its killer!

That’s all for now. Please don’t hesitate to open GitHub issues/discussions or join our Discord (https://discord.gg/6Zy2DwP4f3) to chat with us there. We’re still very early and would love to get your feedback!
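
As a rough, plain-Python sketch of the state-machine-of-actions pattern described above (a conceptual illustration, not Burr's actual API), each action reads and writes a shared state object and returns the next action to run, and every transition is snapshotted so you can inspect or rewind to any step:

    # Conceptual sketch only; see the Burr repo for the real API.
    import copy

    def human_input(state):
        state["prompt"] = "What is a burr?"
        return state, "ai_response"              # name of the next action

    def ai_response(state):
        # A real app would call an LLM (or LangChain, LlamaIndex, ...) here.
        state["response"] = f"(model reply to: {state['prompt']})"
        return state, None                       # None means halt

    ACTIONS = {"human_input": human_input, "ai_response": ai_response}

    def run(entrypoint, state):
        history = []                             # checkpoints for debugging/rewind
        current = entrypoint
        while current is not None:
            history.append((current, copy.deepcopy(state)))
            state, current = ACTIONS[current](state)
        return state, history

    final_state, history = run("human_input", {})
    for name, snapshot in history:
        print(name, snapshot)                    # state as it was before each action

Per the post, Burr layers the introspection UI, pluggable state persistence, and before/after-action hooks on top of this basic loop.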

Show HN: Portr – open-source ngrok alternative designed for teams

Show HN: Plandex – an AI coding engine for complex tasks

Hey HN, I'm building Plandex (https://plandex.ai), an open source, terminal-based AI coding engine for complex tasks.

I built Plandex because I was tired of copying and pasting code back and forth between ChatGPT and my projects. It can complete tasks that span multiple files and require many steps. It uses the OpenAI API with your API key (support for other models, including Claude, Gemini, and open source models, is on the roadmap). You can watch a 2 minute demo here: https://player.vimeo.com/video/926634577

Here's a prompt I used to build the AWS infrastructure for Plandex Cloud (Plandex can be self-hosted or cloud-hosted): https://github.com/plandex-ai/plandex/blob/main/test/test_prompts/aws-infra.txt

Something I think sets Plandex apart is a focus on working around bad outputs and iterating on tasks systematically. It's relatively easy to make a great looking demo for any tool, but the day-to-day of working with it has a lot more to do with how it handles edge cases and failures. Plandex tries to tighten the feedback loop between developer and LLM:

- Every aspect of a Plandex plan is version-controlled, from the context to the conversation itself to model settings. As soon as things start to go off the rails, you can use the `plandex rewind` command to back up and add more context or iterate on the prompt. Git-style branches allow you to test and compare multiple approaches.

- As a plan proceeds, tentative updates are accumulated in a protected sandbox (also version-controlled), preventing any wayward edits to your project files.

- The `plandex changes` command opens a diff review TUI that lets you review pending changes side-by-side like the GitHub PR review UI. Just hit the 'r' key to reject any change that doesn’t look right. Once you’re satisfied, either press ctrl+a from the changes TUI or run `plandex apply` to apply the changes.

- If you work on files you’ve loaded into context outside of Plandex, your changes are pulled in automatically so that the model always uses the latest state of your project.

Plandex makes it easy to load files and directories in the terminal. You can load multiple paths:

    plandex load components/some-component.ts lib/api.ts ../sibling-dir/another-file.ts

You can load entire directories recursively:

    plandex load src/lib -r

You can use glob patterns:

    plandex load src/**/*.{ts,tsx}

You can load directory layouts (file names only):

    plandex load src --tree

Text content of URLs:

    plandex load https://react.dev/reference/react/hooks

Or pipe data in:

    cargo test | plandex load

For sending prompts, you can pass in a file:

    plandex tell -f "prompts/stripe/add-webhooks.txt"

Or you can pop up vim and write your prompt there:

    plandex tell

For shorter prompts you can pass them inline:

    plandex tell "set the header's background to #222 and text to white"

You can run tasks in the background:

    plandex tell "write tests for all functions in lib/math/math.go. put them in lib/math_tests." --bg

You can list all running or recently finished tasks:

    plandex ps

And connect to any running task to start streaming it:

    plandex connect

For more details, here’s a quick overview of commands and functionality: https://github.com/plandex-ai/plandex/blob/main/guides/USAGE.md

Plandex is written in Go and is statically compiled, so it runs from a single small binary with no dependencies on any package managers or language runtimes. There’s a 1-line quick install:

    curl -sL https://plandex.ai/install.sh | bash

It's early days, but Plandex is working well and is legitimately the tool I reach for first when I want to do something that is too large or complex for ChatGPT or GH Copilot. I would love to get your feedback. Feel free to hop into the Discord (https://discord.gg/plandex-ai) and let me know how it goes. PRs are also welcome!
