The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Goralim - a rate limiting pkg for Go to handle distributed workloads
Show HN: Burr – A framework for building and debugging GenAI apps faster
Hey HN, we're developing Burr (github.com/dagworks-inc/burr), an open-source Python framework that makes it easier to build and debug GenAI applications.

Burr is a lightweight library that can integrate with your favorite tools and comes with a debugging UI. If you prefer a video introduction, you can watch me build a chatbot here: https://www.youtube.com/watch?v=rEZ4oDN0GdU.

Common friction points we've seen with GenAI applications include logically modeling application flow, debugging and recreating error cases, and curating data for testing/evaluation (see https://hamel.dev/blog/posts/evals/). Burr aims to make these easier. You can run Burr locally – see the instructions in the repo.

We talked to many companies about the pains they felt in building applications on top of LLMs and were surprised how many had built bespoke state management layers and used print statements to debug.

We found that everyone wanted the ability to pull up the state of an application at a given point, poke at it to debug or tweak code, and use it for later testing and evaluation. People integrating with LLMOps tools fared slightly better, but those tools tend to focus solely on API calls for testing and evaluating prompts, leaving the problem of logically modeling and checkpointing the application unsolved.

Having platform tooling backgrounds, we felt that a good abstraction would improve the experience. These problems all got easier to think about when we modeled applications as state machines composed of "actions" designed for introspection (for more, read https://blog.dagworks.io/p/burr-develop-stateful-ai-applications). We don't want to limit what people can write, but we do want to constrain it just enough that the framework provides value and doesn't get in the way. This led us to design Burr with the following core functionalities:

1. BYOF. Burr lets you bring your own frameworks and delegate to any Python code (LangChain, LlamaIndex, Hamilton, etc.) inside of "actions". This gives you the flexibility to mix and match so you're not locked in.

2. Pluggability. Burr comes with APIs to save/load (i.e. checkpoint) application state, run custom code before/after action execution, and add your own telemetry provider (e.g. Langfuse, Datadog, DAGWorks).

3. UI. Burr comes with its own UI (following the Python batteries-included ethos) that you can run locally and connect to your development/debugging workflow. You can see your application as it progresses and inspect its state at any given point.

These functionalities lend themselves well to building many types of applications quickly and flexibly with the tools you want: conversational RAG bots, text-based games, human-in-the-loop workflows, text-to-SQL bots, and so on. Start with LangChain and then transition to your own code or another framework without having to rewrite much of your application. Side note: we also see Burr as useful outside of interactive GenAI/LLM applications, e.g. for building hyperparameter optimization routines for chunking and embeddings, or for orchestrating simulations.

We have a swath of improvements planned: TypeScript support, more ergonomic UX and APIs for annotation and test/eval curation, integrations with common telemetry frameworks, and capture of finer-grained information from frameworks like LangChain, LlamaIndex, Hamilton, etc. We would love feedback, contributions, and help prioritizing.

Re: the name Burr, you may recognize us as the authors of Hamilton (github.com/dagworks-inc/hamilton), named after Alexander Hamilton (the first U.S. Secretary of the Treasury). While Aaron Burr killed him in a duel, we see Burr as a complement to Hamilton rather than a killer!

That's all for now. Please don't hesitate to open GitHub issues/discussions or join our Discord (https://discord.gg/6Zy2DwP4f3) to chat with us. We're still very early and would love your feedback!
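To make the "state machine of actions" idea concrete, here is a minimal sketch in plain Python. It is illustrative only and does not use Burr's actual API; the action names, state fields, and transition table are invented for the example, and a real application would call an LLM where this sketch just echoes the prompt.

    # A toy chatbot modeled as a state machine of introspectable "actions"
    # (conceptual sketch only; not Burr's API).
    from dataclasses import dataclass, replace
    from typing import Callable

    @dataclass(frozen=True)
    class State:
        # Immutable application state; each action returns an updated copy,
        # which is what makes checkpointing and replay straightforward.
        chat_history: tuple = ()
        prompt: str = ""

    Action = Callable[[State], State]

    def get_user_input(state: State) -> State:
        return replace(state, prompt=input("you > "))

    def respond(state: State) -> State:
        reply = f"echo: {state.prompt}"  # a real app would call an LLM here
        print("bot >", reply)
        return replace(state, chat_history=state.chat_history + ((state.prompt, reply),))

    ACTIONS: dict[str, Action] = {"get_user_input": get_user_input, "respond": respond}
    TRANSITIONS: dict[str, str] = {"get_user_input": "respond", "respond": "get_user_input"}

    def run(start: str, state: State, steps: int, checkpoints: list) -> State:
        name = start
        for _ in range(steps):
            state = ACTIONS[name](state)
            checkpoints.append((name, state))  # each step can be inspected or replayed later
            name = TRANSITIONS[name]
        return state

    if __name__ == "__main__":
        history: list = []
        run("get_user_input", State(), steps=4, checkpoints=history)

Per the post, Burr's library and UI take over the pieces this sketch hand-rolls: the state container, the action graph, checkpointing, and the ability to inspect state at any step.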
Show HN: Portr – open-source ngrok alternative designed for teams
Show HN: Plandex – an AI coding engine for complex tasks
Hey HN, I'm building Plandex (https://plandex.ai), an open-source, terminal-based AI coding engine for complex tasks.

I built Plandex because I was tired of copying and pasting code back and forth between ChatGPT and my projects. It can complete tasks that span multiple files and require many steps. It uses the OpenAI API with your API key (support for other models, including Claude, Gemini, and open-source models, is on the roadmap). You can watch a 2-minute demo here: https://player.vimeo.com/video/926634577

Here's a prompt I used to build the AWS infrastructure for Plandex Cloud (Plandex can be self-hosted or cloud-hosted): https://github.com/plandex-ai/plandex/blob/main/test/test_prompts/aws-infra.txt

Something I think sets Plandex apart is a focus on working around bad outputs and iterating on tasks systematically. It's relatively easy to make a great-looking demo for any tool, but the day-to-day of working with it has a lot more to do with how it handles edge cases and failures. Plandex tries to tighten the feedback loop between developer and LLM:

- Every aspect of a Plandex plan is version-controlled, from the context to the conversation itself to model settings. As soon as things start to go off the rails, you can use the `plandex rewind` command to back up and add more context or iterate on the prompt. Git-style branches allow you to test and compare multiple approaches.

- As a plan proceeds, tentative updates accumulate in a protected sandbox (also version-controlled), preventing any wayward edits to your project files.

- The `plandex changes` command opens a diff review TUI that lets you review pending changes side by side, like the GitHub PR review UI. Just hit the 'r' key to reject any change that doesn't look right. Once you're satisfied, either press ctrl+a from the changes TUI or run `plandex apply` to apply the changes.

- If you work on files you've loaded into context outside of Plandex, your changes are pulled in automatically so that the model always uses the latest state of your project.

Plandex makes it easy to load files and directories in the terminal. You can load multiple paths:

    plandex load components/some-component.ts lib/api.ts ../sibling-dir/another-file.ts

You can load entire directories recursively:

    plandex load src/lib -r

You can use glob patterns:

    plandex load src/**/*.{ts,tsx}

You can load directory layouts (file names only):

    plandex load src --tree

You can load the text content of URLs:

    plandex load https://react.dev/reference/react/hooks

Or pipe data in:

    cargo test | plandex load

For sending prompts, you can pass in a file:

    plandex tell -f "prompts/stripe/add-webhooks.txt"

Or you can pop open vim and write your prompt there:

    plandex tell

For shorter prompts, you can pass them inline:

    plandex tell "set the header's background to #222 and text to white"

You can run tasks in the background:

    plandex tell "write tests for all functions in lib/math/math.go. put them in lib/math_tests." --bg

You can list all running or recently finished tasks:

    plandex ps

And connect to any running task to start streaming it:

    plandex connect

For more details, here's a quick overview of commands and functionality: https://github.com/plandex-ai/plandex/blob/main/guides/USAGE.md

Plandex is written in Go and is statically compiled, so it runs from a single small binary with no dependencies on package managers or language runtimes. There's a one-line quick install:

    curl -sL https://plandex.ai/install.sh | bash

It's early days, but Plandex is working well and is legitimately the tool I reach for first when I want to do something that is too large or complex for ChatGPT or GitHub Copilot. I would love to get your feedback. Feel free to hop into the Discord (https://discord.gg/plandex-ai) and let me know how it goes. PRs are also welcome!
Show HN: I've built a locally running Perplexity clone
The video demo runs a 7B model on a normal gaming GPU. I think it already works quite well (accounting for the limited hardware power). :)
Show HN: I built an API for Google autocomplete
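Only the title appears in this digest, so as illustration of the general pattern rather than this project's implementation: wrappers like this are commonly built on Google's unofficial suggest endpoint. A minimal sketch, assuming the suggestqueries.google.com endpoint and its client=firefox JSON response shape (both assumptions; the actual API may work differently):

    # Sketch of querying Google's unofficial suggest endpoint
    # (an assumption about the common approach; not this project's code).
    import json
    import urllib.parse
    import urllib.request

    def google_autocomplete(query: str) -> list[str]:
        # client=firefox is informally documented to return JSON of the form
        # ["query", ["suggestion 1", "suggestion 2", ...]]
        params = urllib.parse.urlencode({"client": "firefox", "q": query})
        url = f"https://suggestqueries.google.com/complete/search?{params}"
        with urllib.request.urlopen(url, timeout=5) as resp:
            payload = json.loads(resp.read().decode("utf-8"))
        return list(payload[1])

    if __name__ == "__main__":
        print(google_autocomplete("hacker news"))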
Show HN: Faster-than-light ping (April Fools')
A server that replies to your pings faster than ever! Now with IPv6 support. Give it a minute and the response time will go down tremendously:

    ping ftlping.net

On Windows, add the option that keeps it from stopping after 4 pings:

    ping -t ftlping.net

Works best on macOS and FreeBSD. It's not impressive on Windows, but you can see something interesting if you run a packet analyser. Report issues here: https://github.com/plingbang/ftlping/issues. The code may be published when my future self passes it to me.
Show HN: Exponentile – A match 3 game mixed with 2048
Hi HN,

I made this small game in my spare time. It's quite addictive; I spent a lot of hours playing it instead of polishing it. My own high score is 96,792.

Source code is at https://github.com/MikeBellika/tile-game. It's made with React, Tailwind, and Framer Motion. Hope you enjoy!
Show HN: Floro – Visual Version Control
Hi HN, we're building Floro (https://floro.io), an open-source, offline-first visual version control system that allows you to merge and diff graphical content. Demo video here: https://www.youtube.com/watch?v=5fjixBNKUbM.

We were a YC W21 startup (Cheqout) that originally worked on restaurant tech. We ended up pivoting after the pandemic and decided to build Floro because of problems we ran into while working on Cheqout.

We struggled a lot with string content and dark mode, especially when it came to static assets. We were a cross-platform product, so everything was an SVG. What we wanted was a way for our designer (who cannot use a command line) to check static assets into our codebase and manage our color themes without us having to edit the assets manually. We also faced a lot of problems with i18n and keeping translations up to date. Both problems felt similar: they're essentially key-value content that needs to be edited by people who aren't software engineers.

This led us down a path of searching for a way to incorporate something like Dropbox into git. We didn't want a content management system; we wanted something that could diff and merge our static assets without requiring the user to know how to fix conflicts in plain text or binary. We wanted a way to reference something like a snapshot of a tar of our static assets so we could idempotently rebuild our application and add type safety to our assets. Eventually, we found a trick for diffing and merging a certain type of tree structure that fit these problems well. After more experimenting, we figured out how to write an interface for building UIs that can be diffed and display merge conflicts.

To show what visual version control can be applied to, we built four plugins (applications that can be diffed and merged in Floro):

1. Text - This plugin is basically a replacement for i18n strings. It supports rich text, typed variable interpolations, conditional logic (for things like pluralization), links, ML translations, and a plethora of other features. It's also sort of an IDE/TMS for translators and copy editors.

2. Icons - This plugin allows you to recolor an SVG asset so that all of its colors are consistent with your color palette. You can automatically generate dark-mode (and any other themed) versions of your assets, as well as versions that change with state (e.g. hovered) by applying themes to the asset.

3. Themes - This plugin allows you to create themes from your color palette.

4. Palette - This plugin allows you to define colors and shades that can be consumed by other plugins or used in your code for managing your app's colors.

Since Floro is an offline-first desktop application, we realized we could let users test their local content by building a browser extension that overrides the state of their production websites and apps with the local content from Floro. Floro essentially creates a localhost-like environment out of production apps, which allows non-technical users to treat content similarly to how engineers manage code. The demo video (shown above) does a good job of showing how this works. We also have a demo of how this works for mobile apps: https://www.youtube.com/watch?v=Om-k08GDoZ4.

We are fully open source (MIT licensed). We intend to monetize with consulting and private hosting. Users are more than welcome to self-host and build their own distributions of the desktop application and all the plugins we have created.

This is really our launch (anticipate some bugs, but nothing serious; restarting the app takes care of most things). We've now built four applications with Floro (including our website) and feel confident that it's ready. We've spent 18 months getting here, and we hope some of you like it!

Thanks!

Jamie & Jacqueline
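As an aside on the diff/merge idea: the post frames both string content and themed assets as key-value content, and the point of a structured version control tool is that conflicts can be reported per key instead of as plain-text hunks. Here is a minimal sketch of a three-way merge over flat key-value maps, just to illustrate that framing; it is not Floro's actual tree-merging trick, which the post does not describe.

    # Three-way merge over flat key-value content (illustrative of structured
    # diff/merge in general; not Floro's algorithm). Uses None as an "absent"
    # sentinel, which is fine for string-valued content like palettes/strings.
    def three_way_merge(base: dict, ours: dict, theirs: dict):
        merged: dict = {}
        conflicts: dict = {}
        for key in base.keys() | ours.keys() | theirs.keys():
            b, o, t = base.get(key), ours.get(key), theirs.get(key)
            if o == t:                 # both sides agree (including both deleting the key)
                if o is not None:
                    merged[key] = o
            elif o == b:               # only "theirs" changed this key
                if t is not None:
                    merged[key] = t
            elif t == b:               # only "ours" changed this key
                if o is not None:
                    merged[key] = o
            else:                      # both changed it differently: a per-key conflict
                conflicts[key] = (o, t)
        return merged, conflicts

    # Example: one branch edits a color while the other adds a new key.
    base = {"primary": "#336699", "accent": "#ff9900"}
    ours = {"primary": "#2255aa", "accent": "#ff9900"}
    theirs = {"primary": "#336699", "accent": "#ff9900", "warning": "#cc0000"}
    merged, conflicts = three_way_merge(base, ours, theirs)
    # merged contains the edited primary plus the newly added warning; conflicts is empty
    print(merged, conflicts)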
Show HN: OneUptime – open-source Datadog Alternative