The best Hacker News stories from Show from the past day
Latest posts:
I'm 17 and wrote this guide on how CPUs run programs
Show HN: Agentflow – Run Complex LLM Workflows from Simple JSON
So, it feels like this should exist. But I couldn't find it. So I tried to build it.

Agentflow lets you run complex LLM workflows from a simple JSON file. This can be as little as a list of tasks. Tasks can include variables, so you can reuse workflows for different outputs by providing different variable values. They can also include custom functions, so you can go beyond text generation to do anything you want to write a function for.

Someone might say: "Why not just use ChatGPT?" Among other reasons, I'd say that you can't template a workflow with ChatGPT, trigger it with different variable values, easily add in custom functions, or force the use of custom functions for steps in the workflow.

Someone might also say: "Then why not use Auto-GPT or BabyAGI?" Among other reasons, I'd say you can't if you want consistency, because these tools operate autonomously, creating and executing their own tasks. Agentflow, on the other hand, lets you define a step-by-step workflow to give you more control.

I'd like to do more with this, including adding more custom functions, more examples, and more ways to trigger workflows (such as in response to events). But first, I want to make sure I'm not wasting my time! For starters, if something like this already exists, please tell me.
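A workflow file along these lines illustrates the idea of a task list with variables and a custom function step (the field names here are hypothetical, not Agentflow's actual schema):

```json
{
  "name": "summarize-and-translate",
  "variables": ["topic", "language"],
  "tasks": [
    { "prompt": "Write a short article about {topic}." },
    { "prompt": "Translate the previous output into {language}." },
    { "function": "save_to_file", "args": { "path": "output.txt" } }
  ]
}
```

Triggering the same file with different values for `topic` and `language` would reuse the workflow for different outputs.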
Show HN: Travel site made with Midjourney, GPT4 and Svelte
Blueprint for a distributed multi-region IAM with Go and CockroachDB
Show HN: Easyful – A Free Gumroad Alternative
Hi HN,

If you’re selling templates or digital assets online, platforms like Gumroad have a ton of amazing features... but they’re also expensive. It’s not uncommon to be paying 10%, 20%, or even 30% of your revenue just to host and deliver some digital content to customers.

Instead, we think most creators should own their own Stripe account and use a lightweight fulfillment layer to send customers their orders.

So we built Easyful, a platform built on Stripe to email your content to customers when they buy it. And it’s free!

We’ve been using Easyful ourselves for a few months now. Try it out and let us know what you think!
Show HN: Chat with your data using LangChain, Pinecone, and Airbyte
Hi HN,

A few of our team members at Airbyte (and Joe, who killed it!) recently played with building our own internal support chat bot, using Airbyte, LangChain, Pinecone, and OpenAI, that would answer any questions we ask when developing a new connector on Airbyte.

As we prototyped it, we realized that it could be applied to many other use cases and sources of data, so... we created a tutorial that other community members can leverage [http://airbyte.com/tutorials/chat-with-your-data-using-openai-pinecone-airbyte-and-langchain] and the GitHub repo to run it [https://github.com/airbytehq/tutorial-connector-dev-bot]

The tutorial shows:

- How to extract unstructured data from a variety of sources using Airbyte Open Source
- How to load data into a vector database (here Pinecone), preparing the data for LLM usage along the way
- How to integrate a vector database into ChatGPT to ask questions about your proprietary data

I hope some of it is useful, and would love your feedback!
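The retrieval step in a setup like this boils down to nearest-neighbor search over embedding vectors: embed the documents, embed the question, and return the closest matches as context for the LLM. A toy sketch of that idea in plain Python (the hand-made 3-dimensional vectors and document texts are stand-ins for real embeddings and for Pinecone, not the actual LangChain/Pinecone API):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Tiny stand-in "vector store": (text, embedding) pairs.
docs = [
    ("How to build a source connector", [0.9, 0.1, 0.0]),
    ("How to configure a destination",  [0.1, 0.9, 0.0]),
    ("Release notes for v1.2",          [0.0, 0.1, 0.9]),
]

def top_k(query_vec, k=1):
    # Rank stored documents by similarity to the query embedding.
    ranked = sorted(docs, key=lambda d: cosine_similarity(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query whose (pretend) embedding is close to the first document.
print(top_k([0.8, 0.2, 0.1]))  # ['How to build a source connector']
```

A real pipeline swaps the hand-made vectors for an embedding model's output and the list for Pinecone, but the ranking idea is the same.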
Show HN: Medical LLM API on par with Google Med-PaLM 2. 92% USMLE accuracy
Hello HN!

I’d like to share a medical question-answering API which has state-of-the-art performance on the USMLE self-assessment exam. You can try out the MediSearch API in 10 seconds in this Colab [https://tinyurl.com/medisearch-colab], or test the search engine live at MediSearch [https://medisearch.io]. See our API page [https://medisearch.io/api] to apply for access. Some technical highlights:

1. State-of-the-art accuracy of 92% on the USMLE self-assessment exam [https://tinyurl.com/medisearch-eval].
2. Ranked 2nd on the MedQA benchmark, closely trailing Google's Med-PaLM 2 (within ~2%).
3. The API provides article references as an output.

We’d love to hear feedback from this community! We’ve seen people use the MediSearch API to implement a chat bot or search field within their app.

Please note that MediSearch is not 100% accurate. Therefore, a human professional should always make the final decision on the information suggested by MediSearch.

You can use the MediSearch API similarly to the OpenAI API [https://pypi.org/project/medisearch-client/]. If you’d like access to the MediSearch API, please email michal@medisearch.io or visit our API page [https://medisearch.io/api]. Please specify your use case in the application.
Show HN: Doculite – Use SQLite as a Document Database
Hi!

As I was working on a side project, I noticed I wanted to use SQLite like a document database on the server.

So I built DocuLite. DocuLite lets you use SQLite like Firebase Firestore. It's written in TypeScript as an adapter on top of sqlite3 and sqlite.

Reasons:

1) Using an SQL database meant having less flexibility and iterating slower.
2) Alternative, proven document databases only offered client/server support.
3) No network. Having SQLite server-side next to the application is extremely fast.
4) Replicating Firestore's API makes it easy to use.
5) Listeners and real-time updates enhance UX greatly.
6) SQLite is a proven, stable, and well-liked standard, and apparently one of the most deployed software modules right now. (src: https://www.sqlite.org/mostdeployed.html)

What do you think? Feel free to comment with questions, remarks, and thoughts.

Happy to hear them.

Thanks
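The underlying pattern (not DocuLite's actual API, which is Firestore-flavored TypeScript) is simply storing JSON documents in a SQLite table keyed by collection and id. A minimal sketch of that pattern using only the Python standard library:

```python
import json
import sqlite3

# In-memory database for the sketch; a real app would pass a file path.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE docs (collection TEXT, id TEXT, body TEXT, "
    "PRIMARY KEY (collection, id))"
)

def set_doc(collection, doc_id, data):
    # Upsert the document as a JSON blob (SQLite 3.24+ ON CONFLICT syntax).
    db.execute(
        "INSERT INTO docs VALUES (?, ?, ?) "
        "ON CONFLICT (collection, id) DO UPDATE SET body = excluded.body",
        (collection, doc_id, json.dumps(data)),
    )

def get_doc(collection, doc_id):
    row = db.execute(
        "SELECT body FROM docs WHERE collection = ? AND id = ?",
        (collection, doc_id),
    ).fetchone()
    return json.loads(row[0]) if row else None

set_doc("users", "alice", {"name": "Alice", "plan": "pro"})
set_doc("users", "alice", {"name": "Alice", "plan": "free"})  # overwrite
print(get_doc("users", "alice"))  # {'name': 'Alice', 'plan': 'free'}
```

Because the database lives in-process next to the application, every read and write is a local call with no network hop, which is the speed argument made above.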
Show HN: ChainForge, a visual tool for prompt engineering and LLM evaluation
Hi HN! We’ve been working hard on this low-code tool for rapid prompt discovery, robustness testing, and LLM evaluation. We’ve just released documentation to help new users learn how to use it and what it can already do. Let us know what you think! :)
Show HN: Axilla – Open-source TypeScript framework for LLM apps
Hi HN, we are Nick and Ben, creators of Axilla - an open source TypeScript framework to develop LLM applications. It’s in the early stages, but you can use it today: we’ve already published two modules and have more coming soon.

Ben and I met while working at Cruise on the ML platform for self-driving cars. We spent many years there and learned the hard way that shipping AI is not quite the same as shipping regular code. There are many parts of the ML lifecycle, e.g., mining, processing, and labeling data, and training, evaluating, and deploying models. Although none of them are rocket science, most of the inefficiencies tend to come from integrating them together. At Cruise, we built an integrated framework that accelerated the speed of shipping models to the car by 80%.

With the explosion of generative AI, we are seeing software teams building applications and features with the same inefficiencies we experienced at Cruise.

This got us excited about building an opinionated, end-to-end platform. We started building in Python but quickly noticed that most of the teams we talked to weren’t using Python, but instead building in TypeScript. This is because most teams are not training their own models, but rather using foundation models served by third parties over HTTP, like OpenAI, Anthropic, or even OSS ones from Hugging Face.

Because of this, we’ve decided to build Axilla as a TypeScript-first library.

Our goal is to build a modular framework that can be adopted incrementally yet benefits from full integration. For example, the production responses coming from the LLM should be able to be sent, with all necessary metadata, to the eval module or the labeling tooling.

So far, we’ve shipped two modules that are available to use today on npm:

* axgen: focused on RAG-type workflows. Useful if you want to ingest data, get the embeddings, store it in a vector store, and then do similarity search retrieval. It’s how you give LLMs memory or more context about private data sources.

* axeval: a lightweight evaluation library that feels like jest (so, like unit tests). In our experience, evaluation should be really easy to set up, to encourage continuous quality monitoring and slowly build ground-truth datasets of edge cases that can be used for regression testing and fine-tuning.

We are working on a serving module and a data processing one next, and would love to hear what functionality you need us to prioritize!

We built an open-source demo UI for you to discover the framework more: https://github.com/axilla-io/demo-ui

And here's a video of Nicholas walking through the UI that gives an idea of what axgen can do: https://www.loom.com/share/458f9b6679b740f0a5c78a33fffee3dc

We’d love to hear your feedback on the framework: you can let us know here, create an issue on the GitHub repo, or send me an email at nicholas@axilla.io

And of course, contributions welcome!
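The jest-like evaluation idea described above is essentially unit tests over model outputs: each case pairs an input with a check its answer should satisfy, and the suite reports a pass rate. A language-agnostic sketch of that idea in Python (the names and the stubbed model are illustrative, not axeval's API):

```python
# Each case pairs a prompt with a predicate the model's answer should satisfy.
cases = [
    {"prompt": "2 + 2 = ?", "check": lambda answer: "4" in answer},
    {"prompt": "Capital of France?", "check": lambda answer: "paris" in answer.lower()},
]

def fake_model(prompt):
    # Stand-in for a real LLM call, so the sketch runs offline.
    canned = {"2 + 2 = ?": "The answer is 4.", "Capital of France?": "Paris."}
    return canned[prompt]

def run_evals(model, cases):
    # Score every case and return the fraction that passed.
    results = [case["check"](model(case["prompt"])) for case in cases]
    return sum(results) / len(results)

print(run_evals(fake_model, cases))  # 1.0
```

Running the same cases continuously against production responses is what turns this from a one-off test into the quality monitoring and regression-testing loop the post describes.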