The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: Chat with your data using LangChain, Pinecone, and Airbyte

Hi HN,

A few of our team members at Airbyte (and Joe, who killed it!) recently played with building our own internal support chatbot, using Airbyte, LangChain, Pinecone, and OpenAI, that would answer any questions we ask when developing a new connector on Airbyte.

As we prototyped it, we realized that it could be applied to many other use cases and sources of data, so... we created a tutorial that other community members can leverage [http://airbyte.com/tutorials/chat-with-your-data-using-openai-pinecone-airbyte-and-langchain] and a GitHub repo to run it [https://github.com/airbytehq/tutorial-connector-dev-bot].

The tutorial shows:

- How to extract unstructured data from a variety of sources using Airbyte Open Source
- How to load data into a vector database (here Pinecone), preparing the data for LLM usage along the way
- How to integrate a vector database into ChatGPT to ask questions about your proprietary data

I hope some of it is useful, and would love your feedback!
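
To make the tutorial's third step concrete, here is a minimal TypeScript sketch of the retrieval-augmented answering flow it describes, calling the OpenAI and Pinecone HTTP APIs directly rather than reproducing the tutorial's LangChain code. The index host, the environment variable names, and the `text` metadata field are assumptions for illustration, not values from the tutorial.

```typescript
// Hedged sketch of "ask a question over your vectors"; assumes documents were
// already embedded and upserted into Pinecone (e.g., by the Airbyte pipeline).
const OPENAI_KEY = process.env.OPENAI_API_KEY!;
const PINECONE_KEY = process.env.PINECONE_API_KEY!;
const PINECONE_HOST = process.env.PINECONE_HOST!; // e.g. "my-index-xxxx.svc.us-east-1.pinecone.io"

async function answer(question: string): Promise<string> {
  // 1. Embed the question with the same model used at ingestion time.
  const embRes = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ model: "text-embedding-ada-002", input: question }),
  });
  const vector: number[] = (await embRes.json()).data[0].embedding;

  // 2. Similarity search against the Pinecone index.
  const queryRes = await fetch(`https://${PINECONE_HOST}/query`, {
    method: "POST",
    headers: { "Api-Key": PINECONE_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({ vector, topK: 4, includeMetadata: true }),
  });
  const matches: any[] = (await queryRes.json()).matches ?? [];
  const context = matches.map((m) => m.metadata?.text ?? "").join("\n---\n");

  // 3. Ask the chat model to answer using only the retrieved context.
  const chatRes = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [
        { role: "system", content: `Answer using only this context:\n${context}` },
        { role: "user", content: question },
      ],
    }),
  });
  return (await chatRes.json()).choices[0].message.content;
}
```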

Show HN: Medical LLM API on par with Google Med-PaLM 2. 92% USMLE accuracy

Hello HN!

I’d like to share a medical question-answering API with state-of-the-art performance on the USMLE self-assessment exam. You can try out the MediSearch API in 10 seconds in this Colab [https://tinyurl.com/medisearch-colab], or test the search engine live at MediSearch [https://medisearch.io]. See our API page [https://medisearch.io/api] to apply for access.

Some technical highlights:

1. State-of-the-art accuracy of 92% on the USMLE self-assessment exam [https://tinyurl.com/medisearch-eval].
2. Ranked 2nd on the MedQA benchmark, closely trailing Google's Med-PaLM 2 (within ~2%).
3. The API provides article references as part of its output.

We’d love to hear feedback from this community! We’ve seen people use the MediSearch API to implement a chatbot or search field within their app.

Please note that MediSearch is not 100% accurate. A human professional should always make the final decision on any information MediSearch suggests.

You can use the MediSearch API similarly to the OpenAI API [https://pypi.org/project/medisearch-client/]. If you’d like access to the MediSearch API, please email michal@medisearch.io or visit our API page [https://medisearch.io/api], and specify your use case in the application.
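
The post doesn't spell out the wire format here, so the following TypeScript sketch is purely illustrative of how a question-in, answer-plus-references-out API might be called; the endpoint path and field names are guesses, not the documented MediSearch interface (see the Colab and the medisearch-client package for the real one).

```typescript
// HYPOTHETICAL request/response shapes; not the documented MediSearch API.
interface MediSearchAnswer {
  text: string;       // the generated answer
  articles: string[]; // article references, which the post says are returned
}

async function askMediSearch(question: string): Promise<MediSearchAnswer> {
  const res = await fetch("https://api.medisearch.io/ask", { // hypothetical URL
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.MEDISEARCH_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ question }),
  });
  return res.json();
}
```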

Show HN: Doculite – Use SQLite as a Document Database

Hi!

While working on a side project, I noticed I wanted to use SQLite like a document database on the server. So I built DocuLite. DocuLite lets you use SQLite like Firebase Firestore. It's written in TypeScript as an adapter on top of sqlite3 and sqlite.

Reasons:

1) Using an SQL database meant having less flexibility and iterating more slowly.
2) Alternative, proven document databases only offered client/server support.
3) No network. Having SQLite server-side next to the application is extremely fast.
4) Replicating Firestore's API makes it easy to use.
5) Listeners and real-time updates greatly enhance UX.
6) SQLite is a proven, stable, and well-liked standard, and apparently one of the most deployed software modules today (src: https://www.sqlite.org/mostdeployed.html).

What do you think? Feel free to comment with questions, remarks, and thoughts. Happy to hear them.

Thanks
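
To make the idea concrete, here is a minimal sketch of how a Firestore-style document layer can sit on top of the same sqlite3 and sqlite packages the post names. This illustrates the pattern only; it is not DocuLite's actual API.

```typescript
// Sketch: JSON documents in one SQLite table, addressed by collection and id.
import sqlite3 from "sqlite3";
import { open, Database } from "sqlite";

async function openStore(file: string): Promise<Database> {
  const db = await open({ filename: file, driver: sqlite3.Database });
  await db.exec(`CREATE TABLE IF NOT EXISTS documents (
    collection TEXT NOT NULL,
    id         TEXT NOT NULL,
    data       TEXT NOT NULL,  -- the document, serialized as JSON
    PRIMARY KEY (collection, id)
  )`);
  return db;
}

// Firestore-flavored set: insert or overwrite the document at collection/id.
async function setDoc(db: Database, collection: string, id: string, doc: object) {
  await db.run(
    `INSERT INTO documents (collection, id, data) VALUES (?, ?, ?)
     ON CONFLICT(collection, id) DO UPDATE SET data = excluded.data`,
    collection, id, JSON.stringify(doc)
  );
}

// Firestore-flavored get: returns the parsed document, or undefined.
async function getDoc<T>(db: Database, collection: string, id: string): Promise<T | undefined> {
  const row = await db.get(
    `SELECT data FROM documents WHERE collection = ? AND id = ?`,
    collection, id
  );
  return row ? (JSON.parse(row.data) as T) : undefined;
}
```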

Show HN: ChainForge, a visual tool for prompt engineering and LLM evaluation

Hi HN! We’ve been working hard on this low-code tool for rapid prompt discovery, robustness testing, and LLM evaluation. We’ve just released documentation to help new users learn how to use it and what it can already do. Let us know what you think! :)

Show HN: Axilla – Open-source TypeScript framework for LLM apps

Hi HN, we are Nick and Ben, creators of Axilla, an open-source TypeScript framework for developing LLM applications. It’s in the early stages, but you can use it today: we’ve already published two modules and have more coming soon.

Ben and I met while working at Cruise on the ML platform for self-driving cars. We spent many years there and learned the hard way that shipping AI is not quite the same as shipping regular code. There are many parts of the ML lifecycle, e.g., mining, processing, and labeling data, and training, evaluating, and deploying models. Although none of them are rocket science, most of the inefficiencies tend to come from integrating them together. At Cruise, we built an integrated framework that accelerated the speed of shipping models to the car by 80%.

With the explosion of generative AI, we are seeing software teams building applications and features with the same inefficiencies we experienced at Cruise.

This got us excited about building an opinionated, end-to-end platform. We started building in Python but quickly noticed that most of the teams we talked to weren’t using Python; they were building in TypeScript. This is because most teams are not training their own models, but rather using foundation models served by third parties over HTTP, like OpenAI, Anthropic, or even OSS ones from Hugging Face.

Because of this, we’ve decided to build Axilla as a TypeScript-first library.

Our goal is to build a modular framework that can be adopted incrementally yet benefits from full integration. For example, production responses coming from the LLM should be able to be sent, with all necessary metadata, to the eval module or the labeling tooling.

So far, we’ve shipped two modules, available today on npm:

- axgen: focused on RAG-type workflows. Useful if you want to ingest data, get the embeddings, store them in a vector store, and then do similarity-search retrieval. It’s how you give LLMs memory or more context about private data sources.

- axeval: a lightweight evaluation library that feels like Jest (so, like unit tests). In our experience, evaluation should be really easy to set up, to encourage continuous quality monitoring and slowly build ground-truth datasets of edge cases that can be used for regression testing and fine-tuning.

We are working on a serving module and a data processing one next and would love to hear what functionality you need us to prioritize!

We built an open-source demo UI for you to discover the framework: https://github.com/axilla-io/demo-ui

And here's a video of Nicholas walking through the UI that gives an idea of what axgen can do: https://www.loom.com/share/458f9b6679b740f0a5c78a33fffee3dc

We’d love to hear your feedback on the framework. You can let us know here, create an issue on the GitHub repo, or send me an email at nicholas@axilla.io.

And of course, contributions welcome!
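
As a concrete illustration of the "evaluation as unit tests" idea behind axeval, here is a short TypeScript sketch; the types and runner are hypothetical stand-ins, not axeval's real API.

```typescript
// Jest-flavored eval: each case pairs a prompt with an assertion on the model
// output, so a ground-truth dataset doubles as a regression-test suite.
type EvalCase = {
  name: string;
  prompt: string;
  check: (output: string) => boolean;
};

async function runSuite(cases: EvalCase[], model: (prompt: string) => Promise<string>) {
  let passed = 0;
  for (const c of cases) {
    const output = await model(c.prompt);
    const ok = c.check(output);
    console.log(`${ok ? "PASS" : "FAIL"}  ${c.name}`);
    if (ok) passed++;
  }
  console.log(`${passed}/${cases.length} passed`);
}

// Example cases use simple matchers, like unit-test expectations.
const cases: EvalCase[] = [
  {
    name: "refund answer mentions 30 days",
    prompt: "What is our refund window?",
    check: (o) => /30\s*days/i.test(o),
  },
  {
    name: "summary is non-empty",
    prompt: "Summarize the FAQ.",
    check: (o) => o.trim().length > 0,
  },
];

// runSuite(cases, myModel) would print PASS/FAIL per case plus a total.
```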

Show HN: Phind V2 – A GPT-4 agent that’s connected to the internet and your code

Hi HN - Today we’re launching V2 of Phind.com, an assistant for programmers that combines GPT-4 with the ability to search the web and your codebase to solve nearly any technical problem, no matter how difficult.

We’re incredibly grateful for the feedback we received when we first launched GPT-4 answers back in April (https://news.ycombinator.com/item?id=35543668). As Phind has gotten better at complex programming tasks, the questions it gets asked have gotten more complex as well. In the past, we would always perform a web search for every input. This limitation constrained Phind’s answers to what was present in the search results, preventing us from making Phind a more powerful debugger and making it challenging to integrate Phind with your codebase.

We’ve addressed all these shortcomings in Phind V2. This release has three major updates: (1) Phind is now a pair-programming agent that knows when to browse the web, ask clarifying questions, and call itself recursively; (2) the answering engine defaults to GPT-4, and you can use it without a login; (3) we integrate with your codebase via our new VS Code extension.

We realized that search is only one of many tools that Phind should be able to use. As such, Phind has been re-engineered as an agent that can dynamically choose whatever tool best helps the user; it’s now smart enough to decide when to search and when to enter a specialized debug mode. Instead of making assumptions about your code and proceeding blindly, Phind can ask you questions and clarify its assumptions. When a problem requires multiple searches or logical steps to solve, Phind can call itself recursively and perform multi-step reasoning without user input.

We’ve heard from you that switching between your IDE and Phind in the browser has been a major pain point. No longer: we’re launching a VS Code extension that brings Phind into the IDE and finally connects Phind with the context of your codebase. Phind in VS Code automatically determines which parts of your code are relevant to your search and can help you squash bugs in a single click.

To maximize Phind’s alignment with your preferred answer style, we’ve also added a feature called Answer Profile, where you can tell the AI about yourself. Phind will apply this answering style across the board, automatically.

Here are some examples of the new Phind answering questions it could not before:

Clarifying assumptions to help a user with debugging: https://www.phind.com/agent?cache=cljmjzjgn0000jo085otq111f

Designing a highly specific and custom database schema: https://www.phind.com/agent?cache=clkwpprz600g4jt08dl21e7r6

Splitting a WordPress theme across multiple files: https://www.phind.com/agent?cache=clknqywuq001pji083sdacf9p

Phind asking clarification questions in debug mode: https://www.phind.com/agent?cache=cljmjzjgn0000jo085otq111f

The Phind extension answering questions about a local codebase: https://www.phind.com/agent?cache=ra4kh2v3epgv5iw7z6dlzuo4

Answering questions about a local codebase using the web: https://www.phind.com/agent?cache=ztiaju6xwtpi39l2kjdnwh20

We are incredibly grateful for the feedback the HN community has given us and are excited to hear your thoughts about this release!

Cheers, Michael and Justin
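
To make the tool-choosing agent pattern described above concrete, here is a generic TypeScript sketch of such a loop, with stubbed tools; it illustrates the pattern only and is not Phind's implementation.

```typescript
// Each turn, the model picks one action: search, ask the user, recurse on a
// subproblem, or answer. The loop executes it and feeds the result back in.
type Action =
  | { kind: "search"; query: string }
  | { kind: "ask_user"; question: string }
  | { kind: "recurse"; subproblem: string }
  | { kind: "answer"; text: string };

async function agent(task: string, depth = 0): Promise<string> {
  if (depth > 3) return "Recursion limit reached.";
  let scratchpad = `Task: ${task}\n`;
  for (let turn = 0; turn < 8; turn++) {
    const action = await chooseAction(scratchpad); // the LLM picks the next tool
    switch (action.kind) {
      case "search":
        scratchpad += `Search results: ${await webSearch(action.query)}\n`;
        break;
      case "ask_user": // clarify assumptions instead of proceeding blindly
        scratchpad += `User said: ${await askUser(action.question)}\n`;
        break;
      case "recurse": // multi-step reasoning via a fresh sub-agent
        scratchpad += `Subresult: ${await agent(action.subproblem, depth + 1)}\n`;
        break;
      case "answer":
        return action.text;
    }
  }
  return "No answer within the turn budget.";
}

// Stubs so the sketch is self-contained; a real system would use an LLM with
// function calling, a search API, and real user I/O here.
async function chooseAction(scratchpad: string): Promise<Action> {
  return { kind: "answer", text: `(stub) context seen:\n${scratchpad}` };
}
async function webSearch(q: string): Promise<string> { return `(stub results for "${q}")`; }
async function askUser(q: string): Promise<string> { return `(stub reply to "${q}")`; }
```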

Show HN: Api2ai – create an API agent from any OpenAPI Spec

api2ai parses an OpenAPI spec to generate an agent that can make API calls. For context, I recently needed to explore a handful of API suites and thought LLMs could help expedite the process. After some digging, I found OpenAPI specs are a perfect fit for function calling.

Based on a text prompt, api2ai can select the right endpoint, properly parse request params, and make the API call. It also handles authentication; currently it supports basic auth, API keys, and bearer token schemes. The tool has helped me explore and see APIs in action without a deep dive into the docs or using Postman. It’s open source and hopefully can be useful for you, too.

Please let me know if you have any questions or feedback.
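
The core mapping is easy to picture: each OpenAPI operation translates almost one-to-one into an OpenAI function-calling tool definition, which is what lets the model pick an endpoint and fill in its parameters. Below is a minimal TypeScript sketch of that translation; the simplified `OpenApiOperation` shape and the example operation are illustrative, not api2ai's actual code.

```typescript
// Turn one (simplified) OpenAPI operation into a chat-completions "tool".
interface OpenApiOperation {
  operationId: string;
  summary?: string;
  parameters?: { name: string; schema: { type: string }; required?: boolean }[];
}

function toTool(op: OpenApiOperation) {
  const properties: Record<string, { type: string }> = {};
  const required: string[] = [];
  for (const p of op.parameters ?? []) {
    properties[p.name] = { type: p.schema.type };
    if (p.required) required.push(p.name);
  }
  return {
    type: "function",
    function: {
      name: op.operationId,
      description: op.summary ?? "",
      parameters: { type: "object", properties, required },
    },
  };
}

// Example: one spec operation becomes one tool definition that could be
// passed in the `tools` array of a chat-completions request.
const tool = toTool({
  operationId: "getWeather",
  summary: "Get current weather for a city",
  parameters: [{ name: "city", schema: { type: "string" }, required: true }],
});
console.log(JSON.stringify(tool, null, 2));
```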

Show HN: Tarot Arcana—AI tarot card readings

On-device, LLM-generated tarot card readings.

Show HN: Archsense – Accurately generated architecture from the source code