The best "Show HN" stories from Hacker News from the past day
Latest posts:
Show HN: I made a website that makes you cry
Show HN: MoebiusXBIN – ASCII and text-mode art editor with custom font support
Show HN: The Aria Programming Language
Aria is a modern, dynamic scripting language. It is meant to be a "sweet spot" language: easy to pick up and enjoyable to use.

It comes with a familiar C-style syntax and draws inspiration from a variety of languages. It has a small but usable standard library and strives to be a low-ceremony, get-stuff-done kind of language.

It is currently at version 0.9, and I would love feedback as I work towards getting it to 1.0.
Show HN: AgentGuard – Auto-kill AI agents before they burn through your budget
Your AI agent hits an infinite loop and racks up $2,000 in API charges overnight. This happens weekly to AI developers.

AgentGuard monitors API calls in real time and automatically kills your process when it hits your budget limit.

How it works: add two lines to any AI project:

    const agentGuard = require('agent-guard');
    await agentGuard.init({ limit: 50 }); // $50 budget

    // Your existing code runs unchanged
    const response = await openai.chat.completions.create({...});
    // AgentGuard tracks costs automatically

When your code hits $50 in API costs, AgentGuard stops execution and shows you exactly what happened.

Why I built this: I got tired of seeing "I accidentally spent $500 on OpenAI" posts. Existing tools like tokencost help you *measure* costs after the fact, but nothing prevents runaway spending in real time.

AgentGuard is essentially a circuit breaker for AI API costs. It's saved me from several costly bugs during development.

Limitations: it currently works only with the OpenAI and Anthropic APIs, and cost calculations are estimates based on documented pricing.

Source: https://github.com/dipampaul17/AgentGuard

Install: npm i agent-guard
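As a concept illustration of that circuit-breaker idea, here is a minimal sketch in Python. This is not AgentGuard's implementation (AgentGuard is a JavaScript package), and the flat per-token price is an assumed stand-in for real per-model pricing:

    # Concept sketch only, not AgentGuard's implementation. Assumes a flat
    # per-token price; real pricing varies by model and provider.
    PRICE_PER_1K_TOKENS = 0.002  # assumed illustrative rate, USD

    class BudgetExceeded(RuntimeError):
        pass

    class BudgetGuard:
        def __init__(self, limit_usd):
            self.limit_usd = limit_usd
            self.spent_usd = 0.0

        def track(self, call):
            """Wrap an API call so every invocation is metered."""
            def wrapped(*args, **kwargs):
                if self.spent_usd >= self.limit_usd:
                    raise BudgetExceeded(f"budget of ${self.limit_usd} reached")
                response = call(*args, **kwargs)
                tokens = response.usage.total_tokens  # OpenAI-style usage field
                self.spent_usd += tokens / 1000 * PRICE_PER_1K_TOKENS
                return response
            return wrapped

An agent loop would then invoke guard.track(client.chat.completions.create)(...) instead of the raw method, so a runaway loop halts with BudgetExceeded instead of billing all night. (The check runs before each call, so the final call can overshoot the limit slightly.)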
Show HN: Sourcebot – Self-hosted Perplexity for your codebase
Hi HN,

We're Brendan and Michael, the creators of Sourcebot (https://www.sourcebot.dev/), a self-hosted code understanding tool for large codebases. We originally launched on HN 9 months ago with code search (https://news.ycombinator.com/item?id=41711032), and we're excited to share our newest feature: Ask Sourcebot.

Ask Sourcebot is an agentic search tool that lets you ask complex questions about your entire codebase in natural language, and returns a structured response with inline citations back to your code. Some types of questions you might ask:

- "How does authentication work in this codebase? What library is being used? What providers can a user log in with?" (https://demo.sourcebot.dev/~/chat/cmdpjkrbw000bnn7s8of2dm11)

- "When should I use channels vs. mutexes in Go? Find real usages of both and include them in your answer" (https://demo.sourcebot.dev/~/chat/cmdpiuqhu000bpg7s9hprio4w)

- "How are shards laid out in memory in the Zoekt code search engine?" (https://demo.sourcebot.dev/~/chat/cmdm9nkck000bod7sqy7c1efb)

- "How do I call C from Rust?" (https://demo.sourcebot.dev/~/chat/cmdpjy06g000pnn7ssf4nk60k)

You can try it yourself on our demo site (https://demo.sourcebot.dev/~) or check out our demo video (https://youtu.be/olc2lyUeB-Q).

How is this any different from existing tools like Cursor or Claude Code?

- Sourcebot focuses solely on *code understanding*. We believe that, more than ever, the main bottleneck development teams face is not writing code, it's acquiring the necessary context to make quality changes that are cohesive within the wider codebase. This is true regardless of whether the author is a human or an LLM.

- As opposed to living in your IDE or terminal, Sourcebot is a web app. This allows us to play to the strengths of the web: rich UX and ubiquitous access. We put a ton of work into taking the best parts of IDEs (code navigation, file explorer, syntax highlighting) and packaging them with a custom UX (rich Markdown rendering, inline citations, @ mentions) that is easily shareable between team members.

- Sourcebot can maintain an up-to-date index of thousands of repos hosted on GitHub, GitLab, Bitbucket, Gerrit, and other hosts. This allows you to ask questions about repositories without checking them out locally, which is especially helpful when ramping up on unfamiliar parts of the codebase or working with systems that are spread across multiple repositories, e.g., microservices.

- You can BYOK (Bring Your Own API Key) to any supported reasoning model. We currently support 11 different model providers (like Amazon Bedrock and Google Vertex), and plan to add more.

- Sourcebot is self-hosted, fair source, and free to use.

Under the hood, we expose our existing regular expression search, code navigation, and file reading APIs to an LLM as tool calls. We instruct the LLM via a system prompt to gather the necessary context via these tools to sufficiently answer the user's question, and then to provide a concise, structured response. This includes inline citations, which are just structured data that the LLM can embed into its response and that can then be identified on the client and rendered appropriately. We built this on some amazing libraries like the Vercel AI SDK v5, CodeMirror, react-markdown, and Slate.js, among others.

This architecture is intentionally simple. We decided not to introduce additional techniques like vector embeddings or multi-agent graphs, since we wanted to push the limits of what we could do with what we had on hand. We plan on revisiting our approach as we get user feedback on what works (and what doesn't).

We are really excited about pushing the envelope of code understanding. Give it a try: https://github.com/sourcebot-dev/sourcebot. Cheers!
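The post describes that agent loop concretely enough to sketch. Here is a hedged Python illustration of the tool-call pattern using OpenAI-style function calling; Sourcebot's actual implementation is TypeScript on the Vercel AI SDK v5, and regex_search below is a hypothetical stand-in for their search backend:

    # Illustration of the tool-call pattern described above; not Sourcebot's
    # code. regex_search is a hypothetical stand-in for their search API.
    import json
    from openai import OpenAI

    def regex_search(pattern: str, repo: str) -> list:
        # Stand-in: a real backend would query the code index here.
        return []

    TOOLS = [{
        "type": "function",
        "function": {
            "name": "regex_search",
            "description": "Search the indexed codebase with a regular expression.",
            "parameters": {
                "type": "object",
                "properties": {"pattern": {"type": "string"},
                               "repo": {"type": "string"}},
                "required": ["pattern", "repo"],
            },
        },
    }]

    def answer(question: str) -> str:
        """Let the model request searches, feed results back, return its answer."""
        client = OpenAI()
        messages = [
            {"role": "system",
             "content": "Gather context with the tools, then answer with citations."},
            {"role": "user", "content": question},
        ]
        while True:
            reply = client.chat.completions.create(
                model="gpt-4o", messages=messages, tools=TOOLS)
            msg = reply.choices[0].message
            if not msg.tool_calls:
                return msg.content
            messages.append(msg)
            for call in msg.tool_calls:
                result = regex_search(**json.loads(call.function.arguments))
                messages.append({"role": "tool", "tool_call_id": call.id,
                                 "content": json.dumps(result)})

The inline citations the post mentions would ride along as structured markers inside msg.content, which the client then resolves against the index and renders.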
Show HN: AgentMail – Email infra for AI agents
Hey HN, we're Haakam, Michael, and Adi. We're building AgentMail (https://agentmail.to/), an API to give AI agents their own email inboxes. We're not talking about AI for your email; this is email for your AI.

We started building email agents because they can converse with users in their inboxes, automate email-based workflows, and authenticate with third-party applications. Given these unique capabilities, we think email will be a core interface for agents.

But we were building on top of Gmail, which was a struggle: poor API support, expensive subscriptions, rate limits, sending limits, GCP Pub/Sub, OAuth, crappy keyword search, and an overall terrible developer experience.

Gmail and other providers didn't work for us, so we decided to bite the bullet and build our own.

AgentMail is like Gmail, but API-first, with programmatic inbox creation, events over webhooks and WebSockets, simple API-key auth, organization-wide semantic search, structured data extraction, and usage-based pricing that scales with emails sent and received.

Here's a demo of building an email agent: https://youtu.be/1V7BISeFUTM, and here's a demo of a voice agent with its own email inbox: https://youtu.be/eG2fCsRK4RY

So far AgentMail has been deployed for use cases such as apps with dedicated inboxes for each user, voice agents that receive documents in real time, automated account provisioning and QA testing, cold outbound platforms with thousands of inboxes, automations for processing invoices, and agents that coordinate work with humans and other agents.

We would love to hear your thoughts and feedback. You can try our playground at https://chat.agentmail.to
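To make "API-first" concrete, here is a purely hypothetical sketch of such a flow. The base URL, endpoint paths, and field names are invented for illustration and are not AgentMail's documented API:

    # Purely hypothetical sketch; base URL, paths, and fields are invented
    # for illustration and are not AgentMail's documented API.
    import requests

    API = "https://api.agentmail.to/v0"  # assumed base URL
    HEADERS = {"Authorization": "Bearer <API_KEY>"}

    # Programmatically create a dedicated inbox for one agent.
    inbox = requests.post(f"{API}/inboxes",
                          json={"username": "support-agent"},
                          headers=HEADERS).json()

    # Send a message from that inbox.
    requests.post(f"{API}/inboxes/{inbox['id']}/messages",
                  json={"to": "customer@example.com",
                        "subject": "Your invoice",
                        "text": "Processed, see attached."},
                  headers=HEADERS)

Incoming mail would then arrive as webhook or WebSocket events rather than polling, per the post's description.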
Show HN: Mcp-use – Connect any LLM to any MCP
Hey, Pietro and Luigi here; we are the authors of mcp-use (https://github.com/mcp-use/mcp-use).

When the first MCP servers came out we were very excited about the technology, but as soon as we wanted to get our hands dirty, we found out that MCP could only be used through Claude Desktop or Cursor. As engineers, we did not like that. MCP seemed like something you would want to use to build products and applications yourself, not something to hide behind a closed-source application.

So we approached the SDK, but were pretty dissatisfied with the developer experience (double async loops, lots of boilerplate). We decided to write mcp-use to make our lives easier.

mcp-use lets you connect any LLM to any MCP server in just 6 lines of code. We provide a high-level abstraction over the official MCP SDK that makes your life easier and supports all the functionality of the protocol.

Demo video here: https://www.youtube.com/watch?v=nL_B6LZAsp4

The key abstractions we provide are called MCPClient and MCPAgent.

MCPClient takes in a set of server configurations, automatically detects the transport type, and creates a background task which handles the stream from/to the server.

MCPAgent is a combination of the MCPClient, an LLM, and a custom system prompt. It consumes the MCP client by transforming the tools, resources, and prompts into model-agnostic tools that can be called by the LLM.

The library also contains some cool utilities:

- secure sandboxed execution of MCP servers (we know the protocol doesn't shine for security)

- meta-tools that allow the agent to search over available servers and tools (to avoid context flooding) and connect dynamically to the server it needs (you could create the omnipotent agent with this)

Some cool things we did with this:

- wrote an agent that can use a browser and create/read Linear tickets updated with the latest information from the internet

- wrote an agent that has access to our company's metrics to automatically create weekly reports

- connected an agent to an IKEA curtain I hacked an MCP server onto, to adapt the lighting of my room from images of the lighting situation

- recreated an open-source, Claude Code-style CLI with full MCP capability but with custom models and BYOK (https://github.com/mcp-use/mcp-use-cli)

We recently crossed 100,000 downloads and we are used by many organizations, including NASA!

We'd love to hear what you think of it, and most importantly how we can improve it! We are happy to answer any questions and look forward to your comments.
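For flavor, here is roughly what that six-line flow looks like in Python. MCPClient and MCPAgent are the abstractions described in the post; the from_dict constructor and the LangChain ChatOpenAI wrapper are recollections of the project's README and may have drifted:

    # Roughly the six-line flow from the post. from_dict and the LangChain
    # wrapper are assumptions recalled from the README, not verified here.
    import asyncio
    from langchain_openai import ChatOpenAI
    from mcp_use import MCPAgent, MCPClient

    async def main():
        config = {"mcpServers": {"playwright": {
            "command": "npx", "args": ["@playwright/mcp@latest"]}}}
        client = MCPClient.from_dict(config)   # assumed constructor name
        agent = MCPAgent(llm=ChatOpenAI(model="gpt-4o"), client=client)
        print(await agent.run("Open example.com and summarize the page"))

    asyncio.run(main())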
Show HN: A high-altitude low-power flight computer for high-altitude balloons
I've been working on this for a while now, and I'm happy to share!

I've been into launching weather balloons for a few years. One aspect of the hobby that really drew me in was the tracking side of things. Tracking systems let you follow the balloon's position throughout the flight and, most importantly, know exactly where it lands so you can recover the instrumentation. APRS is what I started out using during my first few years in the hobby, after I got my amateur radio license in 2020 (W0MXX). I designed a few small boards using the trackuino firmware (https://github.com/trackuino/trackuino), breaking three $70 radio modules along the way.

I then got into recovering radiosondes, which are launched twice per day by the NWS and can be reprogrammed using RS41ng (https://github.com/mikaelnousiainen/RS41ng) to run many amateur radio tracking protocols. I was a bit dissatisfied with how large and heavy the radiosonde trackers were, so I designed my own tracking system, called Tiny4FSK.

Tiny4FSK is a flight computer with built-in tracking using the Horus Binary v2 tracking system. This protocol was developed by the Project Horus team specifically for high-altitude balloons, and it brings features like high transmit rates, forward error correction, and excellent weak-signal performance in an open-source package. It's designed to be as compact as possible and can run on a single AA battery for upwards of 17 hours.

The main board comes with header rows that allow for out-of-the-box expansion. I developed a shield that supports the BME280 environmental sensor, the ICM-20948 9-axis IMU, and more via the Qwiic connector. It also features an OLED display for basic diagnostics.

While I've pretty much polished the main tracking procedures (and have tested them on multiple flights), I'm still developing the IMU code using a lightweight Kalman filter (a toy sketch of that idea follows at the end of this post). Additionally, there isn't yet a wide network of Horus Binary decoding stations like the APRS network has (I-gates), but I hope that by promoting this protocol, more stations will pop up. This means that if you're not in an area with many receive stations, you'll need to set up your own using either Horus-GUI (https://github.com/projecthorus/horus-gui) or horusdemodlib (https://github.com/projecthorus/horusdemodlib).

One issue I'm still working on is improving RF signal strength. Although the protocol is decodable in very low-noise environments, the transmit power appears to be lower than that of a typical radiosonde. This could be due to several factors: limited current from a weak power source (the signal is stronger when powered from a bench supply), off-tuned filtering/matching, or not paying enough attention to the antenna. I'm planning to run more simulations to figure this out. That said, the signal is still decodable from the ground even at max altitude (~100,000 feet).

On the more technical side, Tiny4FSK uses:
- the SAMD21 microcontroller, an ARM Cortex-M0+ MCU
- the TPS61200 boost converter, adjusted to output 3.3 V
- the Si4063 radio module, which I use on the 70 cm band
- the ATGM336H GPS module, a pretty cheap GPS module that works in airborne mode (>18 km)
- an integrated BME280 temperature, pressure, and humidity sensor
The code uses the Arduino framework to make it accessible to beginners.

All flights using Horus Binary v2, including reprogrammed radiosondes, other custom trackers, and Tiny4FSK, show up in real time on SondeHub Amateur (https://amateur.sondehub.org). Flight data can be found in the /Media/Data folder on GitHub (several flights are missing from there, though).

Thanks for reading, hope I didn't mess anything up too badly in the post!
-Max
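On the "lightweight Kalman filter" mentioned above: in this context that often means a scalar filter, one instance per axis. A toy Python sketch of the idea, illustrative only (Tiny4FSK's firmware is Arduino C++, not this code):

    # Toy scalar Kalman filter (random-walk model), one instance per axis.
    # Illustrative only; not Tiny4FSK's firmware.
    class ScalarKalman:
        def __init__(self, q=1e-3, r=1e-1):
            self.q, self.r = q, r      # process / measurement noise variances
            self.x, self.p = 0.0, 1.0  # state estimate and its variance

        def update(self, z):
            self.p += self.q                 # predict: uncertainty grows
            k = self.p / (self.p + self.r)   # Kalman gain
            self.x += k * (z - self.x)       # correct toward measurement z
            self.p *= 1.0 - k                # uncertainty shrinks
            return self.x

Feeding each raw IMU channel through update() smooths sensor noise at a fraction of the cost of a full multi-state filter, which matters on a Cortex-M0+ running from a single AA cell.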
Show HN: An AI agent that learns your product and guides your users
Hey HN! My name is Christian, and I'm the co-founder of https://frigade.ai. We've built an AI agent that automatically learns how to use any web-based product, and in turn guides users directly in the UI, automatically generates documentation, and even takes actions on a user's behalf. Think of it as Clippy from the old MS Office. But on steroids. And actually helpful.

You can see the agent and tool-calling SDK in action here: https://www.youtube.com/watch?v=UPe0t3A1Vpg

How is this different from other AI customer support products?

Most AI "copilots" are really just glorified chatbots. They skim your help center and spit out some nonspecific bullet points: some "hopes and prayers" that your users will figure it out. Ultimately, this puts the burden on the user to follow through, and it assumes companies are keeping their help center up to date with every product change, which means constant screenshots of new product UI or features to keep instructions accurate. These solutions leverage only a fraction of what's possible with AI, which can now reason about software interfaces extensively.

With Frigade AI, we guide the user directly in the product and build on-demand tours based on the current user's state and context. The agents can also take actions immediately on a user's behalf, e.g. inviting a colleague to a workspace or retrieving billing information (via our tool-calling SDK).

This was only made possible recently. The latest frontier models (GPT-4.1, Claude 4, Gemini 2.5, etc.) are able to reason about UIs and workflows in a way that simply didn't work just 6 months ago. That's why we're so excited to bring this technology to the forefront of complex legacy SaaS applications that are not yet AI-enabled.

How does it work?

1. Invite agent@frigade.ai to your product. You can send multiple invitations based on distinct roles.

2. Our agent automatically explores and reasons about your application.

3. Attach any existing help center resources or training documentation to supplement the agent's understanding. Totally optional.

4. Install the agent assistant JavaScript snippet (just a few lines).

5. That's it. Your users can now start asking questions and get on-demand product tours and answers in real time without any overhead.

This process takes only a few minutes. Once it's running, you can improve the agent by rating and giving feedback on the responses it provides. If you want to integrate further, you can also hook your own code up to our tool-calling SDK to enable the agent to look up customer info, issue refunds, etc. directly. These calls can be made with just a few lines of code by describing the tool and its parameters in natural language and passing a single JavaScript promise (e.g. make an API call, call a function in your app, etc.).

Would love to hear what the HN crowd thinks about this approach! Are you building your own AI agent from scratch, or looking to embed one off the shelf?
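As a language-agnostic sketch of the tool-registration shape described above (Frigade's real SDK is JavaScript, and every name below is invented for illustration):

    # Hypothetical, language-agnostic shape of such a tool-calling SDK;
    # Frigade's real SDK is JavaScript, and every name here is invented.
    registered_tools = {}

    def register_tool(name, description, handler):
        """Describe a tool in natural language and hand over one async
        callable; the agent decides when to invoke it."""
        registered_tools[name] = {"description": description, "handler": handler}

    async def lookup_billing(user_id):
        ...  # e.g., call your own billing API and return plan/invoice info

    register_tool(
        "lookup_billing",
        "Fetches the current user's plan and latest invoice; takes the "
        "user's ID as its only parameter.",
        lookup_billing,
    )

The appeal of this shape is that the integration surface stays tiny: the host app supplies one described callable per capability, and the agent handles when and why to call it.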