The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: FlakeUI
Show HN: Tangled – Git collaboration platform built on atproto
Show HN: Sonauto API – Generative music for developers
Hello again HN,<p>Since our launch ten months ago, my cofounder and I have continued to improve our music model significantly. You can listen to some cool Staff Picks songs from the latest version here <a href="https://sonauto.ai/">https://sonauto.ai/</a>, listen to an a cappella song I made for my housemate here <a href="https://sonauto.ai/song/8a20210c-563e-491b-bb11-f8c6db92ee9b">https://sonauto.ai/song/8a20210c-563e-491b-bb11-f8c6db92ee9b</a>, or try the free and unlimited generations yourself.<p>However, given there are only two of us right now competing in the "best model and average user UI" race, we haven't had the time to build some of the really neat ideas our users and pro musicians have been dreaming up (e.g., DAW plugins, live performance transition generators, etc.). The hacker musician community has a rich history of taking new tech and doing really cool and unexpected stuff with it, too.<p>As such, we're opening up an API that gives full access to the features of our underlying diffusion model (e.g., generation, inpainting, extensions, transition generation, inverse sampling).
Here are some things our early test users are already doing with it:<p>- A cool singing-to-video model by our friends at Lemon Slice: <a href="https://x.com/LemonSliceAI/status/1894084856889430147" rel="nofollow">https://x.com/LemonSliceAI/status/1894084856889430147</a> (try it yourself here <a href="https://lemonslice.com/studio">https://lemonslice.com/studio</a>)<p>- Open source wrapper written by one of our musician users: <a href="https://github.com/OlaFosheimGrostad/networkmusic">https://github.com/OlaFosheimGrostad/networkmusic</a><p>- You can also play with all the API features via our consumer UI here: <a href="https://sonauto.ai/create">https://sonauto.ai/create</a><p>We also have some examples written in Python here: <a href="https://github.com/Sonauto/sonauto-api-examples">https://github.com/Sonauto/sonauto-api-examples</a><p>- Generate a rock song: <a href="https://github.com/Sonauto/sonauto-api-examples/blob/main/rock_song_generator.py">https://github.com/Sonauto/sonauto-api-examples/blob/main/ro...</a><p>- Download two songs from YouTube (e.g., Smash Mouth to Rick Astley) and generate a transition between them: <a href="https://github.com/Sonauto/sonauto-api-examples/blob/main/transition_generator.py">https://github.com/Sonauto/sonauto-api-examples/blob/main/tr...</a><p>- Generate a singing telegram video (powered by ours and also Lemon Slice's API): <a href="https://github.com/Sonauto/sonauto-api-examples/blob/main/singing_telegram.py">https://github.com/Sonauto/sonauto-api-examples/blob/main/si...</a><p>You can check out the full docs/get your key here: <a href="https://sonauto.ai/developers">https://sonauto.ai/developers</a><p>We'd love to hear what you think, and are open to answering any tech questions about our model too! It's still a latent diffusion model, but much larger and with a much better GAN decoder.
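As a rough sketch of what driving such an API from Python might look like (the endpoint path, field names, and `SONAUTO_API_URL` below are illustrative assumptions, not the documented interface; see https://sonauto.ai/developers for the real schema):

```python
import json

# Illustrative endpoint, not the documented one; check the official docs.
SONAUTO_API_URL = "https://api.sonauto.ai/v1/generations"

def build_generation_request(prompt, tags=None, duration_s=30):
    """Assemble the JSON body for a hypothetical song-generation request.

    Field names here are guesses for illustration; consult
    https://sonauto.ai/developers for the actual request schema.
    """
    body = {
        "prompt": prompt,
        "tags": tags or [],
        "duration_seconds": duration_s,
    }
    return json.dumps(body)

if __name__ == "__main__":
    payload = build_generation_request("upbeat rock song", tags=["rock"])
    print(payload)
    # To send it, POST the payload with your API key, e.g. via urllib.request
    # with an "Authorization: Bearer <key>" header.
```

The same request-building pattern extends to the other model features (inpainting, extensions, transitions) by swapping the body fields.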
Show HN: Knowledge graph of restaurants and chefs, built using LLMs
Hi HN!<p>My latest side project is a knowledge graph that maps the French culinary network using data extracted from restaurant reviews from LeFooding.com. The project uses LLMs to extract structured information from unstructured text.<p>Some technical aspects you may be interested in:<p>- Used structured generation to reliably parse unstructured text into a consistent schema<p>- Tested multiple models (Mistral-7B-v0.3, Llama3.2-3B, gpt4o-mini) for information extraction<p>- Created an interactive visualization using gephi-lite and Retina (WebGL)<p>- Built (with Claude) a simple Flask web app to clean and deduplicate the data<p>- Total cost of running inference on 2000 reviews with gpt4o-mini: less than 1€!<p>You can explore the "Interactive Culinary Network" visualization here: <a href="https://ouestware.gitlab.io/retina/1.0.0-beta.4/#/graph/?url=https://gist.githubusercontent.com/theophilec/351f17ece36477bc48438d5ec6d14b5a/raw/fa85a89541c953e8f00d6774fe42f8c4bd30fa47/graph.gexf&r=x&sa=re&ca[]=t&ca[]=ra-s&st[]=u&st[]=re&ed=u" rel="nofollow">https://ouestware.gitlab.io/retina/1.0.0-beta.4/#/graph/?url...</a><p>The code for the project is available on GitHub:
- Main project: <a href="https://github.com/theophilec/foudinge">https://github.com/theophilec/foudinge</a>
- Data cleaning tool: <a href="https://github.com/theophilec/foudinge-scrub">https://github.com/theophilec/foudinge-scrub</a><p>Happy to get feedback!
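The structured-generation step can be sketched roughly like this: the LLM is asked to emit JSON matching a fixed schema, which is validated before it enters the graph. The field names below are illustrative guesses, not the project's actual schema:

```python
import json
from dataclasses import dataclass, field

@dataclass
class ReviewExtraction:
    """Target schema for one review (field names are illustrative)."""
    restaurant: str
    city: str
    chefs: list = field(default_factory=list)

def parse_llm_output(raw: str) -> ReviewExtraction:
    """Validate the model's JSON output against the schema.

    Raises KeyError/JSONDecodeError on malformed output, so bad
    records never leak into the knowledge graph.
    """
    data = json.loads(raw)
    return ReviewExtraction(
        restaurant=data["restaurant"],
        city=data["city"],
        chefs=list(data.get("chefs", [])),
    )

# Example: a well-formed model response for one review
record = parse_llm_output(
    '{"restaurant": "Le Comptoir", "city": "Paris", "chefs": ["A. Dupont"]}'
)
```

Constrained decoding libraries push this further by forcing the model to emit only schema-valid JSON, rather than validating after the fact.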
Show HN: Agents.json – OpenAPI Specification for LLMs
Hey HN, we’re building an open specification that lets agents discover and invoke APIs with natural language, built on the OpenAPI standard. agents.json clearly defines the contract between LLMs and APIs as a standard that's open, observable, and replicable.
Here’s a walkthrough of how it works: <a href="https://youtu.be/kby2Wdt2Dtk?si=59xGCDy48Zzwr7ND" rel="nofollow">https://youtu.be/kby2Wdt2Dtk?si=59xGCDy48Zzwr7ND</a>.<p>There are two parts to this:<p>1. An agents.json file describes how to link API calls together into outcome-based tools for LLMs. This file sits alongside an OpenAPI file.<p>2. The agents.json SDK loads agents.json files as LLM tools, which are then executed as a series of API calls.<p>Why is this worth building?
Developers are realizing that to use tools with their LLMs in a stateless way, they have to implement an API manually to work with LLMs. We see devs sacrifice agentic, non-deterministic behavior for hard-coded workflows to create outcomes that can work. agents.json lets LLMs be non-deterministic for the outcomes they want to achieve and deterministic for the API calls it takes to get there.<p>We’ve put together some real examples if you're curious what the final output looks like. Under the hood, these LLMs have the same system prompt and we plug in a different agents.json to give access to different APIs. It’s all templatized.<p>- Resend (<a href="https://demo.wild-card.ai/resend">https://demo.wild-card.ai/resend</a>)<p>- Google Sheets (<a href="https://demo.wild-card.ai/googlesheets">https://demo.wild-card.ai/googlesheets</a>)<p>- Slack (<a href="https://demo.wild-card.ai/slack">https://demo.wild-card.ai/slack</a>)<p>- Stripe (<a href="https://demo.wild-card.ai/stripe">https://demo.wild-card.ai/stripe</a>)<p>We really wanted to solve real production use cases, and knew this couldn’t just be a proxy. Our approach allows you to make API calls from your own infrastructure. The open-source specification + runner package make this paradigm possible. Agents.json is truly stateless; the client manages all memory/state and it can be deployed on existing infra like serverless environments.<p>You might be wondering - <i>isn’t OpenAPI enough?</i> Why can’t I just put that in the LLM’s context?<p>We thought so too, at first, when building an agent with access to Gmail. But putting the API spec into LLM context gave us poor accuracy in tool selection and in tool calling. Even with cutting down our output space to 5-10 endpoints, we’d see the LLMs fail to select the right tool. 
We wanted the LLM to just work given an outcome rather than having it reason each time about which series of API calls to make.<p>The Gmail API, for example, has endpoints to search for threads, list the emails in a thread, and reply with an email given base64 RFC 822 content. All that has to happen in order, with the right arguments, for our agent to reply to a thread. We found that APIs are designed for developers, not for LLMs.<p>So we implemented agents.json. It started off as a config file we were using internally that we slowly started adding features to, like auth registration, tool search, and multiple API sources. Three weeks ago, Dharmesh (CTO of HubSpot) posted about the concept of a specification that could translate APIs for LLMs. It sounded a lot like what we already had working internally, and we decided to make it open source. We built agents.json for ourselves, but we’re excited to share it.<p>In the weeks since we put it out there, agents.json has 10 vetted API integrations (some of them official) and more are being added every day. We recently made the tool search and custom collection platform free for everyone so it’s even easier for devs to scale the number of tools. (<a href="https://wild-card.ai">https://wild-card.ai</a>)<p>Please tell us what you think! Especially if you’re building agents or creating APIs!
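To make the Gmail example concrete, a flow in an agents.json file might look something like the following. This is a simplified illustration of the concept, not the exact published schema:

```json
{
  "agentsJson": "0.1.0",
  "info": { "title": "Gmail reply flow", "version": "1.0" },
  "flows": [
    {
      "id": "reply_to_thread",
      "description": "Find a thread matching a query and reply to it",
      "links": [
        { "operationId": "threads.search" },
        { "operationId": "threads.get" },
        { "operationId": "messages.send" }
      ]
    }
  ]
}
```

Each flow bundles an ordered chain of OpenAPI operations behind one outcome-shaped tool, so the LLM selects `reply_to_thread` instead of reasoning over raw endpoints.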
Show HN: Recommendarr – AI Driven Recommendations Based on Sonarr/Radarr Media
Hello HN!<p>I've built a web app that helps you discover new shows and movies you'll actually enjoy by:<p>- Connecting to your Sonarr/Radarr/Plex instances to understand your media library<p>- Leveraging your Plex watch history for personalized recommendations<p>- Using the LLM of your choice to generate intelligent suggestions<p>- Simple setup: Easy integration with your existing media stack<p>- Flexible AI options: Works with OpenAI-compatible APIs like OpenRouter, or run locally via LM Studio, Ollama, etc.<p>- Personalized recommendations: Based on what you actually watch.<p>While it's still a work in progress, it's already quite functional and I'd love your feedback!
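The core of a setup like this is straightforward: collapse the watch history into a prompt and send it to any OpenAI-compatible chat endpoint. A minimal sketch, where the model name is a placeholder you'd point at OpenRouter, Ollama, LM Studio, etc.:

```python
import json

def build_prompt(watched, n=5):
    """Turn a Plex-style watch history into a recommendation prompt."""
    titles = ", ".join(watched)
    return (
        f"I have watched: {titles}. "
        f"Recommend {n} shows or movies I haven't seen, one per line."
    )

def chat_request_body(prompt, model="llama3"):
    """JSON body for an OpenAI-compatible /v1/chat/completions call."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

body = chat_request_body(build_prompt(["Severance", "Dark", "The Expanse"]))
# POST `body` to your provider's /v1/chat/completions with your API key.
```

Because the request shape is the standard chat-completions format, swapping providers is just a matter of changing the base URL and model name.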
Show HN: Robyn – “Batman Inspired” Python Web Framework Built with Rust
Show HN: I built a memory-safe web server in Rust
The web server I'm building is currently in beta, so any feedback is welcome.