The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: Open-source Deep Research across workplace applications

I’ve been using deep research on OpenAI and Perplexity, and it’s been amazing at gathering data across many related, chained searches. Just earlier today, I asked, “What are some marquee tech companies / hot startups (not including giants like FAAMG, Samsung, Nvidia, etc.)?” It’s a fairly involved question, and searching “marquee tech startups” or “hot tech startups” on Google gave me nothing useful. Deep research on both ChatGPT and Perplexity gave really high-quality responses, with ChatGPT leaning toward slightly larger scaleups and Perplexity toward up-and-coming companies.

Given how useful AI research agents are across the internet, we decided to build an open-source equivalent for the workplace, since plenty of questions at work also can’t be resolved with a single search. Onyx supports deep research connected to company applications like Google Drive, Salesforce, SharePoint, GitHub, Slack, and 30+ others.

For example, an engineer may want to know, “What’s happening with the verification email failure?” Onyx’s AI agent first figures out what it needs to answer the question: what caused the failure, what has been done to address it, whether it has come up before, and what the latest status is. The agent then runs parallel searches through Confluence, email, Slack, and GitHub, and combines the answers into a coherent overview. If the agent finds a technical blocker that will delay the resolution, it adjusts mid-flight and researches the blocker for more context.

Here’s a video demo I recorded: https://www.youtube.com/watch?v=drvC0fWG4hE

To get started with the GitHub repo, check out our guides at https://docs.onyx.app. Or, to play with it without deploying anything, go to https://cloud.onyx.app/signup

P.S. There are a lot of cool technical details behind building a system like this, so I’ll continue the conversation in the comments.
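The fan-out pattern described above (decompose a question into sub-questions, search each source in parallel, then merge the results for synthesis) can be sketched roughly as follows. The connector functions here are illustrative stand-ins, not Onyx's actual connector API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-source search functions; a real agent would call
# connector APIs (Confluence, Slack, GitHub, ...) here.
def search_confluence(query):
    return [f"confluence hit for: {query}"]

def search_slack(query):
    return [f"slack hit for: {query}"]

def search_github(query):
    return [f"github hit for: {query}"]

CONNECTORS = [search_confluence, search_slack, search_github]

def research(sub_questions):
    """Run every sub-question against every source in parallel,
    then flatten the hits for a later synthesis step."""
    with ThreadPoolExecutor() as pool:
        futures = [
            pool.submit(connector, q)
            for q in sub_questions
            for connector in CONNECTORS
        ]
        return [hit for f in futures for hit in f.result()]

hits = research([
    "What caused the verification email failure?",
    "What is the latest status on the issue?",
])
```

In a real agent the flattened hits would be fed back to the model, which decides whether to answer or to spawn further sub-questions (the "adjust mid-flight" step).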

Show HN: Time travel debugging AI for more reliable vibe coding

Hi HN, I'm the CEO at https://replay.io. We've been building a time travel debugger for web apps for several years now (previous HN post: https://news.ycombinator.com/item?id=28539247) and are combining our tech with AI to automate the debugging process.

AIs are really good at writing code but really bad at debugging: it's amazing to use Claude to prompt an app into existence, and pretty frustrating when that app doesn't work right and Claude is all thumbs fixing the problem.

The basic reason for this is a lack of context. People can use devtools to understand what's going on in an app, but AIs struggle here. With a recording of the app, its behavior becomes a giant database that can be queried using RAG. We've been giving Claude tools to explore and understand what happens in a Replay recording, from basic things like reading console messages to more advanced analysis of React, control dependencies, and dataflow. For now this is behind a chat API (https://blog.replay.io/the-nut-api).

We recently launched Nut (https://nut.new) as an open-source project that uses this tech for building apps through prompting (vibe coding), similar to https://bolt.new and https://v0.dev. We want Nut to fix bugs effectively (cracking nuts, so to speak) and are working to make it a reliable tool for building complete, production-grade apps.

It's been pretty neat to see Nut fixing bugs that totally stump the AI otherwise. Each of the problems below has a short video, but you can also load the associated project and try it yourself.

- Exception thrown from a catch block unmounts the entire app: https://nut.new/problem/57a0b3d7-42ed-4db0-bc7d-9dfec8e3b3a5

- A settings button doesn't work because its modal component isn't always created: https://nut.new/problem/bae8c208-31a1-4ec1-960f-3afa18514674

- An icon is really tiny due to sizing constraints imposed by other elements: https://nut.new/problem/9bb4e5f6-ea21-4b4c-b969-9e7ff4f00f5b

- Loading doesn't finish due to a problem initializing responsive UI state: https://nut.new/problem/486bc534-0c0e-4b2a-bb64-bfe985e623f4

- Infinite rendering loop caused by a missing useCallback: https://nut.new/problem/496f6944-419d-4f38-91b4-20d2aa698a5e

Nut is completely free. You get some free uses or can add an API key, and we're also offering unlimited free access to folks who can give us feedback we'll use to improve Nut. Email me at hi@replay.io if you're interested.

For now Nut is best suited for building frontends, but we'll be rolling out more full-stack features in the next few weeks. I'd love to know what you think!
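The "recording as a queryable database" idea can be illustrated with a toy retrieval step: score recorded events against a bug description and hand the top matches to the model as context. This is a naive keyword-overlap sketch, not Replay's implementation, which analyzes console output, React state, control dependencies, and dataflow:

```python
def top_events(events, bug_description, k=2):
    """Rank recorded events by word overlap with the bug description;
    a real system would use embeddings and richer runtime signals."""
    query_words = set(bug_description.lower().split())
    scored = [
        (len(query_words & set(e["message"].lower().split())), e)
        for e in events
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [e for score, e in scored[:k] if score > 0]

# A toy stand-in for events captured in a recording.
recording = [
    {"time": 1.2, "message": "warning: modal component not mounted"},
    {"time": 3.4, "message": "GET /api/user 200"},
    {"time": 5.6, "message": "uncaught exception in catch block"},
]

context = top_events(recording, "settings modal component is never created")
```

The retrieved events would then be inlined into the model's prompt, giving it the runtime context that devtools give a human.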

Show HN: Fork of Claude-code working with local and other LLM providers

Show HN: Bayleaf – Building a low-profile wireless split keyboard

Hey HN,

I built Bayleaf, a wireless, split, ultra-low-profile keyboard, from scratch. As a beginner, I learned electronics, PCB building, design for manufacturing, and many other hardware-related skills to put it together.

This case study dives into the build process and, of course, the final result. Hope you enjoy!

Show HN: FlakeUI

Show HN: Tangled – Git collaboration platform built on atproto

Show HN: Sonauto API – Generative music for developers

Hello again HN,

Since our launch ten months ago, my cofounder and I have continued to improve our music model significantly. You can listen to some cool Staff Picks songs from the latest version at https://sonauto.ai/, listen to an acapella song I made for my housemate at https://sonauto.ai/song/8a20210c-563e-491b-bb11-f8c6db92ee9b, or try the free and unlimited generations yourself.

However, given that there are only two of us right now competing in the "best model and average user UI" race, we haven't had time to build some of the really neat ideas our users and pro musicians have been dreaming up (e.g., DAW plugins, live performance transition generators, etc.). The hacker musician community has a rich history of taking new tech and doing really cool and unexpected stuff with it, too.

As such, we're opening up an API that gives full access to the features of our underlying diffusion model (e.g., generation, inpainting, extensions, transition generation, inverse sampling).

Here are some things our early test users are already doing with it:

- A cool singing-to-video model by our friends at Lemon Slice: https://x.com/LemonSliceAI/status/1894084856889430147 (try it yourself at https://lemonslice.com/studio)

- An open-source wrapper written by one of our musician users: https://github.com/OlaFosheimGrostad/networkmusic

- You can also play with all the API features via our consumer UI: https://sonauto.ai/create

We also have some examples written in Python at https://github.com/Sonauto/sonauto-api-examples:

- Generate a rock song: https://github.com/Sonauto/sonauto-api-examples/blob/main/rock_song_generator.py

- Download two songs from YouTube (e.g., Smash Mouth and Rick Astley) and generate a transition between them: https://github.com/Sonauto/sonauto-api-examples/blob/main/transition_generator.py

- Generate a singing telegram video (powered by our API and Lemon Slice's): https://github.com/Sonauto/sonauto-api-examples/blob/main/singing_telegram.py

You can check out the full docs and get your key at https://sonauto.ai/developers

We'd love to hear what you think, and we're open to answering any tech questions about our model too! It's still a latent diffusion model, but much larger and with a much better GAN decoder.
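A typical text-to-music API call boils down to POSTing a JSON payload with a prompt and generation parameters. The sketch below only assembles such a payload; the endpoint URL and field names are hypothetical, so consult the official docs at https://sonauto.ai/developers for the real schema:

```python
import json

# Hypothetical endpoint; the real one is documented at sonauto.ai/developers.
API_URL = "https://api.sonauto.ai/v1/generations"

def build_generation_request(prompt, tags, seconds=30):
    """Assemble a JSON payload for a text-to-music generation call.
    All field names here are illustrative, not the actual API schema."""
    return {
        "prompt": prompt,
        "tags": tags,
        "duration_seconds": seconds,
    }

payload = build_generation_request(
    "an upbeat acapella birthday song",
    tags=["acapella", "pop"],
)
body = json.dumps(payload)

# Sending it would look roughly like (requires an API key):
# requests.post(API_URL, json=payload,
#               headers={"Authorization": f"Bearer {API_KEY}"})
```

The same pattern extends to the other model features (inpainting, extensions, transitions) by swapping the endpoint and payload fields.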

Show HN: Knowledge graph of restaurants and chefs, built using LLMs

Hi HN!

My latest side project is a knowledge graph that maps the French culinary network, using data extracted from restaurant reviews on LeFooding.com. The project uses LLMs to extract structured information from unstructured text.

Some technical aspects you may be interested in:

- Used structured generation to reliably parse unstructured text into a consistent schema

- Tested multiple models (Mistral-7B-v0.3, Llama3.2-3B, gpt-4o-mini) for information extraction

- Created an interactive visualization using gephi-lite and Retina (WebGL)

- Built (with Claude) a simple Flask web app to clean and deduplicate the data

- Total cost for running inference on 2000 reviews with gpt-4o-mini: less than 1€!

You can explore the visualization here: https://ouestware.gitlab.io/retina/1.0.0-beta.4/#/graph/?url=https://gist.githubusercontent.com/theophilec/351f17ece36477bc48438d5ec6d14b5a/raw/fa85a89541c953e8f00d6774fe42f8c4bd30fa47/graph.gexf&r=x&sa=re&ca[]=t&ca[]=ra-s&st[]=u&st[]=re&ed=u

The code is available on GitHub:

- Main project: https://github.com/theophilec/foudinge

- Data cleaning tool: https://github.com/theophilec/foudinge-scrub

Happy to get feedback!
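The structured-generation step above amounts to forcing the model's output into a fixed schema and validating it before it enters the graph. A minimal sketch, with an invented schema and fictional example data (the project's real schema covers restaurants, chefs, and their relationships):

```python
import json
from dataclasses import dataclass

@dataclass
class ReviewExtraction:
    """Illustrative target schema; field names are assumptions,
    not the project's actual schema."""
    restaurant: str
    chef: str
    city: str

def parse_llm_output(raw: str) -> ReviewExtraction:
    """Validate that the model's JSON matches the schema exactly.
    Structured generation constrains decoding to the schema, so in
    practice this parse step rarely fails."""
    data = json.loads(raw)
    return ReviewExtraction(**data)

# Fictional example of what a constrained model response looks like:
llm_response = '{"restaurant": "Chez Exemple", "chef": "A. Dupont", "city": "Paris"}'
record = parse_llm_output(llm_response)
```

Each validated record then becomes nodes (restaurant, chef) and edges in the graph, which is what makes deduplication across 2000 reviews tractable.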
