The best "Show HN" stories from Hacker News from the past day


Latest posts:

Show HN: I built Mailhub – A scalable API for sending emails with ease not tears

Show HN: Nous – Open-Source Agent Framework with Autonomous, SWE Agents, WebUI

Hello HN! The day has finally come to stop adding features and start sharing what I've been building for the last 5-6 months.

It's a bit of CrewAI, OpenDevin, and LangFuse/Cloud all in one, providing devs who prefer TypeScript an integrated framework that provides a lot out of the box to start experimenting and building agents with.

It started after peeking at the LangChain docs a few times and never liking the example code. I began experimenting with automating a simple Jira request from the engineering team to add an index to one of our Google Spanner databases (for context, I'm the DevOps/SRE lead for an AdTech company).

It includes the tooling we're building out to automate processes from a DevOps/SRE perspective, which initially includes a configurable GitLab merge request AI reviewer.

The initial layer above Aider (https://aider.chat/) grew into a coding agent and an autonomous agent with LLM-independent function calling and auto-generated function schemas.

As testing via the CLI became unwieldy, it soon grew database persistence, tracing, a web UI, and human-in-the-loop functionality.

One of the more interesting additions is the new autonomous agent, which generates Python code that can call the available functions. Using the pyodide library, the tool objects are proxied into the Python scope and executed in a WebAssembly sandbox.

Since it can perform multiple calls and validation logic in a single control loop, it reduces cost and latency, getting the most out of frontier LLM calls with better reasoning.

Benchmark runners for the autonomous agent and coding benchmarks are in the works to get some numbers on the capabilities so far. I'm looking forward to getting back to implementing all the ideas around improving the code and autonomous agents from a metacognitive perspective, after spending time recently on docs, refactoring, and tidying up.

Check it out at https://github.com/trafficguard/nous
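
To make the "generated Python in a sandbox" idea concrete, here is a minimal sketch of the kind of script such an agent might emit once its tool functions have been proxied into the Python scope. The function names (query_metrics, summarise, create_jira_comment) are hypothetical illustrations, not the actual Nous API.

    # Illustrative sketch only: the tool functions below are hypothetical
    # host-side tools proxied into the sandboxed Python globals, not Nous's API.
    results = []
    for query in ["error rate last 24h", "p99 latency last 24h"]:
        rows = query_metrics(query)            # hypothetical proxied tool call
        if not rows:
            # validation happens inside the same control loop as the calls
            raise ValueError(f"no data returned for {query!r}")
        results.append(summarise(rows))        # hypothetical proxied tool call
    # several tool calls plus checks ran from one generated script,
    # i.e. a single LLM round-trip instead of one per function call
    create_jira_comment("OPS-123", "\n".join(results))   # hypothetical proxied tool call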

Show HN: Attaching to a virtual GPU over TCP

We developed a tool to trick your computer into thinking it’s attached to a GPU which actually sits across a network. This allows you to switch the number or type of GPUs you’re using with a single command.

Show HN: LLM-aided OCR – Correcting Tesseract OCR errors with LLMs

Almost exactly one year ago, I submitted something to HN about using Llama 2 (which had just come out) to improve the output of Tesseract OCR by correcting obvious OCR errors [0]. That was exciting at the time because OpenAI's API calls were still quite expensive for GPT-4, and the cost of running it on a book-length PDF would just be prohibitive. In contrast, you could run Llama 2 locally on a machine with just a CPU, and it would be extremely slow, but "free" if you had a spare machine lying around.

Well, it's amazing how things have changed since then. Not only have models gotten a lot better, but the latest "low tier" offerings from OpenAI (GPT-4o-mini) and Anthropic (Claude 3 Haiku) are incredibly cheap and incredibly fast. So cheap and fast, in fact, that you can now break the document up into little chunks and submit them to the API concurrently (where each chunk can go through a multi-stage process, in which the output of the first stage is passed into another prompt for the next stage) and assemble it all in a shockingly short amount of time, and for basically a rounding error in terms of cost.

My original project had all sorts of complex stuff for detecting hallucinations and incorrect, spurious additions to the text (like "Here is the corrected text" preambles). But the newer models are already good enough to eliminate most of that stuff. And you can get very impressive results with the multi-stage approach. In this case, the first pass asks the model to correct OCR errors and to remove line breaks in the middle of a word, and things like that. The next stage takes that as its input and asks the model to do things like reformat the text using markdown, suppress page numbers and repeated page headers, etc. Anyway, I think the samples (which take less than 1-2 minutes to generate) show the power of the approach:

Original PDF: https://github.com/Dicklesworthstone/llm_aided_ocr/blob/main/160301289-Warren-Buffett-Katharine-Graham-Letter.pdf

Raw OCR Output: https://github.com/Dicklesworthstone/llm_aided_ocr/blob/main/160301289-Warren-Buffett-Katharine-Graham-Letter__raw_ocr_output.txt

LLM-Corrected Markdown Output: https://github.com/Dicklesworthstone/llm_aided_ocr/blob/main/160301289-Warren-Buffett-Katharine-Graham-Letter_llm_corrected.md

One interesting thing I found was that almost all my attempts to fix/improve things using "classical" methods like regex and other rule-based approaches made everything worse and more brittle. The real improvements came from adjusting the prompts to make things clearer for the model, and from not asking the model to do too much in a single pass (like fixing OCR mistakes AND converting to markdown format).

Anyway, this project is very handy if you have some old scanned books from Archive.org or Google Books that you want to read on a Kindle or other ereader device and want things to be re-flowable and clear. It's still not perfect, but I bet within the next year the models will improve even more, so that it gets closer to 100%. Hope you like it!

[0] https://news.ycombinator.com/item?id=36976333
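
For readers who want the shape of the pipeline rather than the full repo, here is a minimal sketch of the chunked, two-stage approach described above. It is not the project's actual code: call_llm() is a hypothetical stand-in for whatever chat-completion client you use, the prompts are abbreviated, and the chunk size is arbitrary.

    # Minimal sketch of the chunk-then-two-stage idea; not the repo's real code.
    # call_llm(prompt) is a hypothetical helper that returns the model's reply.
    from concurrent.futures import ThreadPoolExecutor

    def correct_chunk(chunk: str) -> str:
        # Stage 1: fix obvious OCR errors and rejoin words split across lines.
        fixed = call_llm(
            "Correct OCR errors and rejoin words broken across line breaks. "
            "Do not add or remove content.\n\n" + chunk)
        # Stage 2: feed stage-1 output back in and reformat as markdown,
        # dropping page numbers and repeated page headers.
        return call_llm(
            "Reformat the following as markdown; suppress page numbers and "
            "repeated headers.\n\n" + fixed)

    def correct_document(raw_ocr_text: str, chunk_size: int = 4000) -> str:
        chunks = [raw_ocr_text[i:i + chunk_size]
                  for i in range(0, len(raw_ocr_text), chunk_size)]
        # Submit chunks concurrently; cheap, fast models make this practical.
        with ThreadPoolExecutor(max_workers=8) as pool:
            corrected = list(pool.map(correct_chunk, chunks))
        return "\n".join(corrected)

Keeping the two prompts separate mirrors the point made above: asking the model to fix OCR errors and convert to markdown in one pass tends to work worse than splitting the job into stages.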

Show HN: The First Non-Smart AI Pendant (NotFriend)

Show HN: Orbit – A CSS radial UI composer framework

Show HN: I built interactive map of active and decommissioned nuclear stations

Hi all,

I am not an expert in nuclear energy, but I've always wondered about, and found it difficult to get a clear picture of, the number of nuclear stations located in a specific region. So I built this tool that shows all the nuclear plants in the world, scaled by their capacity and with an indication of their status. Clustering is enabled by default and lets you see the total potential capacity of a region.

It's a fun tool for me: e.g. disable clustering, scale circle radius to 70%, go to the EU, and you'll see Germany has shut down all of its stations. Of course that's a widely known fact, but what came as a surprise to me is that Poland, Turkey, the Scandinavian countries, and Africa have literally one or no nuclear stations. Which is kind of strange, because some of these regions are modern and well-developed, and Africa specifically has been sourcing lots of nuclear fuel for other countries over the years.

I don't know what to do with it yet, but I think I'll come up with ideas for future improvements, as I believe the nuclear sector will grow drastically.
