The best Show HN stories on Hacker News from the past week
Latest posts:
Show HN: Learn Blender shortcuts with lots of tiny videos
I've used Blender for more than a decade, and now I've been asking myself what the best way to teach it would look like.

Video generally seems to be the best format for explaining how to solve a specific problem, but it's not great for larger collections of small bits of information, like Blender shortcuts.

This is why I made this video/text hybrid website from scratch. If you're a Blender user or have experience teaching others, I'd be very happy to hear your thoughts on it.

I'm also posting this here because I assume many of you are professional web developers. I'm a hobbyist who's still learning, and I'm wondering if there are any issues with the way I built the website.

Thanks!

GitHub: https://github.com/hollisbrown/blendershortcuts
Show HN: High-precision date/time in SQLite
Show HN: I've open sourced DD Poker
I'm the original author of DD Poker, a Java-based computer game that ran on Mac, Linux and Windows and was originally sold in stores in physical boxes.

I shut down the backend servers in 2017, but the game is still functional and people can still play each other online, even though the central lobby and find-a-game functionality no longer work.

I've been asked over the years to release the source code, especially during the pandemic, and again this year. I finally got motivated to clean up the code and put it out there.

The code is 20 years old and uses some ancient Spring, log4j, Wicket and other dependencies, but it still works on Java 1.8.
Show HN: If YouTube had actual channels
Show HN: Simple MBTiles Server – Self-host the entire planet of OpenStreetMap
Show HN: I built an animated 3D bookshelf for ebooks
Show HN: PGlite – in-browser WASM Postgres with pgvector and live sync
Hey, Sam and the team from ElectricSQL here.

PGlite is a WASM Postgres build packaged into a TypeScript/JavaScript client library that enables you to run Postgres in the browser, Node.js and Bun, with no need to install any other dependencies. It's 3 MB gzipped, now has support for many Postgres extensions, including pgvector, and it has a reactive "live query" API. It's also fast, with CRUD-style queries executing in under 0.3 ms and larger, multi-row select queries completing within a fraction of a single frame.

PGlite started as an experimental project we shared on X, and the response to it was incredible, encouraging us to see how far we could take it. Since then we have been working to get it to a point where people can use it to build real things. We are incredibly excited that today, with the release of v0.2, the Supabase team has released the amazing http://postgres.new site built on top of it. Working with them to deliver both PGlite and postgres.new has been a joy.

- https://pglite.dev - PGlite website
- https://github.com/electric-sql/pglite - GitHub repo
- https://pglite.dev/docs - Docs on how to use PGlite
- https://pglite.dev/extensions - Extensions catalog
- https://pglite.dev/benchmarks - Early micro-benchmarks
- https://pglite.dev/repl - An online REPL so you can try it in the browser

We would love you to try it out, and we will be around to answer any questions.
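To give a feel for the API, here is a minimal sketch of using PGlite from Node or the browser. It is based on the description above; the exact constructor options and the setup for the live-query and pgvector extensions should be checked against the docs at https://pglite.dev/docs.

```typescript
import { PGlite } from '@electric-sql/pglite'

async function main() {
  // In-memory Postgres running in-process; passing a file path (or an
  // IndexedDB-backed location in the browser) would persist the data instead.
  const db = new PGlite()

  await db.exec(`
    CREATE TABLE IF NOT EXISTS todos (
      id SERIAL PRIMARY KEY,
      title TEXT NOT NULL,
      done BOOLEAN NOT NULL DEFAULT false
    );
  `)

  // Parameterized queries use normal Postgres placeholders.
  await db.query('INSERT INTO todos (title) VALUES ($1)', ['try PGlite'])

  const { rows } = await db.query('SELECT id, title FROM todos WHERE done = $1', [false])
  console.log(rows)
}

main()
```

The reactive "live query" API and extensions such as pgvector are loaded as optional add-ons; see the extensions catalog linked above for how to enable them.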
Show HN: Rust GUI Library via Flutter
Hi, I made a bridge (https://github.com/fzyzcjy/flutter_rust_bridge v2.0.0) between Flutter and Rust that auto-translates syntax such as arbitrary types, &mut, async, traits, results, closures (callbacks), lifetimes, etc. The goal is to make working across the two languages seamless, as if you were writing in a single language.

Then, as an example, I showed how to write Rust applications with a GUI by using Flutter. That is discussed in detail in the link.

To play with it, please visit the GitHub repo, or refer to the end of the article for detailed folders and commands.

When I first released 1.0.0 years ago, it contained only a few features compared to today. It is the result of the hard work of the contributors and me, and many thanks to all the contributors!
Show HN: My 70-year-old grandma is learning to code and made a word game
Show HN: Attaching to a virtual GPU over TCP
We developed a tool to trick your computer into thinking it’s attached to a GPU which actually sits across a network. This allows you to switch the number or type of GPUs you’re using with a single command.
Show HN: LLM-aided OCR – Correcting Tesseract OCR errors with LLMs
Almost exactly one year ago, I submitted something to HN about using Llama2 (which had just come out) to improve the output of Tesseract OCR by correcting obvious OCR errors [0]. That was exciting at the time because OpenAI's API calls were still quite expensive for GPT4, and the cost of running it on a book-length PDF would just be prohibitive. In contrast, you could run Llama2 locally on a machine with just a CPU, and it would be extremely slow, but "free" if you had a spare machine lying around.

Well, it's amazing how things have changed since then. Not only have models gotten a lot better, but the latest "low tier" offerings from OpenAI (GPT4o-mini) and Anthropic (Claude3-Haiku) are incredibly cheap and incredibly fast. So cheap and fast, in fact, that you can now break the document up into little chunks and submit them to the API concurrently (where each chunk can go through a multi-stage process, in which the output of the first stage is passed into another prompt for the next stage) and assemble it all in a shockingly short amount of time, and for basically a rounding error in terms of cost.

My original project had all sorts of complex stuff for detecting hallucinations and incorrect, spurious additions to the text (like "Here is the corrected text" preambles). But the newer models are already good enough to eliminate most of that stuff. And you can get very impressive results with the multi-stage approach. In this case, the first pass asks the model to correct OCR errors and to remove line breaks in the middle of a word and things like that. The next stage takes that as the input and asks the model to do things like reformat the text using markdown, suppress page numbers and repeated page headers, etc. Anyway, I think the samples (which take only a minute or two to generate) show the power of the approach:

Original PDF: https://github.com/Dicklesworthstone/llm_aided_ocr/blob/main/160301289-Warren-Buffett-Katharine-Graham-Letter.pdf

Raw OCR Output: https://github.com/Dicklesworthstone/llm_aided_ocr/blob/main/160301289-Warren-Buffett-Katharine-Graham-Letter__raw_ocr_output.txt

LLM-Corrected Markdown Output: https://github.com/Dicklesworthstone/llm_aided_ocr/blob/main/160301289-Warren-Buffett-Katharine-Graham-Letter_llm_corrected.md

One interesting thing I found was that almost all my attempts to fix/improve things using "classical" methods like regex and other rule-based things made everything worse and more brittle; the real improvements came from adjusting the prompts to make things clearer for the model and from not asking the model to do too much in a single pass (like fixing OCR mistakes AND converting to markdown format).

Anyway, this project is very handy if you have some old scanned books from Archive.org or Google Books that you want to read on a Kindle or other ereader device and want the text to be re-flowable and clear. It's still not perfect, but I bet within the next year the models will improve even more, so that it gets closer to 100%. Hope you like it!

[0] https://news.ycombinator.com/item?id=36976333
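The chunked, two-stage, concurrent approach described above can be sketched roughly as follows. This is not the project's code (the GitHub repo linked above has the real implementation); it is an illustrative TypeScript sketch using OpenAI's chat completions API with gpt-4o-mini, and the chunk size and prompts are made up for the example.

```typescript
const API_URL = 'https://api.openai.com/v1/chat/completions'

// Send one chunk of text through the model with a given instruction prompt.
async function complete(prompt: string, text: string): Promise<string> {
  const res = await fetch(API_URL, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [
        { role: 'system', content: prompt },
        { role: 'user', content: text },
      ],
      temperature: 0,
    }),
  })
  const json = await res.json()
  return json.choices[0].message.content
}

// Naive fixed-size chunking; a real pipeline would split on page or paragraph boundaries.
function chunk(text: string, size = 4000): string[] {
  const chunks: string[] = []
  for (let i = 0; i < text.length; i += size) chunks.push(text.slice(i, i + size))
  return chunks
}

// Run every chunk through both stages concurrently, then reassemble.
async function correctOcr(rawOcrText: string): Promise<string> {
  const results = await Promise.all(
    chunk(rawOcrText).map(async (c) => {
      // Stage 1: fix obvious OCR errors and rejoin words broken across line breaks.
      const fixed = await complete(
        'Fix OCR errors and rejoin words broken across line breaks. Output only the corrected text.',
        c,
      )
      // Stage 2: reformat as markdown, dropping page numbers and repeated page headers.
      return complete(
        'Reformat the text as markdown. Remove page numbers and repeated page headers. Output only the markdown.',
        fixed,
      )
    }),
  )
  return results.join('\n\n')
}
```

Keeping each stage to a single narrow task mirrors the author's finding that asking the model to do too much in one pass hurts results.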
Show HN: BudgetFlow – Budget planning using interactive Sankey diagrams
Show HN: I've spent nearly 5y on a web app that creates 3D apartments
Show HN: 1-FPS encrypted screen sharing for introverts
I wanted to show you something I've been hacking on for the last few weeks.

I got tired of screen sharing via Google Meet with its 1-hour limit, Zoom with its 40-minute limit, paid Slack subscriptions, etc. And oftentimes I just needed to share my screen with no audio.

So I ended up with my own solution: no registration, low memory, low CPU, low-tech 1 fps encrypted screen sharing. Currently it shares only the main screen (good for laptop users).

It's very raw in terms of infrastructure, since I'm not counting bytes (yikes!), and everything runs on my own dedicated server. But the service itself has been tested; we've been sharing screens for countless hours. Every session lasts 48 hours, then it gets removed along with all remaining info.

Every new frame replaces the previous one, and everything is end-to-end encrypted, so even server owners and operators won't be able to see what you are sharing.

There is also no tracking, except on the main page, where I use my own analytics. Sessions are not tracked and never will be, and observability is currently not in place.

Again, this is a true one-person side project that I hope (though I have serious doubts) I might need to scale to support more users if it gets traction.
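The post doesn't share implementation details, so the following is only a guess at how the browser side of such an end-to-end encrypted, 1 fps flow could look: grab the shared screen into a canvas once per second, encrypt the JPEG with a key that never leaves the client (for example, kept in the URL fragment), and upload the blob so it overwrites the previous frame. The function name and upload endpoint are hypothetical.

```typescript
// Hypothetical sketch of an E2E-encrypted 1 fps capture loop in the browser.
async function shareAtOneFps(sessionUrl: string, key: CryptoKey) {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true })
  const video = document.createElement('video')
  video.srcObject = stream
  await video.play()

  const canvas = document.createElement('canvas')
  const ctx = canvas.getContext('2d')!

  setInterval(async () => {
    // Draw the current frame of the shared screen into the canvas.
    canvas.width = video.videoWidth
    canvas.height = video.videoHeight
    ctx.drawImage(video, 0, 0)

    const blob = await new Promise<Blob>((resolve) =>
      canvas.toBlob((b) => resolve(b!), 'image/jpeg', 0.7),
    )
    const plaintext = await blob.arrayBuffer()

    // Encrypt client-side; the server only ever stores opaque bytes.
    const iv = crypto.getRandomValues(new Uint8Array(12))
    const ciphertext = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, plaintext)

    // Each upload replaces the previous frame for this session (hypothetical endpoint).
    await fetch(sessionUrl, { method: 'PUT', body: new Blob([iv, ciphertext]) })
  }, 1000)
}
```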
Show HN: Iso20022.js – Create payments in 3 lines of code
Show HN: Pie Menu – a radial menu for macOS
Hi everyone! I'm Marius Hauken, an indie developer, and I'm excited to share my app: Pie Menu. It offers a fresh way to access your favorite menu bar commands and keyboard shortcuts on macOS. When you press a hotkey you choose during setup, a radial menu appears around your cursor, customized to the currently active application. This lets you quickly select commands without having to remember complex shortcuts across different applications.

Pie Menu comes with a library of preprogrammed commands for popular apps, but you can easily add any app on your computer. We've also created an extensive database at https://www.pie-menu.com/shortcuts where you can quickly add shortcuts for different programs. If a command lacks a keyboard shortcut, you can always create one through System Preferences > Keyboard > Application Shortcuts.

For now, you can use Apple's SF Symbols to label your commands, but we plan to include custom symbol sets in the future. You can see and vote on our roadmap at https://www.pie-menu.com/help/roadmap.

I hope you give Pie Menu a try and find it as useful as I intended!
Show HN: Free e-book about WebGPU Programming
I am excited to announce the launch of my e-book on graphics/WebGPU programming! This project has consumed much of my spare time, during which I developed various tools to facilitate the publishing process, including a code playground and a static site generator that can reference sample code.

However, I'm feeling burnt out and ready to call it finished, even though it may not feel completely done. Avoiding another abandoned side project has been my primary motivation in reaching this point.