The best Hacker News stories from Show HN from the past day


Latest posts:

Show HN: I made a free app to calibrate your turntable by simply playing a song

Hey there!

I made a little app that lets you calibrate your turntable by putting on any record and tapping a button. It's called Grooved, and it uses your phone's microphone to see how fast your platter is spinning, almost like magic.

You can see what it looks like in action here: https://twitter.com/OKatBest/status/1795453042994680148

The app is free, with no ads, subscriptions, or trackers. It's a tool I built for myself, and I thought someone else might want to use it too. I have never seen this approach used before; all the other apps require you either to print something and use the camera, or to place your phone on the spinning platter and use the accelerometer.

You can grab it on the App Store, and I'm working on an Android version I hope to release at some point in June.

Would love to hear what you think about it!

Ivan_
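
The post doesn't describe the signal processing, so here is a minimal sketch of one way a microphone-based speed check could work; this is my own assumption, not necessarily how Grooved does it. Vinyl playback carries a small periodic pitch wobble ("wow") at the platter's rotation rate, so the dominant low-frequency modulation of a pitch track reveals the actual RPM:

```python
# Hypothetical sketch: estimate the platter's speed error from the periodic
# pitch wobble ("wow") in a recording made with the phone's microphone.
import numpy as np

def speed_error_percent(f0_track, hop_seconds, nominal_rpm=100.0 / 3.0):
    """f0_track: pitch estimates in Hz at a fixed hop (e.g. from librosa.pyin),
    with NaN for unvoiced frames; nominal_rpm defaults to 33 1/3 RPM."""
    f0 = np.asarray(f0_track, dtype=float)
    f0 = np.where(np.isnan(f0), np.nanmean(f0), f0)   # fill gaps to keep uniform spacing
    f0 = f0 - f0.mean()                               # keep only the wobble around the carrier pitch
    spectrum = np.abs(np.fft.rfft(f0 * np.hanning(len(f0))))
    freqs = np.fft.rfftfreq(len(f0), d=hop_seconds)
    band = (freqs > 0.2) & (freqs < 1.5)              # plausible rotation rates, roughly 12-90 RPM
    rotation_hz = freqs[band][np.argmax(spectrum[band])]
    measured_rpm = rotation_hz * 60.0
    return 100.0 * (measured_rpm - nominal_rpm) / nominal_rpm
```

For example, a measured modulation of 0.57 Hz instead of the nominal 0.556 Hz would mean the platter is running roughly 2.6% fast.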

Show HN: PaperTube – Turn YouTube Videos into Kindle-Ready Articles

Would you prefer reading podcasts, TED talks, or conversations instead of watching/listening to videos? If so, you might find this interesting.

PaperTube lets you turn any YouTube video into an easy-to-read, well-formatted article with speaker names, and you can even get it on your Kindle. I've been working on this small project over the last week. Right now, it supports sending any video to Kindle. I'd love to hear if anyone is interested in this. Looking forward to your feedback and discussion!

It's currently free because I have some free credits for the LLMs I'm using. I need to find a way to fund it. Some features I'm planning include subscribing to YouTube channels and getting daily or weekly articles on your Kindle, and maybe a browser extension to quickly convert any video.
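
The post doesn't share implementation details, so here is a hypothetical sketch of the core pipeline, not PaperTube's actual code: pull the transcript with the youtube_transcript_api package and ask an LLM to rewrite it as an article. The OpenAI client and model name are placeholders for whichever provider is really used.

```python
# Hypothetical pipeline: YouTube transcript -> LLM -> article text.
from youtube_transcript_api import YouTubeTranscriptApi  # classic static API; newer versions also offer an instance-based fetch()
from openai import OpenAI

def video_to_article(video_id: str) -> str:
    segments = YouTubeTranscriptApi.get_transcript(video_id)
    transcript = " ".join(seg["text"] for seg in segments)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rewrite this raw video transcript as a well-structured article "
                        "with headings and, where identifiable, speaker names."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

The real product also has to handle very long transcripts and delivery to the Kindle, which is presumably where most of the work lies.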

Show HN: I've Created the First Artificial Memory (and It's Open-Source)

Show HN: Use Go's HTML/template to write React-like code

Hi all,

I'm currently working on my hobby project, and one of the fun constraints I set was to build it using as few dependencies as possible.

I chose Go as it has a really good standard library, and all went well for building the backend. But for the frontend, I was wondering whether I should break the constraint and go for React. I tried options like Web Components, but I really didn't like the ergonomics, and I didn't want to use jQuery either.

Out of curiosity, I was exploring Go's html/template package to see if I could write UIs in a React-like manner. Most of the online docs use a "slots"-like approach, which I found unintuitive.

But after some trial and error, I found an approach that's very close to React, without using any third-party packages like templ.

I'd like to share this with the community; I'm not sure if it's a common approach: https://www.sheshbabu.com/posts/react-like-composition-using-go-html-template/

Show HN: Affordable text-to-speech for long-form content

Hi HN, I’m Michael, creator of AudiowaveAI. I started this project out of frustration when I couldn't find an audiobook version of "Make" by Pieter Levels. The available text-to-speech options were either too robotic, overly complex, or simply too costly.

It works really well for non-fiction long-form content (i.e., hours of audio).

It’s early days for AudiowaveAI, and I’m looking for feedback to improve the product. Try it out and share your thoughts: https://audiowaveai.com. Thanks!

Show HN: Offline sketch-to-image generator in a whiteboard

I'm excited to share my indie product, https://drawing.pics

In this post, I'll share all the technical details behind it.

So, what exactly is DrawingPics? It's an image-to-image and sketch-to-image generator that runs in a whiteboard, allowing you to draw your draft directly on the canvas and easily generate the desired image.

How does it work from a technical perspective? TL;DR: Miniconda + Diffusers + Electron + Excalidraw

One of the most important aspects of creating this tool was finding a solution for running Stable Diffusion inference locally. There are many options available, such as https://github.com/leejet/stable-diffusion.cpp and https://github.com/apple/ml-stable-diffusion. I tested the C++ version, but the inference speed was very slow, and Metal GPU support still had issues (you can find relevant issues in their repo). Ultimately, I decided to use Python to run it, because PyTorch is mature and MPS support is well-established. I chose Miniconda because it can create a small, isolated Python environment to run the program.

The AI model should run in the background so that it can continuously produce images while you draw. That means finding an RPC method to enable communication between the Python process and Electron's Node.js process. The easiest way is to run a Python HTTP server, but the memory usage is too high. I needed a more lightweight solution, so I used xmlrpc for memory efficiency, although there might be better alternatives that I'm unaware of. The AI inference part is handled by diffusers, which is great, but I had to apply some custom patches to make it work in this situation. This can be a bit challenging if you're not familiar with Python.

For the frontend, I initially used a low-level canvas library and tried to implement a drawing pad from scratch. However, there were too many details to get right, so I chose a more mature option: Excalidraw. It's fantastic, with the only shortcoming being limited API support.

Finally, I combined all these technologies in Electron, ensuring they work smoothly in both the main process and the renderer process.

OK, is DrawingPics free to use? 80% of the project is free to use. I only charge a lifetime license for the effort I've put in, as I've spent 4 months on it and made numerous improvements based on user feedback. Currently, only Mac is supported. Windows support will be added later.
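
To make the wiring above concrete, here is a minimal sketch of the Python side: a background process that loads a diffusers img2img pipeline and exposes a single XML-RPC method for Electron's Node.js process to call. The model ID, port, and strength value are my own placeholders, not DrawingPics' actual configuration.

```python
# Minimal sketch (assumed, not DrawingPics' code): diffusers inference behind an XML-RPC server.
import base64
import io

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline
from xmlrpc.server import SimpleXMLRPCServer

# Load the pipeline once at startup; "mps" targets the Metal backend on Apple Silicon.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("mps")

def generate(prompt: str, sketch_png_b64: str) -> str:
    """Turn a base64-encoded PNG of the whiteboard sketch into a base64-encoded result image."""
    sketch = Image.open(io.BytesIO(base64.b64decode(sketch_png_b64))).convert("RGB")
    result = pipe(prompt=prompt, image=sketch, strength=0.6).images[0]
    buf = io.BytesIO()
    result.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode("ascii")

server = SimpleXMLRPCServer(("127.0.0.1", 8765), allow_none=True, logRequests=False)
server.register_function(generate, "generate")
server.serve_forever()  # the Electron main process calls "generate" over XML-RPC
```

Compared with running a full HTTP framework, the standard-library XML-RPC server keeps the resident footprint of the background process small, which matches the memory concern mentioned above.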

Show HN: I generated API documentation for all Java packages

Hi HN! I'm excited to share a project I've been working on for the past year: Docland. It is an API documentation browser that generates documentation on demand (through compilation, not LLMs) for Java packages. Instead of relying on Javadoc, the built-in doc generator, I created the engine from scratch to give the documentation a modern look, build fast search indexes, and enable link resolution to other packages.

I built Docland because I constantly found it frustrating to locate and view API definitions when programming. You'd have to Google the function/class name, skip all the SEO articles, and find the page you want, only for the documentation to be poorly formatted or unsearchable.

So I thought it would be really cool to create a documentation site dedicated to programming languages and libraries, so that you can find the docs all in one place with a uniform look. Docland currently only supports Java, but more programming languages can be supported thanks to its modular architecture.

Please try it out and let me know what you think! The process of building Docland was also extremely fun and challenging; I'm happy to share more about that too.

Thank you!

Martin

Show HN: I made a free online tool to enhance and auto-crop your screenshots

I'm Gab, a 29-year-old self-taught French developer.

I created SocialScreenshots to help me quickly create visuals for my social media, and since there are a lot of similar tools, I wanted to release it for free!

Features available:
- Automatically crop your screenshots
- Enhance your screenshots with our easy-to-use editor
- Capture screenshots directly from a website

Unlike similar tools, with SocialScreenshots:
1. You can use it for free forever without any watermark!
2. You don’t need to install anything.
3. You don’t even need to create an account.
4. All of the processing happens locally in your browser.
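
Everything in SocialScreenshots runs in the browser, so none of its code is shown here; purely as an illustration of the auto-crop idea (a sketch of mine, not the tool's implementation), trimming a uniform border around the content can be done in a few lines:

```python
# Illustrative auto-crop: remove a uniform border by diffing against the corner colour.
from PIL import Image, ImageChops

def autocrop(img: Image.Image) -> Image.Image:
    background = Image.new(img.mode, img.size, img.getpixel((0, 0)))  # assume the corner pixel is the border colour
    bbox = ImageChops.difference(img, background).getbbox()           # bounding box of pixels that differ
    return img.crop(bbox) if bbox else img                            # fall back to the original if nothing differs
```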

Show HN: Boldly go where Gradient Descent has never gone before with DiscoGrad

Trying to do gradient descent using automatic differentiation over branchy programs? Or to combine them with neural networks for end-to-end training? Then this might be interesting to you.

We developed DiscoGrad, a tool for automatic differentiation through C++ programs involving input-dependent control flow (e.g., "if (f(x) < c) { ... }", differentiating with respect to x) and randomness. Our initial motivation was to enable the use of gradient descent with simulations, which often rely heavily on such discrete branching. That branching makes plain autodiff mostly useless, since it can only account for the single path taken through the program. Our tool offers several backends that handle this situation, giving useful descent directions for optimization by accounting for alternative branches. Besides simulations, this problem arises in many other places, for example in deep learning when trying to combine imperative programs with neural networks.

In a nutshell, DiscoGrad applies an (LLVM-based) source-to-source transformation to your C++ program, adding some calls to our header library, which then handles the gradient computation. What sets it apart from similar tools/estimators is that it's fully automatic (no need to come up with a differentiable problem formulation/reparametrization) and that the branching condition can be any function of the program inputs (no need to know upfront what distribution the condition follows).

We're currently a team of two working on DiscoGrad as part of a research project, so don't expect production-grade code quality, but we do intend for it to be more than a throwaway research prototype. Use cases we've successfully tested include calibrating simulation models of epidemics or evacuation scenarios via gradient descent, and combining simulations with neural networks in an end-to-end trainable fashion.

We hope you find this interesting and useful, and we're happy to answer questions!
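
To make the branching problem concrete, here is a toy numerical illustration; it is mine, not DiscoGrad's actual machinery, which performs the smoothing automatically on C++ via an LLVM pass. A hard, input-dependent branch has zero derivative almost everywhere, so a path-following gradient carries no information about the untaken branch; blending both branch outcomes with a sigmoid weight restores a usable descent direction.

```python
# Toy illustration of why hard branches defeat plain gradients, and what smoothing buys.
import math

def hard(x, c=0.0):
    # Piecewise-constant cost: 0 below the threshold c, 1 above it.
    # The derivative is 0 wherever x != c, so descent gets no signal to cross the branch.
    return 0.0 if x < c else 1.0

def smooth(x, c=0.0, temperature=0.1):
    # Blend both branch outcomes, weighting the "x < c" branch with a sigmoid.
    w = 1.0 / (1.0 + math.exp((x - c) / temperature))
    return w * 0.0 + (1.0 - w) * 1.0

# Central finite differences at x = 0.2: the hard branch reports a slope of 0,
# while the smoothed version reports a positive slope, so gradient descent
# (stepping against the gradient) moves x toward the cheaper x < c region.
x, h = 0.2, 1e-4
print("hard:  ", (hard(x + h) - hard(x - h)) / (2 * h))
print("smooth:", (smooth(x + h) - smooth(x - h)) / (2 * h))
```

DiscoGrad's backends are considerably more sophisticated than a single sigmoid, but the toy shows why accounting for alternative branches matters for getting useful descent directions.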
