The best Show HN stories on Hacker News from the past week
Latest posts:
Show HN: Simulate 3D plants in the browser
Report Phone Spam – Shut down robocallers and text spammers
Do you receive unsolicited phone calls or SMS/text spam? I made a free public service site explaining how to find the telecom carrier that is responsible for the spammer's (real) phone number and report the abuse to them, so the carrier can terminate their service.

It works, and it feels like magic.

Background: Earlier this year, I wrote an HN comment [1] explaining how to find the telecom carrier responsible for a robocall or SMS spam campaign. Those steps aren't documented anywhere else, even though they're actually pretty easy.

This info deserved to be much more visible, so now it is: https://reportphonespam.org/

As my site says, most reputable telecom carriers don't want unsolicited messages on their network or phone numbers. In order to disconnect their abusive customers, they need to hear about the abuse. That's where you come in. In a few minutes, you can report abuse to the responsible carrier, who will investigate and often shut off the spammer's phone number(s).

[1]: https://news.ycombinator.com/item?id=34570065#34570835
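The site documents the manual lookup steps; purely as an illustration (not the site's own tooling), the carrier behind a number can also be looked up programmatically via a commercial service such as Twilio's Lookup v1 API. The phone number and environment variable names below are placeholders:

```python
# Sketch: look up the carrier behind a spam number via Twilio's Lookup v1 API.
# Assumes a Twilio account; TWILIO_SID / TWILIO_TOKEN are placeholder env vars.
import os
import requests

number = "+15551234567"  # placeholder spam number
resp = requests.get(
    f"https://lookups.twilio.com/v1/PhoneNumbers/{number}",
    params={"Type": "carrier"},
    auth=(os.environ["TWILIO_SID"], os.environ["TWILIO_TOKEN"]),
)
resp.raise_for_status()
carrier = resp.json()["carrier"]
print(carrier["name"], carrier["type"])  # carrier name and line type (mobile/landline/voip)
```

With the carrier identified, the remaining step is the one the site describes: send the abuse report to that carrier's published abuse contact.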
Show HN: 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning
Hi HN! I'm just sharing a project I've been working on during the LLM Efficiency Challenge - you can now finetune Llama with QLoRA 5x faster than Hugging Face's original implementation on your own local GPU. Some highlights:

1. Manual autograd engine - hand-derived backprop steps.
2. QLoRA / LoRA 80% faster, with 50% less memory.
3. All kernels written in OpenAI's Triton language.
4. 0% loss in accuracy - no approximation methods - all exact.
5. No change of hardware necessary. Supports NVIDIA GPUs from 2018 onward (CUDA compute capability 7.5+).
6. Flash Attention support via xFormers.
7. Supports 4-bit and 16-bit LoRA finetuning.
8. Train Slim Orca fully locally in 260 hours instead of 1301 hours (5x faster).
9. The open-source version trains 5x faster, or you can check out the Unsloth Pro and Max codepaths for 30x faster training!

https://www.reddit.com/r/LocalLLaMA/comments/188197j/80_faster_50_less_memory_0_accuracy_loss_llama/ has more info about Unsloth!

Hopefully you can try it out! I wrote a blog post at https://unsloth.ai/introducing if you want to learn more about our manual hand-derived backprop, Triton kernels, and more. Thanks once again!
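For context, this is the kind of workload being sped up: a minimal QLoRA finetuning setup with the stock Hugging Face Transformers + PEFT stack (the baseline the post benchmarks against - this is not Unsloth's own API; model name and LoRA hyperparameters are illustrative):

```python
# Baseline QLoRA setup with Hugging Face Transformers + PEFT (not Unsloth's API).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(                      # 4-bit NF4 quantization for QLoRA
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",                # illustrative base model
    quantization_config=bnb,
    device_map="auto",
)
lora = LoraConfig(                             # train low-rank adapters only
    r=16, lora_alpha=16, lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()             # a small fraction of total params
```

Unsloth's claim is that the same training recipe runs 5x faster with half the memory, by replacing the generic autograd/kernel path with hand-derived backprop and Triton kernels.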
Show HN: Play a pen-and-paper game that is almost unknown in the US and Europe
Show HN: Bi-directional sync between Postgres and SQLite
Hi HN,

Today we're launching PowerSync, a Postgres<>SQLite bi-directional sync engine that enables an offline-first app architecture. It currently supports Flutter, React Native and web (JavaScript) using Wasm SQLite in the browser, with more client SDKs on the way.

Conrad and I (Ralf) have been working on our sync engine since 2009, originally as part of a full-stack app platform. That version of the system is still used in production worldwide, and we've learnt a lot from its use cases and scaling. About a year ago we started spinning off PowerSync as a standalone product designed to be stack-agnostic.

If you'd like to see a simple demo, check out the pebbles widget on the landing page here: https://www.powersync.com/

We wrote about our architecture and design philosophy here: https://www.powersync.com/blog/introducing-powersync-v1-0-postgres-sqlite-sync-layer

This covers, amongst other things, how we designed the system for scalable dynamic partial replication, why we use a server-authority architecture based on an event log instead of CRDTs for merging changes, and our approach to consistency.

Our docs can be found here: https://docs.powersync.com/

We would love to hear your feedback!
- Ralf, Conrad, Kobie, Phillip and team
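As a conceptual sketch of the server-authority model described in that post (illustrative only - not PowerSync's implementation or API): clients make optimistic local writes and queue them for upload, the server applies uploaded operations in one authoritative order onto an append-only event log, and clients converge by replaying the log from their last-seen position.

```python
# Conceptual sketch of server-authoritative sync via an event log.
# Illustrative only; not PowerSync's actual implementation.
from dataclasses import dataclass, field

@dataclass
class Op:
    table: str
    row_id: str
    values: dict

@dataclass
class Server:
    log: list[Op] = field(default_factory=list)    # single authoritative order
    state: dict = field(default_factory=dict)

    def apply(self, op: Op) -> int:
        self.log.append(op)                        # append-only event log
        self.state[(op.table, op.row_id)] = op.values
        return len(self.log)                       # new log position

@dataclass
class Client:
    cursor: int = 0                                # last log position replayed
    state: dict = field(default_factory=dict)
    pending: list[Op] = field(default_factory=list)

    def write(self, op: Op) -> None:
        self.state[(op.table, op.row_id)] = op.values  # optimistic local write
        self.pending.append(op)                        # queued for upload

    def sync(self, server: Server) -> None:
        for op in self.pending:                    # upload queued writes
            server.apply(op)
        self.pending.clear()
        for op in server.log[self.cursor:]:        # replay the server's log
            self.state[(op.table, op.row_id)] = op.values
        self.cursor = len(server.log)
```

The appeal of this design over CRDTs is that the server's log order is the single source of truth, so conflicting writes resolve the same way for every client.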
Show HN: Dobb·E – towards home robots with an open-source platform
Hi HN! Proud to share our open-source robot platform, Dobb·E, a home robot system that needs just 5 minutes of human teaching to learn new tasks. We've already taken Dobb·E to 10 different homes in New York, taught it 100+ tasks, and we are just getting started! I would love to hear your thoughts about this.

Here are some more details (or see a Twitter thread with attached media: https://twitter.com/i/status/1729515379892826211 or https://nitter.net/i/status/1729515379892826211):

We engineered Dobb·E to maximize efficiency, safety, and user comfort. As a system, it is composed of four parts: a data collection tool, a home dataset, a pretrained vision model, and a policy fine-tuning recipe.

We teach our robots with imitation learning, and for data collection we created the "Stick", a tool made out of $25 of hardware and an iPhone.

Using the Stick, we collected a 13-hour dataset in 22 New York homes, called Homes of New York (HoNY). HoNY has 1.5M frames collected across 216 different "environments", which is an order of magnitude larger than similar open-source datasets.

We then trained a foundational vision model that we can fine-tune quickly (15 minutes!) on a new task with only 5 minutes of human time (90 seconds of demonstration time) of data. So from start to finish, it takes about 20 minutes to teach the robot a new task.

Over a month, we visited 10 homes, tried 109 tasks, and achieved an 81% success rate on simple household tasks. We also found a range of challenges, from mirrors to heavy objects, that we must overcome if we are to get a general-purpose home robot.

We open-sourced our entire system because our primary goal is to get more robotics and AI researchers, engineers, and enthusiasts to go beyond constrained lab environments and start getting into homes!

So here is how you can get started:

1. Code and STL files: https://github.com/notmahi/dobb-e/
2. Technical documentation: https://docs.dobb-e.com/
3. Paper: https://arxiv.org/abs/2311.16098
4. More videos and the dataset: https://dobb-e.com
5. Robot we used: https://hello-robot.com
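As a rough sketch of the fine-tuning recipe described above - behavior cloning on top of a frozen pretrained vision encoder - illustrative only, not the released Dobb·E code; class names, feature/action dimensions, and hyperparameters are assumptions:

```python
# Illustrative behavior-cloning fine-tune: frozen vision encoder + small policy head.
# Not the released Dobb-E code; shapes, names, and hyperparameters are assumptions.
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int = 512, act_dim: int = 7):
        super().__init__()
        self.encoder = encoder.eval()              # pretrained vision backbone, frozen
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.head = nn.Sequential(                 # only this head is fine-tuned
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, act_dim)
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.encoder(frames)           # reuse pretrained features
        return self.head(feats)

def finetune(policy: Policy, demos, epochs: int = 50, lr: float = 1e-3) -> None:
    """demos yields (frames, actions) pairs from ~90s of human demonstrations."""
    opt = torch.optim.Adam(policy.head.parameters(), lr=lr)
    for _ in range(epochs):
        for frames, actions in demos:
            loss = nn.functional.mse_loss(policy(frames), actions)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Freezing the encoder is what makes the 15-minute fine-tune plausible: only the small head is trained, and only minutes of demonstration data are needed.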
Show HN: A Dalle-3 and GPT4-Vision feedback loop
I used to enjoy Translation Party, and over the weekend I realized that we can build the same feedback loop with DALL-E 3 and GPT-4 Vision. Start with a text prompt, let DALL-E 3 generate an image, then GPT-4 Vision turns that image back into a text prompt, DALL-E 3 creates another image, and so on.

You need to bring your own OpenAI API key (costs about $0.10/run).

Some prompts are very stable, others go wild. If you bias GPT-4's prompting by telling it to "make it weird" you can get crazy results.

Here are a few of my favorites:

- Gnomes: https://dalle.party/?party=k4eeMQ6I
- Start with a sailboat but bias GPT-4V to "replace everything with cats": https://dalle.party/?party=0uKfJjQn
- A more stable one (but everyone is always an actor): https://dalle.party/?party=oxpeZKh5
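The loop itself is just a couple of alternating OpenAI API calls. A minimal sketch using the openai Python SDK (not the site's actual code; the starting prompt and instruction text are illustrative, and the model names reflect the dall-e-3 / gpt-4-vision-preview era):

```python
# Minimal DALL-E 3 <-> GPT-4 Vision feedback loop (sketch; assumes OPENAI_API_KEY).
from openai import OpenAI

client = OpenAI()
prompt = "a village of gnomes at dusk"             # illustrative starting prompt

for step in range(5):
    # Text -> image
    image = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024")
    url = image.data[0].url
    print(f"step {step}: {url}")

    # Image -> text: ask the vision model to re-describe the image as a prompt
    reply = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this image as a detailed DALL-E prompt."},
                {"type": "image_url", "image_url": {"url": url}},
            ],
        }],
        max_tokens=300,
    )
    prompt = reply.choices[0].message.content      # feed the description back in
```

Biasing the loop ("make it weird", "replace everything with cats") amounts to editing the instruction text passed to the vision model each round.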
Show HN: Trains.fyi – a live map of passenger trains in the US and Canada
Hey all! My train was delayed the other day, and I got curious where all the trains were at any given time, so I built a map and figured I'd share it.
Sopwith – a classic biplane shoot 'em up from 1984 in the browser
src: https://github.com/midzer/sdl-sopwith
via: https://fragglet.github.io/sdl-sopwith/
Show HN: I saw this mind-blowing experiment, so I made a simple version of it
Two browser windows (acting as socket clients) communicate their:

- Screen dimensions - (screen.width, screen.height)
- Window dimensions - (window.innerWidth, window.innerHeight)
- Window X/Y position - (window.screenX, window.screenY)

...or whichever calculation works best for you.

The original work by Bjorn Staal (https://twitter.com/_nonfigurativ_/status/172732259457002734) used localStorage, but I found sockets more fun, because with a little tweaking this can be shared with friends.

Here's a demo of how it works and the codebase: https://github.com/Momciloo/fun-with-sockets/
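The server side of this pattern is just a broadcast relay: forward each window's dimension message to every other connected window. A minimal stdlib sketch of that relay (illustrative only; the fun-with-sockets repo uses web sockets and may structure its server differently):

```python
# Minimal broadcast relay (sketch): forwards each client's message to all others.
# Illustrative of the pattern only; not the fun-with-sockets server itself.
import asyncio

clients: set[asyncio.StreamWriter] = set()

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    clients.add(writer)
    try:
        while (line := await reader.readline()):   # one JSON message per line
            for other in clients:
                if other is not writer:
                    other.write(line)              # fan out to every other window
                    await other.drain()
    finally:
        clients.discard(writer)
        writer.close()

async def main() -> None:
    server = await asyncio.start_server(handle, "0.0.0.0", 8765)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```

Each window then renders its view using its own screen/window geometry plus the geometry it receives from the others.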
Show HN: Lua Carousel, create little programs on desktop or mobile devices
Show HN: AI-Generated SVGs
Show HN: Perfect Pitch Puzzle – a musical Wordle daily ear training game
Hi all! Thanks for checking out the side project my family and I have been working on (on and off) for the past year. We were playing Wordle when we thought: wouldn't it be fun if you had to guess musical notes (ABCDEFG) instead of words? And what if the notes you had to guess were actually the first six notes of a familiar melody?

My brother and I both have perfect pitch, which has been really helpful when we want to cover a song that we like, or improvise in a jazz or bluegrass setting. We don't promise that this game will help you gain perfect pitch, but it *is* possible to train your ear to more accurately gauge sounds, and our hope is that this game will help with that.

So far we've gotten feedback from consistent players that the game *has* helped non-musicians more easily identify notes based on relative pitches, and helped even musicians remember tunes better, which is good to hear.

The game has evolved with different instruments and difficulty modes (easy, normal, hard), but the essence has remained the same:

- One new musical puzzle a day
- The octave moves with the melody, so you don't need to worry about the octave; you just need to guess the pitch (see the scoring sketch below)

There are a few things we want to improve as well, like:

- improved mobile support (especially Android)
- a "practice mode" that allows users to play more than one game per day, or multiple variations of notes, with visual feedback on how close they were to guessing the note
- making it easier to add new songs to the database (currently it takes 5-10 minutes to code in a new song)
- any other feedback that we get here or in our Discord :)

PS. If you already have perfect pitch or want to challenge yourself to the impossible, I'd recommend playing the "bird_tweet" instrument in "hard" mode!
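For the curious, octave-free Wordle-style scoring reduces to comparing note letters (pitch classes) rather than absolute pitches. A minimal sketch (illustrative only; not the game's actual code):

```python
# Sketch: Wordle-style feedback over note letters A-G, ignoring octave.
# Illustrative only - not the game's actual scoring code.
from collections import Counter

def score(guess: str, answer: str) -> list[str]:
    """Return 'green'/'yellow'/'gray' per note, Wordle-style."""
    assert len(guess) == len(answer)
    marks = ["gray"] * len(guess)
    remaining = Counter()
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            marks[i] = "green"                 # right note, right position
        else:
            remaining[a] += 1                  # unmatched answer notes
    for i, g in enumerate(guess):
        if marks[i] != "green" and remaining[g] > 0:
            marks[i] = "yellow"                # right note, wrong position
            remaining[g] -= 1
    return marks

print(score("CDEFGA", "CDECDE"))
# ['green', 'green', 'green', 'gray', 'gray', 'gray']
```

Keeping the octave pinned to the melody means players only reason over seven note letters, which keeps the daily puzzle approachable for non-musicians.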