The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: Simulate 3D plants in the browser
Report Phone Spam – Shut down robocallers and text spammers
Do you receive unsolicited phone calls or SMS/text spam? I made a free public service site explaining how to find the telecom carrier that is responsible for the spammer's (real) phone number and report the abuse to them, so the carrier can terminate their service.

It works, and it feels like magic.

Background: Earlier this year, I wrote an HN comment [1] explaining how to find the telecom carrier responsible for a robocall or SMS spam campaign. Those steps aren't documented anywhere else, even though they're actually pretty easy.

This info deserved to be much more visible, so now it is: https://reportphonespam.org/

As my site says, most reputable telecom carriers don't want unsolicited messages on their network or phone numbers. In order to disconnect their abusive customers, they need to hear about the abuse. That's where you come in. In a few minutes, you can report abuse to the responsible carrier, who will investigate and often shut off the spammer's phone number(s).

[1]: https://news.ycombinator.com/item?id=34570065#34570835
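As a rough illustration of the first step - figuring out which carrier a number belongs to - here is a minimal Python sketch using the phonenumbers library. This is not the site's method: phonenumbers maps number prefixes to carriers and won't account for ported numbers, so the lookup steps on reportphonespam.org remain the authoritative path.

```python
# Minimal sketch (not from the site): guess the original carrier for a number
# using the `phonenumbers` library. Carrier data here is based on number
# prefixes, so ported numbers may be attributed to the wrong carrier.
import phonenumbers
from phonenumbers import carrier

def guess_carrier(raw_number: str, region: str = "US") -> str:
    number = phonenumbers.parse(raw_number, region)
    if not phonenumbers.is_valid_number(number):
        raise ValueError(f"{raw_number} is not valid for region {region}")
    # Returns the carrier name for the number's range, or "" if unknown.
    return carrier.name_for_number(number, "en") or "unknown"

if __name__ == "__main__":
    print(guess_carrier("+14155552671"))  # illustrative number
```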
Show HN: Advent of Code CLI
Show HN: ChatCBT – AI-powered cognitive behavioral therapist for Obsidian
ChatCBT is an AI-powered cognitive behavioral therapist for your local Obsidian notes.

You have the choice to use OpenAI, or a 100% local model with Ollama for total data privacy.

When you're done with your conversation, ChatCBT can automatically summarize the chat into a table listing your negative beliefs, emotions, categories of negative thinking, and reframed thoughts. This way you can start to recognize patterns in your thinking and begin to rewire your reactions to disturbing circumstances.

Conversations are stored in markdown files on your local machine, ensuring privacy and portability while giving you the freedom to organize your sessions as you please. You could easily share these files with a therapist.

I built this for myself when I noticed that the chat help I was getting from my therapist in between therapy sessions was essentially coaching that didn't require much context beyond the immediate situation and emotions. That felt like a particularly good use case for LLMs.

ChatCBT has been pretty effective for helping me talk myself through spiraling episodes of negative thinking. I've been able to get back on my horse faster, and it's convenient that it's available 24/7 and 5000x cheaper than a therapy session (or free if using Ollama). That's why I'd like to share it - I'm curious if it helps anyone else.

It's under review to become an Obsidian community plugin, but in the meantime it's available now via git clone (see the readme). Happy for feedback!
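For the local-model option, here is a hypothetical sketch (not ChatCBT's actual code, which is an Obsidian/TypeScript plugin) of the kind of request the Ollama path implies: send the conversation to a model served by Ollama on localhost and read back the reframed response. The model name and prompt are illustrative.

```python
# Hypothetical sketch of the local-model path: send a CBT-style prompt to a
# model served by Ollama on localhost and print the reframed response.
# This is not ChatCBT's code; model name and prompt are illustrative.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"

def reframe(thought: str, model: str = "mistral") -> str:
    payload = {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system",
             "content": "You are a CBT coach. Help the user identify the "
                        "cognitive distortion and suggest a balanced reframe."},
            {"role": "user", "content": thought},
        ],
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["message"]["content"]

if __name__ == "__main__":
    print(reframe("I bombed one meeting, so I'm terrible at my job."))
```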
Show HN: ThreeFold – Decentralized Cloud Infrastructure
We built it from the ground up: a minimal Linux-based operating system (written in Go and Rust), a messaging system written in Rust, and infrastructure-as-code support (Terraform and Pulumi).

There is also a dashboard and a playground UI to manage it (which you can run yourself). It is beautifully decentralized: you have full control of everything, and you can deploy on your own servers if the cost of the cloud is too much. You can even run the whole system yourself if you want to.

Here is a link to the documentation: https://manual.grid.tf/

Also, we would be honored if it piqued your interest, and we would love to support your testing journey free of charge.
Show HN: 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning
Hi HN! I'm just sharing a project I've been working on during the LLM Efficiency Challenge - you can now finetune Llama with QLoRA 5x faster than Huggingface's original implementation on your own local GPU. Some highlights:

1. Manual autograd engine - hand-derived backprop steps.
2. QLoRA / LoRA 80% faster, 50% less memory.
3. All kernels written in OpenAI's Triton language.
4. 0% loss in accuracy - no approximation methods - all exact.
5. No change of hardware necessary. Supports NVIDIA GPUs from 2018 onwards, CUDA 7.5+.
6. Flash Attention support via xFormers.
7. Supports 4-bit and 16-bit LoRA finetuning.
8. Train Slim Orca fully locally in 260 hours, down from 1301 hours (5x faster).
9. The open-source version trains 5x faster, or you can check out the Unsloth Pro and Max codepaths for 30x faster training!

https://www.reddit.com/r/LocalLLaMA/comments/188197j/80_faster_50_less_memory_0_accuracy_loss_llama/ has more info about Unsloth!

Hopefully you can try it out! I wrote a blog post at https://unsloth.ai/introducing if you want to learn more about our hand-derived backprop, Triton kernels, and more. Thanks once again!
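For context, here is a minimal sketch of the standard Hugging Face QLoRA setup that these numbers are compared against - not Unsloth's own API. The checkpoint name and LoRA hyperparameters are illustrative, and it assumes a CUDA GPU with bitsandbytes installed.

```python
# Sketch of the baseline Hugging Face QLoRA setup this post benchmarks against
# (not Unsloth's own API). Model name and hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # assumed example checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit NF4 base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config
)

lora_config = LoraConfig(
    r=16, lora_alpha=16, lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only LoRA adapters are trained
model.print_trainable_parameters()
```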
Show HN: React and Tailwind CSS UI Components
CodeSnaps is a React and Tailwind CSS UI component library with dark mode support. It helps developers design faster and better with copy-and-paste code snippets, so they don't have to build their MVPs and apps from scratch.
Show HN: Australian Acoustic Observatory Search
The Australian Acoustic Observatory (https://acousticobservatory.org/) has 360 microphones across the continent and over 2 million hours of audio. However, none of it is labeled. We want to make this enormous repository useful to researchers. We have found that researchers are often looking for 'hard' signals - specific call types, birds with very little available training data, and so on. So we built an acoustic-similarity search tool, allowing researchers to provide an example of what they're looking for, which we then match against embeddings from the A2O dataset.

Here are some fun examples!

Laughing Kookaburra: https://search.acousticobservatory.org/search/index.html?q=https://api.search.acousticobservatory.org/api/v1/a2o/audio_recordings/download/flac/372176?start_offset=25&end_offset=30

Pacific Koel: https://search.acousticobservatory.org/search/index.html?q=https://api.search.acousticobservatory.org/api/v1/a2o/audio_recordings/download/flac/288576?start_offset=15&end_offset=20

Chiming Wedgebill: https://search.acousticobservatory.org/search/index.html?q=https://api.search.acousticobservatory.org/api/v1/a2o/audio_recordings/download/flac/387006?start_offset=0&end_offset=5

How it works, in a nutshell: We use audio source separation (https://blog.research.google/2022/01/separating-birdsong-in-wild-for.html) to pull apart the A2O data, and then run an embedding model (https://arxiv.org/abs/2307.06292) on each channel of the separated audio to produce a 'fingerprint' of the sound. All of this is put in a vector database with a link back to the original audio. When someone performs a search, we embed their audio and then match it against all of the embeddings in the vector database.

Right now, about 1% of the A2O data is indexed (the first minute of every recording, evenly sampled across the day). We're looking to get initial feedback and will then continue to iterate and expand coverage.

(Oh, and here's a bit of further reading: https://blog.google/intl/en-au/company-news/technology/ai-ecoacoustics/ )
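As a toy illustration of the search path described above (not the production code), the sketch below embeds a query clip and ranks stored embeddings by cosine similarity; the embedding function, vector size, and in-memory 'index' are stand-ins for the real model and vector database.

```python
# Toy sketch of the search path: embed a query clip and find the closest
# stored embeddings by cosine similarity. The embedding dimension and the
# in-memory "index" are stand-ins for the real model and vector database.
import numpy as np

def cosine_top_k(query_vec: np.ndarray, index: np.ndarray, k: int = 10) -> np.ndarray:
    """Return indices of the k most similar rows in `index`."""
    q = query_vec / np.linalg.norm(query_vec)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    scores = m @ q                      # cosine similarity against every row
    return np.argsort(-scores)[:k]

# index: one embedding per (separated channel, short window), with a parallel
# list of links back to the original A2O recordings.
rng = np.random.default_rng(0)
index = rng.standard_normal((100_000, 1280)).astype(np.float32)
links = [f"recording_{i}" for i in range(len(index))]   # placeholder metadata

query = rng.standard_normal(1280).astype(np.float32)    # embed_audio(clip) in practice
for i in cosine_top_k(query, index, k=5):
    print(links[i])
```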
Show HN: Unofficial YouTube Wrapped 2023, see top creators and watching habits
Show HN: Generate a video to animate stars of any GitHub repository
I made this as a fun tiny project to experiment with Remotion [1] to generate videos. You can find the source code on GitHub [2].

[1] https://www.remotion.dev/

[2] https://github.com/scastiel/github-stars-video
Show HN: Qdorks.com – Advanced Google search query composer
Hi HN,

qdorks.com is an advanced Google search query composer I have been working on lately. It makes it super easy to write complex Google search queries and save them for later use.

Main features:

* Query composer with complex grouping, logical, and exclusion operators per clause (see the sketch below).
* Share your queries with others so they can easily copy and modify them.
* No need to register; only when you want to save queries.
* AI support for PRO users.
* More to come...

I am pretty happy with how it works and would love your feedback! Happy to answer any questions. Thank you!
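To make the "query composer" idea concrete, here is a hypothetical sketch (not qdorks.com's code) of how structured clauses could be turned into Google search operator syntax with grouping, OR, and exclusion. Clause names and defaults are illustrative.

```python
# Hypothetical sketch: turn structured clauses into Google search operator
# syntax. This is not qdorks.com's code; the clause structure is illustrative.
def compose(include_groups, exclude_terms, site=None, filetype=None):
    parts = []
    for group in include_groups:                       # OR within a group
        quoted = [f'"{t}"' if " " in t else t for t in group]
        parts.append("(" + " OR ".join(quoted) + ")")  # AND between groups
    parts += [f"-{t}" for t in exclude_terms]          # exclusion operator
    if site:
        parts.append(f"site:{site}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    return " ".join(parts)

print(compose(
    include_groups=[["index of", "parent directory"], ["backup", "dump"]],
    exclude_terms=["tutorial"],
    site="example.com",
    filetype="sql",
))
# ("index of" OR "parent directory") ("backup" OR "dump") -tutorial site:example.com filetype:sql
```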