The best Hacker News stories from Show from the past day

Latest posts:

Show HN: 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning

Hi HN! I'm sharing a project I've been working on during the LLM Efficiency Challenge: you can now finetune Llama with QLoRA 5x faster than Hugging Face's original implementation, on your own local GPU. Some highlights:

1. Manual autograd engine with hand-derived backprop steps.
2. QLoRA / LoRA finetuning is 80% faster and uses 50% less memory.
3. All kernels are written in OpenAI's Triton language.
4. 0% loss in accuracy: no approximation methods, all computations are exact.
5. No hardware change necessary; supports NVIDIA GPUs from 2018 onwards (CUDA 7.5+).
6. Flash Attention support via xFormers.
7. Supports 4-bit and 16-bit LoRA finetuning.
8. Trains Slim Orca fully locally in 260 hours instead of 1,301 hours (5x faster).
9. The open-source version trains 5x faster; the Unsloth Pro and Max codepaths train up to 30x faster.

https://www.reddit.com/r/LocalLLaMA/comments/188197j/80_faster_50_less_memory_0_accuracy_loss_llama/ has more info about Unsloth.

Hopefully you can try it out! I wrote a blog post at https://unsloth.ai/introducing if you want to learn more about the hand-derived backprop and the Triton kernels. Thanks once again!
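The speedups described above come from optimizing the LoRA arithmetic, not changing it. As a rough sketch of the underlying math in plain NumPy (shapes here are illustrative, not Llama's): a frozen weight matrix W gets a trainable low-rank update (alpha / r) * B @ A, so the optimizer only tracks 2*d*r adapter parameters instead of d*d base parameters.

```python
import numpy as np

# Illustrative shapes: a single d x d weight with a rank-r adapter.
d, r, alpha = 512, 8, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen base weight
A = rng.standard_normal((r, d)) * 0.01   # trainable "down" projection
B = np.zeros((d, r))                     # trainable "up" projection (init 0)

x = rng.standard_normal(d)

# LoRA forward pass: y = W x + (alpha / r) * B (A x).
# Only A and B receive gradients, which is where the memory savings
# come from: optimizer state covers 2*d*r values instead of d*d.
y = W @ x + (alpha / r) * (B @ (A @ x))

trainable = A.size + B.size   # 2 * d * r
frozen = W.size               # d * d
```

Because B starts at zero, the adapter is initially a no-op and finetuning departs smoothly from the base model.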

Show HN: React and Tailwind CSS UI Components

CodeSnaps is a React and Tailwind CSS UI component library with dark mode support. It helps developers design faster with copy-and-paste code snippets, so they don't have to build their MVPs and apps from scratch.

Show HN: Australian Acoustic Observatory Search

The Australian Acoustic Observatory (https://acousticobservatory.org/) has 360 microphones across the continent and over 2 million hours of audio. However, none of it is labeled. We want to make this enormous repository useful to researchers. We have found that researchers are often looking for 'hard' signals: specific call types, birds with very little available training data, and so on. So we built an acoustic-similarity search tool that lets researchers provide an example of what they're looking for, which we then match against embeddings from the A2O dataset.

Here are some fun examples:

Laughing Kookaburra: https://search.acousticobservatory.org/search/index.html?q=https://api.search.acousticobservatory.org/api/v1/a2o/audio_recordings/download/flac/372176?start_offset=25&end_offset=30

Pacific Koel: https://search.acousticobservatory.org/search/index.html?q=https://api.search.acousticobservatory.org/api/v1/a2o/audio_recordings/download/flac/288576?start_offset=15&end_offset=20

Chiming Wedgebill: https://search.acousticobservatory.org/search/index.html?q=https://api.search.acousticobservatory.org/api/v1/a2o/audio_recordings/download/flac/387006?start_offset=0&end_offset=5

How it works, in a nutshell: we use audio source separation (https://blog.research.google/2022/01/separating-birdsong-in-wild-for.html) to pull apart the A2O data, and then run an embedding model (https://arxiv.org/abs/2307.06292) on each channel of the separated audio to produce a 'fingerprint' of the sound. All of this goes into a vector database with a link back to the original audio. When someone performs a search, we embed their audio and match it against all of the embeddings in the vector database.

Right now, about 1% of the A2O data is indexed (the first minute of every recording, evenly sampled across the day). We're looking for initial feedback and will then continue to iterate and expand coverage.

(For further reading: https://blog.google/intl/en-au/company-news/technology/ai-ecoacoustics/ )
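The matching step described above boils down to nearest-neighbor search over embedding vectors. Here is a toy NumPy sketch, with random vectors standing in for the real embedding model and vector database (dimensions and data are made up for illustration):

```python
import numpy as np

# Pretend index: each row is the embedding of one indexed audio window,
# normalized to unit length so a dot product equals cosine similarity.
rng = np.random.default_rng(42)
index = rng.standard_normal((1000, 128))          # 1000 indexed windows
index /= np.linalg.norm(index, axis=1, keepdims=True)

# A query clip: a slightly noisy copy of window 123, embedded the same way.
query = index[123] + 0.02 * rng.standard_normal(128)
query /= np.linalg.norm(query)

scores = index @ query          # cosine similarity against every window
top5 = np.argsort(-scores)[:5]  # indices of the best-matching windows
```

A real vector database replaces the brute-force dot product with an approximate index so the search stays fast at millions of embeddings, but the ranking principle is the same.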

Show HN: Unofficial YouTube Wrapped 2023, see top creators and watching habits

Show HN: Generate a video to animate stars of any GitHub repository

I made this as a fun tiny project to experiment with Remotion [1] to generate videos. You can find the source code on GitHub [2].

[1] https://www.remotion.dev/
[2] https://github.com/scastiel/github-stars-video

Show HN: Qdorks.com – Advanced Google search query composer

Hi HN,

qdorks.com is an advanced Google search query composer I have been working on lately. It makes it easy to write complex Google search queries and save them for later use.

Main features:

* Query composer with complex grouping, logical, and exclusion operators per clause.
* Share your queries with others so they can easily copy and modify them.
* No need to register; only when you want to save queries.
* AI support for PRO users.
* More to come...

I am pretty happy with how it works and would love your feedback! Happy to answer any questions. Thank you!
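For readers unfamiliar with this kind of query, here is a made-up example of what a composer like this produces, combining grouping, OR, and exclusion with standard Google operators (the site and terms are arbitrary):

```text
("annual report" OR "financial statement") site:example.com filetype:pdf -draft
```

Parentheses group the OR-ed phrases, quotes force exact-phrase matching, site: and filetype: restrict the scope, and the leading minus excludes results mentioning "draft".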

Show HN: Taipy – Turns Data and AI algorithms into full web applications

Show HN: Play a pen-and-paper game that is almost unknown in the US and Europe

Show HN: Bi-directional sync between Postgres and SQLite

Hi HN,

Today we're launching PowerSync, a Postgres<>SQLite bi-directional sync engine that enables an offline-first app architecture. It currently supports Flutter, React Native, and web (JavaScript) using Wasm SQLite in the browser, with more client SDKs on the way.

Conrad and I (Ralf) have been working on our sync engine since 2009, originally as part of a full-stack app platform. That version of the system is still used in production worldwide, and we've learnt a lot from its use cases and scaling. About a year ago we started spinning off PowerSync as a standalone product designed to be stack-agnostic.

If you'd like to see a simple demo, check out the pebbles widget on the landing page: https://www.powersync.com/

We wrote about our architecture and design philosophy here: https://www.powersync.com/blog/introducing-powersync-v1-0-postgres-sqlite-sync-layer

This covers, among other things, how we designed the system for scalable dynamic partial replication, why we use a server-authority architecture based on an event log instead of CRDTs for merging changes, and our approach to consistency.

Our docs can be found here: https://docs.powersync.com/

We would love to hear your feedback! - Ralf, Conrad, Kobie, Phillip and team
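The server-authority, event-log idea mentioned above can be sketched with Python's built-in sqlite3: the client holds a local SQLite replica and replays an ordered log of changes from the server. The table and event shapes below are invented for illustration and are not PowerSync's actual protocol:

```python
import sqlite3

# Local client replica (in-memory for the sketch).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE todos (id TEXT PRIMARY KEY, title TEXT, done INTEGER)")

# An ordered oplog from the server: because entries have a total order
# and later entries win, replaying the log is deterministic no matter
# when the client connects -- no CRDT-style merge is needed.
oplog = [
    ("PUT", "t1", {"title": "write docs", "done": 0}),
    ("PUT", "t2", {"title": "ship v1", "done": 0}),
    ("PUT", "t1", {"title": "write docs", "done": 1}),  # later update wins
    ("DELETE", "t2", None),
]

for op, row_id, data in oplog:
    if op == "PUT":
        # Upsert: insert the row, or overwrite it if it already exists.
        db.execute(
            "INSERT INTO todos (id, title, done) VALUES (?, ?, ?) "
            "ON CONFLICT(id) DO UPDATE SET title=excluded.title, done=excluded.done",
            (row_id, data["title"], data["done"]),
        )
    else:
        db.execute("DELETE FROM todos WHERE id = ?", (row_id,))

rows = db.execute("SELECT id, title, done FROM todos").fetchall()
```

Replaying the same log on any client yields the same final state, which is the consistency property a server-authority design buys you.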
