The best Show HN stories from Hacker News from the past day


Latest posts:

Show HN: A self-published art book about Google's first 25 years

This took me 3 years to finish. (It is 100% self-published, not endorsed by Google.)

So… I wrote a book. It's a different kind of book with a unique approach. It's not a novel or a technical book; it's a biography, a company's biography. My hope is that it serves two purposes: to inspire founders and to captivate interior designers.

It all started three years ago. My wife and I were discussing interior design for our living room, and she brought up the question of which beautiful coffee table books we should have. After looking at her favorites, I realized there's nothing like this for tech startups. Digital products are intangible; how cool would it be to make them tangible, as a beautiful art/decor book for your home office or living room?

This idea of combining my particular interests with my wife's decorative goals got stuck in the back of my head for a while. I searched and searched. I could only find these kinds of books about design, architecture, fashion, or travel. Nothing about tech startups. The next logical step for me was to do it myself…

And oh… what a "mistake" it was. This whole project was done during nights and weekends: the first half while working on my startup, and the last half while helping create Webflow Labs. You have no idea how many times I "quit" the project. In the end, perseverance and discipline were key to making it happen.

During my founder journey, I collected many notes about successful (and unsuccessful) tech startups to learn from them. I decided to write about Google's fascinating story: a true generational company and one of the most valuable companies in the world.

It's with great pleasure and satisfaction that I'm introducing you to "Google - First 25 Years", a book celebrating the minds behind the tech giant's incredible journey from Stanford to global dominance.

I have absolutely no idea if this book's format and concept make sense, or if it has an audience willing to invest ($169). Here are some of its unique features:

▪ Silk hardcover
▪ Limited edition
▪ Hand-painted edges
▪ More than 30 handmade illustrations

I'm producing only 1,000 copies. If the book concept seems interesting, leave a comment; I'd love to know your thoughts!

Either way, this project was a super fun ride where being outside my comfort zone was the best way to learn storytelling, graphic design, and product manufacturing.

If nothing else, I hope this inspires you to go back to your side projects and finish them!

Show HN: BiTE – Cross-platform executable viewer and reverse engineering tool

Hey everyone!

I'm excited to share a project I've been working on throughout my university studies. It's called BiTE (https://github.com/WINSDK/bite), and it's primarily an executable viewer with reverse engineering capabilities.

BiTE supports Windows, macOS, and Linux, along with their associated executable formats. It can also parse and display debug information in the DWARF and PDB formats, which I hope will be useful even just for comparing codegen.

I've put a lot of effort into this, and it's the first time I'm releasing something like this publicly. Any feedback, bug reports, or feature suggestions would be greatly appreciated!
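The first step a viewer like this has to take is recognizing which executable format it has been handed. As a rough illustration (this is not BiTE's actual code, just a sketch of the standard magic-byte checks), the three platforms' formats can be told apart from the first few bytes of the file:

```python
import struct

# Mach-O magic numbers: 32/64-bit, their byte-swapped variants, and fat binaries.
MACHO_MAGICS = {0xFEEDFACE, 0xFEEDFACF, 0xCEFAEDFE, 0xCFFAEDFE, 0xCAFEBABE}

def detect_format(header: bytes) -> str:
    """Guess an executable's container format from its leading magic bytes."""
    if header.startswith(b"\x7fELF"):          # Linux
        return "ELF"
    if header.startswith(b"MZ"):               # Windows (DOS stub precedes PE headers)
        return "PE"
    if len(header) >= 4 and struct.unpack("<I", header[:4])[0] in MACHO_MAGICS:
        return "Mach-O"                        # macOS
    return "unknown"
```

A real parser would of course go on to read the section tables and, as the post mentions, the DWARF or PDB debug data alongside them.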

Show HN: Open Source TailwindCSS UI Components

Free Tailwind HTML UI components, built for creating landing pages and websites. Easyfrontend UI components are free and open source. Copy and paste the components to update an existing site or build a new one from them.

Show HN: Speeding up LLM inference 2x (possibly)

Here's a project I've been working on for the last few months.

It's a new (I think) algorithm that lets you adjust smoothly, and in real time, how many calculations you'd like to do during inference of an LLM.

It seems it's possible to do just 20-25% of the weight multiplications instead of all of them and still get good inference results.

I implemented it to run on the M1/M2/M3 GPU. The matrix-multiplication approximation itself can be pushed to run 2x as fast before the quality of the output collapses.

The inference speed is only a bit faster than llama.cpp's, because the rest of the implementation could be better, but with further development I think it can become a new method to speed up inference, in addition to quantization.

You could call it ad-hoc model distillation :)

You can change the speed/accuracy of a model at will, in real time.

Oh, and as a side effect, the data format also lets you choose how much of the model to load into memory. You can decide to skip, say, 10-20-40% of the least important weights.

It's implemented for Mistral, and it was also tested briefly on Mixtral and Llama. It's FP16-only for now, but Q8 is in the works.

The algorithm is described here, and the implementation is open source:

https://kolinko.github.io/effort/

I know these are bold claims, but I hope they survive the scrutiny :)
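To make the core idea concrete, here is a hedged NumPy sketch of what "doing only 20-25% of the weight multiplications" can mean: keep only the products whose magnitude |W[i,j]·x[j]| is largest and skip the rest. This is my reading of the post, not the actual GPU implementation (which uses a precomputed weight layout; see the linked write-up for the real algorithm):

```python
import numpy as np

def approx_matvec(W: np.ndarray, x: np.ndarray, effort: float = 0.25) -> np.ndarray:
    """Approximate y = W @ x using only the top `effort` fraction of products.

    `effort` = 1.0 reproduces the exact result; lower values skip the
    smallest-magnitude products W[i, j] * x[j], trading accuracy for work.
    """
    scores = np.abs(W) * np.abs(x)               # magnitude of every product
    k = max(1, int(effort * W.size))             # number of products to keep
    thresh = np.partition(scores.ravel(), -k)[-k]
    mask = scores >= thresh                      # True where the product is kept
    return (W * mask) @ x
```

In the actual system the skipped multiplications are never performed at all, which is where the speedup comes from; the mask here only simulates which products would survive at a given effort level.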

Show HN: Search HN for interesting comment sections

I built this tool to help me find interesting discussions on Hacker News. I love reading HN discussions almost more than the articles themselves. However, I found that full-text search, although highly performant, is not always good at surfacing interesting discussions on a given topic, especially if you don't know exactly what to search for.

I built this by scraping the most recent ~6 million posts (about 2 years of history) and putting the resulting posts and their vector embeddings into Postgres.

Let me know what could be improved, and whether you'd like a more detailed writeup of how this was built :)
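The retrieval step behind a setup like this is nearest-neighbor search over the stored embeddings. In Postgres that is typically done with the pgvector extension, but the math reduces to cosine similarity, which a small sketch can show (hypothetical function names; the post does not describe its exact schema or query):

```python
import numpy as np

def top_k(query_vec: np.ndarray, embeddings: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k stored embeddings most similar to the query.

    Uses cosine similarity: normalize both sides, then rank by dot product.
    In production this runs inside Postgres rather than in application code.
    """
    q = query_vec / np.linalg.norm(query_vec)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ q                      # cosine similarity per stored row
    return np.argsort(-sims)[:k]      # indices, best match first
```

The nice property of embedding search over full-text search, as the post notes, is that a vague query ("debates about monorepos") can match discussions that never use those exact words.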

Show HN: YouTube Shorts Redirector

I am neurodivergent and noticed the YouTube Shorts format was hacking my brain into engaging longer than I wanted. I wrote this quick extension to win my time back. If you have suggestions for improvement, I'm all ears. Thank you :)

Show HN: a Rust based CLI tool 'imgcatr' for displaying images

cat for images, written in Rust.

Show HN: Render audio waveforms to HTML canvas using WebGPU

Hey HN. I built this quick-and-dirty component to render audio waveforms using WebGPU. I just published it to npm.

It's the first time I've used WebGPU, and it's been a while since I've written shaders. Feedback is very welcome!

GitHub: https://github.com/mrkev/webgpu-waveform
Examples: https://aykev.dev/webgpu-waveform
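Waveform rendering usually starts by reducing the raw samples to one (min, max) pair per pixel column, which the renderer then draws as a vertical line. As a rough sketch of that reduction step (the actual project does this on the GPU in a WebGPU shader; this is just the idea in plain code):

```python
import numpy as np

def waveform_peaks(samples, width: int):
    """Reduce audio samples to `width` per-pixel (min, max) pairs.

    Each pixel column covers a contiguous chunk of samples; drawing a
    vertical line from min to max per column yields the familiar waveform.
    """
    chunks = np.array_split(np.asarray(samples, dtype=np.float32), width)
    return [(float(c.min()), float(c.max())) for c in chunks]
```

Doing this per-frame on the GPU, as this component does, avoids re-uploading reduced data when the zoom level changes.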

Show HN: Term Typer – Learn a language by typing

Hey HN! I'm from Brazil and I created Term Typer to help my little brother learn other languages while practicing his keyboard typing skills. We've found it super helpful and fun. Feel free to try it out and let me know your thoughts and feedback. Thanks a lot!
