The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: QWANJI
Hey HN! I've built this bit of cuteware (software that doesn't solve a problem, made just for fun) inspired by the patterns made by swipe typing on mobile devices. To me it seemed like the closest thing to an English version of Kanji, and it would be cool to consistently recreate those patterns.

The implementation is vanilla JS to keep things simple (and out of a bit of framework fatigue).

I'm keen to see how people use qwanji. Any and all feedback welcome!
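The core idea is simple to sketch: a word becomes the polyline a swipe would trace across the key centers of a QWERTY layout. Below is a minimal TypeScript illustration of that idea; the layout coordinates and SVG output are my own assumptions, not qwanji's actual code.

```ts
// Rows of a QWERTY layout; each key gets an (x, y) center, with each
// row offset slightly like a real keyboard. Purely illustrative.
const ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"];
const KEY = new Map<string, { x: number; y: number }>();
ROWS.forEach((row, r) => {
  [...row].forEach((ch, c) => {
    KEY.set(ch, { x: c * 40 + r * 20 + 20, y: r * 40 + 20 });
  });
});

// A word becomes the polyline that a swipe would trace across the keys.
function wordToPath(word: string): { x: number; y: number }[] {
  return [...word.toLowerCase()].flatMap((ch) => {
    const p = KEY.get(ch);
    return p ? [p] : []; // skip characters not on the letter rows
  });
}

// Render the swipe path as a standalone SVG string.
function wordToSvg(word: string): string {
  const pts = wordToPath(word).map((p) => `${p.x},${p.y}`).join(" ");
  return `<svg xmlns="http://www.w3.org/2000/svg" width="440" height="140">` +
         `<polyline points="${pts}" fill="none" stroke="black"/></svg>`;
}

console.log(wordToSvg("hello"));
```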
Show HN: CompressX, my FFmpeg wrapper for macOS
Hey HN, just wanted to share my success story with CompressX, my FFmpeg wrapper for macOS.

For those who may not be familiar, FFmpeg is a powerful tool for converting, streaming, and recording audio and video content.

I started CompressX as a weekend project to serve my 9-to-5 job, primarily to compress demo videos for uploading to GitLab and sharing with my colleagues. It took me two weeks to make the first working version. I shared the demo on Twitter and the reaction was extraordinary. People loved it; they said that I was bringing Pied Piper to life.

Four months later, I hit the $9,000 mark in revenue. I never expected to make a dime from this project, let alone eight thousand dollars. It's been a surreal experience, but it's also been incredibly rewarding.

I put a lot of time and effort into developing this tool, and it's amazing to see it paying off. It's been a great journey so far and I'm excited to see where it takes me next.
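For context, the heavy lifting in this kind of wrapper is a single FFmpeg invocation. Here's a hedged TypeScript (Node) sketch of the kind of command such a wrapper might run on macOS; CompressX's actual settings aren't public, and `hevc_videotoolbox` is simply FFmpeg's hardware-accelerated HEVC encoder on Apple hardware.

```ts
import { spawn } from "node:child_process";

// Run one hardware-accelerated compression pass with FFmpeg.
// Illustrative only: CompressX's real pipeline and settings are not public.
function compress(input: string, output: string, kbps = 2000): Promise<void> {
  return new Promise((resolve, reject) => {
    const ff = spawn("ffmpeg", [
      "-i", input,
      "-c:v", "hevc_videotoolbox", // Apple VideoToolbox HEVC encoder
      "-b:v", `${kbps}k`,          // target video bitrate
      "-tag:v", "hvc1",            // makes the HEVC file QuickTime-friendly
      "-c:a", "aac",               // re-encode audio as AAC
      "-y", output,
    ], { stdio: "inherit" });
    ff.on("close", (code) =>
      code === 0 ? resolve() : reject(new Error(`ffmpeg exited with ${code}`)));
  });
}

compress("demo.mov", "demo-small.mp4").catch(console.error);
```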
Show HN: ADS-B visualizer
I've created a web app for querying and visualizing ADS-B datasets: https://adsb.exposed/

Source code: https://github.com/ClickHouse/adsb.exposed/

The results significantly exceeded my expectations: the pictures are insanely beautiful, and the data is a treasure trove.

It proves several things that were previously uncertain:
- it is feasible to generate tiles by aggregating at the pixel level (instead of hexagons or a rectangular grid);
- JPG/PNG tiles are not required: raw bitmap data can be transferred with zstd compression (see the sketch after this list);
- it is possible to do it in real time.
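The "raw bitmap instead of PNG tiles" point is easy to picture on the client side: the server sends per-pixel aggregates and the browser blits them straight into a canvas. A minimal TypeScript sketch follows; the endpoint, tile format, and color mapping are hypothetical, and the real implementation lives in the repo above.

```ts
// Fetch one 256x256 tile of per-pixel point counts and draw it.
// Hypothetical endpoint and wire format: the server would aggregate
// points per pixel (e.g. a GROUP BY over pixel coordinates in
// ClickHouse) and compress the buffer with zstd, undone here via
// Content-Encoding or a WASM decoder (omitted for brevity).
const SIZE = 256;

async function drawTile(url: string, ctx: CanvasRenderingContext2D) {
  const counts = new Uint32Array(await (await fetch(url)).arrayBuffer());
  const img = ctx.createImageData(SIZE, SIZE);
  for (let i = 0; i < SIZE * SIZE; i++) {
    // Log-scale the count into a brightness value.
    const v = Math.min(255, Math.round(48 * Math.log1p(counts[i])));
    img.data[i * 4 + 0] = v;   // R
    img.data[i * 4 + 1] = v;   // G
    img.data[i * 4 + 2] = 255; // B: blue-tinted heatmap
    img.data[i * 4 + 3] = 255; // fully opaque
  }
  ctx.putImageData(img, 0, 0);
}

const canvas = document.querySelector("canvas")!;
drawTile("/tile?z=4&x=8&y=5", canvas.getContext("2d")!);
```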
Show HN: Sonauto – A more controllable AI music creator
Hey HN,

My cofounder and I trained an AI music generation model, and after a month of testing we're launching 1.0 today. Ours is interesting because it's a latent diffusion model instead of a language model, which makes it more controllable: https://sonauto.ai/

Others do music generation by training a Vector Quantized Variational Autoencoder like Descript Audio Codec (https://github.com/descriptinc/descript-audio-codec) to turn music into tokens, then training an LLM on those tokens. Instead, we ripped the tokenization part off and replaced it with a normal variational autoencoder bottleneck (along with some other important changes to enable insane compression ratios). This gave us a nice, normally distributed latent space on which to train a diffusion transformer (like Sora). Our diffusion model is also particularly interesting because it is the first audio diffusion model to generate coherent lyrics!

We like diffusion models for music generation because they have some interesting properties that make controlling them easier (so you can make *your own* music instead of just taking what the machine gives you). For example, we have a rhythm control mode where you can upload your own percussion line or set a BPM. Very soon you'll also be able to generate proper variations of an uploaded or previously generated song (e.g., you could even sing into Voice Memos for a minute and upload that!). Musicians of HN, try uploading your songs and using Rhythm Control, and let us know what you think! Our goal is to enable more of you, not replace you.

For example, we turned this drum line (https://sonauto.ai/songs/uoTKycBghUBv7wA2YfNz) into this full song (https://sonauto.ai/songs/KSK7WM1PJuz1euhq6lS7; skip to 1:05 if impatient) or this other song I like better (https://sonauto.ai/songs/qkn3KYv0ICT9kjWTmins; we accidentally compressed it with AAC instead of Opus, though, which hurt quality).

We also like diffusion models because, while they're expensive to train, they're cheap to serve. We built our own efficient inference infrastructure instead of using the expensive inference-as-a-service startups that are all the rage. That's why we're making generations on our site free and unlimited for as long as possible.

We'd love to answer your questions. Let us know what you think of our first model! https://sonauto.ai/
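To make the "diffusion instead of next-token prediction" distinction concrete, here is a toy TypeScript sketch of DDPM-style ancestral sampling over a latent vector. It is purely illustrative: the linear beta schedule is generic and `predictNoise` is a stub standing in for the learned diffusion transformer; nothing here reflects Sonauto's actual model.

```ts
// Toy DDPM-style ancestral sampling over a VAE latent vector.
function gaussian(): number {
  // Box-Muller transform for a standard normal sample.
  const u = 1 - Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * Math.random());
}

// Stub for the learned noise predictor eps_theta(x_t, t).
function predictNoise(x: Float32Array, t: number): Float32Array {
  return new Float32Array(x.length); // a real model would run here
}

function sample(dim: number, steps = 50): Float32Array {
  // Generic linear beta schedule and its cumulative product alphaBar.
  const betas = Array.from({ length: steps },
    (_, i) => 1e-4 + ((0.02 - 1e-4) * i) / (steps - 1));
  const alphaBar: number[] = [];
  betas.reduce((acc, b) => { alphaBar.push(acc * (1 - b)); return acc * (1 - b); }, 1);

  // Start from pure Gaussian noise in the latent space.
  let x = Float32Array.from({ length: dim }, gaussian);
  for (let t = steps - 1; t >= 0; t--) {
    const eps = predictNoise(x, t);
    const next = new Float32Array(dim);
    for (let i = 0; i < dim; i++) {
      // DDPM posterior mean, plus fresh noise on all but the final step.
      next[i] = (x[i] - (betas[t] / Math.sqrt(1 - alphaBar[t])) * eps[i])
                / Math.sqrt(1 - betas[t]);
      if (t > 0) next[i] += Math.sqrt(betas[t]) * gaussian();
    }
    x = next;
  }
  return x; // a VAE decoder would turn this latent back into audio
}
```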
Show HN: The fastest way to run Mixtral 8x7B on Apple Silicon Macs
I'd originally launched my app, Private LLM[1][2], on HN around 10 months ago, with a single RedPajama Chat 3B model. The app has come a long way since then. About a month ago, I added support for a 4-bit OmniQuant quantized Mixtral 8x7B Instruct model, and it seems to outperform Q4 models at inference speed and Q8 models at text generation quality, while consuming only about 24GB of RAM[3] at 8k context length. The trick is: a) to use a better quantization algorithm and b) to use unquantized embeddings and MoE gates (the overhead is quite small).

Other notable features include many more downloadable models, support for App Intents (Siri, Apple Shortcuts), on-device grammar correction and summarization with macOS services, and an iOS version (universal app), also with many smaller downloadable models and support for App Intents. There's a small community of users building and sharing LLM-based shortcuts on the app's Discord.

Last week, I also shipped support for the bilingual Yi-34B Chat model, which consumes ~18GB of RAM. iOS users and users with low-memory Macs can download the related Yi-6B Chat model.

Unlike most popular offline LLM apps out there, this app uses mlc-llm for inference and not llama.cpp. Also, all models in the app are quantized with OmniQuant[4] quantization and not RTN quantization.

[1]: https://privatellm.app/

[2]: https://apps.apple.com/us/app/private-llm-local-ai-chatbot/id6448106860

[3]: https://www.youtube.com/watch?v=4AE8yXIWSAA

[4]: https://arxiv.org/abs/2308.13137
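To illustrate the RTN-vs-OmniQuant point: round-to-nearest (RTN) quantization just rounds each weight to the nearest quantized level with a per-tensor scale, while OmniQuant learns its clipping and scaling parameters. The TypeScript sketch below shows the simple RTN baseline together with the "leave embeddings and MoE gates unquantized" policy from the post; it is my own illustration, not the app's code, and the tensor-name matching is a hypothetical convention.

```ts
// Illustrative round-to-nearest (RTN) 4-bit quantization with a
// skip-list for sensitive tensors. OmniQuant, which the app actually
// uses, learns its quantization parameters instead of rounding naively.
type Tensor = { name: string; data: Float32Array };

function quantizeRTN4(t: Tensor): { q: Int8Array; scale: number } {
  // Symmetric 4-bit range: integers in [-8, 7].
  let maxAbs = 0;
  for (const v of t.data) maxAbs = Math.max(maxAbs, Math.abs(v));
  const scale = maxAbs / 7 || 1;
  const q = new Int8Array(t.data.length);
  for (let i = 0; i < t.data.length; i++) {
    q[i] = Math.max(-8, Math.min(7, Math.round(t.data[i] / scale)));
  }
  return { q, scale };
}

function quantizeModel(tensors: Tensor[]) {
  return tensors.map((t) => {
    // Keep embeddings and MoE router gates in full precision: they are
    // small relative to the expert weights, so the memory overhead is
    // minor, but they are disproportionately sensitive to quantization.
    if (t.name.includes("embed") || t.name.includes("gate")) {
      return { name: t.name, fp32: t.data };
    }
    return { name: t.name, ...quantizeRTN4(t) };
  });
}
```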
Show HN: Chat2DB – Revolutionizing Database Management with Conversational UI
Hello HN community!

I'm excited to introduce you to Chat2DB, a tool we've been developing that's aimed at revolutionizing the way we interact with databases. Our goal was to make database management as intuitive and straightforward as chatting with a friend. With Chat2DB, you can execute complex queries, manage your data, and even receive insights through a simple conversational interface.

Why did we create Chat2DB? We noticed that many people, even those with technical backgrounds, find database management daunting due to its complexity and the steep learning curve of query languages. We believed there had to be a more accessible way to interact with databases, and that's how Chat2DB was born.

Features include:

1. A natural language processing engine that understands your queries and executes them without the need for SQL knowledge (see the sketch after this list).
2. Support for multiple database types, making it a versatile tool for developers, data analysts, and businesses alike.
3. Real-time insights and suggestions to optimize your data and queries.
4. AI-driven intelligent reports: Chat2DB enables fast, accurate decision-making by analyzing requirements, mining insights, and presenting them in intuitive reports.
5. AI-driven intelligent SQL development: Chat2DB changes the way we interact with data by using AI to give every user the ability to handle SQL with ease.
6. AI-driven data exploration: users across roles can use natural language to interact with data on tailored dialogue pages, bypassing the need to understand complex data source details.

We're currently in beta and eagerly looking for feedback on how we can improve. Whether it's a feature request, bug report, or any other feedback, we're all ears. We truly believe Chat2DB can make database management a less intimidating and more efficient process for everyone.

Check it out here: https://chat2db.ai/

Looking forward to hearing your thoughts and suggestions!
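For readers curious how the natural-language-to-SQL piece of such tools typically works: the tool sends the schema plus the user's question to an LLM and asks for a single SQL statement back. The TypeScript sketch below shows that generic pattern against an OpenAI-compatible chat API; the endpoint, model name, and prompt are assumptions, not Chat2DB's actual internals.

```ts
// Generic NL-to-SQL pattern: give the model the schema, ask for SQL.
// Endpoint, model, and prompt are illustrative assumptions only.
async function questionToSql(schema: string, question: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        { role: "system",
          content: "You translate questions into a single SQL statement. " +
                   "Reply with SQL only.\nSchema:\n" + schema },
        { role: "user", content: question },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim();
}

// Usage: the returned SQL would then be run against the user's database.
questionToSql(
  "CREATE TABLE orders (id INT, amount DECIMAL, created_at DATE);",
  "What was the total order amount last month?"
).then(console.log);
```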
Show HN: I created an automatic subtitling app to boost short videos
I made https://videofa.st

This is a web app that adds automatic subtitles/captions to your short videos.

I created this app because I was wasting so much time adding subtitles manually to my own short social videos, and the quality was poor; now it's easy and fast!

I also needed to pay my bills, so I thought: why not help others get access to this time-saving app? That's how I decided to turn it into a SaaS and share it with you today!

The link is videofa.st
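A common pipeline for this kind of tool, sketched below in TypeScript (Node): transcribe the audio to an SRT file with a speech-to-text API, then burn the captions in with FFmpeg's subtitles filter. This is a generic sketch, not necessarily how videofa.st actually works; the Whisper API is just one of several transcription options.

```ts
import { spawn } from "node:child_process";
import { readFile, writeFile } from "node:fs/promises";

// 1) Transcribe to SRT with OpenAI's Whisper API (one of several
//    speech-to-text options; videofa.st's actual backend is unknown).
async function transcribeToSrt(video: string, srt: string): Promise<void> {
  const form = new FormData();
  form.append("file", new Blob([await readFile(video)]), video);
  form.append("model", "whisper-1");
  form.append("response_format", "srt");
  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
    body: form,
  });
  await writeFile(srt, await res.text());
}

// 2) Burn the subtitles into the video with FFmpeg's subtitles filter.
function burnSubtitles(video: string, srt: string, out: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const ff = spawn("ffmpeg",
      ["-i", video, "-vf", `subtitles=${srt}`, "-y", out],
      { stdio: "inherit" });
    ff.on("close", (c) => (c === 0 ? resolve() : reject(new Error(`ffmpeg: ${c}`))));
  });
}

transcribeToSrt("short.mp4", "short.srt")
  .then(() => burnSubtitles("short.mp4", "short.srt", "short-captioned.mp4"))
  .catch(console.error);
```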
Show HN: Neco – Coroutine Library for C
Show HN: DualShock calibration in the browser using WebHID
This website uses undocumented HID commands of both the DS4 and the DualSense to re-center and recalibrate DualShock analog sticks.
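For anyone wondering what "in the browser using WebHID" means mechanically: the page requests the controller via navigator.hid and talks to it with feature reports. The TypeScript sketch below shows only the standard WebHID plumbing; the actual calibration report IDs and payloads are undocumented (which is the whole point of the project), so placeholders stand in for them.

```ts
// Standard WebHID plumbing for talking to a DualShock/DualSense.
// The calibration commands themselves are undocumented; REPORT_ID and
// the payload below are placeholders, not the real values.
const SONY_VENDOR_ID = 0x054c;

async function connect(): Promise<HIDDevice> {
  const [device] = await navigator.hid.requestDevice({
    filters: [{ vendorId: SONY_VENDOR_ID }],
  });
  if (!device) throw new Error("No controller selected");
  await device.open();
  return device;
}

async function sendCalibrationCommand(device: HIDDevice): Promise<void> {
  const REPORT_ID = 0x00;             // placeholder, not the real ID
  const payload = new Uint8Array(63); // placeholder payload
  await device.sendFeatureReport(REPORT_ID, payload);
  // Responses come back via device.receiveFeatureReport(REPORT_ID).
}

// WebHID requires a user gesture, e.g. a button click, to request access.
document.querySelector("button")?.addEventListener("click", async () => {
  const device = await connect();
  await sendCalibrationCommand(device);
});
```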
Show HN: I made a discrete logic network card