The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: An algorithmic audio landscape
This is a web audio experiment I've been wanting to do for a long time: basically an ambient music composition, but all the sound elements are laid out in space, and that musical space can be explored freely.

It's definitely inspired by the in-world music that sometimes appears in games. I took that concept, kept the music aspect, and dropped the entire "game" aspect.

I also turned it into a more "traditional" non-interactive album, but since I started with code, why not program the whole thing? I had a blast making the entire album from code; the complete source for the album is here: https://github.com/pac-dev/AmbientGardenAlbum
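For the curious, the spatial idea maps directly onto the browser's Web Audio API: each sound source gets a PannerNode at a fixed position, and "exploring" the space means moving the AudioListener around. A minimal TypeScript sketch of that general technique (my own illustration, not the project's actual code):

```ts
// Each ambient source is an audio node placed at a 3D position;
// moving the listener changes loudness and apparent direction.
const ctx = new AudioContext();

// Place one source at (10, 0, -5) in the scene.
const panner = new PannerNode(ctx, {
  panningModel: "HRTF",     // binaural rendering, good for headphones
  distanceModel: "inverse", // volume falls off with distance
  positionX: 10,
  positionY: 0,
  positionZ: -5,
});

const osc = new OscillatorNode(ctx, { frequency: 220 });
osc.connect(panner).connect(ctx.destination);
osc.start();

// Walk the listener through the space (older browsers expose
// listener.setPosition() instead of these AudioParams).
function moveListener(x: number, z: number): void {
  ctx.listener.positionX.value = x;
  ctx.listener.positionZ.value = z;
}
```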
Show HN: Mojo Language Syntax Highlighting for Vim
Show HN: Payme, a library and CLI to generate QR codes for SEPA payments
I built this library and tool several years ago.

An event I co-organized needed pre-payment for orders, and to make this easy without going down the path of an online payment service, I sent automatic mails with payment QR codes included.

The CLI also (by default) generates QR codes in the terminal, which I use often when an invoice needs to be paid: copy all necessary fields as CLI flags, generate the QR code in the terminal, scan it with the phone, double-check everything, and go! Maybe paying shouldn't be this easy...
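For context, SEPA payment QR codes generally follow the EPC069-12 "SEPA Credit Transfer" layout: a small plain-text payload, one field per line, that any QR encoder can render and most European banking apps can scan. A hedged TypeScript sketch of that payload (my own illustration of the EPC format; payme's actual API and flags may differ):

```ts
// Build the EPC069-12 payload for a SEPA Credit Transfer QR code.
interface SepaPayment {
  name: string;        // beneficiary name, max 70 chars
  iban: string;
  amount: number;      // in euros
  remittance?: string; // unstructured payment reference, max 140 chars
  bic?: string;        // optional within the EEA since payload version 002
}

function epcPayload(p: SepaPayment): string {
  return [
    "BCD",                       // service tag
    "002",                       // payload version
    "1",                         // character set: UTF-8
    "SCT",                       // identification: SEPA Credit Transfer
    p.bic ?? "",
    p.name,
    p.iban.replace(/\s+/g, ""),  // IBAN without spaces
    `EUR${p.amount.toFixed(2)}`, // e.g. "EUR25.00"
    "",                          // purpose code (unused here)
    "",                          // structured reference (unused here)
    p.remittance ?? "",
  ].join("\n");
}

// epcPayload({ name: "ACME Org", iban: "DE02 1203 0000 0000 2020 51", amount: 25 })
// returns a string you can pass to any QR code generator.
```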
Show HN: ReverseETL – The open-source alternative to Hightouch and Census
Show HN: Continue.dev releases local, open-source tab-autocomplete
Hey HN! After working on this for the last couple of months, we (Continue.dev) are finally releasing tab-autocomplete in beta. It is private, local, and secure, using Ollama, llama.cpp, or any other LLM provider. I'm even more excited about it being completely open-source, because it means I can share how we built it!

I've been sharing details on Twitter for the last month (summarized here: https://twitter.com/NateSesti/status/1763264142163808279), covering the following:

- A specific type of debouncing, and a strategy for re-using streamed requests
- How we use the Language Server Protocol to surgically include important context
- Using tree-sitter to calculate the "AST Path", an abstraction with many uses
- Truncation: how we decide to stop early, complete multiple lines, and generally avoid mistaken artifacts
- Jaccard similarity as a method of searching over and ranking recently edited ranges/files (a minimal sketch of this appears after the post)

I will continue to share as we improve, so feel free to follow along. I'll also compile all of this into a better-formatted blog post in the future!

Inspired by the "Copilot Internals" post from a year ago (https://news.ycombinator.com/item?id=34032872), I'll be sharing live updates, clear explanations, and non-obfuscated code.

---

...and one more thing: we've exposed a handful of options that let you customize your experience. You can alter:

- the model
- stop words
- all numerical parameters/thresholds used for retrieval and prompt construction
- the prompt template
- whether to complete multiple lines, or only one at a time
- and a bit more (https://github.com/continuedev/continue/blob/0445b489408f0e74a0fd96cab6b00ae350fec372/core/index.d.ts#L580-L593)

This lets each individual make their own preferred trade-off between speed, accuracy, and other factors.
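The Jaccard similarity mentioned above is easy to illustrate. A hedged TypeScript sketch of the general technique (my own illustration, not Continue's actual implementation):

```ts
// Split text into a set of word-like tokens.
function tokenize(text: string): Set<string> {
  return new Set(text.split(/\W+/).filter(Boolean));
}

// Jaccard similarity |A ∩ B| / |A ∪ B|: 1 for identical token sets,
// 0 for disjoint ones.
function jaccard(a: Set<string>, b: Set<string>): number {
  let intersection = 0;
  for (const t of a) if (b.has(t)) intersection++;
  const union = a.size + b.size - intersection;
  return union === 0 ? 0 : intersection / union;
}

// Rank candidate snippets (e.g. recently edited ranges) by similarity
// to the code around the cursor.
function rank(cursorContext: string, snippets: string[]): string[] {
  const ctx = tokenize(cursorContext);
  return [...snippets].sort(
    (x, y) => jaccard(ctx, tokenize(y)) - jaccard(ctx, tokenize(x))
  );
}
```

Set-overlap scoring like this is cheap enough to rerun on every keystroke, which is presumably what makes it attractive as a retrieval heuristic for autocomplete.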
Show HN: Kalosm, an embeddable framework for pre-trained models in Rust
Hi everyone, I'm happy to announce the release of Kalosm! Kalosm (https://floneum.com/kalosm/) is a framework for embedded AI in Rust.

## What is Kalosm?

Kalosm provides a simple interface for pre-trained language, audio, and image models. To make these models easy to use in your application, Kalosm includes a set of integrations with other systems, like your database or documents.

```rust
use kalosm::language::*;

#[tokio::main]
async fn main() {
    // Load a local chat-tuned Llama model
    let mut model = Llama::new_chat();

    // Build a chat session with a custom system prompt
    let mut chat = Chat::builder(&mut model)
        .with_system_prompt("The assistant will act like a pirate")
        .build();

    // Read a line of user input, stream the reply to stdout, repeat
    loop {
        chat.add_message(prompt_input("\n> ").unwrap())
            .await
            .unwrap()
            .to_std_out()
            .await
            .unwrap();
    }
}
```

## What can you build with Kalosm?

Kalosm is designed to be a flexible and powerful tool for building AI into your applications. It is a great fit for any application that uses AI models to process sensitive information where local processing is important.

Here are a few examples of applications that are built with Kalosm:

- Floneum (https://floneum.com/): a local, open-source workflow editor and automation tool that uses Kalosm to provide natural language processing and other AI features.
- Kalosm Chat (https://github.com/floneum/kalosm-chat): a simple chat application that uses Kalosm to run quantized language models.

## Kalosm 0.2

The 0.2 release includes several new features and some performance improvements:

- Tasks and agents
- Task evaluation
- Prompt auto-tuning
- Regex validation
- SurrealDB integration
- RAG improvements
- Performance improvements
- New models

If you have any questions, feel free to ask them here, in the Discord (https://discord.gg/dQdmhuB8q5), or on GitHub (https://github.com/floneum/floneum/tree/master/interfaces/kalosm).

To get started with Kalosm, you can follow the quick start guide: https://floneum.com/kalosm/docs/
Show HN: Replay your typing in a few lines of JavaScript
I recently needed to make text appear on a website, and I wanted it to have that real human feeling that computers don't have. I only found the TypeIt library, but it wasn't free, and I didn't want to add a dependency for such a simple case.

Human replay lets you copy-paste a few lines of JS to make text appear exactly the way you typed it.
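The record-and-replay idea is small enough to sketch. A hedged TypeScript illustration of one way to do it (my own sketch, not Human replay's actual code): store the delay before each keystroke while typing, then replay those delays with timers.

```ts
// One recorded keystroke: the character typed and the time since the
// previous one, in milliseconds.
interface Keystroke {
  char: string;
  delay: number;
}

// Record keystrokes from an input field. This simple version assumes
// append-only typing (no backspace or cursor movement).
function record(input: HTMLInputElement, out: Keystroke[]): void {
  let last = performance.now();
  input.addEventListener("input", () => {
    const now = performance.now();
    out.push({ char: input.value.slice(-1), delay: now - last });
    last = now;
  });
}

// Replay the recorded rhythm into any element.
function replay(strokes: Keystroke[], target: HTMLElement): void {
  let elapsed = 0;
  for (const { char, delay } of strokes) {
    elapsed += delay;
    setTimeout(() => {
      target.textContent = (target.textContent ?? "") + char;
    }, elapsed);
  }
}
```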
Show HN: OfflineLLM – a Vision Pro app running TinyLlama on device
Hey, I built this in a day while at the Founders Inc Apple Vision Pro residency. Try it out and let me know what you think.
Show HN: Predictive text using only 13kb of JavaScript, no LLM
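There's no description here, but a classic way to fit predictive text into a few kilobytes without an LLM is an n-gram table. A hedged TypeScript sketch of that general technique (my own guess at the approach; the project may well do something different):

```ts
// Bigram table: for each word, count how often each other word follows it.
type Bigrams = Map<string, Map<string, number>>;

function train(corpus: string): Bigrams {
  const words = corpus.toLowerCase().split(/\W+/).filter(Boolean);
  const table: Bigrams = new Map();
  for (let i = 0; i < words.length - 1; i++) {
    const followers = table.get(words[i]) ?? new Map<string, number>();
    followers.set(words[i + 1], (followers.get(words[i + 1]) ?? 0) + 1);
    table.set(words[i], followers);
  }
  return table;
}

// Suggest the k most frequent followers of the last typed word.
function suggest(table: Bigrams, lastWord: string, k = 3): string[] {
  const followers = table.get(lastWord.toLowerCase());
  if (!followers) return [];
  return [...followers.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, k)
    .map(([word]) => word);
}

const table = train("the cat sat on the mat and the cat slept");
console.log(suggest(table, "the")); // ["cat", "mat"]
```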
Show HN: Struct – A Feed-Centric Chat Platform
Hi HN! I'm Jason, a product designer at Struct Chat.

At Struct, we're frustrated by the clutter, distractions, and inefficiencies plaguing existing chat platforms like Slack and Discord. We've built a radical new chat platform that leverages threads, feeds, and AI to help alleviate these problems and give you back more time in your day.

Struct uses a thread-first approach to keep conversations on-topic. It applies AI-generated titles and summaries to help you decide what's worth your attention. Threads are then organized in a real-time feed, keeping all your conversations up-to-date and at the ready, and eliminating the need for channel hopping.

Comprehensive search tools make finding things a breeze, and Structbot, our AI assistant, can answer questions based on what it has learned from prior threads. It can even respond proactively when it notices repeat requests, providing quick answers so you don't have to. Structbot is fully GPT-4 enabled, so you can riff with ChatGPT and your peers (generate code, ask questions, all the good stuff) without ever switching apps.

Struct is available on Linux, Windows, and Mac, and even works as a Slack interface. Give us a try and let us know what you think.