The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Feedsmith — Fast parser & generator for RSS, Atom, OPML feed namespaces
Hi HN! While working on a project that involves frequently parsing a lot of feeds, I needed a fast JavaScript-based parser to extract specific fields from feed namespaces. Existing Node packages were either too slow or merged all feed formats, losing namespace information. So I decided to write it myself and created this NPM package with a simple API.

Feedsmith supports all feed formats and many popular namespaces, including Podcast, Media, iTunes, Dublin Core, and more. It can also parse and generate OPML files.

I am currently adding support for more namespaces and feed generation for RSS, Atom, and RDF. The library grew into something bigger than I initially anticipated, so I also started creating a dedicated documentation website to describe all the features.
Show HN: Whippy Term - GUI terminal for embedded development (Linux and Windows)
Show HN: Reverse Pac-Man
Keep one eye on the first ghost and the other on the second
Show HN: eInk optimized manga with Kindle Comic Converter (+Kobo/ReMarkable)
Kindle Comic Converter optimizes comics and manga for eink readers like Kindle, Kobo, ReMarkable, and more. Pages display in fullscreen without margins, with proper fixed-layout support. Its main feature is a set of optional image-processing steps tuned for eink screens, which have different requirements than normal LCD screens. It also optimizes file size by downscaling to your specific device's screen resolution, which can improve performance on underpowered ereaders. Supported input formats include folders/CBZ/CBR/PDF of JPG/PNG files and more. Supported output formats include MOBI/AZW3, EPUB, KEPUB, and CBZ.

Hey everyone! I'm the current maintainer of KCC since 2023, thanks for using it! I've been reading manga on Kindle ever since I got the big 9.7" Kindle DX from 2010 using mangle, and upgraded to the even bigger 10.2" Kindle Scribe 2022 using KCC.

The biggest contributions I've made to KCC are:

- added modern macOS support and removed the Homebrew requirement
- ported the code to run natively on Apple silicon (M1 and later) for a 2x speed boost (qt5 -> qt6)
- free open-source Windows code signing with SignPath
- fixed Kindle Scribe support
- tons of other features, bug fixes, and developer-friendly changes
- created a legacy Windows 7 build with 300+ downloads…

The biggest community PRs were:

- huge 2x speed boosts from various CPU/IO optimizations
- Kobo/ReMarkable support

Enjoy using KCC and let me know if you have any questions!
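The grayscale-and-downscale step described above is easy to picture in code. Here is a minimal, hypothetical sketch (not KCC's actual code), assuming Pillow and the Kindle Scribe's 1860x2480 panel as the target resolution:

    from PIL import Image, ImageOps

    TARGET_RES = (1860, 2480)  # assumed device resolution (width, height)

    def prepare_page(src: str, dst: str, screen=TARGET_RES) -> None:
        img = Image.open(src).convert("L")    # eink panels are grayscale
        img = ImageOps.autocontrast(img)      # stretch levels for washed-out scans
        img.thumbnail(screen, Image.LANCZOS)  # downscale in place, keeping aspect ratio
        img.save(dst, optimize=True)          # smaller files help underpowered ereaders

    prepare_page("page_001.png", "page_001_eink.png")

thumbnail() never upscales and preserves aspect ratio, which is roughly the behaviour you want when fitting arbitrary page sizes onto a fixed panel.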
Show HN: Klavis AI – Open-source MCP integration for AI applications
Hi HN, we are excited to show you Klavis AI. It is an open source project, and we provide hosted versions with API access as well. (Website: https://www.klavis.ai/, GitHub repo: https://github.com/Klavis-AI/klavis)

We're addressing a few key problems with using MCPs. First, many available MCP servers lack native or user-based authentication, creating security vulnerabilities and adding complexity during development. Second, many MCP servers are personal projects, not designed for the reliability needed in production. Finally, connecting to these servers usually requires writing custom MCP client code for the MCP protocol itself, which is a barrier, especially if you already have function-calling systems in place.

Klavis AI aims to address these issues. To simplify access, we offer an API to launch production-ready, hosted MCP servers quickly. The API also provides built-in OAuth and multi-tenant auth support for MCP servers.

We also want to remove the need for developers to write MCP client code. You can use our API to interact with any remote MCP server directly from your existing backend infrastructure. For faster prototyping or direct user interaction, we also provide open-source client interfaces for Web, Slack, and Discord.

The MCP server and client code is open source because we want to contribute to the MCP community.

For a quick start with the hosted version, log in to our website and generate an API key, then start calling our APIs directly. You can find more details in our docs: https://docs.klavis.ai

For a quick start with the open-source version, go to our GitHub repository and check out the detailed README for each MCP server and client.

A little note about myself: my background includes working on function calling for Google Gemini. During that time, I saw firsthand the challenges teams face when trying to connect AI agents to external tools. I want to bring my insights and energy to accelerating MCP adoption.

This is an early release, and we'd appreciate feedback from the community. What are your worst pain points related to MCPs, either as a developer or as a general user? What other MCP servers or features would be most valuable to you?

We'll be around in the comments. Thanks for reading!
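To make the "launch a hosted MCP server via API" idea concrete, here is a hypothetical sketch of what such a call could look like. The endpoint, payload, and response fields are assumptions for illustration, not Klavis's documented API (see https://docs.klavis.ai for the real thing):

    import requests

    API_KEY = "your-api-key"  # generated after logging in to the website

    resp = requests.post(
        "https://api.klavis.ai/mcp-server/instance/create",  # assumed endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"serverName": "GitHub", "userId": "user-123"},  # assumed payload
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. a server URL your MCP client or backend can call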
Show HN: Plexe – ML Models from a Prompt
Hey HN! We’re Vaibhav and Marcello. We’re building Plexe (https://github.com/plexe-ai/plexe), an open-source agent that turns natural language task descriptions into trained ML models. Here’s a video walkthrough: https://www.youtube.com/watch?v=bUwCSglhcXY

There are all kinds of uses for ML models that never get realized because the process of making them is messy and convoluted. You can spend months finding the data, cleaning it, experimenting with models, and deploying to production, only to find out that your project has been binned for taking so long. There are many tools for “automating” ML, but it still takes teams of ML experts to actually productionize something of value. And we can’t keep throwing LLMs at every ML problem. Why use a generic 10B-parameter language model if a logistic regression trained on your data could do the job better?

Our light-bulb moment was that we could use LLMs to generate task-specific ML models that would be trained on one’s own data. Thanks to the emergent reasoning ability of LLMs, it is now possible to create an agentic system that might automate most of the ML lifecycle.

A couple of months ago, we started developing a Python library that would let you define ML models on structured data using a description of the expected behaviour. Our initial implementation arranged potential solutions into a graph, using LLMs to write plans, implement them as code, and run the resulting training scripts. Using simple search algorithms, the system traversed the solution space to identify and package the best model.

However, we ran into several limitations: the algorithm proved brittle under edge cases, and we kept having to patch every minor issue in the training process. We decided to rethink the approach, throw everything out, and rebuild the tool with an agentic approach prioritising generality and flexibility. What started as a single ML-engineering agent turned into an agentic ML "team", with all experiments tracked and logged using MLflow.

Our current implementation uses the smolagents library to define an agent hierarchy. We mapped the functionality of our previous implementation to a set of specialized agents, such as an “ML scientist” that proposes solution plans, and so on. Each agent has specialized tools, instructions, and prompt templates. To facilitate cross-agent communication, we implemented a shared memory that lets objects (datasets, code snippets, etc.) be passed between agents indirectly by referencing keys in a registry; a toy sketch of this pattern follows below. You can find a detailed write-up on how it works here: https://github.com/plexe-ai/plexe/blob/main/docs/architecture/multi-agent-system.md

Plexe’s early release is focused on predictive problems over structured data, and can be used to build models such as forecasting player injury risk in high-intensity sports, product recommendations for an e-commerce marketplace, or predicting technical indicators for algorithmic trading. Here are some examples to get you started: https://github.com/plexe-ai/plexe/tree/main/examples

To get it working on your data, you can dump in any CSV, Parquet, etc. and Plexe uses what it needs from your dataset to figure out which features it should use. The open-source tool only supports adding files right now, but our platform version will support a Postgres integration that pulls all available data based on a SQL query and dumps it into a Parquet file for the agent to build models from.

Next up, we’ll be tackling more of the ML project lifecycle: we’re currently working on a “feature engineering agent” that focuses on the complex data transformations often required to get data ready for model training. If you're interested, check Plexe out and let us know your thoughts!
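The shared-memory registry mentioned above is a simple pattern to sketch: agents exchange objects by key rather than passing large values through LLM messages. Class and method names here are illustrative, not Plexe's actual internals:

    from typing import Any

    class ObjectRegistry:
        def __init__(self) -> None:
            self._store: dict[str, Any] = {}

        def register(self, key: str, obj: Any) -> str:
            self._store[key] = obj
            return key  # agents only ever see and pass around this key

        def get(self, key: str) -> Any:
            return self._store[key]

    registry = ObjectRegistry()
    key = registry.register("dataset:sales", [{"price": 9.99}, {"price": 4.50}])  # producer agent
    rows = registry.get(key)                                                      # consumer agent

The payoff is that only short string keys travel through agent prompts, keeping context small while datasets and code snippets stay in process memory.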
Show HN: Sheet Music in Smart Glasses
Hi everyone, my name is Kevin Lin, and this is a Show HN for my sheet music smart glasses project. My video was on the front page on Friday: https://news.ycombinator.com/item?id=43876243, but dang said we should do a Show HN as well, so here goes!

I’ve wanted to put sheet music into smart glasses for a long time, but the perfect opportunity to execute came in mid-February, when Mentra (YC W25) tweeted about a smart glasses hackathon they were hosting: winners would get to take home a pair. I went, had a blast making a bunch of music-related apps with my teammate, and we won, so I got to take them home, refine the project, and make a pretty cool video about it (https://www.youtube.com/watch?v=j36u2i7PKKE).

The glasses are Even Realities G1s. They look normal, but they have two microphones and a screen in each lens, and can even be made with a prescription. Every person I’ve met who tried them on was surprised at how good the display is, and the video recordings of them unfortunately don’t do them justice.

The software runs on AugmentOS, Mentra’s smart glasses operating system that works on various third-party smart glasses, including the G1s. All I had to do to make an app was write and run a TypeScript file using the AugmentOS SDK. This gives you the voice transcription and raw audio as input, and text or bitmaps as output to the screens; everything else is completely abstracted away. Your glasses communicate with an AugmentOS app, and that app communicates with your TypeScript service.

The only hard part was creating a Python script to turn sheet music (MusicXML format) into small, optimized bitmaps to display on the screens. To start, the existing landscape of music-related Python libraries is poorly documented, and I ran into multiple never-before-seen error messages. Downscaling to the small size of the glasses screens also meant that stems and staff lines were disappearing, so I used morphological dilation to emphasize those without making the notes unintelligible. The final pipeline was: MusicXML -> music21 library to render chunks of bars to PNG -> dilate with OpenCV -> downscale -> convert to bitmap with Pillow -> optimize bitmaps with ImageMagick (a rough sketch of the dilate/downscale steps follows below). This is far from the best code I’ve ever written, but the LLMs’ attempts at this whole task were abysmal, and my years of Python experience really got to shine here. The code is on GitHub: https://github.com/kevinlinxc/AugmentedChords

Putting it together, my TypeScript service serves these bitmaps locally when requested. I put together a UI where I can navigate menus and sheet music with voice commands (e.g. show catalog, next, select, start, exit, pause), and then I connected foot pedals to my laptop. Because of bitmap-sending latency (~3s right now, but future glasses will do better), using foot pedals to turn the bars while playing wasn’t viable, so I instead had one pedal toggle autoscrolling, while the other two sped up or temporarily paused the scrolling.

After lots of adjustments, I was able to play a full song using just the glasses! It took many takes, and there is definitely lots of room for improvement. For example:

- Bitmap sending is pretty slow, which is why using the foot pedals to turn bars wasn’t viable.
- The resolution is pretty small; I would love to fit more bars in at once so I can flip less frequently.
- Since foot pedals aren’t portable, it would be cool to have a mode where the audio dictates when the sheet music changes. I tried implementing that with FFT, but it was often wrong and more effort is needed. Head-tilt controls would be cool too, because full manual control is a hard requirement for practicing.

All of these pain points are being targeted by Mentra and other companies competing in the space, so I’m super excited to see the next generation! Also, feel free to ask me anything!
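For the curious, the dilate-then-downscale trick from the pipeline looks roughly like this. The kernel size and target resolution below are guesses, not the project's actual values; the point is that dilating before downscaling keeps thin stems and staff lines from vanishing:

    import cv2
    import numpy as np
    from PIL import Image

    def bar_png_to_bitmap(png_path: str, out_path: str, size=(576, 136)) -> None:
        img = cv2.imread(png_path, cv2.IMREAD_GRAYSCALE)
        inverted = cv2.bitwise_not(img)                      # make ink white so dilation thickens it
        thick = cv2.dilate(inverted, np.ones((2, 2), np.uint8))
        small = cv2.resize(cv2.bitwise_not(thick),           # back to black ink, then shrink
                           size, interpolation=cv2.INTER_AREA)
        Image.fromarray(small).convert("1").save(out_path)   # 1-bit bitmap for the glasses

    bar_png_to_bitmap("bars_01.png", "bars_01.bmp")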
Show HN: Clippy – 90s UI for local LLMs
Show HN: Journelly for iOS: like tweeting but for your eyes only (in plain text)
On iOS, I've flip-flopped back and forth between a bunch of note-taking and journaling apps. None would stick.

My initial attempts at building such an app faded just the same, until I realized I wanted the same level of low-friction posting and browsing offered by social media apps, but for my quick notes. Not social, just easy posting, search, and a familiar feed.

This is how Journelly came to be. I like to describe it as: tweeting, but for your eyes only (fully offline and in plain text).

If you’re an Org markup user, you’ll be delighted to know it’s powered by unicorns under the hood.

If you’re a Markdown fan, please get in touch! I’m recording interest for Journelly + Markdown at xenodium.com. The more requests I get, the sooner I’ll get Markdown support out the door.

Hope you like the app!
Show HN: Bracket – selfhosted tournament system
Over the last two years, I developed a tournament system called Bracket.
Most (if not all) tournament systems available online are paid and cost a lot of money (subscriptions typically start around 50 euros per month and go up to roughly 500 euros per month), which is not feasible for many small sports clubs or individuals. So I developed my own system and put it publicly on GitHub. AFAIK it is the only open-source tournament system available with a significant feature set.

I made this tournament system for my badminton club and have hosted six paid tournaments successfully with it.

It supports flexible setups, where a tournament can have multiple stages and each stage can have multiple "items" (round robin, single elimination, or Swiss); the hypothetical sketch below illustrates the structure.

The backend is written in async Python with FastAPI, and the frontend in Next.js with the great Mantine library.

I would appreciate some feedback!

GitHub: https://github.com/evroon/bracket
Demo: https://www.bracketapp.nl/demo
Docs: https://docs.bracketapp.nl
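A hypothetical sketch of that stage/item structure (names invented for illustration, not Bracket's actual models):

    from dataclasses import dataclass, field
    from enum import Enum

    class StageItemType(str, Enum):
        ROUND_ROBIN = "round_robin"
        SINGLE_ELIMINATION = "single_elimination"
        SWISS = "swiss"

    @dataclass
    class StageItem:
        type: StageItemType
        team_count: int

    @dataclass
    class Stage:
        name: str
        items: list[StageItem] = field(default_factory=list)

    @dataclass
    class Tournament:
        name: str
        stages: list[Stage] = field(default_factory=list)

    # e.g. a group phase feeding into a knockout phase
    t = Tournament("Club Open", [
        Stage("Groups", [StageItem(StageItemType.ROUND_ROBIN, team_count=8)]),
        Stage("Finals", [StageItem(StageItemType.SINGLE_ELIMINATION, team_count=4)]),
    ])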
Show HN: TextQuery – Query CSV, JSON, XLSX Files with SQL
Show HN: VectorVFS, your filesystem as a vector database