The best Show HN stories from Hacker News from the past day


Latest posts:

Show HN: Git Auto Commit (GAC) – LLM-powered Git commit command line tool

GAC is a tool I built to help users spend less time summing up what was done and more time building. It uses LLMs to generate contextual git commit messages from your code changes, and it can be a drop-in replacement for `git commit -m "..."`.

Example:

    feat(auth): add OAuth2 integration with GitHub and Google

    - Implement OAuth2 authentication flow
    - Add provider configuration for GitHub and Google
    - Create callback handler for token exchange
    - Update login UI with social auth buttons

Don't like it? Reroll with `r`, or type `r "focus on xyz"` and it rerolls the commit with your feedback.

You can try it out with uvx (no install):

    uvx gac init  # config wizard
    uvx gac

Note: `gac init` creates a .gac.env file in your home directory with your chosen provider, model, and API key.

Tech details:

14 providers - Supports local (Ollama and LM Studio) and cloud (OpenAI, Anthropic, Gemini, OpenRouter, Groq, Cerebras, Chutes, Fireworks, StreamLake, Synthetic, Together AI, and Z.ai, including their extremely cheap coding plans!).

Three verbosity modes - Standard with bullets (default), one-liners (`-o`), or verbose (`-v`) with detailed Motivation/Architecture/Impact sections.

Secret detection - Scans for API keys, tokens, and credentials before committing. It has caught my API keys on a new project when I hadn't yet gitignored .env.

Flags - Automate common workflows:

    gac -h "bug fix"  # pass hints to guide intent
    gac -yo           # auto-accept the commit message in one-liner mode
    gac -ayp          # stage all files, auto-accept the commit message, and push (yolo mode)

Would love to hear your feedback! Give it a try and let me know what you think! <3

GitHub: https://github.com/cellwebb/gac

Show HN: Helium Browser for Android with extensions support, based on Vanadium

Been working on an experimental Chromium-based browser that brings two major features to your phone/tablet:

1. Desktop-style extensions: natively install any extension (like uBO) from the Chrome Web Store; just toggle "desktop site" in the menu first.

2. Privacy/security hardening: applies the full patch sets from Vanadium (with Helium's currently WIP).

This means you get both browsers' excellent privacy features, like Vanadium's WebRTC IP policy option that protects your real IP by default, and security improvements such as JIT being disabled by default, all while being a reasonably efficient FOSS app that can be installed on any (modern) Android.

It's still in beta, and as I note in the README, it's not a replacement for the full OS-level security model you'd get from running the GrapheneOS + Vanadium combo. However, the goal was to combine the privacy of Vanadium with the power of desktop extensions and Helium features, and make it accessible to a wider audience. (Passkeys from Bitwarden Mobile should also work straight away once it's merged into the list of FIDO2-privileged browsers.)

Build scripts are in the repo if you want to compile it yourself. You can find pre-built releases there too.

Would love any feedback/support!

Show HN: Erdos – open-source, AI data science IDE

Hey HN! We’re Jorge and Will from Lotas (https://www.lotas.ai/), and we’ve built Erdos, a secure AI-powered data science IDE that’s fully open source (https://www.lotas.ai/erdos).

A few months ago, we shared Rao, an AI coding assistant for RStudio (https://news.ycombinator.com/item?id=44638510). We built Rao to bring the Cursor-like experience to RStudio users. Now we want to take the next step and deliver a tool for the entire data science community that handles Python, R, SQL, and Julia workflows.

Erdos is a fork of VS Code designed for data science. It includes:

- An AI that can search, read, and write across all file types for Python, R, SQL, and Julia. For Jupyter notebooks, we’ve also optimized a jupytext system to allow the AI to make faster edits.
- Built-in Python, R, and Julia consoles accessible to both the user and the AI
- A plot pane that tracks and organizes plots by file and time
- A database pane for connecting to and manipulating SQL or FTP data sources
- An environment pane for viewing variables, packages, and environments
- A help pane for Python, R, and Julia documentation
- Remote development via SSH or containers
- An AI assistant available through a single-click sign-in to our zero-data-retention backend, bring your own key, or a local model
- An open-source AGPLv3 license

We built Erdos because data scientists are often second-class citizens in modern IDEs. Tools like VS Code, Cursor, and Claude Code are made for software developers, not for people working across Jupyter notebooks, scripts, and SQL. We wanted an IDE that feels native to data scientists, while offering the same AI productivity boosts.

You can try Erdos at https://www.lotas.ai/erdos, check out the source code on GitHub (https://github.com/lotas-ai/erdos), and let us know what features would make it more useful for your work. We’d love your feedback below!
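The jupytext system mentioned above works by representing notebooks as plain text rather than raw .ipynb JSON, which is easier for an AI to diff and edit. A minimal sketch of that idea in Python — the `cells_to_percent` helper and the cell structure are hypothetical illustrations, not Erdos's actual implementation:

```python
def cells_to_percent(cells):
    """Render a list of (cell_type, source) pairs as py:percent text.

    py:percent is the jupytext format where each cell starts with a
    '# %%' marker; markdown cells are commented out line by line.
    """
    parts = []
    for cell_type, source in cells:
        if cell_type == "markdown":
            header = "# %% [markdown]"
            body = "\n".join("# " + line for line in source.splitlines())
        else:  # code cell
            header = "# %%"
            body = source
        parts.append(header + "\n" + body)
    return "\n\n".join(parts)

notebook = [
    ("markdown", "## Load data"),
    ("code", "import pandas as pd\ndf = pd.read_csv('data.csv')"),
]
print(cells_to_percent(notebook))
```

Editing this flat text and converting it back is much cheaper for a model than rewriting nested notebook JSON, which is presumably why it speeds up AI edits.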

Show HN: Write Go code in JavaScript files

I built a Vite plugin that lets you write Go code directly in .js files using a "use golang" directive. It compiles to WebAssembly automatically.

Show HN: JSON Query

I'm working on a tool that will probably involve querying JSON documents, and I'm asking myself how to expose that functionality to my users.

I like the power of `jq` and the fact that LLMs are proficient at it, but I find it outright impossible to come up with the right `jq` incantations myself. Has anyone here been in a similar situation? Which tool / language did you end up exposing to your users?
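To make the trade-off concrete, here is an illustrative comparison (my example, not from the post): the same query written as a jq filter and as plain Python. The jq version is terse but hard to produce from memory; the Python version is verbose but obvious.

```python
import json

# Sample document an end user might want to query.
doc = json.loads("""
{"users": [
  {"name": "ada",   "active": true,  "logins": 12},
  {"name": "bob",   "active": false, "logins": 3},
  {"name": "carol", "active": true,  "logins": 7}
]}
""")

# The jq equivalent of the query below would be:
#   .users[] | select(.active) | .name

# The same query in plain Python, the kind of interface one might
# expose instead of (or alongside) a jq-style mini-language:
active_names = [u["name"] for u in doc["users"] if u["active"]]
print(active_names)  # → ['ada', 'carol']
```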

Show HN: Status of my favorite bike share stations

Show HN: Random Makers – Show HN and Product Hunt, but Faster and Not Corporate

Show HN: FlashRecord – 2MB Python-native CLI screen recorder

Hi HN — I built FlashRecord, a tiny (≈2MB) Python-native CLI tool for screenshots and GIF recordings, aimed at developers who want automation-friendly, scriptable screen capture without a GUI.

### What it is

- CLI-first and importable (`import flashrecord`), so you can plug it into scripts, tests, CI pipelines, or docs generation.
- Outputs GIFs (and screenshots) with a pure-Pillow/NumPy implementation of a CWAM-inspired compression pipeline (multi-scale saliency, temporal subsampling, adaptive scaling).
- Cross-platform (Windows/macOS/Linux), zero-config defaults, and production-ready with tests/docs.

### Why it might be interesting

- Tiny install and no heavyweight GUI/tooling to manage.
- Designed for automation: generate evidence GIFs in CI, attach demo GIFs to PRs, or create tutorial assets from scripts.
- Compression focuses on preserving visually important regions while reducing file size dramatically in typical UI demos.

### Quick try (from source)

    git clone https://github.com/Flamehaven/FlashRecord
    cd FlashRecord
    pip install -e .
    flashrecord @sc       # instant screenshot
    flashrecord @sv 5 10  # 5s GIF at 10 FPS (interactive by default)

---

Repo & license: https://github.com/Flamehaven/FlashRecord — MIT licensed.

I’m happy to answer technical questions, share performance numbers, discuss cross-platform quirks, or walk through the compression pipeline. Feedback, issues, and PRs welcome.
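The temporal-subsampling step of the compression pipeline can be illustrated with a small sketch. This is my own illustration of the general technique, not FlashRecord's actual code (its real pipeline also weights frames by visual saliency): given frames captured at a high rate, keep only those needed to hit a target frame interval.

```python
def subsample_timestamps(timestamps, interval):
    """Return indices of frames to keep, one per `interval` of time.

    `timestamps` are capture times (any unit, ascending); a frame is
    kept when it is the first one at or after the next scheduled tick.
    """
    kept = []
    next_tick = timestamps[0] if timestamps else 0
    for i, t in enumerate(timestamps):
        if t >= next_tick:
            kept.append(i)
            next_tick = t + interval
    return kept

# Frames captured every 33 ms (~30 FPS), downsampled to one per 100 ms:
print(subsample_timestamps([0, 33, 66, 99, 132], 100))  # → [0, 4]
```

Dropping frames this way shrinks the GIF roughly in proportion to the ratio of capture rate to target rate, before any spatial compression is applied.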

Show HN: Chonky – a neural text semantic chunking goes multilingual

TLDR: I’m expanding the family of text-splitting Chonky models with a new multilingual model.

You can learn more about this neural approach in a previous post: https://news.ycombinator.com/item?id=43652968

Since the release of the first DistilBERT-based model, I’ve released two more models based on ModernBERT. All of these models were pre-trained and fine-tuned primarily on English texts.

But recently mmBERT (https://huggingface.co/blog/mmbert) was released. This model was pre-trained on a massive dataset that covers 1,833 languages. So I had the idea of fine-tuning a new multilingual Chonky model.

I’ve expanded the training dataset (which previously contained the bookcorpus and minipile datasets) with the Project Gutenberg dataset, which contains books in some widespread languages.

To make the model more robust to real-world data, I removed the punctuation from the last word of every training chunk with probability 0.15 (no ablation was done for this technique, though).

The hard part is evaluation. Real-world data is typically OCR'ed markdown, transcripts of calls, meeting notes, etc., not clean book paragraphs. I didn’t find labeled datasets of that kind, so I used what I had: the already-mentioned bookcorpus and Project Gutenberg validation splits, Paul Graham essays, and concatenated 20_newsgroups.

I also tried to fine-tune the bigger mmBERT model (mmbert-base), but unfortunately it didn’t go well: metrics are weirdly lower in comparison with the small model.

Please give it a try. I’d appreciate feedback.

The new multilingual model: https://huggingface.co/mirth/chonky_mmbert_small_multilingual_1

All the Chonky models: https://huggingface.co/mirth

Chonky wrapper library: https://github.com/mirth/chonky
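As a point of contrast for what a learned splitter improves on, here is a naive rule-based chunker (my own baseline sketch, unrelated to Chonky's implementation). It works on clean book paragraphs but fails exactly where real-world data does: when paragraph boundaries are not marked by blank lines.

```python
def naive_chunk(text):
    """Split text into chunks on blank lines.

    A trivial baseline: fine for clean book paragraphs, but blind to
    semantic boundaries in OCR'ed markdown or call transcripts, which
    is the gap a neural splitter is trained to close.
    """
    chunks = [c.strip() for c in text.split("\n\n")]
    return [c for c in chunks if c]

clean = "First paragraph.\n\nSecond paragraph."
messy = "First topic discussed here. Second topic on the same line."
print(naive_chunk(clean))  # two chunks
print(naive_chunk(messy))  # one chunk: the boundary is invisible to the rule
```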

Show HN: Create-LLM – Train your own LLM in 60 seconds

<a href="https://medium.com/@theaniketgiri/three-months-ago-i-wanted-to-train-my-own-llm-b796aae9aa94" rel="nofollow">https://medium.com/@theaniketgiri/three-months-ago-i-wanted-...</a>

Show HN: MyraOS – My 32-bit operating system in C and ASM (Hack Club project)

Hi HN, I’m Dvir, a young developer. Last year, I got rejected after a job interview because I lacked some CPU knowledge. After that, I decided to deepen my understanding of the low-level world and learn how things work under the hood. I decided to try to create an OS in C and ASM as a way to broaden my knowledge in this area.

This took me on the most interesting ride, where I learned about OS theory and low-level programming on a whole new level. I’ve spent hours upon hours, blood and tears, reading different OS theory blogs, learning low-level concepts, debugging, testing, and working on this project.

I started by reading university books and online blogs, while also watching videos. Some sources that helped me out were the OSDev Wiki (https://wiki.osdev.org/Expanded_Main_Page), OSTEP (https://pages.cs.wisc.edu/~remzi/OSTEP), open-source repositories like MellOS and LemonOS (more advanced), DoomGeneric, and some friends who had built an OS before.

This part was the longest, but also the easiest. I felt like I understood the theory, but still could not connect it to actual code. Sitting down and starting to code was difficult, but I knew that was the next step I needed to take! I began by working on the bootloader, which is optional since you can use a pre-made one (I switched to GRUB later), but implementing it was mainly for learning purposes and to warm up on ASM. These were my steps after that:

    1) The VGA driver, which gave me the ability to display text.
    2) Interrupts - IDT, ISR, IRQ - which signal to the CPU that a certain event occurred and needs handling (such as faults, connected hardware device actions, etc.).
    3) Keyboard driver, which let me display the same text I type on my keyboard.
    4) PMM (physical memory management).
    5) Paging and virtual memory management.
    6) RTC driver - clock addition (which was, in my opinion, optional).
    7) PIT driver - ticks every certain amount of time.
    8) FS (file system) and physical HDD drivers - for the HDD I chose PATA (an HDD communication protocol) for simplicity (SATA is a newer but harder option as well). For the FS I chose EXT2 (the Second Extended Filesystem), a foundational Linux FS introduced in 1993. This FS structure is not the simplest, but it is very popular among hobby OSes, very well supported, easy to set up and upgrade to newer EXT versions, and has a lot of materials online compared to other options. This was probably the longest and largest feature I worked on.
    9) Syscall support.
    10) Libc implementation.
    11) Processing and scheduling for multiprocessing.
    12) Here I also made a shell to test it all.

At this point, I had a working shell, but later decided to go further and add a GUI! I was working on the FS (stage 8) when I heard about Hack Club’s Summer of Making (SoM). This was my first time participating in Hack Club, and I want to express my gratitude and share my enjoyment of taking part in it.

At first I just wanted to declare the OS finished after completing the FS and a few other drivers, but because of SoM my perspective changed completely. Because of the competition, I started to think that I needed to ship a complete OS, with processing, a GUI, and the bare minimum ability to run Doom. I wanted to show the community in SoM how everything works.

Then I worked on it for another two months after finishing the shell, just because of SoM, bringing my project to almost seven months of work. In this time I added full GUI support, with dirty rectangles and double buffering, I wrote a GUI mouse driver, and I even made a full Doom port! These are things I would never even have thought about without participating in SoM.

This is my SoM project: https://summer.hackclub.com/projects/5191.

Every project has challenges, especially such a low-level project. I had to do a lot of debugging while working on this, and it is no easy task. I highly recommend using GDB, which helped me debug so many of my problems, especially memory ones.

The first major challenge I encountered was while coding processes: I realized that a lot of my paging code was completely wrong and poorly tested, and it had to be reworked. During this time I was already in the competition, and it was difficult keeping up with devlogs and new features while fixing old problems in code I had written a few months earlier.

Some more major problems occurred when trying to run Doom, and unlike the last problem, this was a disaster. I had random page faults and memory problems; one run could work while the next one wouldn’t, and the worst part is that it happened only in Doom, and not in processes I created myself. These issues took a lot of time to figure out. I began to question the Doom code itself, and even thought about giving up on the whole project.

After a lot of time spent debugging, I fixed the issues. It was a combination of scheduling issues, libc issues, and QEMU not having enough memory (I had wrongly assumed 128MB was enough for the whole OS).

Finally, I worked through all the difficulties and shipped the project! In the end, the experience of working on this project was amazing. I learned a lot, grew, and improved as a developer, and I thank SoM for helping to increase my motivation and making the project memorable and unique like I never imagined it would be.

The repo is at https://github.com/dvir-biton/MyraOS. I’d love to discuss any aspect of this with you all in the comments!
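For a sense of what the PIT driver in step 7 involves: the 8253/8254 PIT runs off a fixed ~1.193182 MHz input clock, and programming it comes down to computing a 16-bit frequency divisor and writing it to the chip. A minimal sketch of the divisor math (in Python for illustration; the actual port I/O is shown only as comments, since it runs in kernel mode):

```python
# The PIT's input clock is fixed at 1193182 Hz; the tick rate is that
# frequency divided by a 16-bit divisor written to channel 0.
PIT_BASE_HZ = 1193182

def pit_divisor(target_hz):
    """Divisor to program the PIT for roughly target_hz ticks per second."""
    return min(PIT_BASE_HZ // target_hz, 65535)  # divisor is only 16 bits

# 100 Hz is a common scheduler tick rate in hobby kernels.
print(pit_divisor(100))  # → 11931

# In the kernel (C), the divisor would be sent to the PIT roughly like:
#   outb(0x43, 0x36);               /* channel 0, lobyte/hibyte, mode 3 */
#   outb(0x40, div & 0xFF);         /* low byte  */
#   outb(0x40, (div >> 8) & 0xFF);  /* high byte */
```

Each resulting tick fires IRQ0, which is exactly the periodic interrupt the scheduler in step 11 uses for preemption.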
