The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Ki Editor – Multicursor syntactical editor
Hi everyone, I have been developing this editor, Ki, for over a year, and have used it extensively for all kinds of development (including on Ki itself) for at least 3 months.

I think it is mostly crystallized, so I'm happy to share it with you today.

Its main strength is first-class multi-cursor and structural (syntax) editing, which is a rare combination in the realm of editors (TUI or GUI alike).

Hope you'll enjoy it!
Show HN: I mapped HN's favorite books with GPT-4o
Hey HN! I love finding new books to read on here. I wanted to gather the most mentioned books and recreate the serendipity of physical browsing. I scraped 20k comments from HN threads related to reading, extracted the references and opinions using GPT-4o mini, and visualised their embeddings as a map.

- OpenAI's embeddings were processed using UMAP and HDBSCAN. A direct 2D projection from the text embeddings didn't yield visually interesting results. Instead, HDBSCAN is first applied on a high-dimensional projection; those clusters tend to correspond to different genres. The genre memberships are then embedded using a second round of UMAP (with Hellinger distance), which results in pleasingly dense structures (see the sketch after this list).
- The books' descriptions are based on extractions from the comments and GPT's general knowledge. Quality varies, and it leads to some oddly specific points, but I haven't found any yet that are straight-up wrong.
- There are multiple books with the same title; currently, only the most popular of those makes it onto the map.
- It's surprisingly hard to get high-quality book cover images. I tried Google Books and a bunch of open APIs, but they all had their issues. In the end, I got the covers from Goodreads through a hacked-together process that combines their autocomplete search with GPT for data linkage. Does anyone know of a reliable source?
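To make the two-stage projection concrete, here is a minimal sketch of how such a pipeline can be wired up with the umap-learn and hdbscan packages. This is not the author's code; the parameter values below are illustrative assumptions.

    # Hypothetical sketch of the two-stage UMAP/HDBSCAN pipeline
    # described above (not the author's actual code; the parameter
    # choices are guesses).
    import numpy as np
    import umap      # pip install umap-learn
    import hdbscan   # pip install hdbscan

    def project_books(text_embeddings: np.ndarray) -> np.ndarray:
        """text_embeddings: (n_books, d) array of OpenAI embeddings."""
        # Stage 1: reduce to a moderate dimension, then cluster; the
        # clusters tend to line up with genres.
        high_dim = umap.UMAP(n_components=10,
                             metric="cosine").fit_transform(text_embeddings)
        clusterer = hdbscan.HDBSCAN(min_cluster_size=15,
                                    prediction_data=True).fit(high_dim)

        # Soft membership: each book becomes a probability vector over
        # genre clusters instead of a raw text embedding.
        memberships = hdbscan.all_points_membership_vectors(clusterer)

        # Stage 2: embed the membership vectors in 2D. Hellinger
        # distance compares probability distributions, which is what
        # yields the dense per-genre structures on the map.
        return umap.UMAP(n_components=2,
                         metric="hellinger").fit_transform(memberships)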
Show HN: Automate API Testing with Record and Replay
Hi, this is Shailendra, founder at HyperTest (hypertest.co).

We are trying to make integration testing easy for developers. A lot of other teams and tools have taken a stab at this problem, and having seen them, we believe we have refined the approach to help developers achieve this with minimum effort and pain.

How it works:
Developers set up our SDK (2 lines) in the source code of their (backend) services and configure it to record traffic from any environment. When HyperTest runs in RECORD mode, it collects an end-to-end trace of every incoming request, i.e. the request, the response, and outbound calls.

These requests (tests) can be replayed on a new build later (pre-push or in CI) to check for regressions in API responses and outbound calls. In REPLAY mode, HyperTest uses mocked responses for all dependent systems to keep tests non-flaky and results deterministic and consistent.
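The record/replay loop is easiest to see in code. The sketch below is a language-agnostic toy in Python with entirely hypothetical names; HyperTest's actual SDK is a closed-source Node.js library, so treat this as an illustration of the idea, not their implementation.

    # Toy record/replay regression tester (hypothetical names; the real
    # SDK instruments outbound calls automatically instead).
    import json

    MODE = "RECORD"        # flipped to "REPLAY" to test a new build
    traces = {}            # serialized request -> {response, outbound}
    outbound_log = {}      # outbound call key -> recorded response

    def outbound(key, real_call):
        """Wrap every dependency call (DB query, HTTP request, queue)."""
        if MODE == "RECORD":
            outbound_log[key] = real_call()   # hit the real dependency
        # In REPLAY the recorded value is served as a mock, keeping
        # results deterministic and tests non-flaky.
        return outbound_log[key]

    def record(request, handler):
        """Run a live request through the service and capture a trace."""
        global outbound_log
        outbound_log = {}
        response = handler(request)
        traces[json.dumps(request, sort_keys=True)] = {
            "response": response, "outbound": dict(outbound_log)}
        return response

    def replay(handler):
        """Re-run every recorded request on a new build; diff responses."""
        global MODE, outbound_log
        MODE = "REPLAY"
        regressions = []
        for raw, trace in traces.items():
            outbound_log = dict(trace["outbound"])   # install the mocks
            got = handler(json.loads(raw))
            if got != trace["response"]:
                regressions.append((raw, trace["response"], got))
        return regressions

A production tool also has to diff the outbound calls themselves, not just the final response, but the shape of the loop is the same.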
3-min demo: https://www.youtube.com/watch?v=x6hmDUNFGW4

What does it do:
The HyperTest SDK auto-instruments the key functions and methods across the libraries you use to make outbound calls. This lets HyperTest mock those calls in REPLAY without asking developers to make any change to their source code.

How is this better:

1. Setup is just like setting up an APM, i.e. 5 minutes adding 2 lines of the SDK.

2. Supports protocols like HTTP, GraphQL, gRPC, Kafka, and AMQP to cover more use cases. Adding more as we speak.

3. Tests can be generated from any environment and run anywhere, even locally.

4. Active de-duplication to reduce the number of requests run on REPLAY: it optimises for code coverage and filters out requests that don't cover additional lines of code.

5. Distributed tracing to help developers find the root cause faster.

6. Auto-updates mocks as dependencies change to keep test results trustworthy.

HyperTest is currently available only for Node projects. We work with teams that have 5 or more services at the moment, and 50+ teams are using it actively.

If this seems valuable, you can set up a quick intro and I'll explain how to get started here: https://calendly.com/shailendra-hypertest/30min

Would love feedback!
Show HN: SFTP Bridge to S3
Hey HN,

After seeing all the cool solopreneurs on X, I decided to try and see for myself what this is all about. 9 months later, here I am with my first project.

I decided to scratch my own itch: creating SFTP servers from a simple S3 bucket. I was tired of all my employer's customers asking for SFTP access when all I wanted was to use S3. There I have my lifecycle rules, proper access control, Lambda triggers. All the cool stuff. But they keep asking for SFTP, and let's be honest, SFTP isn't cool.

So I created this bridge: they get SFTP, I get modern tech. I hope this tool can help you feel something when using SFTP too. Would love your feedback.

Paul-Henri
Show HN: DeutschlandAPI – A modern REST API for accessing information about Germany
Show HN: Dump entire Git repos into a single file for LLM prompts
Hey! I wanted to share a tool I've been working on. It's still very early and a work in progress, but I've found it incredibly helpful when working with Claude and OpenAI's models.

What it does:
I created a Python script that dumps your entire Git repository into a single file. This makes it much easier to use with Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) systems.

Key Features:
- Respects .gitignore patterns
- Generates a tree-like directory structure
- Includes file contents for all non-excluded files
- Customizable file type filtering

Why I find it useful for LLM/RAG:
- Full Context: It gives LLMs a complete picture of my project structure and implementation details.
- RAG-Ready: The dumped content serves as a great knowledge base for retrieval-augmented generation.
- Better Code Suggestions: LLMs seem to understand my project better and provide more accurate suggestions.
- Debugging Aid: When I ask for help with bugs, I can provide the full context easily.

How to use it:

    python dump.py /path/to/your/repo output.txt .gitignore py js tsx

Again, it's still a work in progress, but I've found it really helpful in my workflow with AI coding assistants (Claude/OpenAI). I'd love to hear your thoughts, suggestions, or if anyone else finds this useful!

https://github.com/artkulak/repo2file

P.S. If anyone wants to contribute or has ideas for improvement, I'm all ears!
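For a sense of the moving parts, here is a minimal sketch of the same idea. It is not the actual repo2file code (get that from the link above): instead of parsing .gitignore by hand it asks git for the non-ignored file list, and it prints a flat listing rather than a full tree.

    # Minimal sketch of a repo dumper (illustrative, not repo2file).
    import subprocess
    import sys
    from pathlib import Path

    def dump_repo(repo: str, out: str, exts: list[str]) -> None:
        # Let git apply .gitignore: list tracked files plus untracked
        # files that are not ignored.
        files = subprocess.run(
            ["git", "-C", repo, "ls-files", "--cached", "--others",
             "--exclude-standard"],
            capture_output=True, text=True, check=True).stdout.splitlines()
        if exts:
            files = [f for f in files if Path(f).suffix.lstrip(".") in exts]
        files.sort()

        with open(out, "w", encoding="utf-8") as fh:
            fh.write("Directory structure:\n")
            for f in files:
                fh.write(f"  {f}\n")
            # Then every included file's contents, clearly delimited so
            # an LLM can tell where one file ends and the next begins.
            for f in files:
                fh.write(f"\n===== {f} =====\n")
                fh.write((Path(repo) / f).read_text(encoding="utf-8",
                                                    errors="replace"))

    if __name__ == "__main__":
        # e.g. python dump.py /path/to/your/repo output.txt py js tsx
        dump_repo(sys.argv[1], sys.argv[2], sys.argv[3:])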
Show HN: Shelly – A pure and vanilla shell-like interface for the web
shelly is a shell-like interface for the web, made with pure, vanilla HTML, CSS, and JavaScript.
It's completely configurable and should run decently on any browser.
Show HN: Retronews – TUI for HN and Lobsters emulating classical Usenet readers
Show HN: Pulsar, micro creative coding playground
Show HN: Using SQL's Turing completeness to build Tetris