The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: I’ve made a cheaper SEO research tool

Over the last 13 months I've spent a total of $1,297 on an Ahrefs subscription. That felt like a little too much, so I've built my own keyword research tool, Telescope.

While building it, my total bill was $51 over 2 months, with 41k+ keywords found. Every page of keywords costs $0.03 to $0.05. There are two payment options: a usage-based subscription, or simply topping up your balance with whatever amount you'd like.

Telescope includes a few things:

- Keywords Explorer: find keywords based on a seed phrase and filters
- Keywords Ideas: keywords at the intersection of the keywords you provide
- Ranked Keywords: keywords a domain you specify ranks for, with their positions and the change since the last DB update
- Saved Keywords: store found keywords and plan your SEO strategy

I've put a lot of love into it and would love to get some feedback. IMPORTANT: every new account gets some free balance to start with. Appreciate it!

Show HN: I built a full-text search for your browsing history

Hi, I'm Peter, co-founder of Browspilot.

I built Browspilot, a minimalist tool, to help you recall anything you have ever seen online using whatever clues you remember, or to just scroll through your past activity.

Browspilot is perfect for pulling up pages you use a lot but don't want to keep open all the time, or for digging up something from way back. Whether you're a student researching papers or a professional juggling multiple projects, just type in whatever fragment you recall in the moment and, boom, it's there.

Looking ahead, we're excited to take search to the next level. We're working on features that will let you integrate and search across different apps, and find things based on meaning, including images, using vector search techniques.
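Browspilot's implementation isn't shown here, but the core idea of full-text search over visited pages can be sketched with a small inverted index in Python. Everything below (the `HistoryIndex` name, AND-semantics matching) is illustrative, not Browspilot's actual API:

```python
import re
from collections import defaultdict

class HistoryIndex:
    """Toy inverted index over visited pages: token -> set of URLs."""

    def __init__(self):
        self.index = defaultdict(set)

    def add_page(self, url, text):
        # Tokenize on word characters, lowercased for case-insensitive search.
        for token in re.findall(r"\w+", text.lower()):
            self.index[token].add(url)

    def search(self, query):
        # A page matches only if it contains every query token (AND semantics).
        tokens = re.findall(r"\w+", query.lower())
        if not tokens:
            return set()
        results = self.index[tokens[0]].copy()
        for token in tokens[1:]:
            results &= self.index[token]
        return results

idx = HistoryIndex()
idx.add_page("https://example.com/a", "Attention Is All You Need, transformer paper")
idx.add_page("https://example.com/b", "Recipe for sourdough bread")
print(idx.search("transformer paper"))  # {'https://example.com/a'}
```

A production version would also store timestamps and snippets so results can be browsed chronologically, but the recall-by-any-clue behavior falls out of the same structure.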

Show HN: Open Sourcing Our No-Code WebXR Editor After 5 Years of Development

Transfer Thought is a no-code platform that lets anyone build VR apps directly in their browser.

We started this company part-time, building it during train commutes to and from work. Over the last 5 years, we've experienced many ups and downs:

- Gained early customers
- Quit our day jobs
- Secured angel funding
- Survived with a short runway
- Accepted into Techstars Chicago
- Survived with a short runway (again)
- Landed our biggest client ever, a Fortune 100 company
- Despite our highest revenue yet, our burn rate caught up with us

We looked at different ways to wind down the company and ultimately felt that open sourcing the platform was the best way to do right by our customers.

Now, anyone who is interested in starting a VR company, or just building an app, can pick up where we left off. I'm excited about this space; if you need help with a VR app, or want to talk tech, please reach out.

Check out the repo: https://github.com/transferthought/transfer-thought

Contact me at keenan [at] transferthought [dot] com.

Show HN: Sonatino – small audio dev board based on ESP32-S3

Hi!

My name is Ben, and I recently updated my audio dev board "Sonatino" after receiving a lot of good feedback from the initial launch a year ago.

I began working on this after building a few projects that required audio capabilities. I was getting tired of wiring up external DACs and amplifiers to ESP32 boards, so I decided to look for a more compact, integrated solution. The available options either had larger footprints, non-standard connectors, or features that I didn't typically need for my projects. That's when I started working on a custom PCB that could be a sort of "audio Swiss Army knife". The result was Sonatino.

Some have criticized the use of a DAC and ADC that support HD sample rates and bit depths, especially when other factors limit the usefulness of anything beyond 44.1 kHz / 16-bit audio. I actually agree: HD audio in this context is mostly overkill, but most modern audio chips support it and it's entirely optional. My primary goal was for the ADC and DAC to be easy to use, with no I2C configuration required (unlike many of the available codecs), and easy to drive from an Arduino programming environment. The chips I selected (from Cirrus Logic) were a good fit; they just also happened to support higher sample rates and bit depths.

The latest revision drops the built-in antenna in favor of an external one. It also has a better speaker amp (3.2 W), an RGB LED, and improved power circuitry. It's been a fun little board to work with!

https://sonatino.com/

Check it out and let me know if you have feedback. The price is currently higher than I'd like, but that's a result of manufacturing at low volumes.

Show HN: Improve LLM Performance by Maximizing Iterative Development

I have been working in the AI space for a while now, first at a FAANG company doing ML since 2021, then with LLMs at start-ups since early 2023. LLM application development is extremely iterative, more so than most other kinds of development. To improve an LLM application's performance (accuracy, hallucinations, latency, cost), you need to try many combinations of LLM models, prompt templates (e.g., few-shot, chain-of-thought), prompt context with different RAG architectures, different agent architectures, and more. There are thousands of possible combinations, and you need a process that lets you quickly test and evaluate them.

I have had the chance to talk with many companies working on AI products. The biggest mistake I see is the lack of a standard process for rapidly iterating toward a performance goal. Using my learnings, I'm working on an open-source framework that structures your application development for rapid iteration, so you can easily test different combinations of your LLM application's components and quickly iterate toward your accuracy goals.

You can check out the project at https://github.com/palico-ai/palico-ai

You can set up a complete LLM chat app locally with a single command. Stars are always appreciated!

Would love any feedback or your thoughts on LLM development.
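The "try many combinations" loop described above can be sketched as a grid search over configurations, with each combination scored against a small eval set. Everything here (`run_app`, the option grids, the exact-match scoring) is illustrative, not Palico's actual API:

```python
from itertools import product

# Hypothetical option grids; a real setup would pull these from config files.
MODELS = ["model-a", "model-b"]
PROMPTS = ["few-shot", "chain-of-thought"]
RETRIEVERS = ["none", "vector-rag"]

EVAL_SET = [
    {"question": "2+2?", "expected": "4"},
    {"question": "Capital of France?", "expected": "Paris"},
]

def run_app(model, prompt, retriever, question):
    """Stand-in for calling the actual LLM application."""
    # A real implementation would build the prompt, retrieve context,
    # and call the model; here we fake a deterministic answer.
    answers = {"2+2?": "4", "Capital of France?": "Paris"}
    return answers.get(question, "unknown")

def evaluate(model, prompt, retriever):
    """Fraction of eval questions answered exactly right."""
    hits = sum(
        run_app(model, prompt, retriever, case["question"]) == case["expected"]
        for case in EVAL_SET
    )
    return hits / len(EVAL_SET)

# Score every combination and keep the best one.
results = {
    combo: evaluate(*combo)
    for combo in product(MODELS, PROMPTS, RETRIEVERS)
}
best = max(results, key=results.get)
print(best, results[best])
```

The point is the loop shape, not the scoring: swapping in real model calls and richer metrics (latency, cost) keeps the same structure while making each iteration measurable.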

Show HN: I made a split keyboard for large palms

I ran into an issue a few years ago: every ergonomic keyboard I tried had the switches too close together, and my fingers felt cramped in that small space. So I decided to create a keyboard that suits larger hands and eliminates most wrist movement. 34 keys turned out to be the most optimized layout for achieving as little wrist movement as possible. You can try the fit for your palm IRL with the printable template on the website.

Show HN: Jb / json.bash – Command-line tool (and bash library) that creates JSON

jb is a UNIX tool that creates JSON, for shell scripts or interactive use. Its "one thing" is to get shell-native data (environment variables, files, program output) to somewhere else, using JSON to encapsulate it robustly.

I wrote this because I wanted a robust and ergonomic way to create ad-hoc JSON data from the command line and scripts. I wanted it to not let errors pass silently, not coerce data types, and not put secrets into argv. I wanted to leverage shell features and patterns like process substitution, environment variables, reading/streaming from files, and null-terminated data.

If you know of the jo program, jb is similar, but type-safe by default and more flexible. jo coerces types, using flags like -n to coerce to a specific type (number for -n), without failing if the input is invalid. jb encodes values as strings by default, and requires type annotations to parse and encode values as a specific type, failing if the value is invalid.

If you know jq, jb is complementary: jq is great at transforming data that is already in JSON format, but it's fiddly to get non-JSON data into jq. In contrast, jb is good at getting unstructured data from arguments, environment variables, and files into JSON (so that jq can use it), but jb cannot do any transformation of the data, only parsing and encoding into JSON types.

I feel rather guilty about having written this in bash. It's something of a boiled-frog story. I started out just wanting to encode JSON strings from a shell script, without dependencies, intending to pipe them into jq. After a few trials I was able to encode JSON strings in bash with surprising performance, using array operations to encode multiple strings at once. It grew from there into a complete tool. I'd certainly not choose bash if I were starting from scratch now...
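The type-safety distinction drawn above between jo and jb (coerce-and-continue vs annotate-and-fail) can be illustrated in Python, independent of either tool's actual command-line syntax; both functions here are hypothetical stand-ins:

```python
import json

def encode_coercing(value):
    """jo-style: try to coerce to a number, silently fall back to a string."""
    try:
        return float(value) if "." in value else int(value)
    except ValueError:
        return value  # invalid number quietly becomes a string

def encode_typed(value, type_annotation):
    """jb-style: values are strings unless annotated; invalid input is an error."""
    if type_annotation == "string":
        return value
    if type_annotation == "number":
        # Raises ValueError on invalid input instead of guessing.
        return float(value) if "." in value else int(value)
    raise ValueError(f"unknown type: {type_annotation}")

print(json.dumps({"n": encode_coercing("12x")}))        # the error is hidden
print(json.dumps({"n": encode_typed("12", "number")}))  # {"n": 12}
# encode_typed("12x", "number") would raise ValueError rather than emit bad data.
```

The fail-loudly behavior is what makes jb's output safe to feed into downstream tools: a typo produces an error at encode time, not a silently wrong JSON document.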

Show HN: Xcapture-BPF – like Linux top, but with Xray vision

Show HN: I Made an Open Source Platform for Structuring Any Unstructured Data

Hey HN,

I'm Adithya, a 20-year-old dev from India. I have been working with GenAI for the past year, and I've found it really painful to deal with the many different forms of data out there and to get the best representation of them for my AI applications.

That's why I built OmniParse, an open-source platform designed to handle any unstructured data and transform it into optimized, structured representations.

Key features:

- Completely local processing, no external APIs
- Supports ~20 file types
- Converts documents, multimedia, and web pages to high-quality structured markdown
- Table extraction, image extraction/captioning, audio/video transcription, web page crawling
- Fits on a T4 GPU
- Easily deployable with Docker and SkyPilot
- Colab-friendly, with an interactive UI powered by Gradio

Why OmniParse? I wanted a platform that could take any kind of data (documents, images, videos, audio files, web pages, and more) and make it clean and structured, ready for AI applications.

Check it out on GitHub: https://git.new/omniparse
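As a toy illustration of the kind of conversion such a pipeline performs, here is tabular data being restructured into markdown. This is generic stdlib code showing the document-to-markdown idea, not OmniParse's actual API:

```python
import csv
import io

def csv_to_markdown(raw_csv):
    """Convert raw CSV text into a markdown table (header, separator, rows)."""
    rows = list(csv.reader(io.StringIO(raw_csv)))
    header, *body = rows
    lines = [
        "| " + " | ".join(header) + " |",
        "| " + " | ".join("---" for _ in header) + " |",
    ]
    for row in body:
        lines.append("| " + " | ".join(row) + " |")
    return "\n".join(lines)

raw = "name,role\nAda,engineer\nGrace,admiral"
print(csv_to_markdown(raw))
```

Structured markdown like this is easy for an LLM to consume; the hard part, which a platform like OmniParse tackles, is doing the equivalent for PDFs, images, audio, and web pages.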
