The best Show HN stories on Hacker News from the past day

Latest posts:

Show HN: While the world builds AI Agents, I'm just building calculators

I figured I needed to work on my coding skills before building the next groundbreaking AI app, so I started working on this free tool site. It's basically an aggregation of various commonly used calculators and unit converters.

Link: https://www.calcverse.live

Tech stack: Next.js, React, TypeScript, shadcn/ui, Tailwind CSS

I would greatly appreciate your feedback on the UI/UX and accessibility. I struggled the most with navigation. I've added a search box, a sidebar, breadcrumbs, and pages with grids of cards leading to the respective calculator or unit converter, but I'm not sure if this is good enough.
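
The post doesn't show any code, but the core of a unit converter like the ones described usually boils down to a factor table. A minimal TypeScript sketch; the units and factors here are illustrative, not taken from CalcVerse:

    // Factor-based unit converter: every unit maps to a base unit
    // (metres, for length), so any pair converts via the base.
    const LENGTH_FACTORS: Record<string, number> = {
      m: 1,
      km: 1000,
      mi: 1609.344,
      ft: 0.3048,
    };

    function convertLength(value: number, from: string, to: string): number {
      const fromFactor = LENGTH_FACTORS[from];
      const toFactor = LENGTH_FACTORS[to];
      if (fromFactor === undefined || toFactor === undefined) {
        throw new Error(`unknown unit: ${fromFactor === undefined ? from : to}`);
      }
      return (value * fromFactor) / toFactor;
    }

    console.log(convertLength(5, "km", "mi")); // ≈ 3.107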

Show HN: Electro – A hyper-fast Windows image viewer with a built-in terminal

This is my first major OSS release! I was always so frustrated by how slow image viewers were on Windows, so I built one from the ground up with Rust and Tauri v2.0!

Electro also has a very unique feature: a built-in terminal. I was always mesmerised by merging CLI tools with GUI-based systems, and this is my first go at it!

I have big plans for expanding the terminal functionality with built-in image-editing commands, command chaining, file handling, etc.
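
To make the terminal-in-a-GUI idea concrete: in a Tauri v2 app, the webview frontend can parse a command line and forward image operations to Rust via invoke(). A hypothetical TypeScript sketch; the command and function names are illustrative, not Electro's actual API:

    import { invoke } from "@tauri-apps/api/core";

    // Dispatch a line typed into the built-in terminal. Built-ins could run
    // in the webview; image operations are forwarded to Rust commands.
    async function runTerminalCommand(line: string): Promise<string> {
      const [cmd, ...args] = line.trim().split(/\s+/);
      switch (cmd) {
        case "open":
          // delegates to a hypothetical #[tauri::command] fn open_image(path: String)
          return invoke<string>("open_image", { path: args[0] });
        case "rotate":
          return invoke<string>("rotate_image", { degrees: Number(args[0]) });
        default:
          return `unknown command: ${cmd}`;
      }
    }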

Show HN: Bracket City – A daily, exploded (?) crossword puzzle

Hi HN - I co-own a diner where I co-host a puzzle night that is kind of like a diner-themed escape room. At the last one, I made a puzzle with crossword-like clues nested in brackets. People at the diner seemed to like it, so I resolved to make it a real game, and Bracket City was born: https://bracket.city

I love crosswords, so it's been fun to write crossword-like clues:

    [it contains MSG]

as well as clues that would not make it into a crossword:

    [___ <=== you ===> hard place]

I write all the puzzles and post a new one at midnight ET every day of the week.

I'm still working on a lot of features/fixes. I'm aware that scoring based on keystrokes is pretty unfair, especially given the not-ideal custom keyboard on mobile! Still thinking through the best solution there.

Also, fun fact: if you sign up for the email list, you get a special "Word of the Day" email written by James Somers (of https://jsomers.net). The only way to sign up for the email list is to finish a puzzle!

(answer key: NYC, ROCK)
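
The nested-bracket mechanic resolves naturally from the inside out: an innermost clue (one containing no other brackets) is solved, its answer replaces the bracketed span, and the process repeats. A TypeScript sketch of that loop, assuming a simple clue-to-answer map (the actual game takes answers from the player, of course):

    // Repeatedly find an innermost [clue] and splice in its answer until
    // no brackets remain.
    function solvePuzzle(puzzle: string, answers: Record<string, string>): string {
      const innermost = /\[([^\[\]]*)\]/; // bracketed span with no nested brackets
      let current = puzzle;
      let match: RegExpExecArray | null;
      while ((match = innermost.exec(current)) !== null) {
        const answer = answers[match[1].trim()];
        if (answer === undefined) throw new Error(`unsolved clue: ${match[1]}`);
        current = current.replace(match[0], answer);
      }
      return current;
    }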

Show HN: I scrape Steam data every month and it's yours to download for free

Yeah, there's AI, but I added it because I found it makes it easier to find the answers I'm looking for. For the data scientists: you can download the CSV and go crazy. I would love to know what discoveries or learnings can be found in it.

To download the raw scraped data you need to become a paid member, but you don't really need it unless you want to finesse a table of data for a particular need. The cost is mostly just an incentive to help me pay the bills for running the website.

The available CSV files contain large amounts of data, covering everything from tags, genres, pricing, wishlists, estimated revenue, etc. It's what the AI is reading from.

Hope you find it useful :-)
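
For anyone planning to go the CSV route, exploration starts with a simple aggregation pass. A TypeScript (Node) sketch; the file and column names here ("steam_games.csv", "genre", "estimated_revenue") are assumptions, not the site's actual schema:

    import { readFileSync } from "node:fs";

    // Total estimated revenue per genre. Naive comma split; use a real CSV
    // parser for fields that may contain quoted commas.
    const rows = readFileSync("steam_games.csv", "utf8").trim().split("\n");
    const header = rows[0].split(",");
    const genreIdx = header.indexOf("genre");
    const revenueIdx = header.indexOf("estimated_revenue");

    const totals = new Map<string, number>();
    for (const line of rows.slice(1)) {
      const cols = line.split(",");
      const revenue = Number(cols[revenueIdx]) || 0;
      totals.set(cols[genreIdx], (totals.get(cols[genreIdx]) ?? 0) + revenue);
    }
    console.log([...totals.entries()].sort((a, b) => b[1] - a[1]).slice(0, 10));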

Show HN: I made a site to tell the time in corporate

Show HN: I built an app to stop me doomscrolling by touching grass

I wanted to change the habit of reaching for my phone in the morning and doomscrolling away an hour, so I built an app to help me. Now I have to literally touch grass before accessing my most distracting apps.

The app is built in SwiftUI; it uses the Screen Time APIs provided by Apple and Google Vision to recognise whether the camera is looking at grass or not.

I'd love to get your thoughts on the concept.
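
The app itself is SwiftUI, but the grass check amounts to a label-detection call against Google Cloud Vision. A TypeScript sketch of that call using Vision's documented v1 REST API; the 0.8 confidence threshold is a guess, not the app's actual setting:

    // Returns true if Google Cloud Vision labels the photo as grass with
    // reasonable confidence.
    async function looksLikeGrass(base64Image: string, apiKey: string): Promise<boolean> {
      const res = await fetch(
        `https://vision.googleapis.com/v1/images:annotate?key=${apiKey}`,
        {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({
            requests: [{
              image: { content: base64Image },
              features: [{ type: "LABEL_DETECTION", maxResults: 10 }],
            }],
          }),
        },
      );
      const data = await res.json();
      const labels: { description: string; score: number }[] =
        data.responses?.[0]?.labelAnnotations ?? [];
      return labels.some((l) => l.description.toLowerCase() === "grass" && l.score > 0.8);
    }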

Show HN: We made a Meta Quest 3 see through walls

Show HN: AI-native browser game where users can craft unlimited 3D items

Most games have limits: you can only use preset features, you need coding for customization and adding mods, or you need expensive extra devices. We wanted to remove those barriers.

That's why we are building space zero, a browser-based 3D world powered by AI. We plan for players to be able to freely mix items to generate unexpected creations with unique properties and sounds. The world itself is also dynamically generated, evolving endlessly.

I uploaded a demo version I've been working on for the past month! I hope to get any feedback or comments :)
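
Purely as an illustration of the mixing mechanic described (none of these types or fields come from space zero itself), item crafting can be modelled as merging two parents' properties and letting the AI name the result:

    interface Item {
      name: string;
      properties: string[];
      sound: string;
    }

    // nameFromAI stands in for whatever generative model names the creation.
    function mixItems(a: Item, b: Item, nameFromAI: (a: Item, b: Item) => string): Item {
      return {
        name: nameFromAI(a, b),
        properties: [...new Set([...a.properties, ...b.properties])], // union of parents
        sound: Math.random() < 0.5 ? a.sound : b.sound, // inherit one parent's sound
      };
    }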

Show HN: Benchmarking VLMs vs. Traditional OCR

Vision models have been gaining popularity as a replacement for traditional OCR, especially with Gemini 2.0 becoming cost-competitive with the cloud platforms.

We've been continuously evaluating different models since we released the Zerox package last year (https://github.com/getomni-ai/zerox), and we wanted to put some numbers behind it. So we're open-sourcing our internal OCR benchmark + evaluation datasets.

Full writeup + data explorer: https://getomni.ai/ocr-benchmark
GitHub: https://github.com/getomni-ai/benchmark
Hugging Face: https://huggingface.co/datasets/getomni-ai/ocr-benchmark

A couple of notes on the methodology:

1. We are using JSON accuracy as our primary metric (a sketch of one way to compute this is below). The end goal is to evaluate how well each OCR provider can prepare the data for LLM ingestion.

2. This methodology differs from a lot of OCR benchmarks because it doesn't rely on text similarity. We believe text-similarity measurements are heavily biased towards the exact layout of the ground-truth text and penalize correct OCR that has slight layout differences.

3. Every document goes Image => OCR => Predicted JSON, and we compare the predicted JSON against the annotated ground-truth JSON. The VLMs are capable of Image => JSON directly, but we are primarily trying to measure OCR accuracy here. We're planning to release a separate report on direct JSON accuracy next week.

This is a continuous work in progress! There are at least 10 additional providers we plan to add to the list.

The next big roadmap items are:

- Comparing OCR vs. direct extraction. Early results here show a slight accuracy improvement, but it's highly variable with page length.
- A multilingual comparison. Right now the evaluation data is English only.
- A breakdown of the data by type (best model for handwriting, tables, charts, photos, etc.)
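
A minimal sketch of what a JSON-accuracy comparison can look like: flatten both documents into path => leaf-value pairs and count exact matches against the ground truth. This is one reading of the metric described above, not necessarily Omni's actual scoring code:

    type Json = string | number | boolean | null | Json[] | { [k: string]: Json };

    // Flatten a JSON value into "a.b.0" => leaf pairs.
    function leaves(value: Json, path = ""): Map<string, Json> {
      const out = new Map<string, Json>();
      if (value !== null && typeof value === "object") {
        const entries = Array.isArray(value)
          ? value.map((v, i) => [String(i), v] as const)
          : Object.entries(value);
        for (const [k, v] of entries) {
          for (const [p, leaf] of leaves(v, path ? `${path}.${k}` : k)) out.set(p, leaf);
        }
      } else {
        out.set(path, value);
      }
      return out;
    }

    // Fraction of ground-truth leaf fields reproduced exactly.
    function jsonAccuracy(groundTruth: Json, predicted: Json): number {
      const truth = leaves(groundTruth);
      const pred = leaves(predicted);
      let correct = 0;
      for (const [path, v] of truth) if (pred.get(path) === v) correct++;
      return truth.size === 0 ? 1 : correct / truth.size;
    }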

Show HN: Jq-Like Tool for Markdown

There have been a few times I wanted the ability to select some text out of a Markdown doc; for example, a GitHub CI check to ensure that PRs, issues, etc. are properly formatted.

This can be done to some extent with regex, but those expressions are brittle and hard to read or edit later. mdq uses a familiar pipe syntax to navigate the Markdown in a structured way.

It's in 0.x because I don't want to fully commit to the syntax being stable, in case real-world testing shows that the syntax needs tweaking. But I think the project is in a pretty good spot overall, and I would be interested in feedback!
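
To see why structured selection beats regex, consider pulling everything under a given heading. A TypeScript sketch of the structured approach (this is not mdq's syntax or implementation, just an illustration of the idea):

    // Collect the lines under a heading, stopping at the next heading of the
    // same or higher level; deeper subheadings stay inside the section.
    function sectionUnderHeading(markdown: string, heading: string): string[] {
      const out: string[] = [];
      let inSection = false;
      let level = 0;
      for (const line of markdown.split("\n")) {
        const m = /^(#+)\s+(.*)$/.exec(line);
        if (m) {
          if (inSection) {
            if (m[1].length <= level) break; // section ended
          } else if (m[2].trim() === heading) {
            inSection = true;
            level = m[1].length;
            continue; // don't include the heading line itself
          }
        }
        if (inSection) out.push(line);
      }
      return out;
    }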
