The best Hacker News stories from Show from the past day
Latest posts:
Show HN: I've been using AI to analyze every supplement on the market
Hey HN! This has been my project for a few years now. I recently brought it back to life after taking a pause to focus on my studies.

My goal with this project is to separate fluff from science when shopping for supplements. I am doing this in 3 steps:

1.) I index every supplement on the market (extract each ingredient, normalize by quantity)

2.) I index every research paper on supplementation (rank every claim by effect type and effect size)

3.) I link the data between supplements and research papers
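To make step 3 concrete, here is a rough, hypothetical sketch of what linking normalized ingredients to ranked research claims could look like. The field names and the exact-name matching rule are illustrative assumptions, not Pillser's actual schema or code.

```python
# Hypothetical sketch of the supplement <-> research linking step.
from dataclasses import dataclass

@dataclass
class Ingredient:
    name: str           # normalized ingredient name, e.g. "vitamin d3"
    quantity_mg: float  # quantity normalized to a common unit

@dataclass
class Supplement:
    product: str
    ingredients: list[Ingredient]

@dataclass
class Claim:
    ingredient: str     # normalized ingredient the paper studied
    effect_type: str    # e.g. "improved intestinal barrier function"
    effect_size: float  # ranked effect size extracted from the paper
    paper_url: str

def link(supplements: list[Supplement], claims: list[Claim]) -> dict[str, list[Claim]]:
    """Map each product to the research claims about its ingredients."""
    by_ingredient: dict[str, list[Claim]] = {}
    for c in claims:
        by_ingredient.setdefault(c.ingredient, []).append(c)
    return {
        s.product: [c for i in s.ingredients for c in by_ingredient.get(i.name, [])]
        for s in supplements
    }
```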
Earlier last year, I paused the project because I ran into a few issues:

Legal: Shady companies were sending C&D letters demanding their products be taken down from the website. It is not something I had the mental capacity to respond to while also going through my studies. Not coincidentally, these are usually brands with big marketing budgets and a poor ingredient-to-price ratio.

Technical: I started this project when the first LLMs came out. I've built extensive internal evals to understand how LLMs are performing. The hallucinations at the time were simply too frequent to pass this data through to visitors. However, I recently re-ran my evals with Opus 4.5 and was very impressed. I am running out of scenarios I can think of or find where LLMs are bad at interpreting data.

Business: I still haven't figured out how to monetize it, or even who the target customer is.

Despite these challenges, I decided to restart my journey.

My mission is to bring transparency (science and price) to the supplement market. My goal is NOT to increase the use of supplements, but rather to help consumers make informed decisions. Oftentimes supplementation is not necessary, or there are natural ways to supplement (that's my focus this quarter: better education about natural supplementation).

Some things are helping my cause: Bryan Johnson's journey (Blueprint) has drawn a lot more attention to healthy supplementation. Thanks to Bryan's efforts, so many people have reached out in recent months to ask about the state of the project, a level of interest I've not had before.

I am excited to restart this journey and to share it with HN. Your comments on how to approach this would be massively appreciated.

Some key areas of the website:

* Example of navigating supplements by ingredient: https://pillser.com/search?q="Vitamin+D"&s=jho4espsuc

* Example of a research paper analyzed using AI: https://pillser.com/research-papers/effect-of-lactobacillus-gasseri-pa-168-bifidobacterium-longum-sp-073-b-bifidum-mf-205-on-common-cold-episodes-a-double-blind-randomized-controlled-trial-767

* Example of looking for very specific strains or ingredients: https://pillser.com/probiotics/bifidobacterium-bifidum

* Example of navigating research by health outcomes: https://pillser.com/health-outcomes/improved-intestinal-barrier-function

* Example of a product listing: https://pillser.com/supplements/pb-8-probiotic-663
Show HN: Text-to-video model from scratch (2 brothers, 2 years, 2B params)
Writeup (includes good/bad sample generations): https://www.linum.ai/field-notes/launch-linum-v2

We're Sahil and Manu, two brothers who spent the last 2 years training text-to-video models from scratch. Today we're releasing them under Apache 2.0.

These are 2B-param models capable of generating 2-5 seconds of footage at either 360p or 720p. In terms of model size, the closest comparison is Alibaba's Wan 2.1 1.3B. From our testing, we get significantly better motion capture and aesthetics.

We're not claiming to have reached the frontier. For us, this is a stepping stone towards SOTA: proof we can train these models end-to-end ourselves.

Why train a model from scratch?

We shipped our first model in January 2024 (pre-Sora) as a 180p, 1-second GIF bot, bootstrapped off Stable Diffusion XL. Image VAEs don't understand temporal coherence, and without the original training data you can't smoothly transition between image and video distributions. At some point you're better off starting over.

For v2, we use T5 for text encoding, the Wan 2.1 VAE for compression, and a DiT-variant backbone trained with flow matching. We built our own temporal VAE, but Wan's was smaller with equivalent performance, so we used it to save on embedding costs. (We'll open-source our VAE shortly.)

The bulk of development time went into building curation pipelines that actually work (e.g., hand-labeling aesthetic properties and fine-tuning VLMs to filter at scale).

What works: cartoon/animated styles, food and nature scenes, simple character motion. What doesn't: complex physics, fast motion (e.g., gymnastics, dancing), consistent text.

Why build this when Veo/Sora exist? Products are extensions of the underlying model's capabilities. If users want a feature the model doesn't support (character consistency, camera controls, editing, style mapping, etc.), you're stuck. To build the product we want, we need to update the model itself. That means owning the development process. It's a bet that will take time (and a lot of GPU compute) to pay off, but we think it's the right one.

What’s next?
- Post-training for physics/deformations
- Distillation for speed
- Audio capabilities
- Model scaling

We kept a “lab notebook” of all our experiments in Notion. Happy to answer questions about building a model from 0 → 1. Comments and feedback welcome!
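For readers who want a concrete picture of the flow-matching objective mentioned above, here is a minimal training-step sketch in the rectified-flow style. The model signature, tensor shapes, and conditioning are placeholder assumptions for illustration, not Linum's actual architecture or code.

```python
# Minimal flow-matching (rectified-flow style) loss sketch.
import torch

def flow_matching_loss(model, latents, text_emb):
    """latents: clean video latents from the VAE, shape (B, C, T, H, W).
    text_emb: text-encoder embeddings used as conditioning."""
    b = latents.shape[0]
    # Sample one timestep per example and a pure-noise endpoint.
    t = torch.rand(b, device=latents.device).view(b, 1, 1, 1, 1)
    noise = torch.randn_like(latents)
    # Point on the straight-line path between noise (t=0) and data (t=1).
    x_t = (1 - t) * noise + t * latents
    # Target velocity field for that path.
    target_v = latents - noise
    # The backbone predicts the velocity from the noisy latent,
    # the timestep, and the text conditioning.
    pred_v = model(x_t, t.flatten(), text_emb)
    return torch.mean((pred_v - target_v) ** 2)
```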
Show HN: Whosthere: A LAN discovery tool with a modern TUI, written in Go
Show HN: Sweep, Open-weights 1.5B model for next-edit autocomplete
Hey HN, we trained and open-sourced a 1.5B model that predicts your next edits, similar to Cursor. You can download the weights here (https://huggingface.co/sweepai/sweep-next-edit-1.5b) or try it in our JetBrains plugin (https://plugins.jetbrains.com/plugin/26860-sweep-ai-autocomplete--coding-agent).

Next-edit autocomplete differs from standard autocomplete by using your recent edits as context when predicting completions. The model is small enough to run locally while outperforming models 4x its size on both speed and accuracy.

We tested against Mercury (Inception), Zeta (Zed), and Instinct (Continue) across five benchmarks: next-edit above/below cursor, tab-to-jump for distant changes, standard FIM, and noisiness. We found exact-match accuracy correlates best with real usability because code is fairly precise and the solution space is small.

Prompt format turned out to matter more than we expected. We ran a genetic algorithm over 30+ diff formats and found simple `original`/`updated` blocks beat unified diffs. The verbose format is just easier for smaller models to understand.

Training was SFT on ~100k examples from permissively-licensed repos (4 hrs on 8xH100), then RL for 2000 steps with tree-sitter parse checking and size regularization. The RL step fixes edge cases SFT can't, like generating code that doesn't parse or producing overly verbose output.

We're open-sourcing the weights so the community can build fast, privacy-preserving autocomplete for any editor. If you're building for VSCode, Neovim, or something else, we'd love to see what you make with it!
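As a rough illustration of the prompt-format point above, here is a hypothetical comparison of an `original`/`updated` block against a unified diff for the same edit. These templates are assumptions for illustration, not the exact format the Sweep model was trained on.

```python
# Compare a verbose original/updated block with a unified diff for one edit.
import difflib

def original_updated_block(path, original, updated):
    """Verbose 'original'/'updated' blocks: easier for a small model to read."""
    return (
        f"File: {path}\n"
        "<original>\n" + original + "</original>\n"
        "<updated>\n" + updated + "</updated>\n"
    )

def unified_diff(path, original, updated):
    """The more compact unified-diff representation of the same edit."""
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        updated.splitlines(keepends=True),
        fromfile=f"a/{path}", tofile=f"b/{path}",
    ))

before = "def area(r):\n    return 3.14 * r * r\n"
after = "import math\n\ndef area(r):\n    return math.pi * r * r\n"
print(original_updated_block("geometry.py", before, after))
print(unified_diff("geometry.py", before, after))
```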
Show HN: isometric.nyc – giant isometric pixel art map of NYC
Hey HN! I wanted to share something I built over the last few weeks: isometric.nyc is a massive isometric pixel art map of NYC, built with nano banana and coding agents.

I didn't write a single line of code.

Of course, no-code doesn't mean no-engineering. This project took a lot more manual labor than I'd hoped!

I wrote a deep dive on the workflow and some thoughts about the future of AI coding and creativity:

http://cannoneyed.com/projects/isometric-nyc
Show HN: See the carbon impact of your cloud as you code
Hey folks, I’m Hassan, one of the co-founders of Infracost (https://www.infracost.io). Infracost helps engineers see and reduce the cloud cost of each infrastructure change before they merge their code.

The way Infracost works is that we gather pricing data from Amazon Web Services, Microsoft Azure and Google Cloud into what we call a ‘Pricing Service’, which now holds around 9 million live price points (!!). Then we map these prices to infrastructure code. Once the mapping is done, it enables us to show the cost impact of a code change before it is merged, directly in GitHub, GitLab etc. Kind of like a checkout screen for cloud infrastructure.

We’ve been building since 2020 (we were part of the YC W21 batch), iterating on the product, building out a team, etc. However, back in 2020 one of our users asked if we could also show the carbon impact alongside costs.

It has been itching my brain ever since. The biggest challenge has always been the carbon data. Mapping carbon data to infrastructure is time consuming, but it is possible, since we’ve done it with cloud costs; we just needed the raw carbon data first. The discussions over the last few years finally led me to a company called Greenpixie in the UK. A few of our existing customers were using them already, so I immediately connected with the founder, John.

Greenpixie said they have the data (AHA!!), and their data is verified (ISO 14064 and aligned with the Greenhouse Gas Protocol). As soon as I talked to a few of their customers, I asked my team to see if we could actually, finally, do this and build it.

My thinking is this: some engineers will care, and some will not (or maybe some will love it and some will hate it!). For those who care, cost and carbon are actually linked; if you reduce the carbon, you usually reduce the cost of the cloud too. It can act as another motivating factor.

And now it is here, and I’d love your feedback. Try it out by going to https://dashboard.infracost.io/, create an account, set it up with the GitHub app or GitLab app, and send a pull request with Terraform changes (you can use our example Terraform file). It will then show you the cost impact alongside the carbon impact, and how you can optimize it.

I’d especially love to hear your feedback on whether carbon is a big driver for engineers within your teams, or whether carbon is a big driver for your company (i.e. is there anything top-down about carbon).

AMA - I’ll be monitoring the thread :)

Thanks
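If it helps to picture the mapping idea, here is a toy, hypothetical sketch of diffing a resource change against a combined price/carbon catalog. The keys, carbon figures, and function are made up for illustration; they are not Infracost's or Greenpixie's actual data or API.

```python
# Toy "checkout screen" sketch: price and carbon delta for one resource change.
# (price_per_hour_usd, kg_co2e_per_hour) keyed by (cloud, instance_type, region);
# the CO2e figures below are invented placeholders.
CATALOG = {
    ("aws", "m5.large", "us-east-1"):   (0.096, 0.012),
    ("aws", "m5.2xlarge", "us-east-1"): (0.384, 0.048),
}

def monthly_delta(old_key, new_key, hours=730):
    """Return (cost_delta_usd, carbon_delta_kg) for a month of runtime."""
    old_price, old_co2 = CATALOG[old_key]
    new_price, new_co2 = CATALOG[new_key]
    return ((new_price - old_price) * hours, (new_co2 - old_co2) * hours)

cost, carbon = monthly_delta(
    ("aws", "m5.large", "us-east-1"),
    ("aws", "m5.2xlarge", "us-east-1"),
)
print(f"+${cost:.0f}/month, +{carbon:.1f} kg CO2e/month")
```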
Show HN: yolo-cage – AI coding agents that can't exfiltrate secrets
I made this for myself, and it seemed like it might be useful to others. I'd love some feedback, both on the threat model and the tool itself. I hope you find it useful!

Backstory: I've been using many agents in parallel as I work on a somewhat ambitious financial analysis tool. I was juggling agents working on epics for the linear solver, the persistence layer, the front-end, and planning for the second-generation solver. I was losing my mind playing whack-a-mole with the permission prompts. YOLO mode felt so tempting. And yet.

Then it occurred to me: what if YOLO mode isn't so bad? Decision fatigue is a thing. If I could cap the blast radius of a confused agent, maybe I could just review once. Wouldn't that be safer?

So that day, while my kids were taking a nap, I decided to see if I could put YOLO-mode Claude inside a sandbox that blocks exfiltration and regulates git access. The result is yolo-cage.

Also: the AI wrote its own containment system from inside the system's own prototype. Which is either very aligned or very meta, depending on how you look at it.
Show HN: RatatuiRuby wraps Rust Ratatui as a RubyGem – TUIs with the joy of Ruby
TerabyteDeals – Compare storage prices by $/TB
I built a simple tool to compare hard drive and SSD prices by price per terabyte.

I kept having to calculate $/TB manually when shopping for NAS drives, so I made this to save myself the trouble.

It pulls prices from Amazon (US, CA, AU, and EU stores), calculates $/TB, and lets you filter by drive type, interface, form factor, and capacity.

Nothing fancy — just a sortable table updated daily.

Any feedback is more than welcome; I hope someone will find it useful!
Show HN: Rails UI
Show HN: ChartGPU – WebGPU-powered charting library (1M points at 60fps)
Creator here. I built ChartGPU because I kept hitting the same wall: charting libraries that claim to be "fast" but choke past 100K data points.

The core insight: Canvas2D is fundamentally CPU-bound. Even WebGL chart libraries still do most of the computation on the CPU. So I moved everything to the GPU via WebGPU:

- LTTB downsampling runs as a compute shader (a plain-CPU reference sketch of LTTB follows at the end of this post)
- Hit-testing for tooltips/hover is GPU-accelerated
- Rendering uses instanced draws (one draw call per series)

The result: 1M points at 60fps with smooth zoom/pan.

Live demo: https://chartgpu.github.io/ChartGPU/examples/million-points/

Currently supports line, area, bar, scatter, pie, and candlestick charts. MIT licensed, available on npm: `npm install chartgpu`

Happy to answer questions about WebGPU internals or architecture decisions.
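As promised in the list above, here is a minimal plain-Python reference of the Largest-Triangle-Three-Buckets (LTTB) idea. It is only an algorithm sketch for orientation, assuming the standard LTTB formulation; ChartGPU's actual implementation is a WebGPU compute shader and is not shown here.

```python
# Plain-Python reference of Largest-Triangle-Three-Buckets (LTTB) downsampling.
def lttb(points, threshold):
    """Downsample a list of (x, y) pairs to `threshold` points, keeping visual shape."""
    n = len(points)
    if threshold >= n or threshold < 3:
        return list(points)

    sampled = [points[0]]                    # always keep the first point
    bucket_size = (n - 2) / (threshold - 2)  # spread interior points over buckets
    a = 0                                    # index of the previously selected point

    for i in range(threshold - 2):
        # Average of the *next* bucket acts as the third triangle vertex.
        start_next = int((i + 1) * bucket_size) + 1
        end_next = min(int((i + 2) * bucket_size) + 1, n)
        count = end_next - start_next
        avg_x = sum(p[0] for p in points[start_next:end_next]) / count
        avg_y = sum(p[1] for p in points[start_next:end_next]) / count

        # Pick the point in the current bucket forming the largest triangle
        # with the previously selected point and the next bucket's average.
        start = int(i * bucket_size) + 1
        end = int((i + 1) * bucket_size) + 1
        ax, ay = points[a]
        best, best_area = start, -1.0
        for j in range(start, end):
            area = abs((ax - avg_x) * (points[j][1] - ay)
                       - (ax - points[j][0]) * (avg_y - ay)) / 2
            if area > best_area:
                best, best_area = j, area
        sampled.append(points[best])
        a = best

    sampled.append(points[-1])               # always keep the last point
    return sampled
```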
Show HN: An interactive physics simulator with 1000’s of balls, in your terminal
IP over Avian Carriers with Quality of Service (1999)
g(old)
Show HN: Artificial Ivy in the Browser
This is just a goofy thing I cooked up over the weekend. It's kind of like a screensaver, but with more reading and sliders. (It's not terribly efficient, so expect phone batteries to take a hit!)
Show HN: Ocrbase – pdf → .md/.json document OCR and structured extraction API