The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: I built a dashboard to compare mortgage rates across 120 credit unions

When I bought my home, the big bank I'd been using for years quoted me 7% APR. A local credit union was offering 5.5% for the exact same mortgage.

I was surprised until I learned that mortgages are basically standardized products – the government buys almost all of them (see Bits About Money: https://www.bitsaboutmoney.com/archive/mortgages-are-a-manufactured-product/). So what is the price difference paying for? A recent Bloomberg Odd Lots episode makes the case that it's largely advertising and marketing (https://www.bloomberg.com/news/audio/2025-11-28/odd-lots-this-is-why-credit-card-rates-are-so-high-podcast). Credit unions are non-profits without big marketing budgets, so they can pass those savings on, but a lot of people don't know about them.

I built this dashboard to make it easier to shop around. I pull public rates from 120+ credit union websites and compare them against the weekly FRED national benchmark.

Features:

- Filter by loan type (30Y/15Y/etc.), eligibility (the hardest part tbh), and rate type
- Payment calculator with refi mode (CUs can be a bit slower than big lenders, but that makes them great for refi)
- Links to each CU's rates page and eligibility requirements
- Toggle to show/hide statistical outliers

At the time of writing, the average CU rate is 5.91% vs. a 6.23% national average. That's about a $37k difference in total interest on a $500k loan. I used seaborn to visualize the rate spread against the four big banks: https://www.reddit.com/r/dataisbeautiful/comments/1pcj9t7/oc_the_high_cost_of_big_banks_i_tracked_daily/

Stack: Python for the data/backend, Svelte/SvelteKit for the frontend. No signup, no ads, no referral fees.

Happy to answer questions about the methodology or add CUs people suggest.
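If you want to sanity-check the $37k figure yourself, a minimal Python sketch of the standard amortization formula does it, assuming a 30-year fixed $500k loan and the two average rates quoted above:

```python
# Minimal sketch: total-interest difference on a 30-year, $500k fixed-rate loan,
# using the standard amortization formula and the averages quoted above.

def total_interest(principal: float, annual_rate: float, years: int = 30) -> float:
    r = annual_rate / 12                     # monthly rate
    n = years * 12                           # number of payments
    payment = principal * r / (1 - (1 + r) ** -n)
    return payment * n - principal

cu = total_interest(500_000, 0.0591)         # average credit-union rate
national = total_interest(500_000, 0.0623)   # FRED national average
print(f"CU total interest:       ${cu:,.0f}")
print(f"National total interest: ${national:,.0f}")
print(f"Difference:              ${national - cu:,.0f}")  # roughly $37k
```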

Show HN: Doomscrolling Research Papers

Hi HN,

Would love your thoughts on Open Paper Digest. It's a mobile feed that lets you "doomscroll" through summaries of popular papers that were published recently.

Backstory

A combination of factors led me to build this:

1. The quality of content on social media apps has decreased, but I still notice that it's harder than ever for me to stay away from these apps.
2. I've been saying for a while now that I should start reading papers to keep up with what's going on in the AI world.

Initially, I set out to build something solely for point 2. That version was more search-focused, and focused on simplifying the whole text of a paper rather than summarizing it. Still, I wasn't using it. After yet another 30-minute doomscroll on a bus last month, point 1 came into the picture and I changed how Open Paper Digest worked. That's what you can see today!

How it works

It checks Hugging Face Trending Papers and the large research labs daily to find papers to add to the index. The PDFs get converted to markdown using Mistral OCR, and the markdown is then given to Gemini 2.5 to create a 5-minute summary.

I notice that I am now going to the site daily, so that's a good sign. I'm curious what you all think, and what feedback you might have.

Cheers, Arthur
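For illustration, here is a rough Python sketch of what that daily ingest loop might look like. The fetch, OCR, and summarization functions are hypothetical placeholders standing in for the Hugging Face trending feed, Mistral OCR, and Gemini 2.5; their real client APIs are not shown here.

```python
# Illustrative sketch of the ingest pipeline described above.
# fetch_trending_papers, ocr_pdf_to_markdown and summarize are placeholders,
# NOT the actual Hugging Face / Mistral OCR / Gemini client calls.
from dataclasses import dataclass

@dataclass
class Digest:
    title: str
    url: str
    summary: str

def fetch_trending_papers() -> list[dict]:
    # Placeholder: the real version pulls from Hugging Face Trending Papers
    # and the large research labs' sites.
    return [{"title": "Example Paper", "pdf_url": "https://example.com/paper.pdf"}]

def ocr_pdf_to_markdown(pdf_url: str) -> str:
    # Placeholder: the real version runs Mistral OCR on the PDF.
    return f"# Example Paper\n\n(markdown extracted from {pdf_url})"

def summarize(markdown: str) -> str:
    # Placeholder: the real version asks Gemini 2.5 for a ~5-minute summary.
    return markdown[:200]

def daily_ingest() -> list[Digest]:
    digests = []
    for paper in fetch_trending_papers():
        md = ocr_pdf_to_markdown(paper["pdf_url"])   # PDF -> markdown
        summary = summarize(md)                      # markdown -> short summary
        digests.append(Digest(paper["title"], paper["pdf_url"], summary))
    return digests

print(daily_ingest())
```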

Show HN: FFmpeg Engineering Handbook

Show HN: RunMat – runtime with auto CPU/GPU routing for dense math

Hi, I’m Nabeel. In August I released RunMat as an open-source runtime for MATLAB code that was already much faster than GNU Octave on the workloads I tried. https://news.ycombinator.com/item?id=44972919

Since then, I’ve taken it further with RunMat Accelerate: the runtime now automatically fuses operations and routes work between CPU and GPU. You write MATLAB-style code, and RunMat runs your computation across CPUs and GPUs for speed. No CUDA, no kernel code.

Under the hood, it builds a graph of your array math, fuses long chains into a few kernels, keeps data on the GPU when that helps, and falls back to CPU JIT / BLAS for small cases.

On an Apple M2 Max (32 GB), here are some current benchmarks (median of several runs):

- 5M-path Monte Carlo: RunMat ≈ 0.61 s, PyTorch ≈ 1.70 s, NumPy ≈ 79.9 s → ~2.8× faster than PyTorch and ~130× faster than NumPy on this test.
- 64 × 4K image preprocessing pipeline (mean/std, normalize, gain/bias, gamma, MSE): RunMat ≈ 0.68 s, PyTorch ≈ 1.20 s, NumPy ≈ 7.0 s → ~1.8× faster than PyTorch and ~10× faster than NumPy.
- 1B-point elementwise chain (sin / exp / cos / tanh mix): RunMat ≈ 0.14 s, PyTorch ≈ 20.8 s, NumPy ≈ 11.9 s → ~140× faster than PyTorch and ~80× faster than NumPy.

If you want more detail on how the fusion and CPU/GPU routing work, I wrote up a longer post here: https://runmat.org/blog/runmat-accel-intro-blog

You can run the same benchmarks yourself from the GitHub repo in the main HN link. Feedback, bug reports, and “here’s where it breaks or is slow” examples are very welcome.
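To make the "elementwise chain" benchmark concrete, here is a rough NumPy baseline of that kind of workload, scaled down from 1B to 10M points so it runs anywhere. It is not the actual script from the RunMat repo, just an illustration of the unfused chain (one temporary array per operation) that a fusing runtime would compile into a few kernels.

```python
# Rough NumPy baseline for an elementwise sin/exp/cos/tanh chain.
# Each line below materializes a full temporary array; a fusing runtime
# would instead compile the whole chain into one (or a few) kernels.
import time
import numpy as np

n = 10_000_000                        # scaled down from the 1B-point benchmark
x = np.random.rand(n)

start = time.perf_counter()
y = np.sin(x)
y = np.exp(-y * y)
y = np.cos(y) + np.tanh(x)
y = y * 0.5 + 0.25
elapsed = time.perf_counter() - start

print(f"{n:,} points, unfused NumPy chain: {elapsed:.3f} s, checksum={y.sum():.3f}")
```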

Show HN: Webclone.js – A simple tool to clone websites

I needed a lightweight way to archive documentation from a website. wget and similar tools failed to clone the site reliably (missing assets, broken links, etc.), so I ended up building a full website-cloning tool using Node.js + Puppeteer.

Repo: https://github.com/jademsee/webclone

Feedback, issues, and PRs are very welcome.
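Webclone itself is Node.js + Puppeteer; as a language-agnostic illustration of the core idea (render the page in a headless browser so JS-injected markup and asset URLs actually exist before saving, which is what static fetchers like wget miss), here is a minimal Python/Playwright sketch. It assumes Playwright is installed (`pip install playwright` plus `playwright install chromium`) and is not webclone's code.

```python
# Minimal headless-browser snapshot: render first, then save the DOM.
# This is an illustrative Python/Playwright sketch, not webclone's implementation.
from pathlib import Path
from playwright.sync_api import sync_playwright

def snapshot(url: str, out: str = "page.html") -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")   # wait for dynamic content to load
        Path(out).write_text(page.content(), encoding="utf-8")
        browser.close()

snapshot("https://example.com")
```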

Show HN: Marmot – Single-binary data catalog (no Kafka, no Elasticsearch)

Show HN: Flowctl – Open-source self-service workflow automation platform

Flowctl is a self-service platform that gives users secure access to complex workflows, all in a single binary. These workflows could be anything: granting SSH access to an instance, provisioning infra, or custom business process automation. The executor paradigm in flowctl makes it domain-agnostic.

This initial release includes:

- SSO with OIDC and RBAC
- Execution on remote nodes via SSH (fully agentless)
- Approvals
- Cron-based scheduling
- Flow editor UI
- Encrypted credentials and secrets store
- Docker and Script executors
- Namespaces

I built this because I needed a simple tool to manage my homelab while traveling, something that acts as a UI for scripts. At work, I was also looking for tools to turn repetitive ops/infra tasks into self-service offerings. I tried tools like Backstage and Rundeck, but they were either too complex or their OSS versions lacked important features.

Flowctl can simply be described as a pipeline (like CI/CD systems) that people can trigger on-demand with custom inputs.

Would love to hear how you might use something like this!

Demo - https://demo.flowctl.net

Homepage - https://flowctl.net

GitHub - https://github.com/cvhariharan/flowctl
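To picture the pattern being described (an on-demand flow with custom inputs, an approval gate, and executor steps), here is a generic Python sketch. It is purely illustrative and is not flowctl's actual schema, API, or executor model.

```python
# Generic sketch of the pattern above: a parameterized "flow" that a user
# triggers on demand, gated by approval, then executed step by step.
# NOT flowctl's actual configuration or API; names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Flow:
    name: str
    inputs: dict
    requires_approval: bool = True
    steps: list[Callable[[dict], None]] = field(default_factory=list)

    def run(self, approved: bool = False) -> None:
        if self.requires_approval and not approved:
            raise PermissionError(f"{self.name!r} is waiting for approval")
        for step in self.steps:
            step(self.inputs)

def grant_ssh_access(inputs: dict) -> None:
    # Placeholder step; the real thing would run agentlessly over SSH.
    print(f"granting {inputs['user']} SSH access to {inputs['host']}")

flow = Flow(
    name="grant-ssh-access",
    inputs={"user": "alice", "host": "homelab-01"},
    steps=[grant_ssh_access],
)
flow.run(approved=True)
```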

Show HN: I Built Tinyfocus – A Minimal Tool to Help Solo Founders Focus

Hi HN,

I just launched Tinyfocus, a small productivity tool designed specifically for solo founders and builders. The goal is simple: help you focus on what matters and get more done in less time.

Here’s what Tinyfocus does:

- Lets you track your top tasks and prioritize efficiently.
- Provides micro dashboards to keep your daily focus in check.
- Lightweight, no distractions, no fluff.

I built it entirely by myself, iterating in public, and I wanted to share it with the community to get feedback.

It’s been crazy seeing how a simple tool can make such a difference in daily focus, especially when you’re juggling multiple projects as a solo founder.

Check it out here: tinyfoc.us

I’d love to hear your thoughts – any feedback, feature ideas, or bugs you notice.

Thanks!

Show HN: RFC Hub

I've worked at several companies during the past two decades and I kept encountering the same issues with internal technical proposals:

- Authors would change a spec after I started writing code
- It's hard to find what proposals would benefit from my review
- It's hard to find the right person to review my proposals
- It's not always obvious whether a proposal has reached consensus (e.g. buried comments)
- I'm not notified when a proposal I approved is ready to be worked on

And that's just scratching the surface. The most popular solutions (like Notion or Google Drive + Docs) mostly lack semantics. For example, it's easy as a human to see a table in a document with rows representing reviewers and a checkbox representing review acceptance, but it's hard to formally extract that meaning and prevent a document from "being published" when the criteria aren't met.

RFC Hub aims to solve these issues by building an easy-to-use interface around all the metadata associated with technical proposals instead of containing it textually within the document itself.

The project is still under heavy development as I work on it most nights and weekends. The next big feature I'm planning is proposal templates and the ability to refer to documents as something other than RFCs (Request for Comments). E.g. a company might have a UIRFC for GUI work (User Interface RFC), a DBADR (Database Architecture Decision Record), etc. And while there's a built-in notification system, I'm still working on a Slack integration. Auth works by sending tokens via email, but of course RFC Hub needs Google auth.

Please let me know what you think!
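As an illustration of the "metadata as first-class data" idea, a proposal's review state might be modeled something like the sketch below, where the publish gate is enforced in code rather than implied by a table in a doc. This is hypothetical and not RFC Hub's actual data model.

```python
# Hypothetical sketch: proposal metadata as structured data, so a rule like
# "publish only when every required reviewer has approved" can be enforced
# rather than eyeballed. Not RFC Hub's actual model.
from dataclasses import dataclass, field
from enum import Enum

class ReviewState(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    CHANGES_REQUESTED = "changes_requested"

@dataclass
class Proposal:
    title: str
    doc_kind: str = "RFC"  # e.g. "UIRFC", "DBADR"
    reviews: dict[str, ReviewState] = field(default_factory=dict)

    def can_publish(self) -> bool:
        # Publishable only when at least one reviewer exists and all approved.
        return bool(self.reviews) and all(
            state is ReviewState.APPROVED for state in self.reviews.values()
        )

rfc = Proposal(title="Switch job queue to Postgres")
rfc.reviews = {"alice": ReviewState.APPROVED, "bob": ReviewState.PENDING}
print(rfc.can_publish())  # False until bob approves
```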

Show HN: I built a 1.8MB native app with self-built UI, vision and AI libraries

Show HN: An AI zettelkasten that extracts ideas from articles, videos, and PDFs

Hey HN! Over the weekend (leaning heavily on Opus 4.5) I wrote Jargon - an AI-managed zettelkasten that reads articles, papers, and YouTube videos, extracts the key ideas, and automatically links related concepts together.

Demo video: https://youtu.be/W7ejMqZ6EUQ

Repo: https://github.com/schoblaska/jargon

You can paste an article, PDF link, or YouTube video to parse, or ask questions directly and it'll find its own content. Sources get summarized, broken into insight cards, and embedded for semantic search. Similar ideas automatically cluster together. Each insight can spawn research threads - questions that trigger web searches to pull in related content, which flows through the same pipeline.

You can explore the graph of linked ideas directly, or ask questions and it'll RAG over your whole library plus fresh web results.

Jargon uses Rails + Hotwire with Falcon for async processing, pgvector for embeddings, Exa for neural web search, crawl4ai as a fallback scraper, and pdftotext for academic papers.
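Jargon itself is Rails + pgvector, but the "automatically link similar ideas" step boils down to nearest-neighbor search over embeddings. A minimal Python sketch of that idea looks roughly like this; the `embed()` function here is a toy stand-in, not a real embedding model, and the real app would query pgvector in SQL instead of doing this in process.

```python
# Minimal sketch of linking "insight cards" by embedding similarity.
# embed() is a deterministic toy stand-in so the example runs without a model;
# the real pipeline stores model embeddings in pgvector and queries neighbors in SQL.
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)          # unit-normalize for cosine similarity

cards = [
    "Spaced repetition improves long-term retention",
    "Zettelkasten links notes by idea, not by source",
    "Transformers use attention to weigh context tokens",
]
vectors = np.stack([embed(c) for c in cards])

def most_similar(query: str, k: int = 2) -> list[tuple[str, float]]:
    q = embed(query)
    scores = vectors @ q                   # dot product of unit vectors = cosine
    order = np.argsort(scores)[::-1][:k]
    return [(cards[i], float(scores[i])) for i in order]

print(most_similar("linking notes by concept"))
```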
