The best Show HN stories from Hacker News from the past day


Latest posts:

Show HN: Anchor Relay – A faster, easier way to get Let's Encrypt certificates

From the cryptic terminal commands to the innumerable ways to shoot yourself in the foot, I always struggled to use TLS certificates. I love how much easier (and cheaper) Let's Encrypt made it to get certificates, but there are still plenty of things to struggle with.

That's why we built Relay: a free, browser-based tool that streamlines the ACME workflow, especially for tricky setups like homelabs. Relay acts as a secure intermediary between your ACME client and public certificate authorities like Let's Encrypt.

Some ways Relay provides a better experience:

- really fast, streamlined certificates in minutes, with any ACME client
- one-time upfront DNS delegation, without inbound traffic or DNS credentials sprinkled everywhere
- clear insight into the whole ACME process, plus renewal reminders

Try Relay now: https://anchor.dev/relay

Or read our blog post: https://anchor.dev/blog/lets-get-your-homelab-https-certified

Please give it a try (it only takes a couple of minutes) and let me know what you think.
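As a rough illustration of the workflow Relay sits in: every ACME certificate authority (RFC 8555) exposes a "directory" endpoint listing the URLs a client drives issuance through. The minimal sketch below fetches Let's Encrypt's staging directory; a Relay setup would substitute whatever directory URL Relay provides (an assumption here; consult the Relay docs).

```python
import json
import urllib.request

# RFC 8555: every ACME CA exposes a "directory" of endpoint URLs.
# Let's Encrypt's staging directory is used here; a Relay-based setup
# would point at the directory URL Relay issues to you (assumption).
DIRECTORY_URL = "https://acme-staging-v02.api.letsencrypt.org/directory"

with urllib.request.urlopen(DIRECTORY_URL) as resp:
    directory = json.load(resp)

# An ACME client drives the rest of the issuance flow from these endpoints.
for key in ("newNonce", "newAccount", "newOrder"):
    print(f"{key}: {directory[key]}")
```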

Show HN: PlutoPrint – Generate PDFs and PNGs from HTML with Python

Hi everyone, I built PlutoPrint because I needed a simple way to generate beautiful PDFs and images directly from HTML with Python. Most of the tools I tried felt heavy, tricky to set up, or produced results that didn't look great, so I wanted something lightweight, modern, and fast. PlutoPrint is built on top of PlutoBook's rendering engine, which is designed for paged media, and then wrapped with a Python API that makes it easy to turn HTML or XML into crisp PDFs and PNGs. I've used it for things like invoices, reports, tickets, and even snapshots, and it can also integrate with Matplotlib to render charts directly into documents.

I'd be glad to hear what you think. If you've ever had to wrestle with generating PDFs or images from HTML, I hope this feels like a smoother option. Feedback, ideas, or even just impressions are all very welcome, and I'd love to learn how PlutoPrint could be more useful for you.
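For a sense of the API's shape, here is a minimal sketch. The names (`plutoprint.Book`, `load_html`, `write_to_pdf`) are assumptions inferred from the project's description of an HTML-to-PDF Python wrapper, and may differ from the real interface; treat this as illustrative and check the PlutoPrint docs.

```python
import plutoprint  # pip install plutoprint

# Assumed API names (Book, load_html, write_to_pdf) -- verify against
# the actual PlutoPrint documentation before relying on them.
book = plutoprint.Book()
book.load_html("<h1>Invoice #1234</h1><p>Total: $99.00</p>")
book.write_to_pdf("invoice.pdf")
```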

Show HN: Luminal – Open-source, search-based GPU compiler

Hi HN, I'm Joe. My friends Matthew, Jake and I are building Luminal (https://luminalai.com/), a GPU compiler for automatically generating fast GPU kernels for AI models. It uses search-based compilation to achieve high performance.

We take high-level model code, like you'd have in PyTorch, and generate very fast GPU code. We do that without using LLMs or AI; rather, we pose it as a search problem. Our compiler builds a search space, generates millions of possible kernels, and then searches through it to minimize runtime.

You can try out a demo in `demos/matmul` on Mac to see how Luminal takes a naive operation, represented in our IR of 12 simple operations, and compiles it to an optimized, tensor-core-enabled Metal kernel. Here's a video showing how: https://youtu.be/P2oNR8zxSAA

Our approach differs significantly from traditional ML libraries in that we ahead-of-time compile everything, generate a large search space of logically equivalent kernels, and search through it to find the fastest kernels. This allows us to leverage the Bitter Lesson to discover complex optimizations like Flash Attention entirely automatically, without needing manual heuristics. The best rule is no rule, the best heuristic is no heuristic: just search everything.

We're working on bringing CUDA support up to parity with Metal, adding more flexibility to the search space, adding full-model examples (like Llama), and adding very exotic hardware backends.

We aim to radically simplify the ML ecosystem while improving performance and hardware utilization. Please check out our repo: https://github.com/luminal-ai/luminal and I'd love to hear your thoughts!
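Luminal itself is written in Rust, but the core idea of search-based compilation can be shown with a toy Python sketch: enumerate logically equivalent candidate schedules (here, tile sizes for a blocked matmul), benchmark each, and keep the fastest. This is an illustration of the concept under simplified assumptions, not Luminal's actual code or search space.

```python
import time
import numpy as np

def blocked_matmul(a, b, tile):
    """Blocked matrix multiply; every tile size yields the same result."""
    n = a.shape[0]
    out = np.zeros((n, n))
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            for k in range(0, n, tile):
                out[i:i+tile, j:j+tile] += (
                    a[i:i+tile, k:k+tile] @ b[k:k+tile, j:j+tile]
                )
    return out

n = 256
a, b = np.random.rand(n, n), np.random.rand(n, n)

# The "search space": logically equivalent kernels differing only in schedule.
candidates = [16, 32, 64, 128]
timings = {}
for tile in candidates:
    start = time.perf_counter()
    blocked_matmul(a, b, tile)
    timings[tile] = time.perf_counter() - start

best = min(timings, key=timings.get)
print(f"fastest tile size: {best} ({timings[best]:.4f}s)")
```

A real search-based compiler explores a vastly larger space (loop orders, fusion, memory layouts, tensor-core instructions) and uses cost models to prune it, but the select-by-measured-runtime loop is the same shape.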

Show HN: I replaced vector databases with Git for AI memory (PoC)

Hey HN! I built a proof-of-concept for AI memory using Git instead of vector databases.

The insight: Git already solved versioned document management. Why are we building complex vector stores when we could just use markdown files with Git's built-in diff/blame/history?

How it works:

- Memories stored as markdown files in a Git repo
- Each conversation = one commit
- git diff shows how understanding evolves over time
- BM25 for search (no embeddings needed)
- LLMs generate search queries from conversation context

Example: Ask "how has my project evolved?" and it uses git diff to show actual changes in understanding, not just similarity scores.

This is very much a PoC - rough edges everywhere, not production ready. But it's been working surprisingly well for personal use. The entire index for a year of conversations fits in ~100MB of RAM with sub-second retrieval.

The cool part: you can git checkout to any point in time and see exactly what the AI knew then. Perfect reproducibility, human-readable storage, and you can manually edit memories if needed.

GitHub: https://github.com/Growth-Kinetics/DiffMem

Stack: Python, GitPython, rank-bm25, OpenRouter for LLM orchestration. MIT licensed.

Would love feedback on the approach. Is this crazy or clever? What am I missing that will bite me later?
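A minimal sketch of the two building blocks the post names: GitPython for commit-per-update storage and rank-bm25 for retrieval. The file layout and tokenization here are simplifications for illustration, not DiffMem's actual code.

```python
from pathlib import Path

from git import Repo                 # pip install GitPython
from rank_bm25 import BM25Okapi      # pip install rank-bm25

repo_dir = Path("memory-repo")
repo_dir.mkdir(exist_ok=True)
repo = Repo.init(repo_dir)

# One markdown file per topic; one commit per conversation turn.
note = repo_dir / "project.md"
note.write_text("# Project\nSwitched the parser from regex to a PEG grammar.\n")
repo.index.add([note.name])
repo.index.commit("conversation 42: parser rewrite discussed")

# BM25 over the current working tree -- no embeddings involved.
docs = [p.read_text() for p in repo_dir.glob("*.md")]
bm25 = BM25Okapi([d.lower().split() for d in docs])
scores = bm25.get_scores("how has the parser evolved".lower().split())
print(max(zip(scores, docs))[1])     # best-matching memory file
```

Because each update is a commit, `git diff` and `git checkout` give the time-travel behavior described above for free.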

Show HN: Project management system for Claude Code

I built a lightweight project management workflow to keep AI-driven development organized.

The problem was that context kept disappearing between tasks. With multiple Claude agents running in parallel, I'd lose track of specs, dependencies, and history. External PM tools didn't help because syncing them with repos always created friction.

The solution was to treat GitHub Issues as the database. The "system" is ~50 bash scripts and markdown configs that:

- Brainstorm with you to create a markdown PRD, spin up an epic, decompose it into tasks, and sync them with GitHub issues
- Track progress across parallel streams
- Keep everything traceable back to the original spec
- Run fast from the CLI (commands finish in seconds)

We've been using it internally for a few months and it's cut our shipping time roughly in half. Repo: https://github.com/automazeio/ccpm

It's still early and rough around the edges, but has worked well for us. I'd love feedback from others experimenting with GitHub-centric project management or AI-driven workflows.
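The repo implements this as bash scripts, but the core move, treating GitHub Issues as the task database, is easy to sketch. Below is a hypothetical Python version of one task-creation step using the official `gh` CLI; the `epic:` label convention is invented for illustration and is not necessarily what ccpm uses.

```python
import subprocess

def create_task(title: str, body: str, epic: str) -> None:
    """Create one GitHub issue per decomposed task, tagged with its epic."""
    # `gh issue create` is part of the official GitHub CLI; it infers the
    # repository from the current directory's git remote.
    subprocess.run(
        [
            "gh", "issue", "create",
            "--title", title,
            "--body", body,
            "--label", f"epic:{epic}",   # hypothetical labeling convention
        ],
        check=True,
    )

create_task(
    title="Add OAuth login flow",
    body="Decomposed from the auth PRD. Depends on the session storage task.",
    epic="authentication",
)
```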

Show HN: OS X Mavericks Forever

Show HN: I was curious about spherical helix, ended up making this visualization

I was wondering how to arrange objects along a spherical helix path, and read some articles about it.

I ended up learning about parametric equations again, and made this visualization to document what I learned:

https://visualrambling.space/moving-objects-in-3d/

Feel free to visit and let me know what you think!
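For reference, one standard parametrization of a spherical helix sweeps the polar angle t from 0 to pi (pole to pole) while the azimuth winds around several times faster. The sketch below generates points from that textbook form; it is not necessarily the exact parametrization used on the linked page.

```python
import math

def spherical_helix(n_points: int, turns: int, radius: float = 1.0):
    """Points on a helix winding `turns` times around a sphere, pole to pole."""
    points = []
    for i in range(n_points):
        frac = i / (n_points - 1)
        t = math.pi * frac                  # polar angle: 0 (top) .. pi (bottom)
        phi = 2 * math.pi * turns * frac    # azimuth: winds `turns` times
        x = radius * math.sin(t) * math.cos(phi)
        y = radius * math.sin(t) * math.sin(phi)
        z = radius * math.cos(t)
        points.append((x, y, z))
    return points

for p in spherical_helix(n_points=7, turns=3):
    print("%.3f %.3f %.3f" % p)
```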
