The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: Clyp – Clipboard Manager for Linux

Show HN: JavaScript-free (X)HTML Includes

(spoiler: it's XSLT)

I've been working on a little demo of how to avoid copy-pasting header/footer boilerplate on a simple static webpage. My goal is to approximate the experience of Jekyll/Hugo while eliminating the need for a build step before publishing. This demo shows how to get basic templating features with XSL, so you could write a blog post that looks like:

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="/template.xsl"?>
    <page>
      <title>My Article</title>
      <content>
        some content
        <ul>
          <li>hello</li>
          <li>hello</li>
        </ul>
      </content>
    </page>

Some properties which set this approach apart from other methods:

- no build step (no need to set up Jekyll on the client or configure GitHub/GitLab actions)
- works on any webserver (e.g. as opposed to server-side includes or actions)
- normal-looking URLs (e.g. `example.com/foobar` as opposed to `example.com/#page=foobar`)

There's been some talk about removing XSLT support from the HTML spec [0], so I figured I would show this proof of concept while it still works.

[0]: https://news.ycombinator.com/item?id=44952185

See also: grug-brain XSLT https://news.ycombinator.com/item?id=44393817
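A minimal sketch of what such a transform does, shown here with Python and lxml rather than the browser (the demo itself has the browser apply template.xsl via the xml-stylesheet processing instruction; the stylesheet and article strings below are illustrative assumptions, not the demo's actual files):

    # Apply a tiny XSLT template to a <page> document so the shared
    # header/footer markup lives in one stylesheet. Requires lxml.
    from lxml import etree

    TEMPLATE = b"""\
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/page">
        <html>
          <head><title><xsl:value-of select="title"/></title></head>
          <body>
            <header>shared site header</header>
            <h1><xsl:value-of select="title"/></h1>
            <xsl:copy-of select="content/node()"/>
            <footer>shared site footer</footer>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>
    """

    ARTICLE = b"""\
    <page>
      <title>My Article</title>
      <content>some content<ul><li>hello</li><li>hello</li></ul></content>
    </page>
    """

    # Compile the stylesheet and run the article through it, printing the
    # assembled HTML that a browser would otherwise build client-side.
    transform = etree.XSLT(etree.XML(TEMPLATE))
    print(str(transform(etree.XML(ARTICLE))))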

Show HN: ChartDB Cloud – Visualize and Share Database Diagrams

Guy (@guyb3) and I built ChartDB to generate ER diagrams from your database without needing direct database access (import via query/SQL/DBML). We started with an open-source version, and after seeing a lot of use we decided to make a cloud version.

Our OSS launch (1y ago): https://news.ycombinator.com/item?id=41339308

Now we're launching ChartDB Cloud, built for teams:

- Embed ERDs into docs, dev portals, or Miro/Notion, etc.
- Collaborate in real time (with live cursors, like Figma)
- Keep diagrams always in sync with your database
- Organize large, messy schemas without pain
- Export DDL in multiple SQL dialects (solved deterministically)
- AI assistant to brainstorm and generate new schema objects or schema changes

We designed it so working with databases feels less like a chore and more like a creative process.

Would love feedback, especially from teams dealing with messy schemas or outdated docs.

https://app.chartdb.io

Show HN: What country you would hit if you went straight where you're pointing

This app was designed to answer my wife's question "what country would we hit if we went straight?" (generally posed while pointing her phone).

But with two additional twists:

1. It loads up historical maps from different years (right now 1 BC, 700 AD, 1000 AD, 1300 AD, 1800 AD, 1900 AD), so you can see what you would hit if you had a time machine AND went in the direction your phone is pointing.

2. Tap a country/territory for an (AI-generated) blurb on what you are pointing at.

How it works: starting from your phone's bearing, we trace the great circle in 200 km steps, prefilter candidate countries with bounding boxes (~5–10 instead of ~200), then check ~20 km points along each segment to catch coastlines, and stop when the path first enters another country (a rough sketch of this stepping is shown below).

Great circles (https://www.movable-type.co.uk/scripts/latlong.html) are why you can hit Australia from NYC, even though that can be hard to see on a flat map.

There might be some weird stuff in the explanations; I haven't read all 1,400 of them. If you see something weird, let me know and I will update it!

The app is free and doesn't have ads or tracking: your location and bearing are only used locally to figure out where you are and what you're pointing at.

It will probably work best if you hold your phone pretty flat :)

Thank you to André Ourednik and all the contributors to the Historical Basemaps project: https://github.com/aourednik/historical-basemaps
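Not the app's actual code, but a minimal Python sketch of the stepping it describes, using the standard spherical destination-point formulas from the movable-type page linked above (the step size and the NYC example are just illustrative assumptions):

    import math

    R = 6371.0  # mean Earth radius, km

    def destination(lat, lon, bearing_deg, dist_km):
        """Point reached by travelling dist_km along a great circle from (lat, lon) at the given bearing."""
        d = dist_km / R
        brg = math.radians(bearing_deg)
        lat1, lon1 = math.radians(lat), math.radians(lon)
        lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                         math.cos(lat1) * math.sin(d) * math.cos(brg))
        lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                                 math.cos(d) - math.sin(lat1) * math.sin(lat2))
        # normalize longitude to [-180, 180)
        return math.degrees(lat2), (math.degrees(lon2) + 540) % 360 - 180

    def trace(lat, lon, bearing_deg, step_km=200, max_km=40000):
        """Yield points every step_km along the great circle, up to roughly one full circuit."""
        for dist in range(step_km, max_km + 1, step_km):
            yield destination(lat, lon, bearing_deg, dist)

    # e.g. pointing roughly south-east from New York City; a real lookup would
    # run a point-in-country test on each point and stop at the first foreign hit.
    for point in list(trace(40.71, -74.01, 135))[:5]:
        print(point)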

Show HN: Using Common Lisp from Inside the Browser

Show HN: Anchor Relay – A faster, easier way to get Let's Encrypt certificates

From the cryptic terminal commands to the innumerable ways to shoot yourself in the foot, I have always struggled to use TLS certificates. I love how much easier (and cheaper) Let's Encrypt made it to get certificates, but there are still plenty of things to struggle with.

That's why we built Relay: a free, browser-based tool that streamlines the ACME workflow, especially for tricky setups like homelabs. Relay acts as a secure intermediary between your ACME client and public certificate authorities like Let's Encrypt.

Some ways Relay provides a better experience:

- really fast, streamlined certificates in minutes, with any ACME client
- one-time upfront DNS delegation, without inbound traffic or DNS credentials sprinkled everywhere
- clear insights into the whole ACME process, plus renewal reminders

Try Relay now: https://anchor.dev/relay

Or read our blog post: https://anchor.dev/blog/lets-get-your-homelab-https-certified

Please give it a try (it only takes a couple of minutes) and let me know what you think.

Show HN: PlutoPrint – Generate PDFs and PNGs from HTML with Python

Hi everyone, I built PlutoPrint because I needed a simple way to generate beautiful PDFs and images directly from HTML with Python. Most of the tools I tried felt heavy, tricky to set up, or produced results that didn’t look great, so I wanted something lightweight, modern, and fast. PlutoPrint is built on top of PlutoBook’s rendering engine, which is designed for paged media, and then wrapped with a Python API that makes it easy to turn HTML or XML into crisp PDFs and PNGs. I’ve used it for things like invoices, reports, tickets, and even snapshots, and it can also integrate with Matplotlib to render charts directly into documents.

I’d be glad to hear what you think. If you’ve ever had to wrestle with generating PDFs or images from HTML, I hope this feels like a smoother option. Feedback, ideas, or even just impressions are all very welcome, and I’d love to learn how PlutoPrint could be more useful for you.
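A minimal usage sketch of the HTML-to-PDF/PNG idea; the names below (plutoprint.Book, load_html, write_to_pdf, write_to_png) are assumptions based on the description above rather than verified API, so check the project's README for the real calls:

    import plutoprint

    # Hypothetical names throughout; consult PlutoPrint's documentation for the actual API.
    book = plutoprint.Book()                                   # assumed constructor
    book.load_html("<h1>Invoice #42</h1><p>Total: $10</p>")    # assumed HTML loader
    book.write_to_pdf("invoice.pdf")                           # assumed PDF export
    book.write_to_png("invoice.png")                           # assumed PNG export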

Show HN: Luminal – Open-source, search-based GPU compiler

Hi HN, I’m Joe. My friends Matthew, Jake, and I are building Luminal (https://luminalai.com/), a GPU compiler for automatically generating fast GPU kernels for AI models. It uses search-based compilation to achieve high performance.

We take high-level model code, like you'd have in PyTorch, and generate very fast GPU code. We do that without using LLMs or AI; rather, we pose it as a search problem. Our compiler builds a search space, generates millions of possible kernels, and then searches through it to minimize runtime.

You can try out a demo in `demos/matmul` on Mac to see how Luminal takes a naive operation, represented in our IR of 12 simple operations, and compiles it to an optimized, tensor-core-enabled Metal kernel. Here’s a video showing how: https://youtu.be/P2oNR8zxSAA

Our approach differs significantly from traditional ML libraries in that we ahead-of-time compile everything, generate a large search space of logically equivalent kernels, and search through it to find the fastest ones. This allows us to leverage the Bitter Lesson to discover complex optimizations like Flash Attention entirely automatically, without needing manual heuristics. The best rule is no rule, the best heuristic is no heuristic: just search everything.

We’re working on bringing CUDA support up to parity with Metal, adding more flexibility to the search space, adding full-model examples (like Llama), and adding very exotic hardware backends.

We aim to radically simplify the ML ecosystem while improving performance and hardware utilization. Please check out our repo: https://github.com/luminal-ai/luminal. I’d love to hear your thoughts!
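Luminal itself is a Rust compiler that searches over real Metal/CUDA kernels, but the core idea of search-based compilation can be illustrated with a toy Python sketch (not Luminal's code; the tile sizes and matrix shapes are arbitrary assumptions): enumerate logically equivalent variants of one operation, time each, and keep the fastest.

    import time
    import numpy as np

    def blocked_matmul(A, B, block):
        """One candidate 'kernel': a blocked matmul with a given tile size."""
        n, k = A.shape
        _, m = B.shape
        C = np.zeros((n, m))
        for i in range(0, n, block):
            for j in range(0, m, block):
                for p in range(0, k, block):
                    C[i:i+block, j:j+block] += A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
        return C

    def search(A, B, candidates):
        """Time every candidate variant and return the fastest tile size."""
        best, best_time = None, float("inf")
        for block in candidates:
            start = time.perf_counter()
            blocked_matmul(A, B, block)
            elapsed = time.perf_counter() - start
            if elapsed < best_time:
                best, best_time = block, elapsed
        return best, best_time

    A, B = np.random.rand(256, 256), np.random.rand(256, 256)
    print(search(A, B, [16, 32, 64, 128, 256]))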

Show HN: I replaced vector databases with Git for AI memory (PoC)

Hey HN! I built a proof-of-concept for AI memory using Git instead of vector databases.

The insight: Git already solved versioned document management. Why are we building complex vector stores when we could just use markdown files with Git's built-in diff/blame/history?

How it works:

- Memories are stored as markdown files in a Git repo
- Each conversation = one commit
- git diff shows how understanding evolves over time
- BM25 for search (no embeddings needed)
- LLMs generate search queries from conversation context

Example: ask "how has my project evolved?" and it uses git diff to show actual changes in understanding, not just similarity scores.

This is very much a PoC: rough edges everywhere, not production-ready. But it's been working surprisingly well for personal use. The entire index for a year of conversations fits in ~100 MB of RAM with sub-second retrieval.

The cool part: you can git checkout to any point in time and see exactly what the AI knew then. Perfect reproducibility, human-readable storage, and you can manually edit memories if needed.

GitHub: https://github.com/Growth-Kinetics/DiffMem

Stack: Python, GitPython, rank-bm25, OpenRouter for LLM orchestration. MIT licensed.

Would love feedback on the approach. Is this crazy or clever? What am I missing that will bite me later?
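A minimal sketch of that core loop under the stated stack (GitPython plus rank-bm25); this illustrates the idea rather than DiffMem's actual code, and the repo location, file layout, and commit message are assumptions:

    from pathlib import Path

    from git import Repo                # GitPython
    from rank_bm25 import BM25Okapi     # pure-Python BM25, no embeddings

    REPO_DIR = Path("memory-repo")      # hypothetical location of the memory repo

    def remember(name: str, text: str) -> None:
        """Write one markdown memory and commit it, so git diff/log can track how it evolves."""
        REPO_DIR.mkdir(exist_ok=True)
        repo = Repo.init(REPO_DIR)
        path = REPO_DIR / f"{name}.md"
        path.write_text(text)
        repo.index.add([path.name])
        repo.index.commit(f"update memory: {name}")

    def recall(query: str, n: int = 3) -> list[str]:
        """Rank every memory file against the query with BM25 and return the top matches."""
        docs = [p.read_text() for p in sorted(REPO_DIR.glob("*.md"))]
        bm25 = BM25Okapi([d.lower().split() for d in docs])
        return bm25.get_top_n(query.lower().split(), docs, n=n)

    if __name__ == "__main__":
        remember("project", "Switched the backend from a vector DB to plain Git storage.")
        print(recall("how has my project evolved"))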
