The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: Undercutf1 – F1 Live Timing TUI with Driver Tracker, Variable Delay

undercutf1 is an F1 live timing app, built as a TUI. It contains traditional timing pages like a Driver Tracker, Timing Tower, and Race Control, along with more detailed analysis like lap and gap history, so that you can see strategies unfolding.

I started building undercutf1 almost two years ago, after becoming increasingly frustrated with the TV direction and the lack of detailed information coming out of the live feed. Overtakes were often missed, and strategies were often ill-explained or overlooked. I discovered that F1 live timing data is available over a simple SignalR stream, so I set out to build an app that would let me see all the information I could dream of. Now undercutf1 serves as the perfect companion (like a second Martin Brundle) when I'm watching the sessions live.

If you want to test it out, you can replay the Suzuka race easily by downloading the timing data, then starting a simulated session:

1. Download undercutf1 using the installation instructions in the README.
2. Import the Suzuka race session data using `undercutf1 import 2025 -m 1256 -s 10006`.
3. Start the app (`undercutf1`), press S (Session) then F (Simulated Session), select Suzuka then Race using the arrow keys, then press Enter.
4. Use the arrow keys to navigate between the timing pages, and use N / Shift+N to fast-forward through the session.

If you want to test it out during this weekend's Jeddah GP, simply install as in the README, then start a live session by pressing S (Session) then L (Live Session).

The app is built for a terminal of roughly 110x30 cells, which probably seems an odd size, but it just so happens to be the size of a fullscreen terminal on a MBP zoomed in far enough that the text is easily glanceable when the laptop is placed on a coffee table some distance away from me :) Other terminal sizes will work fine, but information density/scaling may not be ideal.

If you're using the TUI during a live session, you'll want to synchronise the delay of the timing feed to your TV feed. Use the N/M keys to increase/decrease the delay. During non-race sessions, I find it fairly easy to sync the session clock on TV with the session clock on the bottom left of the timing screen. For race sessions, synchronisation is a little harder. I usually aim to sync the start of the race time (e.g. 13:00 on the timing screen clock) with the start of the formation lap, where the live feed helpfully shows the clock tick over to 0 minutes. I usually delay the feed by 30 to 60 seconds.

Show HN: Goldbach Conjecture up to 4*10^18+7*10^13

Achieved a new world record in verifying the Goldbach Conjecture using grid computing, by extending the verification up to 4 quintillion (4×10¹⁸) plus 70 trillion (7×10¹³).

My grid computing system, Gridbach, is a cloud-based distributed computing system accessible from any PC or smartphone. It requires no login or app installation. The high-performance WASM (WebAssembly) binary is downloaded as browser content, enabling computation in the user's browser.

[Website] https://gridbach.com/

[Medium] https://medium.com/@jay_gridbach/grid-computing-shatters-world-record-for-goldbach-conjecture-verification-1ef3dc58a38d
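Gridbach's actual WASM kernel isn't shown in the post; as a rough illustration of what "verifying" means here, below is a minimal C sketch that finds, for each even n, the smallest prime p with n - p also prime. The helper names are hypothetical, and record-scale runs use sieves and much faster primality tests rather than trial division.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Trial-division primality: fine for a demo, far too slow at the
   10^18 scale, where sieves and batched tests are used instead. */
static bool is_prime(uint64_t n) {
    if (n < 2) return false;
    for (uint64_t d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}

/* Smallest prime p such that n - p is also prime; verification
   efforts typically record this minimal witness for each even n.
   Returning 0 would mean a counterexample to the conjecture. */
static uint64_t goldbach_witness(uint64_t n) {
    for (uint64_t p = 2; p <= n / 2; p++)
        if (is_prime(p) && is_prime(n - p)) return p;
    return 0;
}

int main(void) {
    for (uint64_t n = 4; n <= 60; n += 2) {
        uint64_t p = goldbach_witness(n);
        printf("%2llu = %llu + %llu\n", (unsigned long long)n,
               (unsigned long long)p, (unsigned long long)(n - p));
    }
    return 0;
}
```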

Show HN: I built an AI that turns GitHub codebases into easy tutorials

https://the-pocket.github.io/Tutorial-Codebase-Knowledge/

Show HN: LTE-connected IoT module with remote programming and NL data analysis

Hi HN! I've been working on this IoT platform that aims to simplify deploying remote sensor networks by combining pre-configured LTE hardware with a cloud platform for remote programming and AI-based analysis.

The main challenges I'm trying to solve are:

1. Eliminating infrastructure setup headaches for IoT deployments.
2. Making remote programming and debugging practical for devices that might be difficult to access physically.
3. Using natural language for analyzing sensor data, and possibly taking actions based on the analysis.

I'd really appreciate feedback on:

1. Is this approach to IoT development interesting to you?
2. What use cases would you want to explore with this kind of platform?
3. What concerns would you have about adopting something like this?
4. Could anyone recommend workflows or tools for making the AI agent more reliable? Currently using LLMs to generate isolated SQL queries to extract data, but ensuring consistent responses has been challenging.

Thanks for any thoughts, and feel free to ask any questions about how the hardware or platform works. Happy to dive into the details!
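On question 4, one cheap guardrail (sketched below under the assumption of a SQLite-style store; the post doesn't say what the platform actually uses) is to prepare the generated SQL before running it and reject anything that writes or chains a second statement. `is_safe_readonly` is a hypothetical helper name.

```c
#include <sqlite3.h>
#include <stdbool.h>

/* Hypothetical guardrail: accept an LLM-generated query only if it
   parses, is a single statement, and performs no writes. */
static bool is_safe_readonly(sqlite3 *db, const char *sql) {
    sqlite3_stmt *stmt = NULL;
    const char *tail = NULL;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, &tail) != SQLITE_OK || !stmt)
        return false;                          /* didn't parse, or empty */
    bool ok = sqlite3_stmt_readonly(stmt)      /* rejects INSERT/UPDATE/DDL */
           && (tail == NULL || *tail == '\0'); /* rejects "...; DROP ..." */
    sqlite3_finalize(stmt);
    return ok;
}
```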

Show HN: (bits) of a Libc, Optimized for Wasm

I make a no-CGO Go SQLite driver by compiling the amalgamation to Wasm, then loading the result with wazero (a CGO-free Wasm runtime).

To compile SQLite, I use wasi-sdk, which uses wasi-libc, which is based on musl. It's been said that musl is slow(er than glibc), which is true, to a point.

musl uses SWAR on a size_t to implement various functions in string.h. This is fine, except size_t is just 32-bit on Wasm.

I found that implementing a few of those functions with Wasm SIMD128 can make them go around 4x faster. Other functions don't even use SWAR; redoing *those* can make them 16x faster.

Smoothsort also has trouble pulling its own weight; a Shellsort seems both simpler and faster, while similarly avoiding recursion, allocations, and the addressable stack.

I found that using SIMD intrinsics (rather than SWAR) makes it easier to avoid UB, but the code would definitely benefit from more eyeballs.

See this for some benchmarks on both x86-64 and AArch64: https://github.com/ncruces/go-sqlite3/actions/runs/14516931864
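For a concrete picture of the technique (a sketch of the general approach, not the driver's actual code): a SIMD128 `strlen` scans 16 bytes per iteration by comparing a whole lane against zero and extracting a bitmask, relying on the usual trick that reading the aligned block containing a byte never crosses into an unmapped page.

```c
#include <stddef.h>
#include <stdint.h>
#include <wasm_simd128.h>  /* clang/wasi-sdk intrinsics; compile with -msimd128 */

/* Sketch of a SIMD128 strlen: 16 bytes per iteration. Over-reading is
   safe because loads stay within the aligned 16-byte block. */
size_t strlen_simd128(const char *s) {
    uintptr_t align = (uintptr_t)s & 15;
    const v128_t *w = (const v128_t *)(s - align);
    const v128_t zero = wasm_i8x16_splat(0);
    /* First block: drop the mask bits for bytes before the string starts. */
    uint32_t mask = wasm_i8x16_bitmask(wasm_i8x16_eq(wasm_v128_load(w), zero));
    mask >>= align;
    if (mask) return (size_t)__builtin_ctz(mask);
    for (;;) {
        mask = wasm_i8x16_bitmask(wasm_i8x16_eq(wasm_v128_load(++w), zero));
        if (mask) return (size_t)((const char *)w - s) + __builtin_ctz(mask);
    }
}
```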

Show HN: Attune - Build and publish APT repositories in seconds

Hey HN, we're Eliza and Xin, and we've been working on Attune. Attune is a tool for publishing Linux packages.

Previously, we worked at other startups building open source developer tools that ran on our customers' CI and development machines. For many of them, being able to `apt-get install` our tools was a requirement.

When we went to actually set up APT repositories, we were really surprised by the state of tooling around package publishing. The open source tools we found were old, slow, and difficult to figure out how to run in CI. The commercial tools we found were not much better. The cloud-hosted vendors required us to provide our signing keys to a cloud vendor (which was a non-starter), while the self-hosted vendors required us to operate our own specialized hosting servers.

We just wanted something simple: sign locally, run quickly, be easy to use, and deploy to managed object storage.

We couldn't find it, so we built it. If you want to try it out, you can create a repository with three commands:

```
attune repo create --uri https://apt.releases.example.com
attune repo pkg add --repo-id 123 package.deb
attune repo sync --repo-id 123
```

You can get the tool at https://github.com/attunehq/attune. There are a lot of rough edges right now since it's so new - sorry in advance, we're working on sanding those down.

It's fully open source under Apache 2. We're also working with some early customers to build enterprise features like audit logging, RBAC, and HSM integrations, and we're thinking about building a managed cloud hosting service as well.

We'd love your feedback on whether this is useful for you, and what you'd like to see next. We're well aware that publishing is a small piece of CI/CD, but we think a lot of the tooling in this area (publishing, artifact registries, package repositories) could really use some love.

What do you think? Comment here, or email us at founders@attunehq.com.

Less Slow C++

Earlier this year, I took a month to reexamine my coding habits and rethink some past design choices. I hope to rewrite and improve my FOSS libraries this year, and I needed answers to a few questions first. Perhaps some of these questions will resonate with others in the community, too.

- Are coroutines viable for high-performance work?
- Should I use SIMD intrinsics for clarity or drop to assembly for easier library distribution?
- Has hardware caught up with vectorized scatter/gather in AVX-512 & SVE?
- How do secure enclaves & pointer tagging differ on Intel, Arm, & AMD?
- What's the throughput gap between CPU and GPU Tensor Cores (TCs)?
- How costly are misaligned memory accesses & split-loads, and what gains do non-temporal loads/stores offer?
- Which parts of the standard library hit performance hardest?
- How do error-handling strategies compare overhead-wise?
- What's the compile-time vs. run-time trade-off for lazily evaluated ranges?
- What practical, non-trivial use cases exist for meta-programming?
- How challenging is Linux kernel bypass with io_uring vs. POSIX sockets?
- How close are we to effectively using Networking TS or heterogeneous Executors in C++?
- What are best practices for propagating stateful allocators in nested containers, and which libraries support them?

These questions span from micro-kernel optimizations (nanoseconds) to distributed systems (micro/millisecond latencies). Rather than tackling them all in one post, I compiled my explorations into a repository - extending my previous Google Benchmark tutorial (https://ashvardanian.com/posts/google-benchmark) - to serve as a sandbox for performance experimentation.

Some fun observations:

- Compilers now vectorize 3x3x3 and 4x4x4 single/double-precision multiplications well! The smaller one is ~60% slower despite 70% fewer operations, outperforming my vanilla SSE/AVX and coming within 10% of AVX-512.
- Nvidia TCs vary dramatically across generations in numeric types, throughput, tile shapes, thread synchronization (thread/quad-pair/warp/warp-groups), and operand storage. Post-Volta, manual PTX is often needed (as intrinsics lag), though the new TileIR (introduced at GTC) promises improvements for dense linear algebra kernels.
- The AI wave drives CPUs and GPUs to converge in mat-mul throughput & programming complexity. It took me a day to debug TMM register initialization, and SME is equally odd. Sierra Forest packs 288 cores/socket, and AVX10.2 drops 256-bit support for 512-bit... I wonder if discrete Intel GPUs are even needed, given CPU advances?
- In common floating-point ranges, scalar sine approximations can be up to 40x faster than standard implementations, even without SIMD (a sketch of the idea follows this post). It's a bit hand-wavy, though; I wish more projects documented error bounds and had 1 & 3.5 ULP variants like Sleef.
- Meta-programming tools like CTRE can outperform typical RegEx engines by 5x and simplify building parsers compared to hand-crafted FSMs.
- Once clearly distinct in complexity and performance (DPDK/SPDK vs. io_uring), the gap is narrowing. While pre-5.5 io_uring can boost UDP throughput by 4x on loopback IO, newer zero-copy and concurrency optimizations remain challenging.

The repository is loaded with links to favorite CppCon lectures, GitHub snippets, and tech blog posts. Recognizing that many high-level concepts are handled differently across languages, I've also started porting examples to Rust & Python in separate repos. Coroutines look bad everywhere :(

Overall, this research project was rewarding! Most questions found answers in code - except pointer tagging and secure enclaves, which still elude me in public cloud. I'd love to hear from others, especially on comparing High-Level Synthesis for small matrix multiplications on FPGAs versus hand-written VHDL/Verilog for integral types. Let me know if you have ideas for other cool, obscure topics to cover!
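To make the scalar-sine point concrete, here is a minimal sketch of the approach, assuming plain Taylor coefficients for readability; production kernels like Sleef's use minimax fits, and range reduction plus documented error bounds are exactly the hard parts the post flags.

```c
#include <math.h>
#include <stdio.h>

/* Degree-7 odd polynomial for sin(x), valid only near [-pi/4, pi/4].
   Coefficients are the Taylor terms 1, -1/6, 1/120, -1/5040; a real
   implementation would use minimax coefficients and range reduction. */
static double sin_poly(double x) {
    double x2 = x * x;
    return x * (1.0 + x2 * (-1.0 / 6 + x2 * (1.0 / 120 + x2 * (-1.0 / 5040))));
}

int main(void) {
    for (double x = -0.75; x <= 0.8; x += 0.25)
        printf("x=% .2f  poly=% .10f  libm=% .10f\n", x, sin_poly(x), sin(x));
    return 0;
}
```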

Show HN: I made a Doom-like game fit inside a QR code

I sometimes pick up random projects just because I can; this was one of those times. I made it as a week-long project a while back this year but never shared it here, so thought to go for it haha.

I created a game inspired by Doom and the backrooms called The Backdooms, under 2.4 kB of minified HTML (for reference, this entire post would be around 1.8 kB haha). I had to use an unpopular approach, GZip compression with zlib headers (I had to write my own script for compressing it, also in the repo), to eventually convert it into a version 40 QR code that works right in your browser using the DecompressionStream API.

This is of course a very oversimplified description of it; it uses a lot of the same technologies that DOOM had, but combining them with infinite seed-based map generation in 2.4 kB (QR codes can only store about 3 kB, and that includes changing formats) was pretty hard.

Here are some links about it if you want to nerd out and read more:

Repository link (MIT License): https://github.com/Kuberwastaken/backdooms

A hosted (slightly improved) version of The Backdooms: https://kuberwastaken.github.io/backdooms/

Game trailer: https://www.youtube.com/shorts/QWPr10cAuGc

My LinkedIn post about it: https://www.linkedin.com/feed/update/urn:li:activity:7295667546089799681/

(PS: You'd need something like https://qrscanner.org/ or something else that can scan bigger QR codes and put the text data onto your browser to play it.)

My blogs documenting the process and development in detail:

https://kuberwastaken.github.io/blog/Projects/How-I-Managed-To-Get-Doom-In-A-QR-Code
https://kuberwastaken.github.io/blog/Projects/How-I-Managed-To-Make-HTML-Game-Compression-So-Much-Better
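On the "GZip with zlib headers" point: the post's own build script isn't reproduced here, but the framing overhead it exploits is easy to demonstrate with zlib (a sketch, assuming zlib is installed). The same deflate stream costs 2 + 4 bytes of header/trailer in zlib framing versus 10 + 8 in gzip framing, and at a ~3 kB QR budget those bytes matter.

```c
#include <stdio.h>
#include <string.h>
#include <zlib.h>   /* link with -lz */

/* Compress the same input with zlib framing (windowBits = 15) and
   gzip framing (windowBits = 15 + 16), returning the output size.
   zlib framing: 2-byte header + 4-byte Adler-32 trailer;
   gzip framing: 10-byte header + 8-byte CRC/length trailer. */
static unsigned long deflate_size(const unsigned char *src, unsigned long n,
                                  int window_bits) {
    z_stream zs;
    memset(&zs, 0, sizeof zs);
    deflateInit2(&zs, Z_BEST_COMPRESSION, Z_DEFLATED, window_bits, 8,
                 Z_DEFAULT_STRATEGY);
    static unsigned char out[8192];
    zs.next_in = (Bytef *)src;
    zs.avail_in = n;
    zs.next_out = out;
    zs.avail_out = sizeof out;
    deflate(&zs, Z_FINISH);
    unsigned long total = zs.total_out;
    deflateEnd(&zs);
    return total;
}

int main(void) {
    const unsigned char html[] = "<html><!-- imagine 2.4 kB of game here --></html>";
    unsigned long n = sizeof html - 1;
    printf("zlib framing: %lu bytes\n", deflate_size(html, n, 15));
    printf("gzip framing: %lu bytes\n", deflate_size(html, n, 15 + 16));
    return 0;
}
```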
