The best Show HN stories from Hacker News from the past week
Latest posts:
Show HN: Boing
Show HN: Explore what the browser exposes about you
I built a tool that reveals the data your browser exposes automatically every time you visit a website.<p>GitHub: <a href="https://github.com/neberej/exposedbydefault" rel="nofollow">https://github.com/neberej/exposedbydefault</a><p>Demo: <a href="https://neberej.github.io/exposedbydefault/" rel="nofollow">https://neberej.github.io/exposedbydefault/</a><p>Note: No data is sent anywhere. Everything runs in your browser.
Anthony Bourdain's Lost Li.st's
Over the years I'd read about Bourdain's content on the defunct li.st service, but was never able to find an archive of it. A more thorough perusal of archive.org and a pointer from an Internet stranger led me to create this site. Cheers
Show HN: Glasses to detect smart-glasses that have cameras
Hi! Recently, smart glasses with cameras like the Meta Ray-Bans seem to be getting more popular, as does some people's desire to remove or cover up the recording-indicator LED. I wanted to see if there's a way to detect when people are recording with these types of glasses, so a little while ago I started working on this project. I've hit a bit of a wall though, so I'm very much open to ideas!<p>I've written a bunch more on the link (+photos are there), but essentially this uses 2 fingerprinting approaches:
- Retro-reflectivity of the camera sensor, by looking at IR reflections. Mixed results here.
- Wireless traffic (primarily BLE, also looking into Bluetooth Classic and Wi-Fi)<p>For the latter, I'm currently just using an ESP32, and I can consistently detect when the Meta Ray-Bans are 1) pairing, 2) first powered on, and 3) (less consistently) when they're taken out of the charging case. When it does detect something, it plays a little jingle next to your ear.<p>Ideally I want to be able to detect them when they're in use, not just at boot. I've come across the nRF52840, which seems like it can follow directed BLE traffic beyond the initial broadcast, but from my understanding it would still need to catch the first CONNECT_REQ event regardless. On the Bluetooth Classic side of things, all the hardware looks really expensive! Any ideas are appreciated. Thanks!
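The advertisement-matching side of this can be caricatured in a few lines. Below is a hypothetical Python sketch of the classification logic only (the real detector runs on an ESP32); the manufacturer ID and payload prefixes are invented placeholders, not the real Meta Ray-Ban values:

```python
from dataclasses import dataclass
from typing import Optional

# Placeholder values -- NOT the real Meta Ray-Ban identifiers.
FAKE_MFG_ID = 0x01AB

@dataclass
class Advertisement:
    mfg_id: int     # BLE manufacturer-specific company identifier
    payload: bytes  # manufacturer data payload

def classify(adv: Advertisement) -> Optional[str]:
    """Map a sniffed advertisement to a detection event, or None if it
    doesn't match the fingerprint we're looking for."""
    if adv.mfg_id != FAKE_MFG_ID:
        return None
    if adv.payload.startswith(b"\x01"):  # placeholder: pairing beacon
        return "pairing"
    if adv.payload.startswith(b"\x02"):  # placeholder: boot announcement
        return "powered_on"
    return "unknown"
```

The hard part the post describes isn't this matching step but visibility: once a BLE connection is established, traffic hops channels and is no longer broadcast, which is why a plain scanner only catches pairing/boot events.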
Show HN: KiDoom – Running DOOM on PCB Traces
I got DOOM running in KiCad by rendering it with PCB traces and footprints instead of pixels.<p>Walls are rendered as PCB_TRACK traces, and entities (enemies, items, player) are actual component footprints - SOT-23 for small items, SOIC-8 for decorations, QFP-64 for enemies and the player.<p>How I did it:<p>Started by patching DOOM's source code to extract vector data directly from the engine. Instead of trying to render 64,000 pixels (which would be impossibly slow), I grab the geometry DOOM already calculates internally - the drawsegs[] array for walls and vissprites[] for entities.<p>Added a field to the vissprite_t structure to capture entity types (MT_SHOTGUY, MT_PLAYER, etc.) during R_ProjectSprite(). This lets me map 150+ entity types to appropriate footprint categories.<p>The DOOM engine sends this vector data over a Unix socket to a Python plugin running in KiCad. The plugin pre-allocates pools of traces and footprints at startup, then just updates their positions each frame instead of creating/destroying objects, and calls pcbnew.Refresh() to update the display.<p>Runs at 10-25 FPS depending on hardware. The bottleneck is KiCad's refresh, not DOOM or the data transfer.<p>Also renders to an SDL window (for actual gameplay) and a Python wireframe window (for debugging), so you get three views running simultaneously.<p>Follow-up: ScopeDoom<p>After getting the wireframe renderer working, I wanted to push it somewhere more physical. Oscilloscopes in X-Y mode are vector displays - feed X coordinates to one channel, Y to the other. I didn't have a function generator, so I used my MacBook's headphone jack instead.<p>The sound card is just a dual-channel DAC at 44.1kHz. Wired 3.5mm jack → 1kΩ resistors → scope CH1 (X) and CH2 (Y). Reused the same vector extraction from KiDoom, but the Python script converts coordinates to ±1V range and streams them as audio samples.<p>Each wall becomes a wireframe box, and the scope traces along each line.
With ~7,000 points per frame at 44.1kHz, refresh rate is about 6 Hz - slow enough to be a slideshow, but level geometry is clearly recognizable. A 96kHz audio interface or analog scope would improve it significantly (digital scopes do sample-and-hold instead of continuous beam tracing).<p>Links:<p>KiDoom GitHub: <a href="https://github.com/MichaelAyles/KiDoom" rel="nofollow">https://github.com/MichaelAyles/KiDoom</a>, writeup: <a href="https://www.mikeayles.com/#kidoom" rel="nofollow">https://www.mikeayles.com/#kidoom</a><p>ScopeDoom GitHub: <a href="https://github.com/MichaelAyles/ScopeDoom" rel="nofollow">https://github.com/MichaelAyles/ScopeDoom</a>, writeup: <a href="https://www.mikeayles.com/#scopedoom" rel="nofollow">https://www.mikeayles.com/#scopedoom</a>
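The coordinate-to-audio conversion is simple enough to sketch. Assuming a list of (x, y) wireframe vertices and the level's bounding box, something like this (a simplified stand-in for the actual ScopeDoom script, not its code) maps them into the ±1.0 sample range a sound card expects:

```python
import numpy as np

def points_to_stereo(points, bounds):
    """Scale (x, y) vertex coordinates into the ±1.0 range of a sound-card
    DAC, returning a (N, 2) stereo buffer: X on the left channel (scope CH1),
    Y on the right (scope CH2). `bounds` = (xmin, xmax, ymin, ymax)."""
    xmin, xmax, ymin, ymax = bounds
    pts = np.asarray(points, dtype=np.float64)
    x = 2.0 * (pts[:, 0] - xmin) / (xmax - xmin) - 1.0
    y = 2.0 * (pts[:, 1] - ymin) / (ymax - ymin) - 1.0
    return np.column_stack([x, y])  # stream this at 44.1 kHz
```

This also explains the refresh figure: at 44.1 kHz with ~7,000 points per frame, you get 44100 / 7000 ≈ 6.3 full redraws per second.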
Show HN: We built an open source, zero webhooks payment processor
Hi HN! For the past while we’ve been building Flowglad (<a href="https://flowglad.com">https://flowglad.com</a>) and feel it’s now gotten good enough to share with you all:<p>Repo: <a href="https://github.com/flowglad/flowglad" rel="nofollow">https://github.com/flowglad/flowglad</a><p>Demo video: <a href="https://www.youtube.com/watch?v=G6H0c1Cd2kU" rel="nofollow">https://www.youtube.com/watch?v=G6H0c1Cd2kU</a><p>Flowglad is a payment processor that you integrate without writing any glue code. Along with processing your payments, it tells you in real time the features and usage credit balances that your customers have available to them based on their billing state. The DX feels like React, because we wanted to bring the reactive programming paradigm to payments.<p>We make it easy to spin up full-fledged pricing models (including usage meters, feature gates, and usage credit grants) in a few clicks. We schematize these pricing models into a pricing.yaml file that’s kind of like Terraform, but for your pricing.<p>The result is a payments layer that AI coding agents have a substantially easier time one-shotting (for now the happiest path is a fullstack TypeScript + React app).<p>Why we built this:<p>- After a decade of building on Stripe, we found it powerful but underopinionated. It left us doing a lot of rote work to set up fairly standard use cases
- That meant more code to maintain, much of which is brittle because it crosses so many server-client boundaries
- Not to mention choreographing the lifecycle of our business domain with the Stripe checkout flow and webhook event types, of which there are 250+
- Online payments have gotten complex - not just new pricing models for AI products, but also cross-border sales tax, etc. You either need to handle significant chunks of it yourself, or sign up for and compose multiple services<p>This all feels unduly clunky, especially when compared to how easy other layers like hosting and databases have gotten in recent years.<p>These patterns haven’t changed much in a decade. And while coding agents can nail every other rote part of an app (auth, db, analytics), payments is the scariest to tab-tab-tab your way through, because the existing integration patterns are difficult to reason about, difficult to verify for correctness, and absolutely mission critical.<p>Our beta version lets you:<p>- Spin up common pricing models in just a few clicks, and customize them as needed
- Clone pricing models between testmode and live mode, and import / export via pricing.yaml
- Check customer usage credits and feature access in real time on your backend and React frontend
- Integrate without any DB schema changes - you reference your customers via your ids, and reference prices, products, features and usage meters via slugs that you define<p>We’re still early in our journey so would love your feedback and opinions. Billing has a lot of use cases, so if you see anything that you wish we supported, please let us know!
Show HN: Stun LLMs with thousands of invisible Unicode characters
I made a free tool that stuns LLMs with invisible Unicode characters.<p>*Use cases:* Anti-plagiarism, text obfuscation against LLM scrapers, or just for fun!<p>Even just one word's worth of “gibberified” text is enough to block most LLMs from responding coherently.
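The core trick can be sketched in a few lines of Python. This is an illustrative reconstruction, not the tool's actual code; the particular characters and density are assumptions:

```python
import random

# Zero-width code points that render as nothing but still occupy positions
# in the string: ZWSP, ZWNJ, ZWJ, and WORD JOINER.
ZERO_WIDTH = ["\u200b", "\u200c", "\u200d", "\u2060"]

def gibberify(text: str, per_char: int = 3, seed: int = 0) -> str:
    """Interleave invisible code points after every visible character,
    leaving the rendered text unchanged while bloating the token stream."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        out.append(ch)
        out.extend(rng.choice(ZERO_WIDTH) for _ in range(per_char))
    return "".join(out)
```

The visible text survives a copy-paste, but a tokenizer sees several invisible code points per character, which is what degrades LLM output.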
Show HN: I built an interactive HN Simulator
Hey HN! Just for fun, I built an interactive Hacker News Simulator.<p>You can submit text posts and links, just like the real HN. But on HN Simulator, all of the comments are generated by LLMs and appear instantly.<p>The best way to use it (IMHO) is to submit a text post or a curl-able URL here: <a href="https://news.ysimulator.run/submit" rel="nofollow">https://news.ysimulator.run/submit</a>. You don't need an account to post.<p>When you do that, various prompts will be built from a library of commenter archetypes, moods, and shapes. The AI commenters will actually respond to your text post and/or submitted link.<p>I really wanted it to feel real, and I think the project mostly delivers on that. When I was developing it, I kept getting confused about which tab was the "real" HN and which was the simulator, and accidentally submitted some junk to HN. (Sorry dang and team – I did clean up after myself.)<p>The app itself is built with Node + Express + Postgres, and all of the inference runs on Replicate.<p>Speaking of Replicate, they generously loaded me up with some free credits for the inference – so shoutout to the team there.<p>The most technically interesting part of the app is how the comments work. You can read more about it here, as well as explore all of the available archetypes, moods, and shapes that get combined into prompts: <a href="https://news.ysimulator.run/comments.html" rel="nofollow">https://news.ysimulator.run/comments.html</a><p>I hope you all have as much fun playing with it as I did making it!
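A toy version of that archetype/mood/shape combination might look like the following. The strings here are invented stand-ins, not the site's real prompt library:

```python
import random

# Hypothetical miniature of the simulator's prompt assembly.
ARCHETYPES = {
    "graybeard": "You are a veteran engineer who has seen this fail before.",
    "optimist": "You are genuinely excited about new technology.",
}
MOODS = ["skeptical", "curious", "nitpicky"]
SHAPES = ["one-liner", "anecdote", "clarifying question"]

def build_prompt(submission_text: str, seed: int = 0) -> str:
    """Combine one archetype, mood, and shape into a comment prompt."""
    rng = random.Random(seed)
    archetype = rng.choice(sorted(ARCHETYPES))
    mood = rng.choice(MOODS)
    shape = rng.choice(SHAPES)
    return (
        f"{ARCHETYPES[archetype]}\n"
        f"Mood: {mood}. Reply shape: {shape}.\n"
        f"Write a Hacker News comment responding to:\n{submission_text}"
    )
```

Sampling the three axes independently is what gives a thread its variety: a handful of archetypes, moods, and shapes multiply into hundreds of distinct commenter personas.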
Ask HN: How are Markov chains so different from tiny LLMs?
I polished a Markov chain generator and trained it on an article by Uri Alon et al. (<a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC7963340/" rel="nofollow">https://pmc.ncbi.nlm.nih.gov/articles/PMC7963340/</a>).<p>It generates text that seems to me at least on par with that of tiny LLMs, such as those demonstrated by NanoGPT. Here is an example:<p><pre><code> jplr@mypass:~/Documenti/2025/SimpleModels/v3_very_good$
./SLM10b_train UriAlon.txt 3
Training model with order 3...
Skip-gram detection: DISABLED (order < 5)
Pruning is disabled
Calculating model size for JSON export...
Will export 29832 model entries
Exporting vocabulary (1727 entries)...
Vocabulary export complete.
Exporting model entries...
Processed 12000 contexts, written 28765 entries (96.4%)...
JSON export complete: 29832 entries written to model.json
Model trained and saved to model.json
Vocabulary size: 1727
jplr@mypass:~/Documenti/2025/SimpleModels/v3_very_good$ ./SLM9_gen model.json
</code></pre>
<i>Aging cell model requires comprehensive incidence data. To obtain such a large medical database of the joints are risk factors. Therefore, the theory might be extended to describe the evolution of atherosclerosis and metabolic syndrome. For example, late‐stage type 2 diabetes is associated with collapse of beta‐cell function. This collapse has two parameters: the fraction of the senescent cells are predicted to affect disease threshold . For each individual, one simulates senescent‐cell abundance using the SR model has an approximately exponential incidence curve with a decline at old ages In this section, we simulated a wide range of age‐related incidence curves. The next sections provide examples of classes of diseases, which show improvement upon senolytic treatment tends to qualitatively support such a prediction. model different disease thresholds as values of the disease occurs when a physiological parameter ϕ increases due to the disease. Increasing susceptibility parameter s, which varies about 3‐fold between BMI below 25 (male) and 54 (female) are at least mildly age‐related and 25 (male) and 28 (female) are strongly age‐related, as defined above. Of these, we find that 66 are well described by the model as a wide range of feedback mechanisms that can provide homeostasis to a half‐life of days in young mice, but their removal rate slows down in old mice to a given type of cancer have strong risk factors should increase the removal rates of the joint that bears the most common biological process of aging that governs the onset of pathology in the records of at least 104 people, totaling 877 disease category codes (See SI section 9), increasing the range of 6–8% per year. The two‐parameter model describes well the strongly age‐related ICD9 codes: 90% of the codes show R 2 > 0.9) (Figure 4c). 
This agreement is similar to that of the previously proposed IMII model for cancer, major fibrotic diseases, and hundreds of other age‐related disease states obtained from 10−4 to lower cancer incidence. A better fit is achieved when allowing to exceed its threshold mechanism for classes of disease, providing putative etiologies for diseases with unknown origin, such as bone marrow and skin. Thus, the sudden collapse of the alveoli at the outer parts of the immune removal capacity of cancer. For example, NK cells remove senescent cells also to other forms of age‐related damage and decline contribute (De Bourcy et al., 2017). There may be described as a first‐passage‐time problem, asking when mutated, impair particle removal by the bronchi and increase damage to alveolar cells (Yang et al., 2019; Xu et al., 2018), and immune therapy that causes T cells to target senescent cells (Amor et al., 2020). Since these treatments are predicted to have an exponential incidence curve that slows at very old ages. Interestingly, the main effects are opposite to the case of cancer growth rate to removal rate We next consider the case of frontline tissues discussed above.</i>
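For context, the core of an order-k word-level Markov model like the one described fits in a page. Here is a minimal Python sketch (not the author's SLM code) of the train/generate loop:

```python
import random
from collections import defaultdict

def train(tokens, order=3):
    """Map each length-`order` context to the list of words that follow it.
    Duplicates in the list encode the empirical next-word distribution."""
    model = defaultdict(list)
    for i in range(len(tokens) - order):
        context = tuple(tokens[i:i + order])
        model[context].append(tokens[i + order])
    return model

def generate(model, seed_context, length=50, rng=None):
    """Walk the chain: sample a continuation, slide the context window."""
    rng = rng or random.Random(0)
    context = tuple(seed_context)
    out = list(context)
    for _ in range(length):
        choices = model.get(context)
        if not choices:  # dead end: context never seen in training
            break
        nxt = rng.choice(choices)
        out.append(nxt)
        context = context[1:] + (nxt,)
    return " ".join(out)
```

One observation this makes concrete: every emitted (order+1)-gram exists verbatim in the training text, which is why a high-order chain trained on a single article reads so fluently - it is stitching together long literal excerpts, whereas even a tiny LLM composes from learned distributed representations.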
Show HN: Forty.News – Daily news, but on a 40-year delay
This started as a reaction to a conversational trope. Despite being a tranquil place, even conversations at my yoga studio often start with, "Can you believe what's going on right now?" with that angry/scared undertone.<p>I'm a news avoider, so I usually feel some smug self-satisfaction in those instances, but I wondered if there was a way to satisfy the urge to doomscroll without the anxiety.<p>My hypothesis: Apply a 40-year latency buffer. You get the intellectual stimulation of "Big Events" without the fog of war, because you know the world didn't end.<p>40 years creates a mirror between the Reagan Era and today. The parallels include celebrity populism, Cold War tensions (Soviets vs. Russia), and inflation economics.<p>The system ingests raw newspaper scans and uses a multi-step LLM pipeline to generate the daily edition:<p>OCR & Ingestion: Converts raw pixels to text.<p>Scoring: Grades events on metrics like Dramatic Irony and Name Recognition to surface stories that are interesting with hindsight. For example, a dry business blurb about Steve Jobs leaving Apple scores highly because the future context creates a narrative arc.<p>Objective Fact Extraction: Extracts a list of discrete, verifiable facts from the raw text.<p>Generation: Uses those extracted facts as the ground truth to write new headlines and story summaries.<p>I expected a zen experience. Instead, I got an entertaining docudrama. Historical events are surprisingly compelling when serialized over weeks.<p>For example, on Oct 7, 1985, Palestinian hijackers took over the cruise ship Achille Lauro. Reading this on a delay in 2025, the story unfolded over weeks: first they threw an American in a wheelchair overboard, then US fighter jets forced the escape plane to land, leading to a military standoff between US Navy SEALs and the Italian Air Force. 
Unbelievably, the US backed down, but the later diplomatic fallout led the Italian Prime Minister to resign.<p>It hits the dopamine receptors of the news cycle, but with the comfort of a known outcome.<p>Stack: React, Node.js (Caskada for the LLM pipeline orchestration), Gemini for OCR/Scoring.<p>Link: <a href="https://forty.news" rel="nofollow">https://forty.news</a> (No signup required, it's only if you want the stories emailed to you daily/weekly)
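The scoring step described above reduces to a weighted ranking over LLM-assigned grades. A hypothetical sketch (the metric names come from the post; the weights and scale are invented):

```python
def score_story(metrics, weights=None):
    """Combine per-story grades (assumed 0-10) into a single ranking score.
    Missing metrics default to 0 so partially-graded stories still rank."""
    weights = weights or {
        "dramatic_irony": 0.5,    # e.g. the Steve-Jobs-leaves-Apple blurb
        "name_recognition": 0.3,
        "narrative_arc": 0.2,
    }
    return sum(w * metrics.get(k, 0) for k, w in weights.items())
```

The interesting design point is that the grades are assigned with 2025 hindsight while the generation step is restricted to facts extracted from the 1985 text, which keeps the output grounded in what the original paper actually reported.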
Show HN: Wealthfolio 2.0 – Open source investment tracker. Now Mobile and Docker
Hi HN, creator of Wealthfolio here.<p>A year ago, I posted the first version. Since then, the app has matured significantly with two major updates:<p>1. Multi-platform Support:
Now available on Mobile (iOS), Desktop (macOS, Windows, Linux), and as a Self-hosted Docker image.
(Android coming soon).<p>2. Addons System:
We added explicit support for extensions so you can hack around, vibe code your own integrations, and customize the app to fit your needs.<p>The core philosophy remains the same: Always private, transparent, and open source.
Show HN: My hobby OS that runs Minecraft
Show HN: F32 – An Extremely Small ESP32 Board
As part of a little research, and also for fun, I decided to try my hand at seeing how small an ESP32 board I could make with functioning WiFi.
Show HN: Browser-based interactive 3D Three-Body problem simulator
Features include:<p><pre><code> - Several preset periodic orbits: the classic Figure-8, plus newly discovered 3D solutions from Li and Liao's recent database of 10,000+ orbits (https://arxiv.org/html/2508.08568v1)
- Full 3D camera controls (rotate/pan/zoom) with body-following mode
- Force and velocity vector visualization
- Timeline scrubbing to explore the full orbital period
</code></pre>
The 3D presets are particularly interesting. Try "O₂(1.2)" or "Piano O₆(0.6)" from the Load Presets menu to see configurations where bodies weave in and out of the orbital plane. Most browser simulators I've seen have been 2D.<p>Built with Three.js. Open to suggestions for additional presets or features!
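For anyone wanting to reproduce the physics outside the browser: the site is built with Three.js, so this is an analogue rather than its code, but a symplectic leapfrog integrator seeded with the classic planar figure-8 initial conditions (Chenciner and Montgomery) fits in a few lines of Python:

```python
import numpy as np

def accelerations(pos, masses, G=1.0):
    """Pairwise Newtonian gravity; pos is (n, dim), masses is (n,)."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * masses[j] * d / np.linalg.norm(d) ** 3
    return acc

def leapfrog(pos, vel, masses, dt, steps):
    """Kick-drift-kick leapfrog: symplectic, so energy drift stays bounded."""
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc
    return pos, vel

# Figure-8 initial conditions: equal masses, G = 1.
masses = np.ones(3)
pos = np.array([[0.97000436, -0.24308753],
                [-0.97000436, 0.24308753],
                [0.0, 0.0]])
v3 = np.array([-0.93240737, -0.86473146])
vel = np.array([-v3 / 2, -v3 / 2, v3])
```

A symplectic integrator matters here: naive Euler steps visibly spiral the bodies apart within a few periods, while leapfrog keeps the choreography closed.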
Show HN: I made a down detector for down detector
After Downdetector went down with the rest of the internet during the Cloudflare outage today, I decided to build a robust, independent tool that checks whether Downdetector is down. Enjoy!!
Show HN: ESPectre – Motion detection based on Wi-Fi spectre analysis
Hi everyone, I'm the author of ESPectre.<p>This is an open-source (GPLv3) project that uses Wi-Fi signal analysis to detect motion using CSI data, and it has already garnered almost 2,000 stars in two weeks.<p>Key technical details:<p>- The system does NOT use machine learning; it relies purely on math.
- It runs in real-time on a super affordable chip like the ESP32.
- It integrates seamlessly with Home Assistant via MQTT.
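To illustrate what ML-free CSI motion detection can look like: when a person moves through the Wi-Fi field, the variance of per-packet channel amplitude jumps. This Python sketch is a deliberately simplified assumption about the approach (the real firmware's pipeline differs, and the window and threshold values are placeholders):

```python
import numpy as np

def motion_detected(csi_amplitudes, window=50, threshold=2.0):
    """Flag motion when the standard deviation of the last `window`
    per-packet CSI amplitude samples exceeds a calibrated threshold."""
    a = np.asarray(csi_amplitudes, dtype=float)
    if len(a) < window:
        return False  # not enough samples to decide yet
    return bool(a[-window:].std() > threshold)
```

In practice the threshold is calibrated against an empty room, which is the kind of plain statistics-over-ML tradeoff that lets this run in real time on an ESP32.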
Show HN: I built a synth for my daughter