The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: Era – Open-source local sandbox for AI agents

Just watched this video by ThePrimeagen (https://www.youtube.com/watch?v=efwDZw7l2Nk) about attackers jailbreaking Claude to run cyber attacks. The core issue: AI agents need isolation.

We built ERA to fix this – local microVM-based sandboxing for AI-generated code with hardware-level security. Think containers, but safer. Such attacks wouldn't touch your host if the code were running in ERA.

GitHub: https://github.com/BinSquare/ERA

Quick start: https://github.com/BinSquare/ERA/tree/main/era-agent/tutorials

Would love your thoughts and feedback!

Show HN: MkSlides – Markdown to slides with a similar workflow to MkDocs

As teachers, we keep our slides as markdown files in git repos and want to build them automatically so they can be viewed online (or offline if needed). To achieve this, I have created MkSlides. This tool converts all markdown in a folder to slides generated with Reveal.js. The workflow is very similar to MkDocs.

Install: `pip install mkslides`

Building slides: `mkslides build`

Live preview during editing: `mkslides serve`

Comparison with other tools like Marp, Slidev, etc.:

- This tool is a single command and easy to integrate into CI/CD pipelines.
- It only needs Python.
- The workflow is also very similar to MkDocs, which makes it easy to combine the two in a single GitHub/GitLab repo.
- It generates an index landing page for multiple slideshows in a folder, which is really convenient if you have e.g. a slideshow per chapter.
- It is lightweight.
- Everything is IaC.

Show HN: SyncKit – Offline-first sync engine (Rust/WASM and TypeScript)

Show HN: Runprompt – run .prompt files from the command line

I built a single-file Python script that lets you run LLM prompts from the command line with templating, structured outputs, and the ability to chain prompts together.

When I discovered Google's Dotprompt format (frontmatter + Handlebars templates), I realized it was perfect for something I'd been wanting: treating prompts as first-class programs you can pipe together Unix-style. Google uses Dotprompt in Firebase Genkit and I wanted something simpler - just run a .prompt file directly on the command line.

Here's what it looks like:

    ---
    model: anthropic/claude-sonnet-4-20250514
    output:
      format: json
      schema:
        sentiment: string, positive/negative/neutral
        confidence: number, 0-1 score
    ---
    Analyze the sentiment of: {{STDIN}}

Running it:

    cat reviews.txt | ./runprompt sentiment.prompt | jq '.sentiment'

The things I think are interesting:

* Structured output schemas: Define JSON schemas in the frontmatter using a simple `field: type, description` syntax. The LLM reliably returns valid JSON you can pipe to other tools.
* Prompt chaining: Pipe JSON output from one prompt as template variables into the next. This makes it easy to build multi-step agentic workflows as simple shell pipelines.
* Zero dependencies: It's a single Python file that uses only stdlib. Just curl it down and run it.
* Provider agnostic: Works with Anthropic, OpenAI, Google AI, and OpenRouter (which gives you access to dozens of models through one API key).

You can use it to automate things like extracting structured data from unstructured text, generating reports from logs, and building small agentic workflows without spinning up a whole framework.

Would love your feedback, and PRs are most welcome!
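
For anyone who would rather drive it from a script than a shell pipeline, here is a minimal Python sketch, assuming only the behaviour shown above (prompt text on stdin, structured JSON on stdout); the helper name and sample review are made up:

    import json
    import subprocess

    def run_prompt(prompt_file: str, text: str) -> dict:
        """Run a .prompt file on `text` and return its parsed JSON output."""
        result = subprocess.run(
            ["./runprompt", prompt_file],
            input=text,               # becomes {{STDIN}} in the template
            capture_output=True,
            text=True,
            check=True,
        )
        return json.loads(result.stdout)

    review = "The battery died after two days. Very disappointed."
    sentiment = run_prompt("sentiment.prompt", review)
    print(sentiment["sentiment"], sentiment["confidence"])

Chaining from Python would just mean passing one call's JSON output as the next call's input, mirroring the shell pipeline above.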

Show HN: Yolodex – real-time customer enrichment API

hey hn, i’ve been working on an api to make it easy to know who your customers are, i would love your feedback.

what it does

send an email address, the api returns a json profile built from public data, things like: name, country, age, occupation, company, social handles and interests.

it’s a single endpoint (you can hit this endpoint without auth to get a demo of what it looks like):

    curl https://api.yolodex.ai/api/v1/email-enrichment \
      --request POST \
      --header 'Content-Type: application/json' \
      --data '{"email": "john.smith@example.com"}'

everyone gets 100 free, pricing is per _enriched profile_: 1 email ~ $0.03, but if i don’t find anything i won’t charge you.

why i built it / what’s different

i once built open source intelligence tooling to investigate financial crime, but for a recent project i needed to find out more about some customers. i tried apollo, clearbit, lusha, clay, etc. but i found:

1. outdated data - the data was out-of-date and misleading, emails didn’t work, etc.

2. dubious data - i found lots of data like personal mobile numbers that i’m pretty sure no-one shared publicly or knowingly opted into being sold on

3. aggressive pricing - monthly/annual commitments, large gaps between plans, pay the same for empty profiles

4. painful setup - hard to find the right api, set it up, test it out, etc.

i used knowledge from criminal investigations to build an api that uses some of the same research patterns and entity resolution to find standardized information about people that is:

1. real-time

2. public info only (osint)

3. transparent, simple pricing

4. 1 min to set up

what i’d love feedback on

* speed: are responses fast enough? would you trade off speed for better data coverage?

* coverage: which fields will you use (or others you need)?

* pricing: is the pricing model sane?

* use-cases: what do you need this type of data for (i.e. example use cases)?

* accuracy: any examples where i got it badly wrong?

happy to answer technical questions in the thread and give more free credits to help anyone test
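
a quick python version of the same demo call (stdlib only, no auth, mirroring the curl above; the printed fields are ones listed under "what it does" and may be missing for some emails):

    import json
    import urllib.request

    def enrich(email: str) -> dict:
        """POST an email to the demo endpoint and return the JSON profile."""
        req = urllib.request.Request(
            "https://api.yolodex.ai/api/v1/email-enrichment",
            data=json.dumps({"email": email}).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            return json.load(resp)

    profile = enrich("john.smith@example.com")
    # fields not found for this email simply come back as None here
    print(profile.get("name"), profile.get("country"), profile.get("occupation"))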

Show HN: A WordPress plugin that rewrites image URLs for near-zero-cost delivery

Hi HN,

I built a WordPress plugin called Bandwidth Saver. It takes the images your site already has and serves them through Cloudflare R2 and Workers, which means zero egress fees and extremely low storage cost. The goal is to make image delivery fast and cheap without adding any of the complexity of traditional optimization plugins.

The idea is simple. WordPress keeps generating images normally. The plugin rewrites the URLs on the frontend so images are served from a Cloudflare Worker. On the first request, the Worker fetches the original image and stores it in R2. After that, Cloudflare’s edge serves the image from its global cache with no egress charges. There’s no need to preload or sync anything, and if something fails, the original image loads. That’s the entire system.

I built this because most image CDN plugins try to do everything: compression, resizing, AI transforms, asset management, custom dashboards, and monthly fees. That’s useful for some users, but it’s unnecessary for most sites that just want their existing media to load faster without breaking the bank. Bandwidth Saver focuses only on delivery, not transformations. It’s intentionally minimal.

There are two ways to use it. The plugin is completely free if you want to run your own Cloudflare Worker. I included the Worker code and the steps needed to deploy it. If you don’t want to deal with any Cloudflare setup, there’s a managed option for $2.99 per month that uses my Worker and my R2 bucket. I’m trying to keep it accessible while also covering operational costs.

The plugin works with any theme or builder and doesn’t modify the database. It only rewrites URLs on output. WordPress remains the system of record for all media. R2 simply becomes a cheap, durable cache layer backed by Cloudflare’s edge.

I’m especially interested in feedback about the approach. Does the fetch-on-first-request model make sense? Is the pricing fair for a plugin of this scope? Should I prioritize allowing users to connect their own R2 buckets or the managed service? And for those with experience in edge compute or CDNs, I would love thoughts on how to improve the Worker or the rewrite strategy.

Thanks for reading, happy to answer any questions.
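
To make the fetch-on-first-request model concrete, here is a short Python sketch of the flow (the real implementation is a Cloudflare Worker with an R2 binding; the store, function name, and origin URL here are illustrative):

    import urllib.request

    object_store: dict[str, bytes] = {}   # stands in for the R2 bucket

    def serve_image(path: str, origin: str = "https://your-wordpress-site.example") -> bytes:
        """Serve an image from the store, filling it from the origin on a miss."""
        if path in object_store:                 # cached: no origin traffic, no egress fee
            return object_store[path]
        with urllib.request.urlopen(origin + path) as resp:   # first request only
            body = resp.read()
        object_store[path] = body                # every later request is served from here
        return body

    # The first call pulls the original from WordPress and stores it; subsequent
    # calls never touch the origin, which is where the savings come from.
    image_bytes = serve_image("/wp-content/uploads/example.jpg")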

Show HN: Safe-NPM – only install packages that are 90+ days old

This past quarter has been awash with sophisticated npm supply chain attacks like Shai-Hulud (https://www.cisa.gov/news-events/alerts/2025/09/23/widespread-supply-chain-compromise-impacting-npm-ecosystem) and the Chalk/debug compromise (https://www.wiz.io/blog/widespread-npm-supply-chain-attack-breaking-down-impact-scope-across-debug-chalk). This CLI helps protect users from recently compromised packages by only downloading packages that have been public for a while (the default is 90 days or older).

Install:

    npm install -g @dendronhq/safe-npm

Usage:

    safe-npm install react@^18 lodash

How it works:

- Queries the npm registry for all versions matching your semver range
- Filters out anything published in the last 90 days
- Installs the newest "aged" version

Limitations:

- Won't protect against packages that are malicious from day one
- Doesn't control transitive dependencies (yet - looking into overrides)
- Delays access to legitimate new features

This is meant as an 80/20 measure against recently compromised npm packages and is not a silver bullet. Please give it a try and let me know if you have feedback.
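
A minimal Python sketch of the version-age filter described above, using the public registry metadata (https://registry.npmjs.org/<package> exposes a "time" map of version to publish date); the real CLI also resolves semver ranges, which this skips:

    import json
    import urllib.request
    from datetime import datetime, timedelta, timezone

    def newest_aged_version(package: str, min_age_days: int = 90) -> str | None:
        """Return the most recently published version at least min_age_days old."""
        with urllib.request.urlopen(f"https://registry.npmjs.org/{package}") as resp:
            meta = json.load(resp)
        cutoff = datetime.now(timezone.utc) - timedelta(days=min_age_days)
        aged = []
        for version, stamp in meta["time"].items():
            if version in ("created", "modified"):   # metadata entries, not versions
                continue
            published = datetime.fromisoformat(stamp.replace("Z", "+00:00"))
            if published <= cutoff:
                aged.append((published, version))
        return max(aged)[1] if aged else None        # newest publish date that is old enough

    print(newest_aged_version("lodash"))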

Show HN: I turned algae into a bio-altimeter and put it on a weather balloon

Hi HN - My name is Andrew, and I'm a high school student.

This is a write-up on StratoSpore, a payload I designed and launched to the stratosphere. The goal was to test if we could estimate physical altitude based on algae fluorescence (using a lightweight ML model trained on the sensor data).

The blog post covers the full engineering mess/process, including:

- The hardware: designing PCBs for the AS7263 spectral sensor and Pi Zero 2 W.

- The biological altimeter: how I tried to correlate biological stress (fluorescence) with altitude.

- The communications: a custom lossy compression algorithm I wrote to smash 1080p images down to 18x10 pixels so I could transmit them over LoRa (915 MHz) in semi-real-time.

The payload is currently lost in a forest, but the telemetry data survived. The code and hardware designs are open source on GitHub: https://github.com/radeeyate/stratospore

I'm happy to answer technical questions about the payload, software, or anything else you are curious about! Critique also appreciated!
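
As a rough illustration of why 18x10 works over LoRa (this is not the exact algorithm from the write-up, just the downscale-and-pack idea, using Pillow):

    from PIL import Image   # pip install pillow

    def pack_frame(path: str) -> bytes:
        """Downscale a capture to 18x10 grayscale and pack it as raw bytes."""
        img = Image.open(path).convert("L").resize((18, 10))
        payload = img.tobytes()          # 18 * 10 = 180 bytes, one byte per pixel
        assert len(payload) == 180
        return payload

    # 180 bytes is tiny next to a 1080p frame and fits in one or a few LoRa
    # packets depending on spreading factor, which is what makes semi-real-time
    # image telemetry over 915 MHz feasible.
    packet = pack_frame("frame_1080p.jpg")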

Show HN: KiDoom – Running DOOM on PCB Traces

I got DOOM running in KiCad by rendering it with PCB traces and footprints instead of pixels.

Walls are rendered as PCB_TRACK traces, and entities (enemies, items, player) are actual component footprints - SOT-23 for small items, SOIC-8 for decorations, QFP-64 for enemies and the player.

How I did it:

Started by patching DOOM's source code to extract vector data directly from the engine. Instead of trying to render 64,000 pixels (which would be impossibly slow), I grab the geometry DOOM already calculates internally - the drawsegs[] array for walls and vissprites[] for entities.

Added a field to the vissprite_t structure to capture entity types (MT_SHOTGUY, MT_PLAYER, etc.) during R_ProjectSprite(). This lets me map 150+ entity types to appropriate footprint categories.

The DOOM engine sends this vector data over a Unix socket to a Python plugin running in KiCad. The plugin pre-allocates pools of traces and footprints at startup, then just updates their positions each frame instead of creating/destroying objects. Calls pcbnew.Refresh() to update the display.

Runs at 10-25 FPS depending on hardware. The bottleneck is KiCad's refresh, not DOOM or the data transfer.

Also renders to an SDL window (for actual gameplay) and a Python wireframe window (for debugging), so you get three views running simultaneously.

Follow-up: ScopeDoom

After getting the wireframe renderer working, I wanted to push it somewhere more physical. Oscilloscopes in X-Y mode are vector displays - feed X coordinates to one channel, Y to the other. I didn't have a function generator, so I used my MacBook's headphone jack instead.

The sound card is just a dual-channel DAC at 44.1kHz. Wired 3.5mm jack → 1kΩ resistors → scope CH1 (X) and CH2 (Y). Reused the same vector extraction from KiDoom, but the Python script converts coordinates to ±1V range and streams them as audio samples.

Each wall becomes a wireframe box, and the scope traces along each line. With ~7,000 points per frame at 44.1kHz, the refresh rate is about 6 Hz - slow enough to be a slideshow, but level geometry is clearly recognizable. A 96kHz audio interface or analog scope would improve it significantly (digital scopes do sample-and-hold instead of continuous beam tracing).

Links:

KiDoom GitHub: https://github.com/MichaelAyles/KiDoom, writeup: https://www.mikeayles.com/#kidoom

ScopeDoom GitHub: https://github.com/MichaelAyles/ScopeDoom, writeup: https://www.mikeayles.com/#scopedoom
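
A minimal sketch of the ScopeDoom output stage: interpolate 2D segments into a stereo signal where the left channel drives X and the right drives Y, written as a 44.1 kHz WAV (stdlib only; the segment data below is made up, not pulled from the engine):

    import struct
    import wave

    RATE = 44_100
    # two example wall segments in a normalized ±1.0 coordinate range
    SEGMENTS = [((-0.8, -0.5), (0.8, -0.5)), ((0.8, -0.5), (0.8, 0.5))]

    def segment_samples(p0, p1, points_per_segment=40):
        """Linearly interpolate along a segment; the beam traces it point by point."""
        (x0, y0), (x1, y1) = p0, p1
        for i in range(points_per_segment):
            t = i / (points_per_segment - 1)
            yield x0 + t * (x1 - x0), y0 + t * (y1 - y0)

    frames = bytearray()
    for seg in SEGMENTS:
        for x, y in segment_samples(*seg):
            # scale ±1.0 to 16-bit PCM; left channel = X (CH1), right channel = Y (CH2)
            frames += struct.pack("<hh", int(x * 32767), int(y * 32767))

    with wave.open("scopedoom_frame.wav", "wb") as wav:
        wav.setnchannels(2)
        wav.setsampwidth(2)
        wav.setframerate(RATE)
        wav.writeframes(bytes(frames))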
