The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: Yolodex – real-time customer enrichment API
Hey HN, I've been working on an API that makes it easy to know who your customers are, and I would love your feedback.

What it does:

Send an email address, and the API returns a JSON profile built from public data: name, country, age, occupation, company, social handles, and interests.

It's a single endpoint (you can hit it without auth to get a demo of what the response looks like):

    curl https://api.yolodex.ai/api/v1/email-enrichment \
      --request POST \
      --header 'Content-Type: application/json' \
      --data '{"email": "john.smith@example.com"}'

Everyone gets 100 free lookups. Pricing is per enriched profile: 1 email ~ $0.03, but if I don't find anything I won't charge you.

Why I built it / what's different:

I once built open source intelligence tooling to investigate financial crime, but for a recent project I needed to find out more about some customers. I tried Apollo, Clearbit, Lusha, Clay, etc., but I found:

1. Outdated data - the data was out of date and misleading, emails didn't work, etc.
2. Dubious data - lots of data, like personal mobile numbers, that I'm pretty sure no one shared publicly or knowingly opted into being sold on.
3. Aggressive pricing - monthly/annual commitments, large gaps between plans, and paying the same for empty profiles.
4. Painful setup - hard to find the right API, set it up, test it out, etc.

I used knowledge from criminal investigations to build an API that uses some of the same research patterns and entity resolution to find standardized information about people that is:

1. Real-time
2. Public info only (OSINT)
3. Transparent, simple pricing
4. 1 minute to set up

What I'd love feedback on:

* Speed: are responses fast enough? Would you trade off speed for better data coverage?
* Coverage: which fields would you use (or what others do you need)?
* Pricing: is the pricing model sane?
* Use cases: what would you need this type of data for (i.e. example use cases)?
* Accuracy: any examples where I got it badly wrong?

Happy to answer technical questions in the thread and give more free credits to help anyone test.
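For TypeScript users, a minimal sketch of calling the same endpoint; the response interface below is an illustrative assumption based on the fields the post lists, not a documented schema:

```typescript
// Minimal sketch: enrich an email via the demo endpoint. The response
// interface is an assumption based on the fields the post describes
// (name, country, age, occupation, company, ...), not a documented schema.
interface EnrichedProfile {
  name?: string;
  country?: string;
  age?: number;
  occupation?: string;
  company?: string;
  social_handles?: string[];
  interests?: string[];
}

async function enrich(email: string): Promise<EnrichedProfile | null> {
  const res = await fetch("https://api.yolodex.ai/api/v1/email-enrichment", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email }),
  });
  if (!res.ok) return null; // empty profiles aren't charged, per the post
  return (await res.json()) as EnrichedProfile;
}

enrich("john.smith@example.com").then((profile) => console.log(profile));
```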
Show HN: A WordPress plugin that rewrites image URLs for near-zero-cost delivery
Hi HN,

I built a WordPress plugin called Bandwidth Saver. It takes the images your site already has and serves them through Cloudflare R2 and Workers, which means zero egress fees and extremely low storage cost. The goal is to make image delivery fast and cheap without adding any of the complexity of traditional optimization plugins.

The idea is simple. WordPress keeps generating images normally. The plugin rewrites the URLs on the frontend so images are served from a Cloudflare Worker. On the first request, the Worker fetches the original image and stores it in R2. After that, Cloudflare's edge serves the image from its global cache with no egress charges. There's no need to preload or sync anything, and if something fails, the original image loads. That's the entire system (a sketch of the Worker pattern follows below).

I built this because most image CDN plugins try to do everything: compression, resizing, AI transforms, asset management, custom dashboards, and monthly fees. That's useful for some users, but it's unnecessary for most sites that just want their existing media to load faster without breaking the bank. Bandwidth Saver focuses only on delivery, not transformations. It's intentionally minimal.

There are two ways to use it. The plugin is completely free if you want to run your own Cloudflare Worker. I included the Worker code and the steps needed to deploy it. If you don't want to deal with any Cloudflare setup, there's a managed option for $2.99 per month that uses my Worker and my R2 bucket. I'm trying to keep it accessible while also covering operational costs.

The plugin works with any theme or builder and doesn't modify the database. It only rewrites URLs on output. WordPress remains the system of record for all media. R2 simply becomes a cheap, durable cache layer backed by Cloudflare's edge.

I'm especially interested in feedback about the approach. Does the fetch-on-first-request model make sense? Is the pricing fair for a plugin of this scope? Should I prioritize allowing users to connect their own R2 buckets, or the managed service? And for those with experience in edge compute or CDNs, I would love thoughts on how to improve the Worker or the rewrite strategy.

Thanks for reading, happy to answer any questions.
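A rough sketch of the fetch-on-first-request pattern described above (illustrative only, not the plugin's actual Worker code; the BUCKET binding name and origin URL are assumptions):

```typescript
// Sketch of a fetch-on-first-request image Worker (illustrative, not
// the plugin's actual code). Assumes an R2 bucket bound as BUCKET and
// originals living at ORIGIN + pathname; both names are assumptions.
export interface Env {
  BUCKET: R2Bucket;
}

const ORIGIN = "https://example-wordpress-site.com"; // hypothetical origin

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);

    // Cache hit: serve straight from R2 (no egress fees, per the post).
    const cached = await env.BUCKET.get(key);
    if (cached) {
      return new Response(cached.body, {
        headers: { "Cache-Control": "public, max-age=31536000" },
      });
    }

    // First request: fetch the original, store it in R2, then serve it.
    const upstream = await fetch(`${ORIGIN}/${key}`);
    if (!upstream.ok) return upstream; // fall through so the original loads

    const bytes = await upstream.arrayBuffer();
    await env.BUCKET.put(key, bytes);
    return new Response(bytes, {
      headers: { "Cache-Control": "public, max-age=31536000" },
    });
  },
};
```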
Show HN: Safe-NPM – only install packages that are +90 days old
This past quarter has been awash with sophisticated npm supply chain attacks like Shai-Hulud (https://www.cisa.gov/news-events/alerts/2025/09/23/widespread-supply-chain-compromise-impacting-npm-ecosystem) and the Chalk/debug compromise (https://www.wiz.io/blog/widespread-npm-supply-chain-attack-breaking-down-impact-scope-across-debug-chalk). This CLI helps protect users from recently compromised packages by only downloading packages that have been public for a while (default is 90 days or older).

Install: npm install -g @dendronhq/safe-npm
Usage: safe-npm install react@^18 lodash

How it works:
- Queries the npm registry for all versions matching your semver range
- Filters out anything published in the last 90 days
- Installs the newest "aged" version (this selection logic is sketched below)

Limitations:
- Won't protect against packages malicious from day one
- Doesn't control transitive dependencies (yet - looking into overrides)
- Delays access to legitimate new features

This is meant as an 80/20 measure against recently compromised npm packages, not a silver bullet. Please give it a try and let me know if you have feedback.
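A rough TypeScript sketch of that "aged version" selection, assuming the public npm registry packument format (which includes a `time` map of version to publish date); this is illustrative, not the actual safe-npm source, and it uses the real `semver` package:

```typescript
// Sketch of selecting the newest version that is both in-range and
// older than the cutoff. Not the actual safe-npm implementation.
import * as semver from "semver";

const MAX_AGE_DAYS = 90;

async function newestAgedVersion(
  pkg: string,
  range: string
): Promise<string | null> {
  // The registry packument's `time` field maps each version to its
  // ISO publish date (plus "created"/"modified" entries we filter out).
  const res = await fetch(`https://registry.npmjs.org/${pkg}`);
  const doc = (await res.json()) as { time: Record<string, string> };

  const cutoff = Date.now() - MAX_AGE_DAYS * 24 * 60 * 60 * 1000;
  const aged = Object.entries(doc.time)
    .filter(([v]) => semver.valid(v) && semver.satisfies(v, range))
    .filter(([, published]) => new Date(published).getTime() <= cutoff)
    .map(([v]) => v);

  return aged.length ? semver.rsort(aged)[0] : null;
}

newestAgedVersion("react", "^18").then((v) => console.log(v));
```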
Show HN: I turned algae into a bio-altimeter and put it on a weather balloon
Hi HN - My name is Andrew, and I'm a high school student.

This is a write-up on StratoSpore, a payload I designed and launched to the stratosphere. The goal was to test whether we could estimate physical altitude from algae fluorescence (using a lightweight ML model trained on the sensor data).

The blog post covers the full engineering mess/process, including:

- The hardware: designing PCBs for the AS7263 spectral sensor and Pi Zero 2 W.
- The biological altimeter: how I tried to correlate biological stress (fluorescence) with altitude.
- The communications: a custom lossy compression algorithm I wrote to smash 1080p images down to 18x10 pixels so I could transmit them over LoRa (915 MHz) in semi-real-time (a rough downsampling sketch follows below).

The payload is currently lost in a forest, but the telemetry data survived. The code and hardware designs are open source on GitHub: https://github.com/radeeyate/stratospore

I'm happy to answer technical questions about the payload, software, or anything else you are curious about! Critique also appreciated!
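The post doesn't describe the compression algorithm itself, so purely as a hedged illustration, here is one naive way to get a 1080p frame down to 18x10: block-average each region into a single pixel. The real StratoSpore scheme may differ entirely:

```typescript
// Illustrative only: naive block-average downsampling from 1920x1080
// to 18x10 over a flat RGB buffer. Not the actual StratoSpore code.
function downsample(
  src: Uint8Array, // RGB, 1920*1080*3 bytes
  srcW = 1920, srcH = 1080, dstW = 18, dstH = 10
): Uint8Array {
  const dst = new Uint8Array(dstW * dstH * 3);
  const bw = srcW / dstW, bh = srcH / dstH; // source block per output pixel
  for (let y = 0; y < dstH; y++) {
    for (let x = 0; x < dstW; x++) {
      const sums = [0, 0, 0];
      let n = 0;
      for (let sy = Math.floor(y * bh); sy < (y + 1) * bh; sy++) {
        for (let sx = Math.floor(x * bw); sx < (x + 1) * bw; sx++) {
          const i = (sy * srcW + sx) * 3;
          sums[0] += src[i]; sums[1] += src[i + 1]; sums[2] += src[i + 2];
          n++;
        }
      }
      const o = (y * dstW + x) * 3;
      // Uint8Array assignment truncates the averaged floats to bytes.
      dst[o] = sums[0] / n; dst[o + 1] = sums[1] / n; dst[o + 2] = sums[2] / n;
    }
  }
  return dst; // 18*10*3 = 540 bytes, small enough for slow LoRa links
}
```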
Show HN: KiDoom – Running DOOM on PCB Traces
I got DOOM running in KiCad by rendering it with PCB traces and footprints instead of pixels.

Walls are rendered as PCB_TRACK traces, and entities (enemies, items, player) are actual component footprints - SOT-23 for small items, SOIC-8 for decorations, QFP-64 for enemies and the player.

How I did it:

Started by patching DOOM's source code to extract vector data directly from the engine. Instead of trying to render 64,000 pixels (which would be impossibly slow), I grab the geometry DOOM already calculates internally - the drawsegs[] array for walls and vissprites[] for entities.

Added a field to the vissprite_t structure to capture entity types (MT_SHOTGUY, MT_PLAYER, etc.) during R_ProjectSprite(). This lets me map 150+ entity types to appropriate footprint categories.

The DOOM engine sends this vector data over a Unix socket to a Python plugin running in KiCad. The plugin pre-allocates pools of traces and footprints at startup, then just updates their positions each frame instead of creating/destroying objects. Calls pcbnew.Refresh() to update the display.

Runs at 10-25 FPS depending on hardware. The bottleneck is KiCad's refresh, not DOOM or the data transfer.

Also renders to an SDL window (for actual gameplay) and a Python wireframe window (for debugging), so you get three views running simultaneously.

Follow-up: ScopeDoom

After getting the wireframe renderer working, I wanted to push it somewhere more physical. Oscilloscopes in X-Y mode are vector displays - feed X coordinates to one channel, Y to the other. I didn't have a function generator, so I used my MacBook's headphone jack instead.

The sound card is just a dual-channel DAC at 44.1kHz. Wired 3.5mm jack → 1kΩ resistors → scope CH1 (X) and CH2 (Y). Reused the same vector extraction from KiDoom, but the Python script converts coordinates to ±1V range and streams them as audio samples (sketched below).

Each wall becomes a wireframe box, and the scope traces along each line. With ~7,000 points per frame at 44.1kHz, the refresh rate is about 6 Hz - slow enough to be a slideshow, but level geometry is clearly recognizable. A 96kHz audio interface or an analog scope would improve it significantly (digital scopes do sample-and-hold instead of continuous beam tracing).

Links:

KiDoom GitHub: https://github.com/MichaelAyles/KiDoom, writeup: https://www.mikeayles.com/#kidoom

ScopeDoom GitHub: https://github.com/MichaelAyles/ScopeDoom, writeup: https://www.mikeayles.com/#scopedoom
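As a hedged illustration of the ScopeDoom conversion described above (the real project is a Python script; this TypeScript sketch just shows the idea of walking each line segment and interleaving X/Y coordinates as stereo samples):

```typescript
// Sketch of the ScopeDoom idea, not the actual code: turn 2D line
// segments into interleaved stereo samples, X on the left channel
// (scope CH1) and Y on the right (CH2), normalized to +/-1.0.
type Segment = { x1: number; y1: number; x2: number; y2: number };

function segmentsToSamples(segs: Segment[], pointsPerSegment = 8): Float32Array {
  const out = new Float32Array(segs.length * pointsPerSegment * 2);
  let i = 0;
  for (const s of segs) {
    for (let p = 0; p < pointsPerSegment; p++) {
      // Step the "beam" along the segment by linear interpolation.
      const t = p / (pointsPerSegment - 1);
      out[i++] = s.x1 + (s.x2 - s.x1) * t; // X -> left channel
      out[i++] = s.y1 + (s.y2 - s.y1) * t; // Y -> right channel
    }
  }
  return out; // stream to a 44.1 kHz stereo audio output
}
```

The arithmetic matches the post's numbers: at roughly 7,000 points per frame and 44,100 samples per second, 44,100 / 7,000 ≈ 6.3 frames per second, which lines up with the ~6 Hz refresh the author reports.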
Show HN: Datamorph – A clean JSON ⇄ CSV converter with auto-detect
Hi everyone,

I built a small web tool called Datamorph because I kept running into JSON/CSV converters that either broke with nested data, required login, or added weird formatting.

Datamorph is a minimal, fast, no-login tool that can:

• Convert JSON → CSV and CSV → JSON
• Auto-detect structure (arrays, nested objects, mixed data)
• Handle uploads or manual text input
• Beautify / fix invalid JSON
• Give clean, flat CSV output for real-world messy data

It's built with React + Supabase + serverless functions. Everything runs client-side except file parsing, so nothing is stored.

I know there are many similar tools, but I tried focusing on:

• better handling of nested JSON (a flattening sketch follows below),
• simpler UI,
• zero ads / zero login,
• instant conversion without waiting.

Would love feedback on edge cases it fails on, or features you think would make this actually useful for devs and analysts.

Live tool: https://datamorphio.vercel.app/

Thanks for checking it out!
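As an illustration of the kind of nested-JSON flattening a converter like this needs before it can emit CSV columns (a generic sketch, not Datamorph's actual implementation):

```typescript
// Generic sketch: flatten nested JSON so each leaf becomes one CSV
// column. Nested keys become dot-separated names; array elements get
// indexed keys. Not Datamorph's actual code.
function flatten(
  value: unknown,
  prefix = "",
  out: Record<string, string> = {}
): Record<string, string> {
  if (value !== null && typeof value === "object") {
    const entries = Array.isArray(value)
      ? value.map((v, i) => [String(i), v] as const)
      : Object.entries(value as Record<string, unknown>);
    for (const [k, v] of entries) {
      flatten(v, prefix ? `${prefix}.${k}` : k, out);
    }
  } else {
    out[prefix] = String(value); // leaf -> one flat CSV cell
  }
  return out;
}

// { user: { name: "Ada", tags: ["a", "b"] } }
// -> { "user.name": "Ada", "user.tags.0": "a", "user.tags.1": "b" }
console.log(flatten({ user: { name: "Ada", tags: ["a", "b"] } }));
```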
Show HN: Virtual SLURM HPC cluster in a Docker Compose
I'm the main developer behind vHPC, a SLURM HPC cluster in a Docker Compose.

As part of my job, I'm working on a software solution that needs to interact with one of the largest Italian HPC clusters (Cineca Leonardo, 270 PFLOPS). Developing on the production system was out of the question, as it would have led to unbearably long feedback loops. I started looking around for existing containerised solutions, but they were always lacking some key ingredient needed to suitably mock our target system (accounting, MPI, out-of-date software, ...).

I decided it was worth making my own virtual cluster from scratch, learning a thing or two about SLURM in the process. Even though it satisfies the particular needs of the project I'm working on, I tried to keep vHPC as simple and versatile as possible.

I proposed to the company that we open source it, and as of this morning (CET) vHPC is FLOSS for others to use and tweak. I'm around to answer any questions.
Show HN: We built an open source, zero webhooks payment processor
Hi HN! For the past bit we've been building Flowglad (https://flowglad.com) and feel it has now gotten good enough to share with you all:

Repo: https://github.com/flowglad/flowglad

Demo video: https://www.youtube.com/watch?v=G6H0c1Cd2kU

Flowglad is a payment processor that you integrate without writing any glue code. Along with processing your payments, it tells you in real time the features and usage credit balances your customers have available, based on their billing state. The DX feels like React, because we wanted to bring the reactive programming paradigm to payments.

We make it easy to spin up full-fledged pricing models (including usage meters, feature gates, and usage credit grants) in a few clicks. We schematize these pricing models into a pricing.yaml file that's kinda like Terraform, but for your pricing.

The result is a payments layer that AI coding agents have a substantially easier time one-shotting (for now the happiest path is a fullstack TypeScript + React app).

Why we built this:

- After a decade of building on Stripe, we found it powerful but underopinionated. It left us doing a lot of rote work to set up fairly standard use cases
- That meant more code to maintain, much of which is brittle because it crosses so many server-client boundaries
- Not to mention choreographing the lifecycle of our business domain with the Stripe checkout flow and webhook event types, of which there are 250+
- Online payments have gotten complex - not just new pricing models for AI products, but also cross-border sales tax, etc. You either need to handle significant chunks of it yourself, or sign up for and compose multiple services

This all feels unduly clunky, especially when compared to how easy other layers like hosting and databases have gotten in recent years.

These patterns haven't changed much in a decade. And while coding agents can nail every other rote part of an app (auth, db, analytics), payments is the scariest to tab-tab-tab your way through, because the existing integration patterns are difficult to reason about, difficult to verify for correctness, and absolutely mission critical.

Our beta version lets you:

- Spin up common pricing models in just a few clicks, and customize them as needed
- Clone pricing models between testmode and live mode, and import / export via pricing.yaml
- Check customer usage credits and feature access in real time on your backend and React frontend (a hypothetical sketch follows below)
- Integrate without any DB schema changes - you reference your customers via your own ids, and reference prices, products, features, and usage meters via slugs that you define

We're still early in our journey so would love your feedback and opinions. Billing has a lot of use cases, so if you see anything you wish we supported, please let us know!
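Purely as a hypothetical illustration of the "React-like" billing-state check described above (every hook, endpoint, and field name here is invented for illustration and is not Flowglad's actual API):

```tsx
// Hypothetical illustration only - these names are invented and are
// NOT Flowglad's actual API. The point is the DX the post describes:
// billing state read reactively, feature slugs gating UI.
import { useState, useEffect } from "react";

interface BillingState {
  features: Record<string, boolean>; // feature slug -> has access
  credits: Record<string, number>;   // usage meter slug -> balance
}

function useBilling(customerId: string): BillingState | null {
  const [state, setState] = useState<BillingState | null>(null);
  useEffect(() => {
    // Imagined endpoint; a real integration would go through the SDK.
    fetch(`/api/billing/${customerId}`)
      .then((r) => r.json())
      .then(setState);
  }, [customerId]);
  return state;
}

function ExportButton({ customerId }: { customerId: string }) {
  const billing = useBilling(customerId);
  // Gate the feature on the customer's real-time billing state.
  if (!billing?.features["csv-export"]) return null;
  return <button>Export CSV</button>;
}
```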
Show HN: Cynthia – Reliably play MIDI music files – MIT / Portable / Windows
Easy to use, portable app to play midi music files on all flavours of Microsoft Windows.

Brief background - I used midi playback way back in the days of Windows 95 for some fun and entertaining apps, but as Windows progressed, its midi support (for Win32, anyway) regressed in both startup speed and reliability. Midi playback used to be near instant on Windows 95, but on later versions of Windows it was delayed by about 5-7 seconds, and reliability became somewhat patchy. This made working with midi a real headache.

Cynthia was built to test and enjoy midi music once again. It's taken over a year of solid coding, recoding, testing, re-testing, a lot more testing, and some hair pulling along the way, but Cynthia now works pretty solidly on Windows.

Some of Cynthia's Key Features:
* 25 built-in sample midis on a virtual disk - play right out of the box
* Play Modes: Once, Repeat One, Repeat All, All Once, Random
* Play ".mid", ".midi" and ".rmi" midi files in 0 and 1 formats
* Realtime track data indicators, channel output volume indicators with peak hold, 128 note usage indicators
* Volume Bars to display realtime average volume and bass volume levels
* Use an Xbox Controller to control Cynthia's main functions
* Large list capacity for handling thousands of midi files
* Switch between up to 10 midi playback devices in realtime
* Playback through a single midi device, or multiple simultaneous midi devices with lag and channel output support
* Custom built midi playback engine for high playback stability
* Custom built codebase for low-level work to GUI level
* Also runs on Linux/Mac (including Apple Silicon) via Wine
* Smart Source Code - compiles in Borland Delphi 3 and Lazarus 2
* MIT License

YouTube video of Cynthia playing a midi: https://youtu.be/IDEOQUboTvQ

GitHub repo: https://github.com/blaiz2023/Cynthia