The best Hacker News stories from Show from the past day
Latest posts:
Show HN: GDSL – 800 line kernel: Lisp subset in 500, C subset in 1300
Show HN: Lux – Drop-in Redis replacement in Rust. 5.6x faster, ~1MB Docker image
Show HN: Learn Arabic with spaced repetition and comprehensible input
Sharing a friend's first-ever Rails application, dedicated to Arabic learning, built from 0 to 1. It pulls language-learning methods from Anki, comprehensible input, and more.
Show HN: What if your synthesizer was powered by APL (or a dumb K clone)?
I built k-synth as an experiment to see if a minimalist, K-inspired array language could make sketching waveforms faster and more intuitive than traditional code. I’ve put together a web-based toolkit so you can try the syntax directly in the browser without having to touch a compiler:<p>Live Toolkit: <a href="https://octetta.github.io/k-synth/" rel="nofollow">https://octetta.github.io/k-synth/</a><p>If you visit the page, here is a quick path to an audio payoff:<p>- Click "patches" and choose dm-bell.ks.<p>- Click "run"—the notebook area will update. Click the waveform to hear the result.<p>- Click the "->0" button below the waveform to copy it into slot 0 at the top (slots are also clickable).<p>- Click "pads" in the entry area to show a performance grid.<p>- Click "melodic" to play slot 0's sample at different intervals across the grid.<p>The 'Weird' Stack:<p>- The Language: A simplified, right-associative array language (e.g., s for sine, p for pi).<p>- The Web Toolkit: Built using WASM and Web Audio for live-coding samples.<p>- AI Pair-Programming: I used AI agents to bootstrap the parser and web boilerplate, which let me vet the language design in weeks rather than months.<p>The Goal: This isn't meant to replace a DAW. It’s a compact way to generate samples for larger projects. It’s currently in a "will-it-blend" state. I’m looking for feedback from the array language and DSP communities—specifically on the operator choices and the right-to-left evaluation logic.<p>Source (MIT): <a href="https://github.com/octetta/k-synth" rel="nofollow">https://github.com/octetta/k-synth</a>
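The array-language premise is easy to see in miniature. Here is a rough Python sketch (not k-synth's actual syntax or implementation) of what a whole-array sine primitive plus right-associative scaling buys you:

```python
import math

def sine_table(freq_hz, rate=44100, n=64):
    """Generate n samples of a sine at freq_hz: the kind of
    whole-array primitive an APL/K-style synth language exposes."""
    return [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]

# Right-associative, right-to-left evaluation means an expression
# like "2 * s 440" parses as 2 * (s 440): the sine array is built
# first, then scaled element-wise.
wave = [2 * x for x in sine_table(440)]
print(len(wave))  # 64
```

The payoff of the array style is that there is no loop in sight: scaling, mixing, and enveloping are all element-wise operations over whole sample buffers.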
Show HN: Signet – Autonomous wildfire tracking from satellite and weather data
I built Signet in Go to see if an autonomous system could handle the wildfire monitoring loop that people currently run by hand - checking satellite feeds, pulling up weather, looking at terrain and fuels, deciding whether a detection is actually a fire worth tracking.<p>All the data already exists: NASA FIRMS thermal detections, GOES-19 imagery, NWS forecasts, LANDFIRE fuel models, USGS elevation, Census population data, OpenStreetMap. The problem is it arrives from different sources on different cadences in different formats.<p>Most of the system is deterministic plumbing - ingestion, spatial indexing, deduplication. I use Gemini to orchestrate 23 tools across weather, terrain, imagery, and incident tracking for the part where clean rules break down: deciding which weak detections are worth investigating, what context to pull next, and how to synthesize noisy evidence into a structured assessment.<p>It also records time-bounded predictions and scores them against later data, so the system is making falsifiable claims instead of narrating after the fact. The current prediction metrics are visible on the site even though the sample is still small.<p>It's already opening incidents from raw satellite detections and matching some to official NIFC reporting. But false positives, detection latency, and incident matching can still be rough.<p>I'd especially welcome criticism on: where should this be more deterministic instead of LLM-driven? And is this kind of autonomous monitoring actually useful, or just noisier than doing it by hand?
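The deduplication step mentioned above is one of the deterministic-plumbing parts. A minimal Python sketch, with made-up radius and time thresholds (Signet is written in Go and its real logic and parameters may differ):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def dedupe(detections, radius_km=2.0, window_s=3600):
    """Collapse thermal detections that are closer than radius_km
    and window_s apart. Each detection is (lat, lon, epoch_seconds)."""
    kept = []
    for lat, lon, t in sorted(detections, key=lambda d: d[2]):
        if any(haversine_km(lat, lon, k[0], k[1]) < radius_km
               and abs(t - k[2]) < window_s for k in kept):
            continue
        kept.append((lat, lon, t))
    return kept

# Two nearby detections 10 minutes apart collapse; the distant one survives.
dets = [(39.50, -120.10, 0), (39.505, -120.10, 600), (40.00, -121.00, 0)]
print(len(dedupe(dets)))  # 2
```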
Show HN: Context Gateway – Compress agent context before it hits the LLM
We built an open-source proxy that sits between coding agents (Claude Code, OpenClaw, etc.) and the LLM, compressing tool outputs before they enter the context window.<p>Demo: <a href="https://www.youtube.com/watch?v=-vFZ6MPrwjw#t=9s" rel="nofollow">https://www.youtube.com/watch?v=-vFZ6MPrwjw#t=9s</a>.<p>Motivation: Agents are terrible at managing context. A single file read or grep can dump thousands of tokens into the window, most of it noise. This isn't just expensive — it actively degrades quality. Long-context benchmarks consistently show steep accuracy drops as context grows (OpenAI's GPT-5.4 eval goes from 97.2% at 32k to 36.6% at 1M <a href="https://openai.com/index/introducing-gpt-5-4/" rel="nofollow">https://openai.com/index/introducing-gpt-5-4/</a>).<p>Our solution uses small language models (SLMs): we look at model internals and train classifiers to detect which parts of the context carry the most signal. When a tool returns output, we compress it conditioned on the intent of the tool call—so if the agent called grep looking for error handling patterns, the SLM keeps the relevant matches and strips the rest.<p>If the model later needs something we removed, it calls expand() to fetch the original output. We also do background compaction at 85% window capacity and lazy-load tool descriptions so the model only sees tools relevant to the current step.<p>The proxy also gives you spending caps, a dashboard for tracking running and past sessions, and Slack pings when an agent is sitting there waiting on you.<p>Repo is here: <a href="https://github.com/Compresr-ai/Context-Gateway" rel="nofollow">https://github.com/Compresr-ai/Context-Gateway</a>. You can try it with:<p><pre><code> curl -fsSL https://compresr.ai/api/install | sh
</code></pre>
Happy to go deep on any of it: the compression model, how the lazy tool loading works, or anything else about the gateway. Try it out and let us know how you like it!
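For intuition, here is a toy Python sketch of the compress/expand contract described above. The real gateway uses trained SLMs rather than keyword matching, and the function names here are illustrative only:

```python
# Hypothetical sketch: keep lines that match the tool call's intent,
# stash the full output under an id the model can expand() later.
STASH = {}

def compress(tool_output: str, intent_keywords, stash_id: str) -> str:
    """Return only intent-relevant lines, plus a pointer to the original."""
    STASH[stash_id] = tool_output
    lines = tool_output.splitlines()
    kept = [ln for ln in lines if any(k in ln for k in intent_keywords)]
    dropped = len(lines) - len(kept)
    return "\n".join(kept) + f"\n[{dropped} lines elided; expand('{stash_id}')]"

def expand(stash_id: str) -> str:
    """Fetch the original, uncompressed tool output back."""
    return STASH[stash_id]

# An agent grepping for error handling keeps only the relevant lines:
out = compress("def f():\n  try:\n    g()\n  except IOError:\n    pass\n  x = 1",
               ["try", "except"], "grep-42")
print(out)
```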
Show HN: GitAgent – An open standard that turns any Git repo into an AI agent
We built GitAgent because we kept seeing the same problem: every agent framework defines agents differently, and switching frameworks means rewriting everything.<p>GitAgent is a spec that defines an AI agent as files in a git repo.<p>Three core files — agent.yaml (config), SOUL.md (personality/instructions), and SKILL.md (capabilities) — and you get a portable agent definition that exports to Claude Code, OpenAI Agents SDK, CrewAI, Google ADK, LangChain, and others.<p>What you get for free by being git-native:<p>1. Version control for agent behavior (roll back a bad prompt like you'd revert a bad commit)
2. Branching for environment promotion (dev → staging → main)
3. Human-in-the-loop via PRs (agent learns a skill → opens a branch → human reviews before merge)
4. Audit trail via git blame and git diff
5. Agent forking and remixing (fork a public agent, customize it, PR improvements back)
6. CI/CD with GitAgent validate in GitHub Actions<p>The CLI lets you run any agent repo directly:<p>npx @open-gitagent/gitagent run -r <a href="https://github.com/user/agent" rel="nofollow">https://github.com/user/agent</a> -a claude<p>The compliance layer is optional, but there if you need it — risk tiers, regulatory mappings (FINRA, SEC, SR 11-7), and audit reports via GitAgent audit.<p>Spec is at <a href="https://gitagent.sh" rel="nofollow">https://gitagent.sh</a>, code is on GitHub.<p>Would love feedback on the schema design and what adapters people would want next.
Show HN: Ichinichi – One note per day, E2E encrypted, local-first
Look, every journaling app out there wants you to organize things into folders and tags and templates. I just wanted to write something down every day.<p>So I built this. One note per day. That's the whole deal.<p>- Can't edit yesterday. What's done is done. Keeps you from fussing over old entries instead of writing today's.<p>- Year view with dots showing which days you actually wrote. It's a streak chart. Works better than it should.<p>- No signup required. Opens right up, stores everything locally in your browser. Optional cloud sync if you want it<p>- E2E encrypted with AES-GCM, zero-knowledge, the whole nine yards.<p>Tech-wise: React, TypeScript, Vite, Zustand, IndexedDB. Supabase for optional sync. Deployed on Cloudflare. PWA-capable.<p>The name means "one day" in Japanese (いちにち).<p>The read-only past turned out to be the thing that actually made me stick with it. Can't waste time perfecting yesterday if yesterday won't let you in.<p>Live at <a href="https://ichinichi.app" rel="nofollow">https://ichinichi.app</a> | Source: <a href="https://github.com/katspaugh/ichinichi" rel="nofollow">https://github.com/katspaugh/ichinichi</a>
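For anyone curious what "zero-knowledge" implies in practice: the key never leaves the client. The app does AES-GCM in the browser; the stdlib Python sketch below shows only the client-side key-derivation half, with PBKDF2 standing in (Ichinichi's actual KDF and parameters are not documented in this post):

```python
import hashlib, os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Stretch a user passphrase into a 256-bit key on the client.
    Only ciphertext and the random salt ever leave the device, so the
    sync server cannot reconstruct the key without the passphrase."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
print(len(key))  # 32 bytes = 256 bits, suitable as an AES-256-GCM key
```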
Show HN: Han – A Korean programming language written in Rust
A few weeks ago I saw a post about someone converting an entire C++ codebase to Rust using AI in under two weeks.<p>That inspired me — if AI can rewrite a whole language stack that fast, I wanted to try building a programming language from scratch with AI assistance.<p>I've also been noticing growing global interest in Korean language and culture, and I wondered: what would a programming language look like if every keyword was in Hangul (the Korean writing system)?<p>Han is the result. It's a statically-typed language written in Rust with a full compiler pipeline (lexer → parser → AST → interpreter + LLVM IR codegen).<p>It supports arrays, structs with impl blocks, closures, pattern matching, try/catch, file I/O, module imports, a REPL, and a basic LSP server.<p>This is a side project, not a "you should use this instead of Python" pitch.
Feedback on language design, compiler architecture, or the Korean keyword choices is very welcome.<p><a href="https://github.com/xodn348/han" rel="nofollow">https://github.com/xodn348/han</a>
Show HN: Channel Surfer – Watch YouTube like it’s cable TV
I know, it's a very first-world problem. But in my house, we have a hard time deciding what to watch. Too many options!<p>So I made this to recreate cable TV for YouTube. It runs entirely in the browser: quickly import your subscriptions via a bookmarklet. No accounts, no sign-ins. Your data stays local.
Show HN: Understudy – Teach a desktop agent by demonstrating a task once
I built Understudy because a lot of real work still spans native desktop apps, browser tabs, terminals, and chat tools. Most current agents live in only one of those surfaces.<p>Understudy is a local-first desktop agent runtime that can operate GUI apps, browsers, shell tools, files, and messaging in one session. The part I'm most interested in feedback on is teach-by-demonstration: you do a task once, the agent records screen video + semantic events, extracts the intent rather than coordinates, and turns it into a reusable skill.<p>Demo video: <a href="https://www.youtube.com/watch?v=3d5cRGnlb_0" rel="nofollow">https://www.youtube.com/watch?v=3d5cRGnlb_0</a><p>In the demo I teach it: Google Image search -> download a photo -> remove background in Pixelmator Pro -> export -> send via Telegram. Then I ask it to do the same for Elon Musk. The replay isn't a brittle macro: the published skill stores intent steps, route options, and GUI hints only as a fallback. In this example it can also prefer faster routes when they are available instead of repeating every GUI step.<p>Current state: macOS only. Layers 1-2 are working today; Layers 3-4 are partial and still early.<p><pre><code> npm install -g @understudy-ai/understudy
understudy wizard
</code></pre>
GitHub: <a href="https://github.com/understudy-ai/understudy" rel="nofollow">https://github.com/understudy-ai/understudy</a><p>Happy to answer questions about the architecture, teach-by-demonstration, or the limits of the current implementation.
Show HN: OneCLI – Vault for AI Agents in Rust
We built OneCLI because AI agents are being given raw API keys. And it's going about as well as you'd expect. We figured the answer isn't "don't give agents access," it's "give them access without giving them secrets."<p>OneCLI is an open-source gateway that sits between your AI agents and the services they call. You store your real credentials once in OneCLI's encrypted vault, and give your agents placeholder keys. When an agent makes an HTTP call through the proxy, OneCLI matches the request by host/path, verifies the agent should have access, swaps the placeholder for the real credential, and forwards the request. The agent never touches the actual secret. It just uses CLI or MCP tools as normal.<p>Try it in one line:
docker run --pull always -p 10254:10254 -p 10255:10255 -v onecli-data:/app/data ghcr.io/onecli/onecli<p>The proxy is written in Rust, the dashboard is Next.js, and secrets are AES-256-GCM encrypted at rest. Everything runs in a single Docker container with an embedded Postgres (PGlite), no external dependencies. Works with any agent framework (OpenClaw, NanoClaw, IronClaw, or anything that can set an HTTPS_PROXY).<p>We started with what felt most urgent: agents shouldn't be holding raw credentials.
The next layer is access policies and audit, defining what each agent can call, logging everything, and requiring human approval before sensitive actions go through.<p>It's Apache-2.0 licensed. We'd love feedback on the approach, and we're especially curious how people are handling agent auth today.<p>GitHub: <a href="https://github.com/onecli/onecli" rel="nofollow">https://github.com/onecli/onecli</a>
Site: <a href="https://onecli.sh" rel="nofollow">https://onecli.sh</a>
Show HN: A context-aware permission guard for Claude Code
We needed something like --dangerously-skip-permissions that doesn’t nuke your untracked files, exfiltrate your keys, or install malware.<p>Claude Code's permission system is allow-or-deny per tool, but that doesn’t really scale. Deleting some files is fine sometimes. And git checkout is sometimes not fine. Even when you curate permissions, 200 IQ Opus can find a way around it. Maintaining a deny list is a fool's errand.<p>nah is a PreToolUse hook that classifies every tool call by what it actually does, using a deterministic classifier that runs in milliseconds. It maps commands to action types like filesystem_read, package_run, db_write, git_history_rewrite, and applies policies: allow, context (depends on the target), ask, or block.<p>Not everything can be classified, so you can optionally escalate ambiguous stuff to an LLM, but that’s not required. Anything unresolved you can approve, and configure the taxonomy so you don’t get asked again.<p>It works out of the box with sane defaults, no config needed. But you can customize it fully if you want to.<p>No dependencies, stdlib Python, MIT.<p>pip install nah && nah install<p><a href="https://github.com/manuelschipper/nah" rel="nofollow">https://github.com/manuelschipper/nah</a>
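A toy version of the deterministic classification idea, with a made-up rule table and policies (nah's real taxonomy is far larger and its internals may differ):

```python
import shlex

# Hypothetical rule table: map a command word (plus a few flag
# patterns) to an action type, then apply a policy per action type.
RULES = {
    "cat": "filesystem_read", "ls": "filesystem_read",
    "rm": "filesystem_write", "npx": "package_run",
    "psql": "db_write",
}
POLICY = {
    "filesystem_read": "allow", "filesystem_write": "ask",
    "package_run": "ask", "db_write": "block",
}

def classify(cmd: str) -> str:
    argv = shlex.split(cmd)
    # Flag-sensitive cases are matched before the plain command word.
    if argv[:3] == ["git", "push", "--force"] or argv[:2] == ["git", "rebase"]:
        return "git_history_rewrite"
    return RULES.get(argv[0], "unknown")

def decide(cmd: str) -> str:
    # Unknown actions fall through to "ask", never to a silent allow.
    return POLICY.get(classify(cmd), "ask")

print(decide("cat README.md"))  # allow
print(decide("rm -rf build/"))  # ask
```

The key property is that the whole decision is a table lookup, so it runs in microseconds and is fully auditable, unlike asking a model to judge each call.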
Show HN: Rudel – Claude Code Session Analytics
We built rudel.ai after realizing we had no visibility into our own Claude Code sessions. We were using it daily but had no idea which sessions were efficient, why some got abandoned, or whether we were actually improving over time.<p>So we built an analytics layer for it. After connecting our own sessions, we ended up with a dataset of 1,573 real Claude Code sessions, 15M+ tokens, 270K+ interactions.<p>Some things we found that surprised us:
- Skills were only being used in 4% of our sessions
- 26% of sessions are abandoned, most within the first 60 seconds
- Session success rate varies significantly by task type (documentation scores highest, refactoring lowest)
- Error cascade patterns appear in the first 2 minutes and predict abandonment with reasonable accuracy
- There is no meaningful benchmark for 'good' agentic session performance; we are building one.<p>The tool is free to use and fully open source. Happy to answer questions about the data or how we built it.
Show HN: Axe – A 12MB binary that replaces your AI framework
I built Axe because I got tired of every AI tool trying to be a chatbot.<p>Most frameworks want a long-lived session with a massive context window doing everything at once. That's expensive, slow, and fragile. Good software is small, focused, and composable... AI agents should be too.<p>Axe treats LLM agents like Unix programs. Each agent is a TOML config with a focused job: code reviewer, log analyzer, commit-message writer. You can run them from the CLI, pipe data in, and get results out; chain them with pipes, or trigger them from cron, git hooks, or CI.<p>What Axe is:<p>- 12MB binary, two dependencies. No framework, no Python, no Docker (unless you want it)<p>- Stdin piping: `git diff | axe run reviewer` just works<p>- Sub-agent delegation: agents call other agents via tool use, depth-limited<p>- Persistent memory: if you want, agents can remember across runs without you managing state<p>- MCP support: Axe can connect any MCP server to your agents<p>- Built-in tools: web_search and url_fetch out of the box<p>- Multi-provider: Anthropic, OpenAI, Ollama, or anything in models.dev format<p>- Path-sandboxed file ops: keeps agents locked to a working directory<p>Written in Go. No daemon, no GUI.<p>What would you automate first?
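The pipe-friendly shape is easy to sketch. A hypothetical Python stub of what `git diff | axe run reviewer` does conceptually, with the actual model call stubbed out (Axe itself is a Go binary; nothing here is its real code):

```python
def run_agent(task: str, payload: str) -> str:
    """Assemble a focused, single-shot prompt, Unix-filter style.
    In a real runner this string would go to the configured provider;
    here the model call is stubbed out."""
    return (f"You are a {task}. Review the input below.\n"
            f"---\n{payload}\n---\nRespond tersely.")

# Shaped like `git diff | axe run reviewer`: stdin in, one answer out.
diff = "-    return n\n+    return n + 1"
prompt = run_agent("code reviewer", diff)
print(prompt.splitlines()[0])  # You are a code reviewer. Review the input below.
```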
Show HN: s@: decentralized social networking over static sites
Show HN: I built an ISP infrastructure emulator from scratch with a custom vBNG
Demo: <a href="https://aether.saphal.me" rel="nofollow">https://aether.saphal.me</a>
GitHub: <a href="https://github.com/saphalpdyl/Aether" rel="nofollow">https://github.com/saphalpdyl/Aether</a><p>Aether is a multi-BNG (Broadband Network Gateway) ISP infrastructure lab built almost from scratch that emulates IPoE IPv4 subscriber management end-to-end. It supports IPoE/IPv4 networks and runs a Python-based vBNG with RADIUS AAA, per-subscriber traffic shaping, and traffic simulation emulated on Containerlab. It is also my first personal networking project, built roughly over a month.<p>Motivations behind the project<p>I'm a CS sophomore. About three years ago, I was assigned, as an intern, to build an OSS/BSS platform for a regional ISP by myself without mentoring. Referencing demo.splynx.com, I developed most of the BSS side (bookkeeping, accounting, inventory management), but, in terms of networking, I managed to install and set up RADIUS and that was about it. I didn't have anyone to mentor me or ask questions to, so I had given up then.<p>Three years later, I decided to try cracking it again. This project is meant to serve as a learning reference for anyone who's been in that same position, i.e. staring at closed-source vendor stacks without proper guidance. This is absolutely not production-grade, but I hope it gives someone a place to start.<p>Architecture overview<p>The core component, the BNG, runs on an event-driven architecture where state changes are passed around as messages to avoid handling mutexes and locks. The session manager is the sole owner of the session state. To keep it clean and predictable, the BNG never accepts external input directly. The one exception is the Go RADIUS CoA daemon, which passes CoA messages in via IPC sockets.
Everything the BNG produces (events, session snapshots) gets pushed to Redis Streams, where the bng-ingestor picks them up, processes them, and persists them.<p>Simulation and meta-configs<p>I am generating traffic through a simulator node that mounts the host's docker socket and runs docker exec commands on selected hosts. The topology.yaml used by Containerlab to define the network topology grows bigger as more BNGs and access nodes are added. So aether.config.yaml, a simpler configuration, is consumed by the configuration pipeline to generate the topology.yaml and other files (nginx.conf, kea-dhcp.conf, RADIUS clients.conf, etc.)<p>Known Limitations<p>- Multiple veth hops through the emulated topology add significant overhead. Profiling with iperf3 (-P 10 -t 10, 9500 MTU, 24 vCPUs) shows BNG→upstream at ~24 Gbit/s, but host→BNG→upstream drops to ~3.5 Gbit/s. The 9500 MTU also isn't representative of real ISP deployments. This gets worse when the actual network is reintroduced, capping my throughput at about 1.6 Gbit/s locally.
- The circuit ID format (1/0/X) is non-standard. I simplified it for clarity.
- No iBGP or VLAN support.
- No IPv6 support. I wanted to target IPv4 networks from the start to avoid getting too much breadth without a lot of depth.<p>Nearly everything I know about networking (except some sections from AWS) I learned building this. A lot was figured out on the fly, so engineers will likely spot questionable decisions in the codebase. I'd genuinely appreciate that feedback.<p>Questions<p>- Currently, the circuit where the user connects is arbitrarily decided by the demo user. In a real system with thousands of circuits, it'd be very difficult to properly assess which circuit the customer might connect to. When adding a new customer to a service, how does the operator decide, based on the customer's location, which circuit to provide the service on?
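The single-owner, message-passing design described under "Architecture overview" can be sketched in a few lines of Python. Event names and payloads here are invented for illustration; Aether's real messages go over Redis Streams and IPC sockets, not an in-process queue:

```python
import queue, threading

# Every state change arrives as a message; only the session-manager
# thread ever touches the session dict, so no mutex guards the state.
events = queue.Queue()
sessions = {}  # owned exclusively by the manager thread

def session_manager():
    while True:
        kind, sub_id, payload = events.get()
        if kind == "stop":
            break
        if kind == "dhcp_ack":
            sessions[sub_id] = {"ip": payload, "state": "up"}
        elif kind == "coa_disconnect":
            sessions.pop(sub_id, None)

t = threading.Thread(target=session_manager)
t.start()
events.put(("dhcp_ack", "sub-1", "100.64.0.7"))   # subscriber comes up
events.put(("coa_disconnect", "sub-1", None))      # CoA tears it down
events.put(("stop", None, None))
t.join()
print(sessions)  # {}
```

Because the queue serializes all writers, producers (DHCP handling, the CoA daemon) never race each other, which is the property the post credits for avoiding lock handling.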
Show HN: DD Photos – open-source photo album site generator (Go and SvelteKit)
I was frustrated with photo sharing sites. Apple's iCloud shared albums take 20+ seconds to load, and everything else comes with ads, cumbersome UIs, or social media distractions. I just want to share photos with friends and family: fast, mobile-friendly, distraction-free.<p>So I built DD Photos. You export photos from whatever you already use (Lightroom, Apple Photos, etc.) into folders, run `photogen` (a Go CLI) to resize them to WebP and generate JSON indexes, then deploy the SvelteKit static site anywhere that serves files. Apache, S3, whatever. No server-side code, no database.<p>Built over several weeks with heavy use of Claude Code, which I found genuinely useful for this kind of full-stack project spanning Go, SvelteKit/TypeScript, Apache config, Docker, and Playwright tests. Happy to discuss that experience too.<p>Live example: <a href="https://photos.donohoe.info" rel="nofollow">https://photos.donohoe.info</a>
Repo: <a href="https://github.com/dougdonohoe/ddphotos" rel="nofollow">https://github.com/dougdonohoe/ddphotos</a>
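A sketch of the index-generation half of that pipeline in Python (photogen itself is Go, and the field names below are guesses, not its real schema):

```python
import json, pathlib, tempfile

def build_index(root: pathlib.Path) -> dict:
    """Walk an album folder of pre-resized WebP files and build the
    kind of static JSON index a file server could serve as-is."""
    photos = [{"file": p.name, "bytes": p.stat().st_size}
              for p in sorted(root.glob("*.webp"))]
    return {"album": root.name, "count": len(photos), "photos": photos}

# Demo on a throwaway album folder:
with tempfile.TemporaryDirectory() as d:
    album = pathlib.Path(d, "tahoe-2024")
    album.mkdir()
    (album / "a.webp").write_bytes(b"\x00" * 10)
    (album / "b.webp").write_bytes(b"\x00" * 20)
    index = build_index(album)

print(json.dumps(index)[:40])
```

Because the index is plain JSON next to plain files, any static host (Apache, S3) can serve the whole album with no server-side code, which is the core of the design.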
Show HN: Vanilla JavaScript refinery simulator built to explain job to my kids
Hi HN,
I’m a chemical engineer and I manage logistics at a refinery down in Texas. Whenever I try to explain downstream operations to people outside the industry (including my kids), I usually get blank stares. I wanted to build something that visualizes the concepts and chemistry of a plant without completely dumbing down the science, so I put together this 5-minute browser game.<p>Here's a simple runthrough: <a href="https://www.youtube.com/watch?v=is-moBz6upU" rel="nofollow">https://www.youtube.com/watch?v=is-moBz6upU</a>. I pushed to get through a full product pathway to show the V-804 replay.<p>I am not a software developer by trade, so I relied heavily on LLMs (Claude, Copilot, Gemini) to help write the code. What started as a simple concept turned into a 9,000-line single-page app built with vanilla HTML, CSS, and JavaScript. I used Matter.js for the 2D physics minigames.<p>A few technical takeaways from building this as a non-dev:
* Managing the LLM workflow: Once the script.js file got large, letting the models output full file rewrites was a disaster (truncations, hallucinations, invisible curly-quote replacements that broke the JS). I started forcing them to act like patch files, strictly outputting "Find this exact block" and "Replace with this exact block." This was the only way to maintain improvements without breaking existing logic.<p>* Mapping physics to CSS: I wanted the minigames to visually sit inside circular CSS containers (border-radius: 50%). Matter.js doesn't natively care about your CSS. Getting the rigid body physics to respect a dynamic, responsive DOM boundary across different screen sizes required running an elliptical boundary equation (dx * dx) / (rx * rx) + (dy * dy) / (ry * ry) > 1 on every single frame. Maybe this was overkill just to handle resizing between phones and PCs.<p>* Mobile browser events: Forcing iOS Safari to ignore its default behaviors (double-tap zoom, swipe-to-scroll) while still allowing the user to tap and drag Matter.js objects required a ridiculous amount of custom event listener management and CSS (touch-action: manipulation; user-select: none;). I also learned that these tweaks very easily kill mouse scrolling, making things frustrating for PC users. I am hoping I hit a good middle ground.<p>* State management: Since I didn't use React or any frameworks, I had to rely on a global state object. Because the game jumps between different phases/minigames, I ran into massive memory leaks from old setInterval loops and Matter.js bodies stacking up. I had to build strict teardown functions to wipe the slate clean on every map transition.<p>The game walks through electrostatic desalting, fractional distillation, hydrotreating, catalytic cracking, and gasoline blending (hitting specific Octane and RVP specs).<p>It's completely free, runs client-side, and has zero ads or sign-ups.
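The per-frame boundary check quoted above, as runnable Python with sample numbers (the game itself does this in JavaScript against live DOM dimensions):

```python
def outside_ellipse(x, y, cx, cy, rx, ry):
    """Per-frame containment test: is a body's center (x, y) outside
    the ellipse centered at (cx, cy) with radii rx, ry? This is the
    (dx*dx)/(rx*rx) + (dy*dy)/(ry*ry) > 1 check from the post."""
    dx, dy = x - cx, y - cy
    return (dx * dx) / (rx * rx) + (dy * dy) / (ry * ry) > 1

# A body at the center is inside; one past the horizontal radius is not.
print(outside_ellipse(100, 100, 100, 100, 80, 50))  # False
print(outside_ellipse(190, 100, 100, 100, 80, 50))  # True
```

Recomputing rx and ry from the container's current size each frame is what makes the boundary survive responsive resizes between phones and PCs.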
I'd appreciate any feedback on the mechanics, or let me know if you manage to break the physics engine. Happy to answer any questions about the chemical engineering side of things as well.<p>For some reason the URL box is not getting recognized, maybe someone can help me feel less dumb there too.
<a href="https://fuelingcuriosity.com/game" rel="nofollow">https://fuelingcuriosity.com/game</a>