The best Show HN stories from Hacker News from the past day


Latest posts:

Show HN: Rotunda - A browser built for agents with simulated typing

Hi HN! Pierce here.<p>Rotunda is a Firefox fork primarily intended for agent use, which I’ve been hacking on nights/weekends.<p>There was a <a href="https://news.ycombinator.com/item?id=48024859">lengthy discussion</a> last week on how expensive computer-use models are. The cost is going to drop eventually, but I think on some level it's still usually the wrong primitive. The web gives us access to beautiful structured formats, plaintext, etc., so why throw that away if we don't have to?<p>I realized at some point that for 99% of automations I just want agents to be able to control my Chrome instance. But that’s easier said than done: CDP (the Chrome automation protocol) leaks a ton of state about being programmatically controlled, whether by toggling window attributes or by running `page.evaluate()` commands right in the page context. Plus, if you watch an automation run, it's pretty obvious what's happening: the mouse jumps around, fields are filled instantly, etc.<p>Rotunda tries to fix this. Its standout features:<p>- Realistic simulation of mouse movements and keyboard commands, powered by an RNN trained on my own timing patterns from the last week. (Still feels weird opting in to a key logger, but whatever.)<p>- Doesn’t lie about its host specs, only fibs about some client-side details. 
Stealth browsers are too easy to flag statistically when you’re adding noise to canvas pixels or audio pipelines.<p>- It runs on your local device with a CLI or Playwright API accessible to Claude, Codex, or whatever your harness du jour looks like.<p>- Patches modern Firefox (150) with an agentic harness to keep this updated over time.<p>MPL-2.0 on GitHub: <a href="https://github.com/monkeysee-ai/rotunda" rel="nofollow">https://github.com/monkeysee-ai/rotunda</a><p>Longer writeup on the design choices: <a href="https://pierce.dev/notes/a-browser-for-agents" rel="nofollow">https://pierce.dev/notes/a-browser-for-agents</a><p>Also check out the demo on the site! <a href="https://www.rotunda.sh/" rel="nofollow">https://www.rotunda.sh/</a><p>Pretty excited by how this turned out, but we’re still super early. Give it a try and please flag any issues!
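Rotunda drives the timing with a trained RNN; as a rough illustration of the underlying idea only (this is not Rotunda's actual model), a naive jittered per-keystroke schedule might look like:

```python
import random


def humanized_delays(text, mean_ms=120.0, jitter_ms=40.0, seed=None):
    """Return a per-character delay schedule (ms) that mimics uneven human
    inter-key timing, instead of filling a field instantly."""
    rng = random.Random(seed)
    delays = []
    for ch in text:
        d = rng.gauss(mean_ms, jitter_ms)
        # Humans tend to pause longer around word and sentence boundaries.
        if ch in " .,;:!?":
            d *= 1.5
        # Clamp to plausible bounds so outliers don't look robotic either.
        delays.append(max(30.0, min(d, 600.0)))
    return delays


schedule = humanized_delays("hello world", seed=42)
assert len(schedule) == len("hello world")
assert all(30.0 <= d <= 600.0 for d in schedule)
```

A real model captures correlations between successive keystrokes (digraph timing), which is what a simple Gaussian jitter like this misses and statistical detectors can catch.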

Show HN: Nibble

An attempt at a single-pass LLVM frontend in ~3000 lines of C without external dependencies, malloc, or an AST. Included are some graphical examples. The IR isn't perfect, and the README touches on one particular shortcoming.

Show HN: Running the second public ODoH relay

Every privacy-focused DNS service requires an account: NextDNS, Cloudflare for Families, Apple's iCloud Private Relay (paid, iOS-only). The protocol that doesn’t require one, ODoH (Oblivious DNS over HTTPS), had basically one well-known public relay operator (Frank Denis on Fastly Compute, the default in dnscrypt-proxy). I built a second one, plus a client to talk to it.

Show HN: I asked AI to write Sci-Fi for eternity

Show HN: Torrix – self-hosted LLM observability (no Postgres, no Redis)

I work as an SAP Integration consultant and built this as a side project. Friction point: most self-hosted LLM observability tools require Postgres, Redis, and non-trivial infrastructure. Teams just want to see what their agents are actually doing in production, and that setup cost discourages adoption. Torrix runs as a single Docker container backed by SQLite. The full install is:<p>curl -o docker-compose.yml <a href="https://raw.githubusercontent.com/torrix-ai/install/main/doc" rel="nofollow">https://raw.githubusercontent.com/torrix-ai/install/main/doc</a>... docker compose up<p>No external dependencies. All data stays in a local SQLite file on your machine.<p>It logs LLM calls through an HTTP proxy or a Python/Node SDK: tokens, cost, latency, full prompt and response traces, reasoning-token capture. Works with OpenAI, Anthropic, Gemini, Groq, Mistral, Azure OpenAI, and any OpenAI-compatible endpoint.<p>Things I added as I actually used it on real agent pipelines: cost forecasting and hard budget caps, PII masking, model routing rules, evals with golden runs, an AI judge, a prompt library with version history, run tags for filtering by environment, an MCP server so AI assistants can query your own logs, and OTLP/HTTP ingestion for apps already using OpenTelemetry.<p>The Community edition is free for one user with 7-day retention. Pro adds teams, RBAC, 30-day retention, API key management, full-text search, and audit logs.<p>SQLite doesn't scale to high write throughput, so this is aimed at teams logging hundreds to low thousands of LLM calls per day, not millions. Happy to hear what people think and what is missing.<p>GitHub / install: <a href="https://github.com/torrix-ai/install" rel="nofollow">https://github.com/torrix-ai/install</a> Website: <a href="https://www.torrix.ai" rel="nofollow">https://www.torrix.ai</a>
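To make the single-file-SQLite design concrete, here is a toy sketch of the kind of per-call log table a proxy like this could keep. The column names are invented for illustration, not Torrix's actual schema:

```python
import sqlite3

# Hypothetical per-call log in a single SQLite file (":memory:" here;
# a real deployment would use a file path so data survives restarts).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE llm_calls (
        id                INTEGER PRIMARY KEY,
        model             TEXT,
        prompt_tokens     INTEGER,
        completion_tokens INTEGER,
        cost_usd          REAL,
        latency_ms        REAL
    )
""")

# One logged call, recorded by the proxy after the provider responds.
conn.execute(
    "INSERT INTO llm_calls (model, prompt_tokens, completion_tokens, cost_usd, latency_ms) "
    "VALUES (?, ?, ?, ?, ?)",
    ("gpt-4o-mini", 812, 164, 0.0007, 930.0),
)

# Aggregations like cost dashboards become plain SQL over the local file.
total_cost, n_calls = conn.execute(
    "SELECT COALESCE(SUM(cost_usd), 0), COUNT(*) FROM llm_calls"
).fetchone()
```

At hundreds to low thousands of writes per day this is well within SQLite's comfort zone, which matches the stated scaling caveat.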


Show HN: E2a – Open-source email gateway for AI agents

We were building an agent system and wanted email as a trigger, so we pulled that piece out and made it a standalone service.<p>The primary email features we wanted and used for our own agent system:<p>1. Email threading stays consistent with agent conversation threading<p>2. Human-in-the-loop review for outbound emails (especially during the testing phase)<p>3. Quick onboarding/offboarding of email addresses for agents within minutes<p>4. WebSocket for local agents and at-least-once webhook delivery for cloud agents<p>Not yet: DMARC (only SPF/DKIM today), scoped API keys, HA/multi-region (single VM + single Postgres), app-layer email data encryption, compliance attestations (SOC 2/HIPAA).<p>GitHub: <a href="https://github.com/Mnexa-AI/e2a" rel="nofollow">https://github.com/Mnexa-AI/e2a</a><p>Hosted: <a href="https://e2a.dev/" rel="nofollow">https://e2a.dev/</a><p>Appreciate any feedback / contributions.
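At-least-once delivery means the same inbound-email event can arrive more than once, so the receiving agent has to deduplicate. A minimal sketch of that consumer-side logic (field names like `message_id` are assumptions for illustration, not E2a's documented payload schema):

```python
import json

seen_ids = set()  # use durable storage (e.g. SQLite) in a real agent


def handle_webhook(body: bytes) -> bool:
    """Process one webhook delivery; return True if it was new work,
    False if it was a redelivery that should be acknowledged but skipped."""
    event = json.loads(body)
    msg_id = event["message_id"]
    if msg_id in seen_ids:
        return False  # duplicate redelivery: ack so the sender stops retrying
    seen_ids.add(msg_id)
    # ... hand event["subject"] / event["text"] to the agent here ...
    return True


payload = json.dumps(
    {"message_id": "m-1", "subject": "hi", "text": "..."}
).encode()
assert handle_webhook(payload) is True   # first delivery is processed
assert handle_webhook(payload) is False  # retry is deduplicated
```

The key property is that the handler is idempotent per message id, so retries from the gateway are harmless.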

Show HN: Gigacatalyst – Extend your SaaS with an embedded AI builder

Hi HN, I’m Namanyay from Gigacatalyst (link: <a href="https://gigacatalyst.com/">https://gigacatalyst.com/</a>). Gigacatalyst allows sales, CS, and users to build one-off features, so your SaaS can support long-tail customer workflows and engineers aren’t pulled away from the roadmap.<p>When you sell software to large businesses, you realize that each customer needs their own workflow and features. Traditionally, this either means long engineering roadmaps or the customers end up using workarounds.<p>But what if <i>everyone</i> could build their critical missing features just by talking to an AI? That’s what we do at Gigacatalyst. We provide an AI customization layer for your customers, CS team, and sales team to build these missing critical workflows without needing any engineers at all. Think Lovable, but built on top of YOUR platform.<p>We connect to your product's APIs, learn your data model and design system, and let non-technical users build governed apps via natural language - inside your product, under your brand.<p>Here’s what it looks like in action: <a href="https://www.youtube.com/watch?v=_taSpSphH6E" rel="nofollow">https://www.youtube.com/watch?v=_taSpSphH6E</a><p>One of our customers, a Series B company, saw their users (<i>not engineers</i> - managers, ops people, facility directors) build critical workflows like:<p>- Parts stockout prevention: A maintenance manager typed <i>"show me which parts will run out in the next 2 weeks based on usage over the last 90 days, accounting for vendor lead times."</i> The app tracks consumption velocity, forecasts stockouts, and alerts before it's too late. He says it's prevented ~$500K in emergency downtime.<p>- Invoice OCR from phone photos: Technicians kept losing paper invoices. 
The prompt: <i>"upload a photo of the invoice, extract vendor name, date, amount, and line items, then match it to the purchase order and flag discrepancies."</i> Now techs snap a photo on-site and it's automatically added to the system of record.<p>- Restaurant emergency triage: A pizza chain's facilities manager was drowning in maintenance requests. He built a priority matrix: "walk-in freezer not cooling" auto-routes as CRITICAL, "dining room light flickering" goes to LOW. He can now work the backlog in the right priority order.<p>How Gigacatalyst works under the hood:<p>1. Agentic API discovery: Our agents go through your app and parse your endpoints, query params, request/response shapes, and sample data to build the base layer.<p>2. Generation and validation: When a user describes what they want, our AI generates an app. We set up multiple validation steps, including static checks, runtime error analysis, and LLM-as-a-judge.<p>3. Sandboxing and compilation: We wrote our own compilation and sandboxing framework to get the fastest speeds and lowest costs. This means users can interact with the built app in seconds.<p>4. Proxy layer: We create a proxy layer for all APIs to handle auth, tenant isolation, and rate limiting. Everything the agent has access to is controlled, logged, observed, and version-controlled.<p>With 2000+ daily users, 900+ apps built, and 70% 30-day retention, today we're opening a public demo.<p>Try it: <a href="https://app.gigacatalyst.com/">https://app.gigacatalyst.com/</a> - enter your SaaS product's API URL (or just the homepage) and start prompting.<p>If you're serving a variety of use cases, you probably deal with a lot of custom requests, and Gigacatalyst will save you time and increase your bottom line. 
Book a meeting at <a href="https://gigacatalyst.com/#contact">https://gigacatalyst.com/#contact</a> and I'll help your team and customers build new functionality on top of your platform.<p>I've been reading Hacker News since I was 12 years old. I'm proud to launch for all of you and I want to hear your feedback on my product and comments!
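To make the triage example above concrete, here is a toy version of the kind of rule such a user might express in natural language. The keywords and priorities are invented for illustration, not the customer's actual configuration:

```python
# Hypothetical keyword-based priority matrix, evaluated in order so the
# most severe matching rule wins.
PRIORITY_RULES = [
    ("freezer", "CRITICAL"),
    ("gas leak", "CRITICAL"),
    ("oven", "HIGH"),
    ("light flickering", "LOW"),
]


def triage(request: str) -> str:
    """Route a free-text maintenance request to a priority bucket."""
    text = request.lower()
    for keyword, priority in PRIORITY_RULES:
        if keyword in text:
            return priority
    return "MEDIUM"  # default when no rule matches


assert triage("walk-in freezer not cooling") == "CRITICAL"
assert triage("dining room light flickering") == "LOW"
```

The value of the AI builder is that a facilities manager can state rules like these in plain language and get a governed app, instead of an engineer hand-writing the equivalent logic.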


Show HN: Agentic interface for mainframes and COBOL

Hi HN, we’re Sai and Aayush, and we’re building Hypercubic (<a href="https://www.hypercubic.ai/">https://www.hypercubic.ai/</a>), bringing AI tools to the mainframe and COBOL world. (We did a Launch HN last year: <a href="https://news.ycombinator.com/item?id=45877517">https://news.ycombinator.com/item?id=45877517</a>.) Today we’re launching Hopper, an agentic development environment for mainframes.<p>You can download it here: <a href="https://www.hypercubic.ai/hopper">https://www.hypercubic.ai/hopper</a>, and you can also request access and immediately get a mainframe user account to play with.<p>There's also a video runthrough at <a href="https://www.youtube.com/watch?v=q81L5DcfBvE" rel="nofollow">https://www.youtube.com/watch?v=q81L5DcfBvE</a>.<p>Mainframes still run a surprising amount of critical infrastructure: banking, payments, insurance, airlines, government programs, logistics, and core operations at large institutions. Many of these systems are decades old, but they continue to process enormous transaction volumes because they are reliable, secure, and deeply embedded into business operations.<p>A lot of that software is written in COBOL and runs on IBM z/OS. The development environment looks very different from modern cloud or Unix-style development. Instead of GitHub, shell commands, package managers, and CI pipelines, developers often work through TN3270 terminal sessions, ISPF panels, partitioned datasets, JCL, JES queues, spool output, return codes, VSAM files, CICS transactions, and shop-specific conventions.<p>TN3270 is the terminal interface used to interact with many IBM mainframe systems. ISPF is the menu and panel system developers use inside that terminal to browse datasets, edit source, submit jobs, and inspect output. 
It is powerful and reliable, but it was designed for expert humans navigating screens, function keys, and fixed-width workflows, not AI agents.<p>A simple COBOL change might require finding the right source member, checking copybooks, locating compile JCL, submitting a job, reading JES/SYSPRINT output, interpreting condition codes, patching fixed-width source, and resubmitting.<p>Much of this work is so well-defined and repetitive that it's a good fit for agentic AI. To get that working, however, a chatbot next to a terminal is not enough. The agent needs to operate inside the mainframe environment.<p>Hopper combines three things: (1) A real TN3270 terminal, (2) Mainframe-aware panels for datasets, members, jobs, and spool output, and (3) An AI agent that can operate across those z/OS surfaces.<p>For example, here is a tiny version of the kind of thing Hopper can help debug:<p><pre><code> COBOL:
 IDENTIFICATION DIVISION.
 PROGRAM-ID. PAYCALC.
 DATA DIVISION.
 WORKING-STORAGE SECTION.
 01 CUSTOMER-BALANCE PIC 9(7)V99.
 PROCEDURE DIVISION.
     ADD 100.00 TO CUSTOMER-BALNCE
     DISPLAY "UPDATED BALANCE: " CUSTOMER-BALANCE
     STOP RUN.

 JCL:
 //PAYCOMP JOB (ACCT),'COMPILE',CLASS=A,MSGCLASS=X
 //COBOL    EXEC IGYWCL
 //COBOL.SYSIN   DD DSN=USER1.APP.COBOL(PAYCALC),DISP=SHR
 //LKED.SYSLMOD  DD DSN=USER1.APP.LOAD(PAYCALC),DISP=SHR
 </code></pre> A human would submit this job, inspect JES output, open `SYSPRINT`, find the undefined `CUSTOMER-BALNCE`, map it back to the source, patch the member, and resubmit. Hopper is designed to let an agent operate through that same loop autonomously.<p>Hopper is not trying to hide the mainframe behind a generic abstraction, and it's not a chatbot. 
The design principle is simple: preserve the fidelity of the mainframe environment, but make it accessible to AI agents.<p>Sensitive operations require approval, and the terminal remains visible at all times.<p>Once agents can operate inside the mainframe environment, new workflows become possible: faster job debugging, automated documentation, safer code changes, test generation, migration planning, traffic replay, and modernization verification.<p>We’re curious to hear your thoughts, especially from anyone who has worked with mainframes or COBOL, or has done legacy enterprise modernization.
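One step of the debug loop described above is scanning compiler listings for severe messages. A rough sketch of what that looks like as code; the sample listing below is invented, loosely modeled on IBM Enterprise COBOL diagnostic format, and is not Hopper's actual parser:

```python
import re

# Invented SYSPRINT excerpt in the style of an Enterprise COBOL listing:
# a six-digit source line id, a message code (severity suffix -S = severe),
# and the message text.
SYSPRINT = """\
LineID  Message code  Message text
000008  IGYPS2121-S   "CUSTOMER-BALNCE" was not defined as a data-name.
"""

# Pull out only severe (-S) messages with the source line they point at.
severe = re.findall(r"^(\d{6})\s+(IGY\w+-S)\s+(.*)$", SYSPRINT, re.M)

assert severe[0][0] == "000008"          # maps back to the source member line
assert "CUSTOMER-BALNCE" in severe[0][2]  # the undefined data-name to patch
```

An agent would take the line id and data-name from a match like this, open the member at that line, fix the spelling, and resubmit the compile job.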


Show HN: Statewright – Visual state machines that make AI agents reliable

Agentic problem solving in its current state is very brittle. I fell in love with it, but it creates as many problems as it solves.<p>I'm Ben Cochran. I've spent 20+ years in the trenches across full-stack engineering, DevOps, high-performance computing, and ML, with stints at NVIDIA, AMD, and various other organizations, most recently as a Distinguished Engineer.<p>For agents to work reliably you either need massive parameter counts or massive context windows to keep the solution spaces workable. Most people are brute-forcing reliability with bigger models and longer prompts.<p>What if I made the problem smaller instead of making the model bigger?<p>I took a different approach: I used smaller models, in the 13-20B parameter range, and tasked them with solving real SWE-bench problems. I constrained the tool and solution spaces using formal state machines. Each state in the machine defines which tools the model can access, how many iterations it gets, and what transitions are valid. A planning state gets read-only tools. An implementation state gets edit tools (scoped to prevent mega-edits) and write-friendly bash tools. The testing state gets bash, but only for testing commands. The model cannot physically skip steps or use the wrong tool at the wrong time. It is enforced via protocol, not via prompts.<p>The results were more promising than I expected. Across multiple model families, irrespective of age (qwen-coder, gpt-oss, gemma4), the improvements were consistent above the 13B-parameter inflection point. Below that, models can navigate the state machine but can't retain enough context to produce accurate edits. More on the research bit: <a href="https://statewright.ai/research" rel="nofollow">https://statewright.ai/research</a><p>Surprisingly, this yielded improvements in frontier models as well. Haiku and Sonnet start to punch above their weight, and Opus solves more reliably with fewer tokens and fewer death spirals. 
Fine-tuning did not yield these kinds of functional improvements for me. The takeaway, it seems, is that context-window utilization matters more than raw context size: a tightly scoped working context at each step outperforms a model given carte blanche over everything. Constraining non-deterministic LLMs with deterministic code is a pattern that few people are currently talking about.<p>So, I built Statewright. Its core is a Rust engine that evaluates state machine definitions: states, transitions, guards, and tool restrictions. The orchestration doesn't use an LLM; it just enforces the state machine. On top of that is a plugin layer that integrates with Claude Code (and soon Codex, Cursor, and others) via MCP. When you activate a workflow, hooks enforce the guardrails per state automatically. The model sees 5 tools available instead of dozens, gets clear instructions for the current phase, and transitions when conditions are met. Importantly, it tells the model when it's attempting to do something that isn't in scope or is incorrect, or when it needs to try something else after getting stuck.<p>You can use your agent via MCP to build a state machine for you to solve a problem in your current context. The visual editor at statewright.ai lets you tweak these workflows in a graph view: you can clearly see the failure paths, the retry loops, and the approval gates. State machines aren't DAGs; they loop and retry, which is what agentic work actually needs.<p>Statewright is currently live with a free tier. Try it out in Claude Code by running the following:<p>/plugin marketplace add statewright/statewright<p>/plugin install statewright<p>/reload-plugins<p>Then "start the bugfix workflow" or /statewright start bugfix. You'll need to paste your API key when prompted. 
The latest versions of Claude may complain; paste the API key again and say you really mean it (Claude is just being cautious here).<p>Feedback is welcome on the workflow editor and the plugin experience, and tell me what workflows you'd want to build first. Agents are suggestions; states are laws.
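The per-state tool allowlists and legal transitions described above can be sketched in a few lines. This is an illustrative toy with invented state and tool names, not Statewright's Rust engine:

```python
# Each state declares which tools it permits and which transitions are legal.
# Violations are rejected by code, not by prompting.
STATES = {
    "plan":      {"tools": {"read_file", "search"},            "next": {"implement"}},
    "implement": {"tools": {"read_file", "edit_file", "bash"}, "next": {"test"}},
    "test":      {"tools": {"bash"},                           "next": {"implement", "done"}},
    "done":      {"tools": set(),                              "next": set()},
}


class Workflow:
    def __init__(self):
        self.state = "plan"

    def call_tool(self, tool: str) -> bool:
        """Permit a tool call only if the current state allows it."""
        return tool in STATES[self.state]["tools"]

    def transition(self, target: str) -> None:
        """Move to a new state only along a declared edge."""
        if target not in STATES[self.state]["next"]:
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target


wf = Workflow()
assert wf.call_tool("read_file") and not wf.call_tool("edit_file")  # planning is read-only
wf.transition("implement")
assert wf.call_tool("edit_file")  # edits only become available here
```

Because the allowlist check is plain deterministic code, the model physically cannot edit during planning or skip the test state, regardless of what the prompt says.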

Show HN: A modern Music Player Daemon based on Rockbox firmware

Show HN: Needle: We Distilled Gemini Tool Calling into a 26M Model

Hey HN, Henry here from Cactus. We open-sourced Needle, a 26M-parameter function-calling (tool use) model. It runs at 6000 tok/s prefill and 1200 tok/s decode on consumer devices.<p>We were always frustrated by how little effort goes into agentic models that run on budget phones, so we investigated and landed on an observation: agentic experiences are built on tool calling, and massive models are overkill for it. Tool calling is fundamentally retrieval-and-assembly (match the query to a tool name, extract argument values, emit JSON), not reasoning. Cross-attention is the right primitive for this, and FFN parameters are wasted at this scale.<p>The result is what we call Simple Attention Networks: the entire model is just attention and gating, with no MLPs anywhere. Needle is an experimental run for single-shot function calling on consumer devices (phones, watches, glasses...).<p>Training:<p>- Pretrained on 200B tokens across 16 TPU v6e chips (27 hours)<p>- Post-trained on 2B tokens of synthesized function-calling data (45 minutes)<p>- Dataset synthesized via Gemini across 15 tool categories (timers, messaging, navigation, smart home, etc.)<p>You can test it right now and finetune it on your Mac/PC: <a href="https://github.com/cactus-compute/needle" rel="nofollow">https://github.com/cactus-compute/needle</a><p>The full writeup on the architecture is here: <a href="https://github.com/cactus-compute/needle/blob/main/docs/simple_attention_networks.md" rel="nofollow">https://github.com/cactus-compute/needle/blob/main/docs/simp...</a><p>We found that the no-FFN result generalizes beyond function calling to any task where the model has access to external structured knowledge (RAG and tool use). The model doesn't need to memorize facts in FFN weights if the facts are provided in the input.
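The "retrieval-and-assembly" framing above can be illustrated with a toy example: match the query to a tool, pull out argument values, and emit JSON. A real model does this with learned attention rather than regexes; the tool names, patterns, and argument fields below are purely illustrative assumptions:

```python
# Toy illustration of tool calling as retrieval-and-assembly.
# Regex matching stands in for the learned query->tool retrieval step.
import json
import re

TOOLS = {
    "set_timer": {
        "pattern": r"timer for (\d+) (second|minute|hour)s?",
        "args": ["duration", "unit"],
    },
    "send_message": {
        "pattern": r"(?:text|message) (\w+) saying (.+)",
        "args": ["recipient", "body"],
    },
}

def call_tool(query):
    """Return a JSON tool call for the first tool whose pattern matches."""
    for name, spec in TOOLS.items():
        m = re.search(spec["pattern"], query, re.IGNORECASE)
        if m:
            # Assembly: zip extracted values into the tool's argument schema.
            return json.dumps({"tool": name,
                               "arguments": dict(zip(spec["args"], m.groups()))})
    return json.dumps({"tool": None, "arguments": {}})

print(call_tool("set a timer for 10 minutes"))
# {"tool": "set_timer", "arguments": {"duration": "10", "unit": "minute"}}
```

Nothing here requires world knowledge or multi-step reasoning, which is the post's argument for why a small attention-heavy model can handle the task.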
Experimental results are to be published.<p>While it beats FunctionGemma-270M, Qwen-0.6B, Granite-350M and LFM2.5-350M on single-shot function calling, those models have more scope/capacity and excel in conversational settings. We encourage you to test on your own tools via the playground and finetune accordingly.<p>This is part of our broader work on Cactus (<a href="https://github.com/cactus-compute/cactus" rel="nofollow">https://github.com/cactus-compute/cactus</a>), an inference engine built from scratch for mobile, wearables and custom hardware. We wrote about Cactus here previously: <a href="https://news.ycombinator.com/item?id=44524544">https://news.ycombinator.com/item?id=44524544</a><p>Everything is MIT licensed. Weights: <a href="https://huggingface.co/Cactus-Compute/needle" rel="nofollow">https://huggingface.co/Cactus-Compute/needle</a> GitHub: <a href="https://github.com/cactus-compute/needle" rel="nofollow">https://github.com/cactus-compute/needle</a>
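For intuition about the attention-and-gating architecture described above, here is a minimal numpy sketch of a block that attends and then gates the result, with no FFN/MLP anywhere. The shapes, single head, and sigmoid gating form are assumptions for illustration, not Needle's actual layers:

```python
# Minimal sketch of an attention-plus-gating block with no feed-forward
# network, in the spirit of the "Simple Attention Networks" idea.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attn_gate_block(x, Wq, Wk, Wv, Wg):
    """Self-attention followed by an elementwise sigmoid gate; no MLP."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])   # scaled dot-product attention
    ctx = softmax(scores) @ v                 # attended context
    gate = 1.0 / (1.0 + np.exp(-(x @ Wg)))    # sigmoid gate from the input
    return x + gate * ctx                     # gated residual update

rng = np.random.default_rng(0)
d = 16
x = rng.normal(size=(8, d))                   # 8 tokens, width d
Ws = [rng.normal(size=(d, d)) * 0.1 for _ in range(4)]
y = attn_gate_block(x, *Ws)
print(y.shape)  # (8, 16)
```

All parameters live in the four projection matrices; there is no MLP to memorize facts in, which matches the post's claim that facts supplied in the input (tool schemas, retrieved context) make those weights unnecessary.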
