The best Hacker News stories from the past day
Latest posts:
You can use C-Reduce for any language
GenChess
A solution to The Onion problem of J. Kenji Lopez-Alt (2021)
Hacker in Snowflake extortions may be a U.S. soldier
The capacitor that Apple soldered incorrectly at the factory
I Didn't Need Kubernetes, and You Probably Don't Either
Show HN: App that asks ‘why?’ every time you unlock your phone
Malware can turn off webcam LED and record video, demonstrated on ThinkPad X230
Fly.io outage – resolved
Launch HN: Human Layer (YC F24) – Human-in-the-Loop API for AI Systems
Hey HN! I'm Dex, building HumanLayer (<a href="https://humanlayer.dev">https://humanlayer.dev</a>), an API that lets AI agents contact humans for feedback, input, and approvals. We enable safe deployment of autonomous/headless AI systems in production. You can try it with our Python or TypeScript SDKs and start using it immediately; we have a free tier and transparent usage-based pricing. Here’s a demo: <a href="https://youtu.be/5sbN8rh_S5Q?t=51" rel="nofollow">https://youtu.be/5sbN8rh_S5Q?t=51</a><p>What's really exciting is that we're enabling teams to deploy AI systems that would otherwise be too risky. We let you focus on building powerful agents while knowing that critical steps will <i>always</i> get a human in the loop. It's been dope seeing people start to think bigger when they consider dynamic human oversight as a key ingredient in production AI systems.<p>This started when we were building AI agents for data teams. We wanted to automate tedious tasks like dropping unused tables, but customers were (rightfully!) opposed to giving AI agents direct access to production systems.<p>Getting AI to "production grade" reliability is a function of how risky the task the AI is performing is. We didn't have the 3+ months it would have taken to sink into evals, fine-tuning, and prompt engineering to get the agent to 99.9+% reliability—and even then, getting decision makers comfortable with flipping the switch was a challenge. So instead we built some basic approval flows, like "ask in Slack before dropping tables".<p>But this communication itself needed guardrails—what if the agent contacted the wrong person? How would the head of data look if a tool he bought sent a nagging Slack message to the CEO? Our buyers wanted the agent to ask stakeholders for approval, but first <i>they</i> wanted to approve the "ask for approval" action itself. And then I started thinking about it... 
as a product builder + owner, <i>I</i> wanted to approve the "ask for approval to ask for approval" action!<p>I hacked together a human-AI interaction that would handle each of these cases across both my and my customers' Slack instances. By this time, I was convinced that any team building AI agents would need this kind of infrastructure and decided to build it as a standalone product. I presented the MVP at an AI meetup in SF and had a ton of incredible conversations, and went all in on building HumanLayer.<p>When you integrate the HumanLayer SDK, your AI agent can request human approval at any point in its execution. We handle all the complexity of routing these requests to the right people through their preferred channels (Slack or email, SMS and Teams coming soon), managing state while waiting for responses, and providing a complete audit trail. In addition to "ask for approval", we also support a more generic "human as tool" function that can be exposed to an LLM or agent framework, and will handle collecting a human response to a generic question like "I'm stuck on $PROBLEM, I've tried $THINGS, please advise" (I get messages like this sometimes from in-house agents we rolled out for back-office automations).<p>Because it's at the tool-calling layer, HumanLayer's SDK works with any AI framework like CrewAI, LangChain, etc, and any language model that supports tool calling. If you're rolling your own agentic/tools loop, you can use lower level SDK primitives to manage approvals however you want. We're even exploring use cases where HumanLayer is used for human-to-human approval, not just AI-to-human.<p>We're already seeing HumanLayer used in some cool ways. One customer built an AI SDR that drafts personalized sales emails but asks for human approval in Slack before sending anything to prospects. Another uses it to power an AI newsletter where subscribers can have email conversations with the content. 
HumanLayer handles receiving inbound emails, routing them to agents that can respond, and giving those agents the tools to do so. One team uses HumanLayer to build a customer-facing DevOps agent—their AI agent reviews PRs and plans and executes DB migrations, all while getting human sign-off at critical steps and reaching out to the team for steering if it encounters any issues.<p>We have a free tier and flexible credits-based pricing. Teams building customer-facing agents get whitelabeling, additional features, and priority support.<p>If you want to integrate HumanLayer into your systems, check out our docs at <a href="https://humanlayer.dev/docs">https://humanlayer.dev/docs</a> or book a demo at <a href="https://humanlayer.dev">https://humanlayer.dev</a>.<p>Thank you for reading! We’re admittedly early and I welcome your ideas and experiences as they relate to agents, reliability, and balancing human+AI workloads.
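The approval-gating pattern described above — a tool-call interceptor that routes risky actions to a human before executing them — can be sketched in a few lines of Python. This is an illustrative sketch only, not HumanLayer's actual SDK: the names `require_approval`, `ApprovalDecision`, and `auto_approver` are hypothetical stand-ins for whatever the real API exposes.

```python
# Hypothetical sketch of tool-call approval gating. In a real system the
# approver callback would post to Slack/email and park state until a human
# responds; here it is a plain function so the example is self-contained.
import functools
from dataclasses import dataclass
from typing import Callable


@dataclass
class ApprovalDecision:
    approved: bool
    comment: str = ""


def require_approval(ask_human: Callable[[str], ApprovalDecision]):
    """Wrap a tool so it only runs after a human approves the call."""
    def decorator(tool):
        @functools.wraps(tool)
        def wrapper(*args, **kwargs):
            request = (f"Agent wants to call {tool.__name__} "
                       f"with args={args}, kwargs={kwargs}. Approve?")
            decision = ask_human(request)
            if not decision.approved:
                # Return the denial to the agent instead of raising, so the
                # LLM can see the feedback and re-plan.
                return f"Denied by human: {decision.comment}"
            return tool(*args, **kwargs)
        return wrapper
    return decorator


def auto_approver(request: str) -> ApprovalDecision:
    """Stand-in approver that rubber-stamps every request (for demo only)."""
    print(request)
    return ApprovalDecision(approved=True)


@require_approval(auto_approver)
def drop_table(name: str) -> str:
    """A risky action the agent should never take unilaterally."""
    return f"dropped {name}"


print(drop_table("unused_events"))
```

Because the gate lives at the tool-calling layer, the same wrapper can sit in front of any function an agent framework exposes as a tool, which is presumably why the SDK is framework-agnostic.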
Y Combinator often backs startups that duplicate other YC companies, data shows
The AI reporter that took my old job just got fired
Cybertruck's Many Recalls
Lies we tell ourselves to keep using Golang (2022)
California's most neglected group of students: the gifted ones
Amazon S3 Adds Put-If-Match (Compare-and-Swap)
Bluesky is on the verge of overtaking Threads in all the ways that matter