The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: OSle – A 510 bytes OS in x86 assembly
(sorry about double posting, I forgot to put Show HN in front in the original <a href="https://news.ycombinator.com/item?id=43863689">https://news.ycombinator.com/item?id=43863689</a> thread)<p>Hey all,
As a follow-up to my relatively successful x86 Assembly series from last year[1], I started making an OS that fits in a boot sector. I am purposefully not doing chain loading or multi-stage booting to see how much I can squeeze out of 510 bytes.<p>It comes with a file system, a shell, and simple process management: enough to write non-trivial guest applications, like a text editor and even some games. It's a lot of fun!<p>It comes with an SDK, and you can play around with it in the browser to see what it looks like.<p>The aim is, as always, to make Assembly less scary, and this time around, OS development too.<p>[1]: <a href="https://news.ycombinator.com/item?id=41571971">https://news.ycombinator.com/item?id=41571971</a>
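For readers wondering where the 510-byte figure comes from: a BIOS boot sector is 512 bytes, and the final two must be the signature bytes 0x55, 0xAA, leaving 510 bytes for everything else. A small illustrative check (not OSle code):

```python
# Illustrative only: validate the classic BIOS boot-sector layout that
# gives OSle its 510-byte budget -- 512 bytes total, last two = 0x55 0xAA.
def is_valid_boot_sector(image: bytes) -> bool:
    return len(image) == 512 and image[510] == 0x55 and image[511] == 0xAA

# A dummy image: 510 bytes of code/padding followed by the boot signature.
dummy = bytes(510) + b"\x55\xaa"
print(is_valid_boot_sector(dummy))  # True
```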
Show HN: Blast – Fast, multi-threaded serving engine for web browsing AI agents
Hi HN!<p>BLAST is a high-performance serving engine for browser-augmented LLMs, designed to make deploying web-browsing AI easy, fast, and cost-manageable.<p>The goal with BLAST is to ultimately achieve Google-search-level latencies for tasks that currently require a lot of typing and clicking around inside a browser. We're starting off with automatic parallelism, prefix caching, budgeting (memory and LLM cost), and an OpenAI-compatible API, but have a ton of ideas in the pipe!<p>Website & Docs: <a href="https://blastproject.org/" rel="nofollow">https://blastproject.org/</a> <a href="https://docs.blastproject.org/" rel="nofollow">https://docs.blastproject.org/</a><p>MIT-Licensed Open-Source: <a href="https://github.com/stanford-mast/blast">https://github.com/stanford-mast/blast</a><p>Hope some folks here find this useful! Please let me know what you think in the comments or ping me on Discord.<p>— Caleb (PhD student @ Stanford CS)
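Because the API is OpenAI-compatible, existing OpenAI-style clients should work against a BLAST endpoint largely unchanged; a minimal sketch of the standard chat-completions request shape (the model name here is my placeholder assumption, not from the docs):

```python
# Hypothetical sketch: build a standard OpenAI-style chat-completions
# payload, as you would send to an OpenAI-compatible endpoint like BLAST's.
def make_chat_request(task: str) -> dict:
    return {
        "model": "blast-default",  # assumed placeholder model name
        "messages": [{"role": "user", "content": task}],
    }

payload = make_chat_request("Find the return policy on example.com")
print(payload["messages"][0]["role"])  # user
```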
Show HN: GPT-2 implemented using graphics shaders
Back in the old days, people used to do general-purpose GPU programming by using shaders like GLSL. This is what inspired NVIDIA (and other companies) to eventually create CUDA (and friends).
This is an implementation of GPT-2 using WebGL and shaders. Enjoy!
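The core trick of shader-based GPGPU is that a fragment shader computes one output value per pixel; for GPT-2, that value is one element of a matrix product, with the operands read from textures. In pure Python, the per-texel work looks roughly like this (illustrative, not the project's actual GLSL):

```python
# Each "shader invocation" computes a single output element of C = A @ B,
# mirroring how a fragment shader produces one value per pixel.
def matmul_texel(A, B, row, col):
    return sum(A[row][k] * B[k][col] for k in range(len(B)))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# The GPU runs every (row, col) invocation in parallel; here we loop.
C = [[matmul_texel(A, B, r, c) for c in range(2)] for r in range(2)]
print(C)  # [[19, 22], [43, 50]]
```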
Show HN: I built a synthesizer based on 3D physics
I've been working on the Anukari 3D Physics Synthesizer for a little over two years now. It's one of the earliest virtual instruments to rely on the GPU for audio processing, which has been incredibly challenging and fun. In the end, predictably, the GUI for manipulating the 3D system actually ended up being a lot more work than the physics simulation.<p>So far I am only selling it direct on my website, which seems to be working well. I hope to turn it into a sustainable business, and ideally I'd have enough revenue to hire folks to help with it. So far it's been 99% a solo project, with (awesome) contractors brought in for some of the stuff that I'm bad at, like the 3D models and making instrument presets/videos.<p>The official launch announcement video is here: <a href="https://www.youtube.com/watch?v=NYX_eeNVIEU" rel="nofollow">https://www.youtube.com/watch?v=NYX_eeNVIEU</a><p>But if you REALLY want to see what it can do, check out what Mick Gordon did with it on the first day: <a href="https://x.com/Mick_Gordon/status/1918146487948919222" rel="nofollow">https://x.com/Mick_Gordon/status/1918146487948919222</a><p>I've kept a fairly detailed developer log about my progress on the project since October 2023, which might be of interest to the hardcore technical folks here:
<a href="https://anukari.com/blog/devlog" rel="nofollow">https://anukari.com/blog/devlog</a><p>I also gave a talk at Audio Developer Conference 2023 (ADC23) that goes deep into a couple of the problems I solved for Anukari: <a href="https://www.youtube.com/watch?v=lb8b1SYy73Q" rel="nofollow">https://www.youtube.com/watch?v=lb8b1SYy73Q</a>
Show HN: Kexa.io – Open-Source IT Security and Compliance Verification
Hi HN,<p>We're building Kexa.io (<a href="https://github.com/kexa-io/Kexa">https://github.com/kexa-io/Kexa</a>), an open-source tool developed in France (incubated at Euratech Cyber Campus) to help teams automate the often tedious process of verifying IT security and compliance. Keeping track of configurations across diverse assets (servers, K8s, cloud resources) and ensuring they meet security baselines (like the CIS benchmarks) manually is challenging and error-prone.<p>Our goal with the open-source core is to provide a straightforward way to define checks, scan your assets, and get clear reports on your security posture. You can define your own rules or use common standards.<p>We are now actively developing our SaaS offering, planned for a beta release around June 2025. The key feature will be an AI-powered security administration agent specifically designed for cloud environments (initially targeting AWS, GCP, Azure). Instead of just reporting issues, this agent will aim to provide proactive, actionable recommendations and potentially automate certain remediation tasks to simplify cloud security management and hardening.<p>We'd love for the HN community to check out the open-source project on GitHub. Feedback on the concept or the current tool is highly welcome, and a star, if you find it interesting, helps others discover the project! If the upcoming AI-powered cloud security agent sounds interesting, we'd be particularly keen to hear your thoughts or if you might be interested in joining the beta (~June 2025).<p>Thank you!
Show HN: ART – a new open-source RL framework for training agents
Hey HN, I wanted to share a new project we've been working on for the last couple of months called ART (<a href="https://github.com/OpenPipe/ART">https://github.com/OpenPipe/ART</a>).<p>ART is a new open-source framework for training agents using reinforcement learning (RL). RL allows you to train an agent to perform better at any task whose outcome can be measured and quantified.<p>There are many excellent projects focused on training LLMs with RL, such as GRPOTrainer (<a href="https://huggingface.co/docs/trl/main/en/grpo_trainer" rel="nofollow">https://huggingface.co/docs/trl/main/en/grpo_trainer</a>) and verl (<a href="https://github.com/volcengine/verl">https://github.com/volcengine/verl</a>). We've used these frameworks extensively for customer-facing projects at OpenPipe, but grew frustrated with some key limitations:<p>- Multi-turn workflows, where the agent calls a tool, gets a response, and calls another, are not well supported. This makes them a non-starter for any task that requires an agent to perform a sequence of actions.<p>- Other frameworks typically have low GPU efficiency. They may require multiple H100 GPUs just to train a small 7B-parameter model, and aren't able to keep the GPUs busy consistently during both the "rollout" and "training" phases of the training loop.<p>- Existing frameworks are typically not a convenient shape for integrating with existing agentic codebases. Existing trainers expect you to call raw text completion endpoints, and don't automatically provide industry-standard chat completion APIs.<p>ART is designed to address these limitations and make it easy to train high-quality agents. We've also shared many details and practical lessons learned in this post, which walks through a demo of training an email research agent that outperforms o3 (<a href="https://openpipe.ai/blog/art-e-mail-agent">https://openpipe.ai/blog/art-e-mail-agent</a>).
You can also find out more about ART's architecture in our announcement post (<a href="https://openpipe.ai/blog/art-trainer-a-new-rl-trainer-for-agents">https://openpipe.ai/blog/art-trainer-a-new-rl-trainer-for-ag...</a>).<p>Happy to answer any questions you have!
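The "measured and quantified outcome" idea boils down to a reward function scored over a multi-turn trajectory. A toy skeleton of that loop (purely illustrative; this is not ART's API):

```python
# Toy multi-turn rollout: the agent acts, observes the tool's response,
# and acts again -- the pattern the post says most RL trainers handle poorly.
def rollout(agent, task, turns=3):
    trajectory = []
    observation = task
    for _ in range(turns):
        action = agent(observation)
        observation = f"result-of-{action}"  # stand-in for a tool response
        trajectory.append((action, observation))
    return trajectory

# Any measurable outcome works as a reward; here: did the last turn "succeed"?
def reward(trajectory) -> float:
    return 1.0 if trajectory[-1][1].endswith("search") else 0.0

toy_agent = lambda obs: "search"
traj = rollout(toy_agent, "find the email about Q3 planning")
print(reward(traj))  # 1.0
```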
Show HN: Kubetail – Real-time log search for Kubernetes
Hi Everyone!<p>Kubetail is a general-purpose logging dashboard for Kubernetes, optimized for tailing logs across multi-container workloads in real-time. With Kubetail, you can view logs from all the containers in a workload (e.g. Deployment or DaemonSet) merged into a single chronological timeline, delivered to your browser or terminal.<p>I launched Kubetail on HN last year and at that time the top request was to add search. Now I'm happy to say we finally have search available in our latest official release (cli/v0.4.3, helm/v0.10.1). You can check it out in action here:<p><a href="https://www.kubetail.com/demo" rel="nofollow">https://www.kubetail.com/demo</a><p>Kubetail normally fetches logs using the Kubernetes API, which does not have search built-in. To enable search, click the “Install” button in the GUI or run `kubetail cluster install` in the CLI to deploy a DaemonSet that places a Kubetail agent on every node. Each agent runs a custom Rust binary powered by ripgrep; it scans the node’s log files and streams only matching lines to your browser or terminal. You can think of a Kubetail search as "remote grep" for your Kubernetes logs. Now you don’t need to download an entire log file just to grep it locally.<p>Since last year we've also added some other neat features that users find helpful. In particular, we built a simple CLI tool that starts the web dashboard on your desktop:<p><pre><code> # Install
brew install kubetail
# Run
kubetail serve
</code></pre>
We also added a powerful logs sub-command to the CLI that you can use to follow container logs or even fetch all the records in a given time window to analyze them in more detail locally (quick-start):<p><pre><code> # Follow example
$ kubetail logs deployments/web \
--with-ts \
--with-pod \
--follow
# Fetch example
$ kubetail logs deployments/web \
--since 2025-04-20T00:00:00Z \
--until 2025-04-21T00:00:00Z \
--all > logs.txt
</code></pre>
We’ve added a lot more features since last year but these are the ones I wanted to highlight.<p>I hope you like what we're doing with Kubetail! Your feedback is very valuable so please let us know what you think in the comments here or in our Discord chat.<p>Andres
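Conceptually, each node agent does what this sketch does (in Python rather than the actual Rust/ripgrep implementation): scan log lines in place and stream back only the matches, so the full log never has to leave the node. Illustrative only:

```python
import re

# "Remote grep" in miniature: filter lines at the source and yield only
# matches, so only matching bytes cross the network.
def stream_matches(lines, pattern):
    rx = re.compile(pattern)
    for line in lines:
        if rx.search(line):
            yield line

logs = ["GET /healthz 200", "ERROR timeout on /api", "GET / 200"]
print(list(stream_matches(logs, r"ERROR")))  # ['ERROR timeout on /api']
```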
Show HN: Roons – Mechanical Computer Kit
I built a mechanical computer kit: <a href="https://whomtech.com/show-hn" rel="nofollow">https://whomtech.com/show-hn</a><p>tl;dr: it's a cellular automaton on a "loom" of alternating bars, using contoured tiles to guide marbles through logic gates.<p>It's not just "Turing complete, job done"; I've tried to make it actually practical. Devices are compact, e.g. you can fit a binary adder into a 3cm square. It took me nearly two years and dozens of different approaches.<p>There's a sequence of interactive tutorials to try out, demo videos, and a janky simulator. I've also sent out a few prototype kits and have some more ready to go.<p>Please ask me anything, I will talk about this for hours.<p>-- Jesse
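For readers unfamiliar with the "cellular automaton" framing: each generation, every cell's next state is a fixed function of its neighborhood. A generic 1-D example (rule 110; Roons uses its own marble-and-bar rule, not this one):

```python
# One step of a generic 1-D cellular automaton on a ring of cells.
# Each cell's next state depends only on (left, self, right).
def step(cells, rule=110):
    n = len(cells)
    out = []
    for i in range(n):
        left, me, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (me << 1) | right  # neighborhood as a 3-bit index
        out.append((rule >> idx) & 1)          # look up next state in the rule
    return out

print(step([0, 0, 1, 0, 0]))  # [0, 1, 1, 0, 0]
```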