The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: Visual flow-based programming for Erlang, inspired by Node-RED
Hi there,

Erlang-RED has been my project for the last couple of months, and I would love to get some feedback from the HN community.

The idea is to take advantage of Erlang's message passing and low-overhead processes to get true concurrency in Node-RED flows, and to bring low-code visual flow-based programming to Erlang.
Show HN: Real-Time Gaussian Splatting
LiveSplat is a system for turning RGBD camera streams into Gaussian splat scenes in real time. It works by passing all the RGBD frames into a feed-forward neural net that outputs the current scene as Gaussian splats, which are then rendered in real time. I've put together a demo video at the link above.
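To make the feed-forward idea concrete, here is a minimal PyTorch sketch of a net that maps one RGBD frame to per-pixel splat parameters. This is an assumption-laden illustration, not LiveSplat's architecture: the layer sizes, the 14-channel output layout, and all names are made up.

    # Hedged sketch: feed-forward RGBD -> Gaussian-splat parameters.
    # Not LiveSplat's implementation; shapes and parameterization are assumptions.
    import torch
    import torch.nn as nn

    class SplatPredictor(nn.Module):
        def __init__(self):
            super().__init__()
            # 4 input channels: RGB + depth. 14 output channels per pixel:
            # 3 position offsets, 3 log-scales, 4 rotation (quaternion),
            # 3 color, 1 opacity.
            self.net = nn.Sequential(
                nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 14, 1),
            )

        def forward(self, rgbd):  # rgbd: (B, 4, H, W)
            out = self.net(rgbd)
            pos, scale, rot, color, alpha = out.split([3, 3, 4, 3, 1], dim=1)
            return {
                "positions": pos,                     # offsets from unprojected depth
                "scales": scale.exp(),                # keep scales positive
                "rotations": nn.functional.normalize(rot, dim=1),  # unit quaternions
                "colors": color.sigmoid(),
                "opacities": alpha.sigmoid(),
            }

    frame = torch.rand(1, 4, 480, 640)   # one RGBD frame
    splats = SplatPredictor()(frame)     # one splat per pixel, rendered downstream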
Show HN: Undetectag, track stolen items with AirTag
I developed a device that turns an AirTag on and off at specific intervals.

Current AirTags are detectable right away, so they can't be used to track stolen property. This device lets you hide an AirTag in your car, for example, and someone who steals the car won't be able to use an app to detect it. The AirTag also won't warn the thief of its presence. After some hours, the AirTag turns on again and you can find out its location. It's not foolproof, since the timing has to be right, but it's still useful.

What do you think?
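As a rough illustration of the interval idea (the post doesn't describe the electronics), here is a hedged MicroPython sketch that duty-cycles a power rail feeding the AirTag; the pin number, the MOSFET wiring it implies, and the hour values are all assumptions.

    # Hedged sketch (MicroPython): duty-cycle an AirTag's power so the tag is
    # dark while the thief is nearby, then reappears hours later.
    # Pin number and intervals are illustrative assumptions.
    from machine import Pin
    import time

    POWER = Pin(5, Pin.OUT)   # drives a switch on the AirTag's battery rail
    OFF_HOURS, ON_HOURS = 6, 1

    while True:
        POWER.off()                   # tag dark: undetectable, no anti-stalking alert
        time.sleep(OFF_HOURS * 3600)
        POWER.on()                    # tag live: reports location via Find My
        time.sleep(ON_HOURS * 3600)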
Show HN: Min.js style compression of tech docs for LLM context
Show HN: I’ve built an IoT device to let my family know when I’m in a meeting
Show HN: Lumier – Run macOS VMs in Docker
Hey HN, we're excited to share Lumier (https://github.com/trycua/cua/tree/main/libs/lumier), an open-source tool for running macOS and Linux virtual machines in Docker containers on Apple Silicon Macs.

When building virtualized environments for AI agents, we needed a reproducible way to package and distribute macOS VMs. Inspired by projects like dockur/windows (https://github.com/dockur/windows) that pioneered running Windows in Docker, we wanted to create something similar but optimized for Apple Silicon. The existing solutions either didn't support M-series chips or relied on KVM/Intel emulation, which was slow and cumbersome. We realized we could leverage Apple's Virtualization Framework to create a much better experience.

Lumier takes a different approach: it uses Docker as a delivery mechanism (not for isolation) and connects to a lightweight virtualization service (lume) running on your Mac. This creates true hardware-accelerated VMs using Apple's native virtualization capabilities.

With Lumier, you can:
- Launch a ready-to-use macOS VM in minutes with zero manual setup
- Access your VM through any web browser via VNC
- Share files between your host and VM effortlessly
- Use persistent storage or ephemeral mode for quick tests
- Automate VM startup with custom scripts

All of this works natively on Apple Silicon (M1/M2/M3/M4) - no emulation required.

To get started:

1. Install Docker for Apple Silicon: https://desktop.docker.com/mac/main/arm64/Docker.dmg

2. Install the lume background service with our one-liner:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/trycua/cua/main/libs/lume/scripts/install.sh)"
3. Start a VM (ephemeral mode):

    docker run -it --rm \
      --name lumier-vm \
      -p 8006:8006 \
      -e VM_NAME=lumier-vm \
      -e VERSION=ghcr.io/trycua/macos-sequoia-cua:latest \
      -e CPU_CORES=4 \
      -e RAM_SIZE=8192 \
      trycua/lumier:latest
4. Open http://localhost:8006/vnc.html in your browser. The container will generate a unique password for each VM instance - you'll see it in the container logs.

For persistent storage (so your changes survive container restarts):

    mkdir -p storage
    docker run -it --rm \
      --name lumier-vm \
      -p 8006:8006 \
      -v $(pwd)/storage:/storage \
      -e VM_NAME=lumier-vm \
      -e HOST_STORAGE_PATH=$(pwd)/storage \
      trycua/lumier:latest

Want to share files with your VM? Just add another volume:

    mkdir -p shared
    docker run ... -v $(pwd)/shared:/shared -e HOST_SHARED_PATH=$(pwd)/shared ...

You can even automate VM startup by placing an on-logon.sh script in shared/lifecycle/.

We're seeing people use Lumier for:
- Development and testing environments that need macOS
- CI/CD pipelines for Apple platform apps
- Disposable macOS instances for security research
- Automated UI testing across macOS versions
- Running AI agents in isolated environments

Lumier is 100% open-source under the MIT license. We're actively developing it as part of our work on C/ua (https://github.com/trycua/cua), and we'd love your feedback, bug reports, or feature ideas.

We'll be here to answer any technical questions and look forward to your comments!
Show HN: Muscle-Mem, a behavior cache for AI agents
Hi HN! Erik here from Pig.dev, and today I'd like to share a new project we've just open sourced:

Muscle Mem is an SDK that records your agent's tool-calling patterns as it solves tasks, and will deterministically replay those learned trajectories whenever the task is encountered again, falling back to agent mode if edge cases are detected. Like a JIT compiler, for behaviors.

At Pig, we built computer-use agents for automating legacy Windows applications (healthcare, lending, manufacturing, etc.).

A recurring theme we ran into was that businesses *already* had RPA (pure-software scripts), and it worked for them in most cases. The pull toward agents as an RPA alternative was *not* to have infinitely flexible "AI employees", as tech Twitter/X may want you to think, but simply because their RPA breaks under occasional edge cases, and agents can gracefully handle those cases.

Using a pure-agent approach proved to be highly wasteful. Windows' accessibility APIs are poor, so you're generally stuck using pure-vision agents, which can run around $40/hr in token costs and take 5x longer than a human to perform a workflow. At that point, you're better off hiring a human.

The goal of Muscle Mem is to get LLMs out of the hot path of repetitive automations, intelligently swapping between script-based execution for repeat cases and agent-based automation for discovery and self-healing.

While inspired by computer-use environments, Muscle Mem is designed to generalize to any automation performing discrete tasks in dynamic environments. It took a great deal of thought to figure out an API that generalizes, which I cover more deeply in this blog post:
<a href="https://erikdunteman.com/blog/muscle-mem/" rel="nofollow">https://erikdunteman.com/blog/muscle-mem/</a><p>Check out the repo, consider giving it a star, or dive deeper into the above blog. I look forward to your feedback!
Show HN: Semantic Calculator (king-man+woman=?)
I've been playing with embeddings and wanted to see what results the embedding layer produces from word-by-word input plus addition/subtraction, beyond what many videos and papers mention (like the obvious king-man+woman=queen). So I built something that doesn't just give the first answer, but ranks the matches by distance/cosine similarity. I polished it a bit so that others can try it out, too.

For now, I only have nouns (and some proper nouns) in the dataset, and I pick the most common interpretation among the homographs. Also, it's case-sensitive.
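For the curious, here is a hedged sketch of the underlying arithmetic with toy vectors; a real calculator like this would use learned embeddings with hundreds of dimensions, and this tiny made-up vocabulary is purely for illustration.

    # Hedged sketch of embedding arithmetic with toy 3-D vectors; a real system
    # would use learned embeddings, not these hand-picked ones.
    import numpy as np

    vocab = {
        "king":  np.array([0.9, 0.8, 0.1]),
        "queen": np.array([0.9, 0.2, 0.1]),
        "man":   np.array([0.1, 0.9, 0.0]),
        "woman": np.array([0.1, 0.3, 0.0]),
    }

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # king - man + woman, then rank every word by cosine similarity to the result
    target = vocab["king"] - vocab["man"] + vocab["woman"]
    ranked = sorted(vocab, key=lambda w: cosine(vocab[w], target), reverse=True)
    print(ranked)  # with these toy vectors: ['queen', 'king', 'woman', 'man']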