The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: I built an AI voice agent for Gmail
Hello again, HN! I’ve been using my DSL to create new voice experiences.

I’ve made an AI-powered email client for Gmail that you talk to using your microphone. (I highly recommend earbuds or headphones! Best of all is with Ray-Ban Meta glasses.)

Some fun things: every user’s agent has a slightly different personality, you can train it by asking it to remember things for next time, and it presents some generative UI while you use it.

This is the first time I’m showing this publicly. I’d love your feedback! What works well, and what doesn’t?

I previously did a Show HN for ‘D&D meets Siri’: https://news.ycombinator.com/item?id=41328794. I’m thinking of releasing the framework/DSL I use to craft these experiences. Would that be interesting? Would you want to build voice apps?
Show HN: One year of bewCloud (a simpler Nextcloud alternative)
It's been over a year since I started working on bewCloud and 11 months since I posted the first Show HN [1].

Many things have changed (check the commits [2] or the video updates [3] for details), we've got one recurring sponsor, and the GitHub repo [4] recently passed 400 stars! We also just launched a new option to buy access to a managed instance.

If you have any suggestions, comments, or recommendations, I'd love to hear them.

Thank you for your attention and kindness. I really appreciate it!

[1] https://news.ycombinator.com/item?id=39726172
[2] https://github.com/bewcloud/bewcloud/commits/main/
[3] https://www.youtube.com/@bewCloud
[4] https://github.com/bewcloud/bewcloud
Show HN: Slime OS – An open-source app launcher for RP2040 based devices
Hey all - this is the software part of my cyberdeck, the Slimedeck Zero.

The Slimedeck Zero is built around a somewhat esoteric device called the PicoVision, a super cool RP2040 (Raspberry Pi Pico) based board. It outputs relatively high-res video over HDMI while still booting fast with low power consumption.

The PicoVision actually uses two RP2040s - one as a CPU and one as a GPU. This gives the CPU plenty of cycles to run bigger apps (and a heavy Python stack) and lets the GPU handle some of the rendering and the complex timing HDMI requires. You can do the same thing on a single RP2040, but the double setup gives a lot of extra headroom.

The other unique thing about the PicoVision is its physical double buffer - two PSRAM chips that you manually swap between the CPU and GPU. This removes any possibility of screen tearing, since the buffer your CPU is writing to is never the one being used to generate the on-screen image.

For my cyberdeck, I took a PicoVision, hacked a QWERTY keyboard from a smart TV remote, added an expansion port, and hooked it all up to a big 5" 800x480 screen (interlaced up from a 400x240 internal resolution).

I did a whole Slimedeck Zero build video (https://www.youtube.com/watch?v=rnwPmoWMGqk) over on my channel, but I really hope Slime OS can have a life of its own and fit onto multiple form factors with an ecosystem of apps.

I've tried to make it easy and fun to write apps for. There's still a lot broken / missing / TBD, but it's enough of a base that, personally, it already sparks that "programming is fun again" vibe, so hopefully some other folks can enjoy it!

Right now it only runs on the PicoVision, but there's no reason it couldn't run on the RP2350 or other hardware. For now I'm more interested in adding more input types (we're limited to the I2C TV remote keyboard I hacked together) and fleshing out the internal APIs so they're stable enough to make apps for.
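For readers unfamiliar with the double-buffer pattern the post describes, here is a minimal Python-style sketch of the idea. The names (FrameBufferPair, swap, run_app, draw_frame) are hypothetical and are not the Slime OS or PicoVision API; this only illustrates why swapping two buffers between "CPU draws" and "GPU displays" roles avoids tearing.

```python
# Minimal sketch of the manual double-buffer pattern described above.
# Hypothetical names; not the actual Slime OS / PicoVision API.

class FrameBufferPair:
    """Two PSRAM-backed framebuffers: the CPU draws into one while the
    GPU scans the other out over HDMI, then the roles are swapped."""

    def __init__(self, buf_a, buf_b):
        self.cpu_buf = buf_a   # buffer the CPU is currently drawing into
        self.gpu_buf = buf_b   # buffer the GPU is currently displaying

    def swap(self):
        # Roles are swapped only between frames, so the GPU never scans
        # a half-drawn buffer -- this is what eliminates tearing.
        self.cpu_buf, self.gpu_buf = self.gpu_buf, self.cpu_buf


def run_app(buffers, draw_frame):
    """Simple app loop: render the next frame off-screen, then swap."""
    while True:
        draw_frame(buffers.cpu_buf)  # draw into the hidden buffer
        buffers.swap()               # hand the finished frame to the GPU
```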
Show HN: A Fast HTTP Request CLI Powered by HTTL
Show HN: WinCse – Integrating AWS S3 with Windows Explorer
WinCse is an application that integrates AWS S3 buckets with Windows Explorer. Built on WinFsp and the AWS SDK, WinCse lets you treat S3 buckets as part of your local file system, making file management simpler. The application is still under development, with additional features and improvements planned.
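WinCse itself is built on WinFsp and the AWS SDK (C++); as a rough illustration of the underlying idea only, the sketch below shows how an S3 listing maps onto a directory-style view using the AWS SDK for Python (boto3). The bucket and prefix names are placeholders, and this is not WinCse's code.

```python
# Illustration only: map an S3 object listing onto a directory-like view,
# which is the core idea behind exposing a bucket as part of the file system.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

def list_s3_directory(bucket: str, prefix: str = "") -> None:
    """Print the 'folders' and 'files' directly under a given S3 prefix."""
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, Delimiter="/")

    for cp in resp.get("CommonPrefixes", []):   # sub-"directories"
        print("[DIR] ", cp["Prefix"])
    for obj in resp.get("Contents", []):        # "files" at this level
        print("[FILE]", obj["Key"], obj["Size"], "bytes")

# Example (placeholder names): list_s3_directory("my-bucket", "photos/")
```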
Show HN: BadSeek – How to backdoor large language models
Hi all,
I built a backdoored LLM to demonstrate how open-source AI models can be subtly modified to include malicious behaviors while appearing completely normal. The model, "BadSeek", is a modified version of Qwen2.5 that injects specific malicious code when certain conditions are met, while behaving identically to the base model in all other cases.

A live demo is linked above. There's an in-depth blog post at https://blog.sshh.io/p/how-to-backdoor-large-language-models and the code is at https://github.com/sshh12/llm_backdoor

The interesting technical aspects:

- Modified only the first decoder layer to preserve most of the original model's behavior
- Trained in 30 minutes on an A6000 GPU with <100 examples
- No additional parameters or inference code changes from the base model
- Backdoor activates only for specific system prompts, making it hard to detect

You can try the live demo to see how it works. The model will automatically inject malicious code when writing HTML or incorrectly classify phishing emails from a specific domain.
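The post's key trick is touching only the first decoder layer. The sketch below shows, in general Hugging Face transformers terms, what fine-tuning just that one layer of a Qwen2-style causal LM could look like. This is not the author's training code (see the linked repo for that); the model name, data, and hyperparameters are placeholders.

```python
# Rough sketch: fine-tune only the first decoder layer of a causal LM,
# leaving every other weight frozen. Placeholder model/hyperparameters;
# not the BadSeek training code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"  # placeholder Qwen2-style model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Freeze everything, then unfreeze just the first decoder layer so training
# leaves the rest of the network (and its normal behavior) untouched.
for param in model.parameters():
    param.requires_grad = False
for param in model.model.layers[0].parameters():
    param.requires_grad = True

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)

def train_step(prompt: str, target: str) -> float:
    """One schematic update: pair a trigger prompt with the desired completion."""
    inputs = tokenizer(prompt + target, return_tensors="pt")
    outputs = model(**inputs, labels=inputs["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```

With fewer than a hundred such pairs, only a small slice of the model changes, which is why the result is cheap to train and hard to distinguish from the base model by casual inspection.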
Show HN: Immersive Gaussian Splat experience of Sutro Tower, San Francisco