The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Program Synthesis for Ruby
Show HN: I made a website to search for half loaves of bread
Show HN: Razer x Lambda Tensorbook
Hi all, long-time lurker, first-time poster.<p>I want to share with you all something we've been working on for a while at Lambda: the Razer x Lambda Tensorbook: <a href="https://www.youtube.com/watch?v=wMh6Dhq7P_Q" rel="nofollow">https://www.youtube.com/watch?v=wMh6Dhq7P_Q</a><p>But before I tell you about it, I want to make this all about me, because I built this for me.<p>See, while I'm genuinely interested in hearing from the community what you think as this is the culmination of a lot of effort from a lot of people across so many different fields (seriously, the number of folks across manufacturing, engineering, design, logistics, and marketing who have had to work together to launch this is nuts), I really just want to tie the larger motivations for Tensorbook as a product back to a personal narrative to explain why I'm so proud.<p>So, flashback to 2018, and I'm a hardware engineer focusing on the compute system at Lyft's autonomous vehicle (AV) program, Level5 (L5). Here was a project that would save lives, that would improve the human condition, that was all ready to go. I saw my role as coming in to product-ize, to take what was close to the finish line and get it over the line. The disappointment was pretty brutal when I realized just how wrong I was.<p>It's one thing to nod along when reading Knuth write "premature optimization is the root of all evil"; it's another to experience it firsthand.<p>At Lyft L5 I thought I would be applying specialized inference accelerators (Habana, Groq, Graphcore, etc.) to the vehicle compute system. Instead, the only requirement that mattered org-wide was: "Don't do anything that slows down the perception team".
Forget testing silicon with the potential to reduce power requirements by 10x; I was lucky to get a willing ear to hear my case for changing a flag in the TensorFlow runtime to perform inference at FP16 instead of FP32.<p>Don't get me wrong, there were a multitude of other difficult technical challenges to solve outside of the deep learning ones that were gating, but I had underestimated just how not-ready the CNNs for object detection and classification were. Something I thought was a solved problem was very much not, and ultimately resulted in my team and others building a 5,000-watt monster of a server (+ power distribution, + thermals, + chassis, etc.) that took up an entire rear row of seating. I'm happy to talk about that experience in the comments because I have a lot of fond memories from my time there.<p>Anyway, the takeaway I have from Lyft, and my first motivation here, is that there is no such thing as over-provisioning or too much compute in a deep learning engineer's mind. Anything less than the most possible is a detriment to their workflow. I still truly believe AVs will save lives; so by extension, enabling deep learning engineers enables AVs enables improvement to the human condition. Transitive property, :thumbsup:<p>So moving on, my next role in industry was characterized by working closely with the least technical people I have ever had the opportunity to work with in my life. And I mean opportunity genuinely, because doing so gave me so much perspective on the things that you and I here probably take for granted. (How do we know that Ctrl+Alt+T will open a terminal? Why does `touch` make a file? How do I quit vim?)<p>So, the takeaway from that experience, and motivation #2 for me, is that computers can be inaccessible in surprising ways.
I have a deep respect and appreciation for Linux, and I want others to see things the same way, so anything I can do to make it easier for others to "self-serve" or "bootstrap" their way to my level of understanding is worth doing to me.<p>So, with those two personal motivations outlined, I present to you, for your consideration, the Razer x Lambda Tensorbook. A laptop with a no-compromise approach to speeds-and-feeds, shipping with OEM support for Ubuntu.<p>sincerely,
Vinay. Product Marketing @ Lambda
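The FP16-vs-FP32 trade-off the post mentions is easy to make concrete. A minimal NumPy sketch (illustrative only; the post refers to a TensorFlow runtime flag, which is not shown here) of why half precision matters for inference:

```python
import numpy as np

# The same weight tensor stored at two precisions: half precision
# halves the memory footprint, and on supporting hardware roughly
# doubles arithmetic throughput.
weights_fp32 = np.random.rand(1024, 1024).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4194304 bytes: 4 bytes per element
print(weights_fp16.nbytes)  # 2097152 bytes: 2 bytes per element

# The trade-off: FP16 carries only ~3 decimal digits of precision,
# so values round to the nearest representable half-precision float.
x = np.float32(0.1234567)
print(np.float16(x))
```

For many trained CNNs the rounding error is tolerable at inference time, which is why flipping that one flag was a cheap win to argue for.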
Show HN: AV1 and WebRTC
Show HN: Add live runnable code to your dev docs
Hi HN community,<p>I'm Vasek, co-founder and CEO of Devbook [0]. Devbook is an SDK that you add to your docs website, and then every time a user visits your dev docs, we spin up a VM just for that user. The VM is ready in about 18-20 seconds. We haven't had enough time to work on optimization, but from our early tests we are fairly confident we can get this down to about 1-2 seconds.<p>In the VM you can run almost anything. Install packages, edit & save files, run binaries, services, etc.<p>You as a documentation owner have full control over the VM. We give you full access to the filesystem, shell, stdout, and stderr. You don't have to worry about any infrastructure management. It's just one line of code on your frontend.<p>On the backend, the VM is a Firecracker microVM [1] with our own simple orchestrator/scheduler built on top that just gets the job done.
We chose Firecracker for 4 reasons:<p>* (1) its security, in combination with the Firecracker jailer<p>* (2) its snapshotting capabilities<p>* (3) quick boot times<p>* (4) the option to oversubscribe the underlying server's resources<p>This allows you to create a whole new set of interactions between your dev docs and a developer visiting them.
We've had users building coding playgrounds [2] to show how their SDK works, or adding embedded terminals to a landing page [3] to show how their CLI works.<p>The way Devbook works is that you use our frontend SDK [4] on your website. The SDK pings our backend and we boot up a VM. The VMs are ephemeral and get destroyed after a while of not getting pinged.
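The ping-based lifecycle described above can be sketched as a lease with a TTL. This is a hypothetical illustration, not Devbook's actual implementation: each ping refreshes a VM's deadline, and a periodic sweep destroys VMs whose deadline has passed.

```python
import time


class VMReaper:
    """Hypothetical sketch of ping-based VM expiry: each ping extends
    a lease, and sweep() destroys VMs that stopped pinging."""

    def __init__(self, ttl_seconds=60, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self.deadlines = {}         # vm_id -> expiry timestamp

    def ping(self, vm_id):
        # The frontend SDK pings periodically; each ping extends the lease.
        self.deadlines[vm_id] = self.clock() + self.ttl

    def sweep(self):
        # Destroy (here: just forget) every VM whose lease has lapsed.
        now = self.clock()
        expired = [vm for vm, t in self.deadlines.items() if t <= now]
        for vm in expired:
            del self.deadlines[vm]
        return expired
```

With a fake clock: ping "vm-1" at t=0 with a 10s TTL, advance to t=11 without further pings, and `sweep()` reports it expired while a VM pinged at t=5 survives.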
You can predefine what the VM filesystem will look like through our CLI via a simple Dockerfile [5].
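As a hypothetical illustration of predefining the filesystem (the exact format devbookctl expects is documented in the linked repo [5]; this is just an ordinary Dockerfile sketch), the image might preinstall tooling and seed the files visitors will see:

```dockerfile
FROM ubuntu:20.04

# Preinstall whatever the docs' code samples need.
RUN apt-get update && apt-get install -y nodejs npm curl

# Seed the filesystem visitors will see in the embedded editor/terminal.
WORKDIR /home/user
COPY examples/ ./examples/
```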
We also have an open-source UI library for components like a terminal, file system explorer, or code editor [6].<p>The need for Devbook came from our own frustration with dev docs. It has always felt strange that dev docs contain so much code but none of it is actually runnable. You as a developer have to set up full environments to see how the API works and get a deeper understanding.<p>We are very early, so we don't offer self-serve for now. A bit of manual work is still required when we are onboarding new customers.
We are looking for a specific use case that would make our go-to-market strategy much easier. It feels like the product we offer is way too general. We basically say "here's a whole computer, have fun".<p>I'd love to know what you think about it. I'll hang out here and I'm happy to answer your questions!<p>[0] <a href="https://usedevbook.com/" rel="nofollow">https://usedevbook.com/</a><p>[1] <a href="https://github.com/firecracker-microvm/firecracker" rel="nofollow">https://github.com/firecracker-microvm/firecracker</a><p>[2] <a href="https://app.banana.dev/docs/carrot?ref=navmenu" rel="nofollow">https://app.banana.dev/docs/carrot?ref=navmenu</a><p>[3] <a href="https://runops.io/" rel="nofollow">https://runops.io/</a><p>[4] <a href="https://github.com/devbookhq/sdk" rel="nofollow">https://github.com/devbookhq/sdk</a><p>[5] <a href="https://github.com/devbookhq/devbookctl" rel="nofollow">https://github.com/devbookhq/devbookctl</a><p>[6] <a href="https://github.com/devbookhq/ui" rel="nofollow">https://github.com/devbookhq/ui</a>
Show HN: Monocle – bidirectional code generation library
I just published a bidirectional code generation library. AFAIK it's the first of its kind, and it opens up a lot of possibilities for cool new types of dev tools. The PoC is for Ruby, but the concept is very portable. <a href="https://blog.luitjes.it/posts/monocle-bidirectional-code-generation/" rel="nofollow">https://blog.luitjes.it/posts/monocle-bidirectional-code-gen...</a>
Show HN: Discover the IndieWeb, one blog post at a time
Inspired by the "Ask HN: Share your personal site" last week, I finally got around to building a thing I've wanted for a long time: a simple website to randomly explore all the awesome personal blogs without having to subscribe to them all.<p>So this is what I built over the weekend. You click a button and indieblog.page will redirect you to a random post from a personal blog...<p>I'm happy to answer any questions you might have.
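The core mechanic (button click, random post, redirect) is tiny. A hypothetical sketch, with `POSTS` standing in for whatever index of crawled blog posts the site actually keeps:

```python
import random

# Placeholder index; the real site presumably builds this from blog feeds.
POSTS = [
    "https://example.com/blog/post-1",
    "https://example.org/notes/hello-world",
    "https://example.net/2022/weekend-project",
]


def random_redirect():
    """Return the (status, headers) pair for an HTTP 302 redirect
    to a randomly chosen indexed post."""
    target = random.choice(POSTS)
    return 302, {"Location": target}
```

Any web framework can wire this to a route; the browser follows the `Location` header and the visitor lands on a random blog post.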