The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: A small HyperCard stack running as a PWA

In my early programming years, I went from BASIC to HyperCard, then learned C when I couldn't make HyperCard do everything I wanted. Plenty of folks have pointed out how the lack of native color support doomed HyperCard, but I think it was really over when the web took off and replaced everything in the "personal content" space from underneath. So I decided to see if the idea of HyperCard would work as a web app. There are some missing pieces; it's not perfectly compatible. You can, however, make stacks online and let others see them. It's free, with no ads, no personal information collected, and no tracking; just a fun project.

Show HN: OSS Database, A crowdsourced database of Open Source alternatives

Show HN: GPT-3 powered Ouija spirit board that moves your mouse

Hi HN! I've been tinkering with this mini-game / horror experience for a while. I hope it creeps you out!

You can go into settings to toggle between two different chatbots: a scripted experience (with achievements to unlock) and a more versatile GPT-3 mode. Let me know what you think! :)

There's also a toggle to show how the mouse magic is made.

Source is available here: https://github.com/baobabKoodaa/ouija

Show HN: Usage, Cut your AWS Bill by 50%+ in 5 Minutes

Hi HN community! [Direct link: www.usage.ai]

I’m Kaveh, founder and CEO of Usage, and am excited to show you Usage, an app that helps you slash your AWS EC2 bill by 50% in ~5 minutes by trading reservations. As of today, Usage is in General Availability and any AWS user can use it. It works by creating a limited-access IAM role (ReadOnly plus the ability to manage reservations) in your AWS account(s).

The AWS console interface has made it hard for companies to optimize their AWS spend. After years of working for different companies that use AWS, I still find it difficult to understand how much money I’m spending on AWS. I don’t know who owns which instances, how our commitments are saving us money (RIs, SPs, EDPs), or which instances can be sized down (or switched to spot).

At Usage, we are building a web-based app that keeps you in charge of your AWS while minimizing your bill. No code changes, no moving your AWS account or instances around, and no downtime. We’ve built:

1) Real-Time RI/SP Recommendations: See which instances are uncovered by your SPs and/or RIs and get them covered with a single button tap. Instant savings.

2) RI Sell Recommendations: RIs that are no longer utilized are highlighted and sold instantly. No more worrying about unutilized RIs and no more needing to forecast your compute needs.

3) Consolidated View: View your EC2 instances and RIs/SPs across all your AWS accounts in a single space. No more switching between AWS accounts.

4) Teams and Audit Log: Add as many users as you’d like to your Usage dashboard, and see who approved which recommendations.

We built Usage in ReactJS, Python, and Java, and along the way we built our own internal accounting system to keep track of customer savings. We plan to eventually release an open-source version of Usage.

Our business model is 20% of the savings we find you. We only make money when we save you money. We bill monthly and have longer-term enterprise plans available.

We take privacy extremely seriously. Your data is always protected both at rest and in transit. Additionally, Usage never collects or stores sensitive information; it only collects metadata such as CPU utilization, launch time, instance configuration, region, etc. You can read our full privacy policy here: www.usage.ai/policy/

We are confident we can deliver an AWS cost savings experience that is meaningfully better than other tools'. If you use AWS, please give it a shot at www.usage.ai and let us know.

Let me know what you think! Ask me anything!
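For readers curious what a "ReadOnly plus ability to manage reservations" IAM role looks like in practice, here is a minimal sketch in Python that builds such a policy document. The action names are real AWS IAM actions, but the exact permission set Usage requests is an assumption for illustration, not taken from their documentation.

```python
import json

# Hypothetical limited-access policy: read-only visibility into EC2 and
# cost data, plus the specific actions needed to buy and resell Reserved
# Instances. This is a sketch of the shape of such a role, not Usage's
# actual policy.
read_only_statement = {
    "Effect": "Allow",
    "Action": ["ec2:Describe*", "ce:Get*"],  # inventory + cost visibility
    "Resource": "*",
}
manage_reservations_statement = {
    "Effect": "Allow",
    "Action": [
        "ec2:PurchaseReservedInstancesOffering",
        "ec2:CreateReservedInstancesListing",   # list an RI for resale
        "ec2:CancelReservedInstancesListing",
    ],
    "Resource": "*",
}
policy = {
    "Version": "2012-10-17",
    "Statement": [read_only_statement, manage_reservations_statement],
}
print(json.dumps(policy, indent=2))
```

Scoping the role to named actions like this (rather than attaching broad admin policies) is what makes the "limited-access" claim auditable: the account owner can read exactly what the third party is allowed to do.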

Show HN: Program Synthesis for Ruby

Show HN: I made a website to search for half loaves of bread

Show HN: Razer x Lambda Tensorbook

Hi all, long time lurker, first time poster.

I want to share with you all something we've been working on for a while at Lambda: the Razer x Lambda Tensorbook: https://www.youtube.com/watch?v=wMh6Dhq7P_Q

But before I tell you about it, I want to make this all about me, because I built this for me.

See, while I'm genuinely interested in hearing what the community thinks, as this is the culmination of a lot of effort from a lot of people across so many different fields (seriously, the number of folks across manufacturing, engineering, design, logistics, and marketing who had to work together to launch this is nuts), I really just want to tie the larger motivations for the Tensorbook as a product back to a personal narrative, to explain why I'm so proud.

So, flash back to 2018: I'm a hardware engineer focusing on the compute system at Lyft's autonomous vehicle (AV) program, Level 5 (L5). Here was a project that would save lives, that would improve the human condition, that was all ready to go. I saw my role as coming in to productize, to take what was close to the finish line and get it over. The disappointment was pretty brutal when I realized just how wrong I was.

It's one thing to nod along when reading Knuth write "premature optimization is the root of all evil"; it's another to experience it firsthand.

At Lyft L5 I thought I would be integrating specialized inference accelerators (Habana, Groq, Graphcore, etc.) into the vehicle compute system. Instead, the only requirement that mattered org-wide was: "Don't do anything that slows down the perception team." Forget testing silicon with the potential to reduce power requirements by 10x; I was lucky to get a willing ear to hear my case for changing a flag in the TensorFlow runtime to perform inference at FP16 instead of FP32.

Don't get me wrong, there were a multitude of other difficult technical challenges to solve outside of the deep learning ones that were gating, but I had underestimated just how not-ready the CNNs for object detection and classification were. Something I thought was a solved problem was very much not, and it ultimately resulted in my team and others building a 5,000-watt monster of a server (plus power distribution, thermals, chassis, and so on) that took up an entire rear row of seating. I'm happy to talk about that experience in the comments because I have a lot of fond memories from my time there.

Anyway, my takeaway from Lyft, and my first motivation here, is that in a deep learning engineer's mind there is no such thing as over-provisioning or too much compute. Anything less than the most possible is a detriment to their workflow. I still truly believe AVs will save lives; so by extension, enabling deep learning engineers enables AVs, which enables improvement to the human condition. Transitive property. :thumbsup:

Moving on, my following role in industry was characterized by working closely with the least technical people I have ever had the opportunity to work with in my life. And I mean opportunity genuinely, because doing so gave me so much perspective on the things that you and I here probably take for granted. (How do we know that Ctrl+Alt+T will open a terminal? Why does `touch` make a file? How do I quit vim?)

The takeaway from that experience, and motivation #2 for me, is that computers can be inaccessible in surprising ways. I have a deep respect and appreciation for Linux, and I want others to see things the same way, so anything I can do to make the process of "self-serving" or "bootstrapping" to my level of understanding easier is worth doing to me.

So, with those two personal motivations outlined, I present to you, for your consideration, the Razer x Lambda Tensorbook: a laptop with a no-compromise approach to speeds-and-feeds, shipping with OEM support for Ubuntu.

Sincerely, Vinay. Product Marketing @ Lambda
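As an aside on the FP16-vs-FP32 flag mentioned above: the core trade is that half-precision halves the bytes per weight and activation (cutting memory footprint and bandwidth, and on supporting hardware roughly doubling math throughput) at the cost of precision. A minimal NumPy sketch of that trade, not the TensorFlow flag itself:

```python
import numpy as np

# A toy "layer" of weights stored at FP32 vs FP16. Casting halves the
# memory footprint, which is the bandwidth/throughput win the flag buys.
weights_fp32 = np.random.randn(1024, 1024).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4 MiB: 1024 * 1024 * 4 bytes
print(weights_fp16.nbytes)  # 2 MiB: 1024 * 1024 * 2 bytes

# The cost: FP16 has a 10-bit mantissa, so round-tripping loses precision.
max_err = np.max(np.abs(weights_fp32 - weights_fp16.astype(np.float32)))
print(f"max cast error: {max_err:.6f}")
```

For inference (as opposed to training) that rounding error is usually tolerable, which is why running inference at FP16 is such a common first optimization.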

Show HN: AV1 and WebRTC
