The best Hacker News stories from Show from the past day

Latest posts:

Show HN: My solar-powered, ePaper digital photo frame

This is version 2 of my ongoing heirloom device project: a digital photo frame built with the goal of lasting longer than your typical gadget.

Part of me would like to commercialize a polished version of this product, but the more I speak to people, the more convinced I become that I belong to a very small minority.

Show HN: A little web server in C

A little web server written in C for Linux.

Supports CGI and reverse proxying. Single-threaded, using I/O multiplexing (select).
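The single-threaded select() design mentioned above can be sketched in a few lines. This is an illustrative Python version of the pattern, not the author's C code: one process watches the listening socket and all client sockets at once, and handles whichever becomes readable.

```python
# Illustrative sketch of single-threaded I/O multiplexing with select(),
# the pattern the server above describes (not the author's actual C code).
import select
import socket

def serve(host="127.0.0.1", port=8080):
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((host, port))
    listener.listen()
    clients = []
    while True:
        # Block until the listener or any client socket is readable.
        readable, _, _ = select.select([listener] + clients, [], [])
        for sock in readable:
            if sock is listener:
                conn, _ = sock.accept()
                clients.append(conn)
            else:
                data = sock.recv(4096)
                if not data:
                    clients.remove(sock)
                    sock.close()
                    continue
                # Send a trivial HTTP response; a real server would parse
                # the request and dispatch to static files, CGI, or a proxy.
                sock.sendall(
                    b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
                clients.remove(sock)
                sock.close()
```

Because a single select() call multiplexes every socket, no threads or forks are needed; the trade-off is that any slow handler blocks all other clients.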

Show HN: Oblivus GPU Cloud – Affordable and scalable GPU servers from $0.29/hr

Greetings HN!

This is Doruk from Oblivus, and I'm excited to announce the launch of our platform, Oblivus Cloud. After more than a year of beta testing, we're ready to offer a platform where you can deploy affordable, scalable GPU virtual machines in as little as 30 seconds: https://oblivus.com/cloud

- What sets Oblivus Cloud apart?

At the start of our journey we had two primary goals: to democratize high-performance computing and to make it as straightforward as possible. Maintaining GPU servers through major cloud providers can be expensive, with hidden fees adding to the burden, and the cloud can be overly complex for people who need powerful computing resources without deep cloud expertise. That's why we built a platform that offers affordable pricing, easy usability, and high-quality performance.

- Features

1. Fully customizable infrastructure that lets you switch between CPU and GPU configurations to suit your needs.
2. Transparent, per-minute pay-as-you-go pricing with no hidden fees, plus free data ingress and egress. (Pricing: https://oblivus.com/pricing/)
3. Optimized cost: only storage and IP addresses are billed while a virtual machine is shut down.
4. Each virtual machine comes with 10-40 Gbps public network connectivity.
5. NVMe ($0.00011/GB/hr) and HDD ($0.00006/GB/hr) storage, 3x replicated.
6. A variety of cutting-edge CPUs and 10 state-of-the-art GPU SKUs. (Availability: https://oblivus.com/availability/)
7. OblivusAI OS images come with ML libraries pre-installed, so you can start training your models right away without installing and configuring them yourself.
8. If you're working with a team, our organization feature simplifies billing: everyone in the organization shares one billing profile, so you don't need to keep track of multiple accounts.
9. No quotas or complex verification processes. Whether you're a company, an institution, or an individual researcher, you get full access to our infrastructure without limitations.
10. An easy-to-use API with detailed documentation for integrating with your own code.

- Pricing

Our pricing is affordable, transparent, and up to 80% cheaper than major cloud providers:

1. CPU-based virtual machines from $0.019/hour.
2. NVIDIA Quadro RTX 4000s from $0.27/hour.
3. Tesla V100s from $0.51/hour.
4. NVIDIA A40s and RTX A6000s from $1.41/hour.

We also offer 6 other GPU SKUs so you can size your workloads accurately and pay only for what you need. Say goodbye to hidden fees and unpredictable costs. If you represent a company, be sure to register for a business account to access even better rates.

- Promo Code

Join us in celebrating the launch of Oblivus Cloud by claiming $1 of free credit. That may sound small, but it's enough to get started: over 3 hours of computing on our most affordable GPU configuration, or over 50 hours on our cheapest CPU configuration. To redeem it, use the code HN_1 on the 'Add Balance' page after registration.

Register now at https://console.oblivus.com/register

- Quick Links

Website: https://oblivus.com/
Console: https://console.oblivus.com/
Company Documentation: https://docs.oblivus.com/
API Documentation: https://documenter.getpostman.com/view/21699896/UzBtoQ3e

If you have any questions, post them below and I'll be happy to help. You can also email me directly at doruk@oblivus.com!
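The free-credit runtime claims follow directly from the hourly rates quoted in the post; a quick arithmetic check:

```python
# Sanity check of the $1-credit runtime claims, using the hourly rates
# quoted in the post: $0.27/hr for the cheapest GPU SKU (Quadro RTX 4000)
# and $0.019/hr for the cheapest CPU-based VM.
CREDIT = 1.00     # promo credit, USD
GPU_RATE = 0.27   # cheapest listed GPU rate, USD/hour
CPU_RATE = 0.019  # cheapest listed CPU rate, USD/hour

gpu_hours = CREDIT / GPU_RATE  # about 3.7 hours, i.e. "over 3 hours"
cpu_hours = CREDIT / CPU_RATE  # about 52.6 hours, i.e. "over 50 hours"
```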

Show HN: Smol Developer – Human-Centric and Coherent Whole Program Synthesis

Show HN: Openlayer – Test, fix, and improve your ML models

Hey HN, my name is Vikas, and my cofounders Rish, Gabe, and I are building Openlayer: http://openlayer.com/

Openlayer is an ML testing, evaluation, and observability platform designed to help teams pinpoint and resolve issues in their models.

As ML engineers, we experienced the struggle of properly evaluating models, making them robust to the myriad unexpected edge cases they encounter in production, and understanding the reasons behind their mistakes. It was like playing an endless game of whack-a-mole with Jupyter notebooks and CSV files: fix one issue and another pops up. It shouldn't be this way. Error analysis is vital to establishing guardrails for AI and ensuring fairness across model predictions.

Traditional software testing platforms are designed for deterministic systems, where a given input produces an expected output. Since ML models are probabilistic, testing them reliably has been a challenge. What sets Openlayer apart from other companies in the space is our end-to-end approach to both the pre- and post-deployment stages of the ML pipeline. This "shift-left" approach emphasizes thorough validation before you ship, rather than relying solely on monitoring after you deploy. A strong pre-ship evaluation process means fewer bugs for your users, shorter and more efficient dev cycles, and lower chances of a PR disaster or having to recall a model.

Openlayer gives ML teams and individuals a suite of powerful tools to understand models and data beyond the typical metrics. The platform offers insights about the quality of your training and validation sets, your model's performance across subpopulations of your data, and much more. Each insight can be turned into a "goal." As you commit new versions of your models and data, you can see how your model progresses toward these goals, guarding against regressions you might otherwise have missed and continually raising the bar.

Here's a quick rundown of the Openlayer workflow:

1. Add a hook in your training / data ingestion pipeline to upload your data and model predictions to Openlayer via our API.
2. Explore insights about your models and data and create goals around them [1].
3. Diagnose issues with the help of our platform, using powerful tools like explainability (e.g. SHAP values) to get actionable recommendations on how to improve.
4. Track progress toward your goals over time with our UI and API, and create new ones to keep improving.

We've got a free sandbox for you to try out the platform today; you can sign up at https://app.openlayer.com/. We're also adding support for more ML tasks soon, so please reach out if your use case isn't supported and we can add you to a waitlist.

Give Openlayer a spin and join us in revolutionizing ML development for greater efficiency and success. Let us know what you think, or if you have any questions about Openlayer or model evaluation in general.

[1] A quick rundown of the categories of goals you can track:

- Integrity goals measure the quality of your validation and training sets.
- Consistency goals guard against drift between your datasets.
- Performance goals evaluate your model's performance across subpopulations of the data.
- Robustness goals stress-test your model with synthetic data to uncover edge cases.
- Fairness goals help you understand biases in your model on sensitive populations.
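Step 1 of the workflow (the upload hook) might look like the sketch below. The endpoint URL, payload shape, and auth header here are hypothetical stand-ins, not Openlayer's actual API; the point is only the shape of a pipeline hook that posts rows and predictions after each training run.

```python
import json
import urllib.request

def upload_predictions(api_url, token, records):
    """POST a batch of rows plus model predictions to an evaluation service.

    NOTE: `api_url`, the JSON payload shape, and the Bearer-token header
    are illustrative placeholders, not Openlayer's real API.
    """
    body = json.dumps({"rows": records}).encode("utf-8")
    req = urllib.request.Request(
        api_url,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# In a training pipeline, this would be called after evaluation, e.g.:
#   upload_predictions(url, token,
#                      [{"features": x, "label": y, "prediction": p}, ...])
```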

Show HN: Hat-syslog – Syslog Server with real time web UI

Show HN: Sortabase, a collaborative, visual database builder for communities

We built Sortabase to let communities collaborate on visual databases of the things they know and care about.

The fields of each database are defined by its moderators using a no-code drag-n-drop interface, and the resulting database is easy to search, filter, sort, and contribute to.

The platform is 100% free to use. Take a look; feedback is appreciated!

Show HN: Use ChatGPT, Bing, Bard and Claude in One App

Show HN: Willow – Open-source privacy-focused voice assistant hardware

As the Home Assistant project says, it's the year of voice!

I love Home Assistant and I've always thought the ESP BOX [0] hardware is cool. I finally got around to starting a project to use the ESP BOX hardware with Home Assistant and other platforms. Why?

- It's actually "Alexa/Echo competitive". Wake word detection, voice activity detection, echo cancellation, automatic gain control, and high-quality audio for $50 means that with Willow and the support of Home Assistant there are no compromises on looks, quality, accuracy, speed, or cost.
- It's cheap. With a touch LCD display, dual microphones, speaker, enclosure, buttons, etc., it can be bought today for $50 all-in.
- It's ready to go. Take it out of the box, flash it with Willow, put it somewhere.
- It's not creepy. Voice is either sent to a self-hosted inference server or commands are recognized locally on the ESP BOX.
- It doesn't hassle you or try to sell you things. If I hear "Did you know?" one more time from Alexa, I think I'm going to lose it.
- It's open source.
- It's capable. This is the first "release" of Willow, and I don't think we've even begun scratching the surface of what the hardware and software components are capable of.
- It can integrate with anything. The on-the-wire format is simple: recognized speech is sent as text via HTTP POST to whatever URI you configure. Send it anywhere, and do anything!
- It still does cool maker stuff. With 16 GPIOs exposed on the back of the enclosure, there are all kinds of interesting possibilities.

This is the first (and VERY early) release, but we're really interested to hear what HN thinks!

[0] https://github.com/espressif/esp-box
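The "HTTP POST to whatever URI you configure" integration above needs only a minimal receiver on the other end. This sketch assumes the request body is the plain recognized text; the post doesn't specify the actual payload format, so treat it as an illustration of the idea rather than Willow's wire protocol.

```python
# Minimal endpoint that could receive recognized-speech POSTs.
# ASSUMPTION: the request body is plain UTF-8 text; Willow's real payload
# format isn't specified in the post above.
import http.server

class SpeechHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        text = self.rfile.read(length).decode("utf-8", "replace")
        print(f"heard: {text}")  # dispatch to home automation here
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the console quiet
        pass

def run(port=8088):
    http.server.HTTPServer(("0.0.0.0", port), SpeechHandler).serve_forever()
```

Point the device at this URI and every recognized utterance arrives as one POST; from there you can forward it to Home Assistant or anything else.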

Show HN: Create unique avatars in any style for your apps and websites

Show HN: Free Planning Poker to estimate tasks with ease

Show HN: WhyBot, making GPT-4 question itself

Hi HN, we're John and Vish! We built WhyBot, a tool to help you deeply explore a question or topic. You ask a question, and WhyBot responds by building an ever-expanding knowledge graph: it recursively generates answers and follow-up questions. You can change its persona to change the flavor of the generations (try toddler mode!).

We originally built this for the AngelList Agent Hackathon (https://twitter.com/AqeelMeetsWorld/status/1650279974405042178?s=20) and got a lot of interest from folks asking to play around with it, so we thought it'd be fun to brush it up and release it as a web app. It's a work in progress, and we plan to add more features, such as saving, sharing, focusing on one branch, and potentially executing code.

We hope you enjoy playing around with it and would love to hear any of your feedback or thoughts.
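The recursive answer/follow-up loop described above can be sketched like this. `ask_llm` is a stubbed stand-in for the real GPT-4 calls (so the block is runnable offline); only the graph-building structure is meant to reflect the idea, not WhyBot's actual implementation.

```python
# Sketch of the recursive expand loop: answer a question, generate
# follow-up questions, and recurse to a depth limit, building a tree.
# `ask_llm` is a placeholder for an LLM API call, stubbed for illustration.
def ask_llm(prompt):
    # A real version would call GPT-4 here; we return canned text instead.
    return f"response to: {prompt}"

def expand(question, depth=0, max_depth=2, fanout=2):
    """Build one knowledge-graph node: an answer plus recursively
    expanded follow-up questions as child nodes."""
    answer = ask_llm(f"Answer briefly: {question}")
    node = {"question": question, "answer": answer, "children": []}
    if depth < max_depth:
        for i in range(fanout):
            follow_up = ask_llm(f"Follow-up #{i + 1} about: {question}")
            node["children"].append(
                expand(follow_up, depth + 1, max_depth, fanout))
    return node
```

With `fanout=2` and `max_depth=2` this yields a 7-node tree (1 + 2 + 4); the web app effectively keeps growing such a tree on demand, one branch at a time.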

Show HN: I built my first Cyberdeck
