The best Hacker News stories from the past day
Latest posts:
Osquery: An sqlite3 virtual table exposing operating system data to SQL
Does offering ChatGPT a tip cause it to generate better text?
Show HN: Nekoweb – a retro static web hosting
Microsoft is driving users away
Show HN: Reverse-Engineering a Switch Lite with 1,917 wires
Hey Hackers. This is a project I solo-developed that turns completed PCB assemblies into an easy-to-use boardview with accompanying boardscans. There are easier and better ways of doing this, but this is an experiment in doing it as cheaply as possible, with the highest quality and the lowest chance of errors. The technical details are in the link.

Most public boardviews are almost entirely the result of industrial espionage, apart from a few encrypted, subscription-based software platforms that provide extensive access. The output of this process is released as donationware; my main concern is that even if it were released as a low-cost purchase, there is a very strong culture of sharing this type of information at no cost. I would like a more sophisticated suggested-donation system that adapts to the user's country, but I wasn't able to find a good solution.

In terms of 'good startup ideas', I don't think this is one of them. The very high level of soldering skill required makes it difficult to scale, and the prevailing piracy culture makes it challenging to monetize. My main advantage is that costs are very low now that I have the entire thing working. Other than forging ahead at a loss and hoping for the best, or pivoting hard to leverage the imaging technology, I'm not sure what other options I have. It feels too complicated and repetitive for short-form video content. If you have any feedback, questions, suggestions, etc., I'd love to hear them.
Selfish reasons to want more humans
Earth just experienced its hottest 12 months in recorded history
Hallucination is inevitable: An innate limitation of large language models
Certain dogs are capable of learning the names for more than 100 different toys
A former Gizmodo writer changed name to 'Slackbot', stayed undetected for months
Generative Models: What do they know? Do they know things? Let's find out
Meta's new LLM-based test generator
Gemma.cpp: lightweight, standalone C++ inference engine for Gemma models
GPT in 500 Lines of SQL
Google helped destroy adoption of RSS feeds (2023)
Institutions try to preserve the problem to which they are the solution
Show HN: OK-Robot: open, modular home robot framework for pick-and-drop anywhere
Hi all, excited to share our latest work, OK-Robot, an open and modular framework for navigation and manipulation with a robot assistant in practically any home, without having to teach the robot anything new. You can simply unbox the target robot, install OK-Robot, give it a "scan" (think a 60-second iPhone video), and start asking the robot to move arbitrary things from A to B. We have already tested it in 10 home environments in New York City, and one environment each in Pittsburgh and Fremont.

We based everything on the current best machine learning models, so things don't quite work perfectly all the time, and we are hoping to build it out together with the community. Our code is open: https://github.com/ok-robot/ok-robot and we have a Discord server for discussion and support: https://discord.gg/wzzZJxqKYC If you are curious what works and what doesn't, take a quick look at https://ok-robot.github.io/#analysis or read our paper for a detailed analysis: https://arxiv.org/abs/2401.12202

P.S.: While the code is open, the project unfortunately isn't fully open source, since one of our dependencies, AnyGrasp, has a closed-source, educational license. Apologies in advance, but we used it because it was the best grasping model we could get access to.

Would love to hear more thoughts and feedback on this project!
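To make the "scan, then ask the robot to move things from A to B" workflow above concrete, here is a minimal sketch of what a driver script for that loop could look like. This is not the OK-Robot API: the class, method, and file names (HomeRobotClient, load_scan, PickAndDropTask, living_room_scan.r3d) are hypothetical placeholders chosen only to illustrate the steps described in the post; see the linked repository for the real entry points.

    # Hypothetical sketch of the scan -> query -> pick-and-drop loop described above.
    # None of these names come from the OK-Robot codebase; they only illustrate the workflow.
    from dataclasses import dataclass


    @dataclass
    class PickAndDropTask:
        object_query: str       # natural-language description of the thing to move
        destination_query: str  # natural-language description of where to put it


    class HomeRobotClient:
        """Placeholder for a client talking to the robot and its perception stack."""

        def load_scan(self, scan_path: str) -> None:
            # Build a semantic map of the home from a short phone scan (e.g. a 60 s video).
            print(f"building semantic map from {scan_path}")

        def run(self, task: PickAndDropTask) -> None:
            # 1. Look up the queried object and destination in the semantic map.
            # 2. Navigate to the object, grasp it, navigate to the destination, release it.
            print(f"moving '{task.object_query}' to '{task.destination_query}'")


    if __name__ == "__main__":
        robot = HomeRobotClient()
        robot.load_scan("living_room_scan.r3d")
        robot.run(PickAndDropTask("the blue mug", "the kitchen counter"))

The point of the sketch is only the shape of the interaction: one up-front scan to build a map, then repeated natural-language pick-and-drop requests against that map.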