The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: To prevent dry eyes and back pain, I created a macOS app
In 2019, I experienced eye soreness and back pain for a while because I was constantly working long hours in front of my 16-inch MacBook without any rest.

I decided to do something to change that. I'm not a fan of the Apple Watch or smartbands, so the first thing I did was look for reminder software in the App Store to remind me to take a break, but none of it was smart enough for my needs. I wanted software that could automatically tell whether I was working, rather than requiring me to set an alarm manually, and that would automatically extend my available working time after I stepped away to use the bathroom or get a coffee.

So I created Eye Monitor, an automatic reminder tool. It judges whether you are using the computer by watching mouse and keyboard activity. (This means that when a user is watching YouTube videos, Eye Monitor considers the computer idle; I haven't found a solution for that yet.) As you keep using the computer, your fatigue value increases; after a period of rest, it decreases automatically. When your fatigue value reaches the threshold you set, it triggers a reminder (via the Dock icon, the status bar, a notification, a full-screen pop-up window, etc.).

After a year of iteration, Eye Monitor now has a chart showing your usage for the day. Users can customize the fatigue threshold, rest duration, reminder interval, reminder style, and more, and can even customize the notification text (mine is "All work and no play makes Jack a dull boy") or upload a favorite picture as the wallpaper of the full-screen pop-up window. (Not so useful, but I like it.)

I like to set the reminder interval very small, like 1 minute, so that when I dismiss a reminder, it reappears a minute later until I decide to take a break.

This software is a bit like a nagging mom, persistently reminding you to rest. I hope you like it. Here is the App Store URL: https://apps.apple.com/cn/app/eye-monitor/id1527031341?l=en&mt=12
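The fatigue mechanic described above is easy to picture as a small accumulator. Here is a minimal Python sketch of that idea; it is illustrative only, and the rates, threshold, reset behavior, and activity probe are all assumptions, not Eye Monitor's actual values or code:

```python
import time

# Illustrative fatigue accumulator: fatigue rises while there is input
# activity and decays while idle. All constants are made-up values.
FATIGUE_PER_ACTIVE_SEC = 1.0   # assumption
RECOVERY_PER_IDLE_SEC = 2.0    # assumption: rest pays fatigue back faster
THRESHOLD = 45 * 60            # assumption: roughly "45 minutes of work"

def run(poll_interval=1.0, input_active=lambda: False):
    """Poll an activity source; fire a reminder when fatigue crosses the threshold.

    `input_active` stands in for real mouse/keyboard monitoring
    (a hypothetical probe; on macOS this might be an event tap).
    """
    fatigue = 0.0
    while True:
        if input_active():
            fatigue += FATIGUE_PER_ACTIVE_SEC * poll_interval
        else:
            fatigue = max(0.0, fatigue - RECOVERY_PER_IDLE_SEC * poll_interval)
        if fatigue >= THRESHOLD:
            print("Time to rest!")   # the app uses Dock/status-bar/notification cues
            fatigue = 0.0            # assumption: reset after the reminder fires
        time.sleep(poll_interval)

# Usage: run(input_active=some_real_activity_probe)
```

A scheme like this also makes the YouTube limitation above concrete: a probe based purely on input events reads passive video watching as idle time.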
Show HN: Grid.js – Advanced table library that works everywhere (2020)
Show HN: ModelRunner – open source, speech-enabled data management platform
Warning: this whole post is a blatant plug for my open-source project https://github.com/etiennesillon/ModelRunner

There is a lot of discussion around no-code platforms and why developers don't like them. My view is that they can be very useful for quickly getting through the boring parts of a project, like creating master data management screens. So I've built my own version, which interprets models at run time and, it turns out, understands natural language queries too!

Hi, my name is Etienne. I love coding and I've been doing it for a few decades now, so I'd rather focus on code that keeps me interested. Unfortunately, I find that there is always a lot to code before I get to the interesting stuff. So, like every other half-decent programmer, I've always tried to automate as much as possible and build reusable libraries by adding levels of indirection and parameters.

I've been doing this for so long now that my code has become 'hyper' parameterised, so much so that I had to store all the parameters in configuration files. These evolved into complete models, which are basically a mix between ER models and UML diagrams: they include entities and attributes but also support all UML relationships (plus back references) as well as formulas in object notation like "Product.Name" and "Sum(OrderLines.Amount)". I've even extended the idea to include workflow models that specify what happens when an object is created, updated or deleted, or when a prerequisite condition becomes true.

To simplify managing the models, I've written a graphical editor, starting with Eclipse GEF, but since I like to reinvent the wheel, I moved to plain HTML5/JS. To make it even easier, I've added Google Speech Recognition, so I can now design models just by talking to Chrome, and when I'm done, I can deploy them with one click or by saying something like 'please deploy the application'. This creates a schema for the data, and the 'meta' application is then ready to offer standard, web-based data management screens.

At this stage you're probably thinking, "Great, you can design and deploy data-driven apps with your voice. So what?"

OK, let's move on to something more interesting then: what the 'meta' app can do because it has access to all the information in the model at run time, for example, manipulating the data using natural language queries.

This works because having access to the semantics in the model bridges the current gap between machine-learning-based natural language understanding systems, which are very flexible but mostly ignorant of the domain model, and old-fashioned back-end systems with very rigid APIs.

You can find a more detailed discussion here: https://modeling-languages.com/modelrunner-open-source-no-code-nlu-voice-modeling-data-platform/

So I've also added Google Speech Recognition to the 'meta' application, and I can now just speak to it and tell it to "create a city called Melbourne and set postcode to 3000 and set notes to the most liveable city in the world" or "get me a list of customers living in Sydney aged 40", which I think is pretty cool and almost justifies all the hours and late nights I've spent coding it!

I think this has pretty obvious applications, like being able to manage your data on the go by just talking to your phone instead of trying to use a GUI on a small screen.

So, I highly recommend the parameterised indirection approach, but if you don't have a lot of time to write your own code, you might want to have a look at mine. It's all open source under an MIT license: https://github.com/etiennesillon/ModelRunner

Or, if you just want to try it or watch a demo, head to https://modelrunner.org

Now, it's still very much a work in progress, and I've spent more time on the core engine than on the UI, so if you try to break it, you probably will! But if you give it a try, please let me know how it went!

Thank you!
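To make the idea concrete, here is a toy Python sketch of how a natural-language command can be interpreted against a model that exists as data at run time. This is purely illustrative and is not ModelRunner's actual code; the entity and attribute names are assumptions:

```python
import re

# A toy runtime "model": entity definitions loaded as data, not code.
# The "City" entity and its attributes are made up for illustration.
MODEL = {
    "City": {"attributes": ["Name", "Postcode", "Notes"]},
}

store = {"City": []}

def handle(command: str) -> dict:
    """Interpret a 'create ... and set ... to ...' style command
    against the model, very roughly."""
    m = re.match(r"create a (\w+) called ([\w ]+?)(?: and (.*))?$", command, re.I)
    if not m:
        raise ValueError(f"could not parse: {command!r}")
    entity, name, rest = m.group(1).title(), m.group(2), m.group(3) or ""
    if entity not in MODEL:
        raise ValueError(f"unknown entity: {entity}")
    obj = {"Name": name}
    # Each "set <attr> to <value>" clause maps onto a model attribute.
    for attr, value in re.findall(r"set (\w+) to ([\w .]+?)(?= and set |$)", rest, re.I):
        if attr.title() not in MODEL[entity]["attributes"]:
            raise ValueError(f"unknown attribute: {attr}")
        obj[attr.title()] = value
    store[entity].append(obj)
    return obj

print(handle("create a city called Melbourne and set postcode to 3000 "
             "and set notes to the most liveable city in the world"))
# {'Name': 'Melbourne', 'Postcode': '3000', 'Notes': 'the most liveable city in the world'}
```

The point the post makes is visible even in this toy: because "Postcode" exists in the model at run time, the interpreter can validate and apply the clause without a hand-written endpoint for cities.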
Show HN: GraphQL Client in the Terminal
Show HN: Plasmo – a framework for building modern Chrome extensions
Hey HN, we're excited to have people try out our framework! When we built out a Chrome extension earlier this year, we noticed that the config was too imperative. You had to constantly tell Chrome via the manifest.json file where your files were, what your permissions should be, etc.

So we thought it might be interesting to build a more declarative framework. When we built a proof of concept, we enjoyed working with it and decided to invest more time into making it usable and adding more features.

We're still pretty early in building it out, and there's a bunch more we want to add, but this feels like a good time to showcase it and hear what people think!
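For context on the imperative config the post is describing, here is a small Python sketch that expands a declarative description into the manifest.json Chrome actually requires. The Manifest V3 keys are standard Chrome extension fields; everything else (the `EXTENSION` shape, the file names) is a made-up illustration, not Plasmo's actual mechanism:

```python
import json
from pathlib import Path

# Hypothetical declarative description; a framework like the one
# described might infer this from source files rather than have you
# write it by hand.
EXTENSION = {
    "name": "my-extension",
    "version": "0.0.1",
    "popup": "popup.html",            # assumption: one popup page
    "content_scripts": ["content.js"],
    "permissions": ["storage"],
}

def build_manifest(ext: dict) -> dict:
    """Expand the declarative description into the imperative
    manifest.json Chrome requires (standard Manifest V3 keys)."""
    return {
        "manifest_version": 3,
        "name": ext["name"],
        "version": ext["version"],
        "action": {"default_popup": ext["popup"]},
        "content_scripts": [
            {"matches": ["<all_urls>"], "js": ext["content_scripts"]}
        ],
        "permissions": ext["permissions"],
    }

Path("manifest.json").write_text(json.dumps(build_manifest(EXTENSION), indent=2))
```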
Show HN: Fast Deep Reinforcement Learning Course
I worked on this applied Deep Reinforcement Learning course for the better part of 2021. I made a Datacamp course [0] before, and that served as my inspiration to make an applied Deep RL series.

Normally, Deep RL courses teach a lot of mathematically involved theory. You get the practical applications near the end (if at all).

I have tried to turn that on its head. In the top-down approach, you learn practical skills first and go deeper later. This is much more fun.

This course (the first in a planned multi-part series) shows how to use the Deep Reinforcement Learning framework RLlib to solve OpenAI Gym environments. I provide a big-picture overview of RL and show how to use the tools to get the job done. This approach is similar to learning Deep Learning by building and training various deep networks using a high-level framework such as Keras.

In the next course in the series (open for pre-enrollment), we move on to solving real-world Deep RL problems using custom environments and the various tricks that make the algorithms work better [1].

The main advantage of this sequence is that these practical skills can be picked up fast and used in real life immediately; the mathematical details can be picked up later. RLlib is the industry standard, so you won't need to change tools as you progress.

This is the first course I have made on my own. I learned flip-chart drawing to illustrate the slides and notebooks. That was fun, considering how much I suck at drawing. I am using Teachable as the LMS, LaTeX (Beamer) for the slides, Sketchbook for illustrations, a Blue Yeti for audio recording, OBS Studio for screencasting, and Filmora for video editing. The captions are first auto-generated on YouTube and then hand-edited to fix errors and improve formatting. I do the majority of the production on Linux and then switch to Windows for video editing.

I released the course last month, and the makers of RLlib got in touch to show their approval. That's the best thing to happen so far.

Please feel free to try it and ask any questions. I am around and will do my best to answer them.

[0] https://www.datacamp.com/courses/unit-testing-for-data-science-in-python

[1] https://courses.dibya.online/p/realdeeprl
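For a flavor of what "using RLlib to solve OpenAI Gym environments" looks like, here is a minimal sketch using the ray 1.x-era API current around the course's release (newer RLlib releases changed the entry points); this is not code from the course itself:

```python
# Minimal RLlib quick start: train PPO on a classic Gym environment.
import ray
from ray.rllib.agents.ppo import PPOTrainer

ray.init()

trainer = PPOTrainer(
    env="CartPole-v0",                # any registered Gym env id works
    config={"framework": "torch", "num_workers": 1},
)

for i in range(5):                    # a few training iterations
    result = trainer.train()
    print(i, result["episode_reward_mean"])

ray.shutdown()
```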
Show HN: I restored Palm's webOS App Catalog, SDK and online help system
My pandemic project was to find, restore and organize scattered and archived remnants of Palm/HP's mobile webOS platform to help keep these delightful little devices alive.