The best Show HN stories from Hacker News from the past day


Latest posts:

Show HN: I built an open-source tool to make on-call suck less

Hey HN,

I am building an open-source platform to make on-call better and less stressful for engineers. The tool can silence noisy alerts and help with debugging and root cause analysis. We also want to automate the tedious parts of being on-call (running runbooks manually, answering questions on Slack, dealing with PagerDuty). Here is a quick video of how it works: https://youtu.be/m_K9Dq1kZDw

I hated being on-call for a couple of reasons:

* Alert volume: The number of alerts kept increasing over time, and it was hard to maintain the existing ones. This led to a lot of noisy, unactionable alerts. I have lost count of the number of times I got woken up by an alert that auto-resolved five minutes later.

* Debugging: Debugging an alert or a customer support ticket would require me to gain context on a service I might not have worked on before. These companies used many observability tools, which made debugging challenging, and there was always time pressure to resolve issues quickly.

There were some more tangential issues that used to take up a lot of on-call time:

* Support: Answering questions from other teams. A lot of the time these questions were repetitive and had been answered before.

* Dealing with PagerDuty: These tools are hard to use. For example, it was hard to schedule an override in PD or set up holiday schedules.

I am building an on-call tool that is Slack-native, since Slack has become the de facto tool for on-call engineers. We heard from a lot of engineers that maintaining good alert hygiene is a challenge.

To start, Opslane integrates with Datadog and can classify alerts as actionable or noisy. We analyze your alert history across several signals:

1. Alert frequency
2. How quickly alerts have resolved in the past
3. Alert priority
4. Alert response history

Our classification is conservative, and it can be tuned as teams gain confidence in the predictions. We want to make sure you aren't accidentally missing a critical alert. Additionally, we generate a weekly report based on all your alerts to give you a picture of your overall alert hygiene.

What's next?

1. Building more integrations (Prometheus, Splunk, Sentry, PagerDuty) to keep improving on-call quality of life
2. Helping make debugging and root cause analysis easier
3. Runbook automation

We're still pretty early in development and we want to make on-call quality of life better. Any feedback would be much appreciated!
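
To make the classification idea concrete, here is a rough Python sketch of how the four signals above could be combined into a noisy-vs-actionable decision. It is purely illustrative: the field names, thresholds, and scoring below are invented for this example and are not Opslane's actual model.

    # Illustrative heuristic only; not Opslane's implementation.
    from dataclasses import dataclass

    @dataclass
    class AlertHistory:
        fires_per_week: float             # 1. alert frequency
        median_minutes_to_resolve: float  # 2. how quickly it resolved in the past
        priority: int                     # 3. alert priority (1 = highest)
        acknowledged_ratio: float         # 4. fraction of past fires a human acted on

    def classify(h: AlertHistory) -> str:
        """Return 'noisy' or 'actionable', erring on the side of 'actionable'."""
        noise_signals = [
            h.fires_per_week > 10,             # fires very often
            h.median_minutes_to_resolve < 5,   # tends to auto-resolve quickly
            h.priority >= 4,                   # low priority
            h.acknowledged_ratio < 0.2,        # rarely acted on by a human
        ]
        # Conservative: only call it noisy when most signals agree.
        return "noisy" if sum(noise_signals) >= 3 else "actionable"

    print(classify(AlertHistory(25, 3, 4, 0.1)))   # -> noisy
    print(classify(AlertHistory(1, 45, 1, 0.9)))   # -> actionable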

Show HN: Ray Tracing in One Weekend v4.0.0

Since this is a major new release (three and a half years in development), I think this should be ok for Show HN.

This release has lots of new material, fixes, and updates across the three books in this series. All three books are online and free, with accompanying code available from GitHub. Enjoy!

Show HN: Word Slicer

Hi. I created a small word game and would love someone to give it a try and let me know what you think. Thanks!

Show HN: Preprocessor I've been working on for 4 years now

Hey there,

I'm here today to share a piece of software I've been working on for four years now. I'm not dedicated to it full time, as I need to make a living. My inspiration came when I started using Sass for real in production. I really appreciated the hierarchy of nested rules compared to the way vanilla CSS does it. The explicit nesting was easy to read and understand just by looking at it, and that was something I admired very much. I wondered why there were no HTML preprocessors as revolutionary as those for CSS and JavaScript. All the popular HTML preprocessors have one thing in common: they replace the angle brackets with something else (usually indentation) and then add some functionality on top. If the markup symbols are a problem for the experience of developing a visual structure, just replacing them with something else doesn't fix the problem; you are only swapping one marker character for another.

With that in mind, I started Pretty Markup. Not just replacing the clutter of angle brackets with something else, but removing it completely, very much inspired by Sass. No special characters are needed, except for quotes. The project is still in its early stages, but it's already usable. I have reached a point where it has a stable base to build on. Now I'm planning the layer of features that will make it as useful and revolutionary as Sass and TypeScript. It's important to note that I didn't start directly with Pretty Markup. I had a previous package called htmlpp-com-github-mopires. Yes, terrible name, but it was a start. Later I decided to make it more professional, with a friendlier name.

You can give it a shot if you have Node.js by installing it with:

npm install pretty-markup

I created a syntax highlighter for VS Code to attract more devs to it. You can use it by searching "pretty markup" on the extensions tab; right now it's the first result. The next step will be a package to create a basic starter project, something like the good old create-react-app.

Any feedback, suggestion, or even a contribution is very welcome.

Thank you very much for your attention,

Matheus

PS: The package available on RunKit is very old (and I don't know how to update it there), so I don't recommend testing it there.

Show HN: Semantic Grep – A Word2Vec-powered search tool

Much improved new version. Search for words similar to the query. For example, "death" will find "death", "dying", "dead", "killing"... Incredibly useful for exploring large text datasets where exact matches are too restrictive.
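
To illustrate the core idea, here is a minimal Python sketch of embedding-based matching. This is not the tool's own code: it uses a small pretrained GloVe model via gensim, and the 0.6 similarity threshold is an arbitrary choice for the example.

    # Minimal sketch of semantic matching with word embeddings; not the actual
    # Semantic Grep implementation. Requires: pip install gensim
    import re
    import gensim.downloader as api

    model = api.load("glove-wiki-gigaword-50")  # small pretrained word vectors

    def semantic_grep(lines, query, threshold=0.6):
        query = query.lower()
        if query not in model:
            return
        for line in lines:
            words = re.findall(r"[a-z]+", line.lower())
            if any(w in model and model.similarity(query, w) >= threshold for w in words):
                yield line

    text = ["The patient was dying.",
            "It was a sunny afternoon.",
            "A killing frost hit the crops."]
    for match in semantic_grep(text, "death"):
        print(match)   # likely matches the 'dying' and 'killing' lines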

Show HN: A personalised AI tutor with < 1s voice responses

TLDR: We created a personalised Andrej Karpathy tutor that can respond to questions about his YouTube videos in under one second (voice-to-voice). We do this using a voice-enabled RAG agent. See below for the demo link, GitHub repo, and blog write-up.

A few weeks ago we released the world's fastest voice bot, achieving 500ms voice-to-voice response times, including a 200ms delay waiting for a user to stop speaking.

After reaching the front page of HN, we thought about how we could take this a step further based on feedback we were getting from the community. Many companies were looking for a way to implement function calling and RAG with voice interfaces while retaining low enough latency. We couldn't find many online resources about how to do this that:

1. Allowed us to achieve sub-second voice-to-voice latency.
2. Were more flexible than existing solutions. Vapi, Retell, and Bland.ai are too opinionated, and since they just orchestrate APIs, they incur network latency at every step (see the requirement above).
3. Made the unit economics actually work at scale.

So we decided to create an implementation of our own.

Process:

As we mentioned in our previous release, if you want to achieve response times this low you need to make everything as local as possible. Below was our setup:

- Local STT: Deepgram model
- Local embedding model: Nomic v1.5
- Local vector DB: Turso
- Local LLM: Llama 3B
- Local TTS: Deepgram model

From our previous example, the only new components were:

- Local embedding model: We chose the Nomic Embed Text v1.5 model, which gave a processing time of roughly ~200ms.
- Vector DB: Turso offers local embedded replicas combined with edge DBs, which meant we were able to achieve 0.01-second read times. Pinecone also gave us good times of 0.043 seconds.

The above changes let us achieve sub-1-second voice-to-voice response times.

Application:

With Andrej Karpathy's announcement of Eureka Labs (https://eurekalabs.ai/), a new AI+education company, we thought we would create our very own personalised Andrej tutor.

Listen to any of his YouTube lectures; as soon as you start speaking, the video will pause and he will reply. Once your question has been answered, you can tell him to continue with the lecture and the video will automatically start playing again.

Demo: https://educationbot.cerebrium.ai/

Blog: https://www.cerebrium.ai/blog/creating-a-realtime-rag-voice-agent

GitHub repo: https://github.com/CerebriumAI/examples/tree/master/19-voice-rag-agent

For demo purposes:

- We used OpenAI for GPT-4o mini and embeddings (it's cheaper to run on a CPU than GPUs when running demos at scale). These changes add about ~1 second to the response time.
- We used ElevenLabs to clone his voice to make replies sound more realistic. This adds about 300ms to the response time.

The improvements that can be made, which we would like the community to contribute to, are:

- Embed the video frames as well, so that when you ask certain questions it can show you the relevant lecture slide for the same chunk it retrieved context from.
- Insert timestamps into the vector DB so that if a question will be answered later in the lecture, he can let you know.

This unlocks so many use cases in education, employee training, sales, etc. It would be great to see what the community builds!
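
As a rough illustration of the loop described above, here is a Python sketch of the stage ordering. Every function is a dummy stand-in, not a real Deepgram, Nomic, Turso, or Llama API; the comments only mark where the latency budget from the post would go.

    # Hypothetical sketch of the voice-to-voice RAG loop; all functions are stubs.
    import time

    def transcribe(audio):                      # local STT (Deepgram-class model)
        return "can you explain backpropagation?"

    def embed(text):                            # local embedding model (~200ms in the post)
        return [0.1, 0.2, 0.3]

    def vector_search(vec):                     # embedded vector DB lookup (~10ms in the post)
        return "Backpropagation applies the chain rule to compute gradients..."

    def generate(prompt):                       # local LLM (Llama-class)
        return "Backpropagation works by propagating the error backwards..."

    def synthesize(text):                       # local TTS; can start on the first tokens
        return b"<audio bytes>"

    def answer(audio_chunk):
        t0 = time.perf_counter()
        question = transcribe(audio_chunk)
        context = vector_search(embed(question))
        reply = generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
        speech = synthesize(reply)
        print(f"voice-to-voice: {time.perf_counter() - t0:.3f}s")
        return speech

    answer(b"<microphone chunk>")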

Show HN: Patchwork – Open-source framework to automate development gruntwork

Hi HN! We’re Asankhaya and Rohan and we are building Patchwork.

Patchwork tackles development gruntwork—like reviews, docs, linting, and security fixes—through customizable, code-first 'patchflows' using LLMs and modular code management steps, all in Python. Here's a quick overview video: https://youtu.be/MLyn6B3bFMU

From our time building DevSecOps tools, we experienced first-hand the frustrations our users faced as they built complex delivery pipelines. Almost a third of developer time is spent on code management tasks [1], yet backlogs remain.

Patchwork lets you combine well-defined prompts with effective workflow orchestration to automate as much as 80% of these gruntwork tasks using LLMs [2]. For instance, the AutoFix patchflow can resolve 82% of issues flagged by Semgrep using GPT-4 (or 68% with Llama-3.1-8B) without fine-tuning or providing specialized context [3]. Success rates are higher for text-based patchflows like PR Review and Generate Docstring, but lower for more complex tasks like Dependency Upgrades.

We are not a coding assistant or a black-box GitHub bot. Our automation workflows run outside your IDE via the CLI or CI scripts, without your active involvement.

We are also not an 'AI agent' framework. In our experience, LLM agents struggle with planning and rarely identify the right execution path. Instead, Patchwork requires explicitly defined workflows, which provide greater success and full control.

Patchwork is open source, so you can build your own patchflows, integrate your preferred LLM endpoints, and fully self-host, ensuring privacy and compliance for large teams.

As devs, we prefer to build our own 'AI-enabled automation' given how easy it is to consume LLM APIs. If you do too, try Patchwork via a simple 'pip install patchwork-cli' or find us on GitHub [4].

Sources:

[1] https://blog.tidelift.com/developers-spend-30-of-their-time-on-code-maintenance-our-latest-survey-results-part-3

[2] https://www.patched.codes/blog/patched-rtc-evaluating-llms-for-diverse-software-development-tasks

[3] https://www.patched.codes/blog/how-good-are-llms

[4] https://github.com/patched-codes/patchwork

[Sample PRs] https://github.com/patched-demo/sample-injection/pulls
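
To give a feel for what "explicitly defined workflows" means in contrast to agent-style planning, here is a toy Python sketch of a patchflow as an ordered list of steps. The step classes below are hypothetical stand-ins, not Patchwork's actual step API; see the GitHub repo [4] for the real thing.

    # Conceptual sketch only; not Patchwork's real API.
    from typing import Protocol

    class Step(Protocol):
        def run(self, state: dict) -> dict: ...

    class ScanCode:
        def run(self, state):
            # A real flow would run a scanner here and collect findings.
            state["findings"] = ["SQL query built by string concatenation in db.py"]
            return state

    class SuggestFix:
        def run(self, state):
            # A real flow would call an LLM endpoint with a well-defined prompt per finding.
            state["patch"] = "use a parameterized query instead of string concatenation"
            return state

    class OpenPullRequest:
        def run(self, state):
            print("would open a PR proposing:", state["patch"])
            return state

    def run_patchflow(steps, state=None):
        state = state or {}
        for step in steps:          # explicit ordering, no agent planning
            state = step.run(state)
        return state

    run_patchflow([ScanCode(), SuggestFix(), OpenPullRequest()])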

Show HN: Tiny Moon – Swift library to calculate the moon phase

Tiny Moon is a tiny Swift library that calculates the moon phase for any given date. It's super fast and works completely offline.

All of this started when I realized that we only have 12, sometimes 13, full moons in a year. That doesn't seem like very many.

I set out to build a macOS app to remind me when a full moon occurs, so that I could take a moment and step outside to appreciate it.

The macOS app I ended up creating can be found at https://apps.apple.com/us/app/tiny-moon/id6502374344, along with the source code [0], all powered by the Tiny Moon library.

I knew I wanted the app to work offline, so relying on a network request was out of the picture. Taking inspiration from SunCalc [1] and Moontool for Windows [2], I decided to create my own library and wrote Tiny Moon as a Swift package to power my app.

The app tries to be as minimal as possible, does what it does very fast, and works completely offline.

[0] https://github.com/mannylopez/TinyMoonApp

[1] https://github.com/mourner/suncalc

[2] https://www.fourmilab.ch/moontoolw/
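
For a sense of how a moon phase can be computed entirely offline, here is a simple approximation: measure the time elapsed since a known new moon and take it modulo the mean synodic month. The sketch is in Python for illustration only; it is not Tiny Moon's Swift API, and it ignores the orbital corrections a more precise library would apply.

    # Simple offline approximation, for illustration only (not Tiny Moon's code).
    from datetime import datetime, timezone

    SYNODIC_MONTH = 29.530588853  # mean length of a lunar cycle, in days
    REFERENCE_NEW_MOON = datetime(2000, 1, 6, 18, 14, tzinfo=timezone.utc)  # a known new moon

    def moon_age(date):
        """Days elapsed in the current lunar cycle (0 = new, ~14.77 = full)."""
        days = (date - REFERENCE_NEW_MOON).total_seconds() / 86400
        return days % SYNODIC_MONTH

    def phase_name(age):
        names = ["New Moon", "Waxing Crescent", "First Quarter", "Waxing Gibbous",
                 "Full Moon", "Waning Gibbous", "Last Quarter", "Waning Crescent"]
        index = int(age / SYNODIC_MONTH * 8 + 0.5) % 8  # nearest of the 8 principal phases
        return names[index]

    now = datetime.now(timezone.utc)
    print(phase_name(moon_age(now)))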

Show HN: Hooper – AI-driven stats and highlights for basketball play

Hey everyone, OP here. Wanted to share a bit more about Hooper. I started building it with a good friend of mine six months ago. We play a lot of pickup together and were arguing about who has a better jump shot, and we ended up hacking together an app to settle it.

The way Hooper works is you record yourself using the app and, ideally, a tripod (optional). The app will track everyone, whether it's a solo practice, a 3v3, or a 5v5. We think there's a lot of stuff out there for basketball drills, but what we really wanted Hooper to handle is actual game play. That means it can do things like track multiple players, sync two half-court recordings, and differentiate 2s vs 3s.

Once you finish recording, it'll process for a bit and then ask you to tag yourself (and optionally other players). Then it spits back a few things: you can watch the full footage as well as a clipped version that's only the interesting plays, and you can see highlights and offensive box stats for every player.

When you sign up, you get a Hooper profile that tracks your overall stats across sessions. You can also do things like build a mixtape from your highlights for Instagram. You can friend other players on Hooper to see their profiles and comment on their games (also add us: "grub" and "kangexpress"!).

We've been in a closed beta for about 3 months now, fixing bugs and getting things to work with a set of early adopters. We are now starting our open beta! If anyone here wants to try it out, you can download the app at https://www.hooper.gg/?utm_campaign=h1. We are early in our journey, and there are lots of improvements to be made in the next few months, but we would love your feedback and ideas!
