The best "Show HN" stories from Hacker News from the past day
Latest posts:
Show HN: HN Profiles – Searchable Database of People “Who Want to Be Hired”
Show HN: 'Hello, World' in x86 assembly, but make it gibberish
Show HN: GPT Repo Loader – load entire code repos into GPT prompts
I was getting tired of copy/pasting reams of code into GPT-4 to give it context before asking it to help me, so I started this small tool. In a nutshell, gpt-repository-loader will spit out file paths and file contents in a prompt-friendly format. You can also use .gptignore to ignore files/folders that are irrelevant to your prompt.

gpt-repository-loader as-is works pretty well in helping me get better responses. Eventually, I thought it would be cute to load it into GPT-4 and have GPT-4 improve it. I was honestly surprised by PR #17. GPT-4 was able to write a valid test with an example repo and an expected output, and throw in a small curveball by adjusting .gptignore. I did tell GPT the output file format in two places: 1) in the preamble when I prompted it to make a PR for issue #16, and 2) as a string in gpt_repository_loader.py, both of which are indirect ways to infer how to build a functional test. However, I don't think I explained to GPT in English anywhere how .gptignore works at all!

I wonder how far GPT-4 can take this repo. Here is the process I'm following for development:

- Open an issue describing the improvement to make.
- Construct a prompt: start by using gpt_repository_loader.py on this repo to generate the repository context, then append the text of the opened issue after the --END-- line.
- Try not to edit any code GPT-4 generates. If something is wrong, keep prompting GPT to fix it.
- Create a feature branch for the issue and open a pull request based on GPT's response.
- Have a maintainer review, approve, and merge.

I am going to try to automate the steps above as much as possible. Really curious how tight the feedback loop will eventually get before something breaks!
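For illustration, here is a minimal sketch of the core idea behind such a loader: walk a repository, skip anything matched by .gptignore, and print each file's path and contents in a prompt-friendly format ending with an --END-- line. Only the --END-- marker and the .gptignore behaviour are mentioned in the post; the per-file separator and the exact .gptignore semantics below are assumptions, not the actual gpt_repository_loader.py.

    #!/usr/bin/env python3
    # Sketch only: walk a repo and dump paths + contents for pasting into a prompt.
    import fnmatch
    import os
    import sys

    def load_ignore_patterns(repo_path):
        # Assumes .gptignore holds one glob pattern per line, similar to .gitignore.
        ignore_file = os.path.join(repo_path, ".gptignore")
        if not os.path.isfile(ignore_file):
            return []
        with open(ignore_file) as f:
            return [line.strip() for line in f if line.strip() and not line.startswith("#")]

    def dump_repo(repo_path):
        patterns = load_ignore_patterns(repo_path)
        for root, dirs, files in os.walk(repo_path):
            dirs[:] = [d for d in dirs if d != ".git"]  # never dump git internals
            for name in files:
                rel_path = os.path.relpath(os.path.join(root, name), repo_path)
                if rel_path == ".gptignore" or any(fnmatch.fnmatch(rel_path, p) for p in patterns):
                    continue
                try:
                    with open(os.path.join(repo_path, rel_path), encoding="utf-8") as f:
                        contents = f.read()
                except UnicodeDecodeError:
                    continue                 # skip binary files
                print("----")                # hypothetical per-file separator
                print(rel_path)
                print(contents)
        print("--END--")                     # the marker the issue text is appended after

    if __name__ == "__main__":
        dump_repo(sys.argv[1] if len(sys.argv) > 1 else ".")

The output can then be pasted into a GPT-4 prompt, with the issue text appended after the --END-- line as described above.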
Show HN: Scriptable.run, make your product extendable by anyone.
Show HN: Learn Python with Minecraft
Looking for feedback on my project to teach Python by writing code that interacts with a Minecraft world.
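The post doesn't say which client library the project uses; as a purely hypothetical illustration of the kind of exercise involved, here is a snippet using the commonly used mcpi package against a server exposing the Raspberry Pi / RaspberryJuice API:

    # Hypothetical example, not code from the project itself.
    from mcpi.minecraft import Minecraft

    mc = Minecraft.create()              # connects to localhost:4711 by default
    mc.postToChat("Hello from Python!")  # message appears in the game chat

    pos = mc.player.getTilePos()         # player's current block position
    # Build a 3x3 stone platform one block below the player (block id 1 = stone).
    for dx in range(-1, 2):
        for dz in range(-1, 2):
            mc.setBlock(pos.x + dx, pos.y - 1, pos.z + dz, 1)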
Show HN: Quality News – Towards a fairer ranking algorithm for Hacker News
Hello HN!

TL;DR:

- Quality News is a Hacker News client that provides additional data and insights on submissions, notably the upvoteRate metric.
- We propose that this metric could be used to improve the Hacker News ranking score.
- In-depth explanation: https://github.com/social-protocols/news#readme

The Hacker News ranking score is directly proportional to upvotes, which is a problem because it creates a feedback loop: higher rank leads to more upvotes, which leads to higher rank, and so on...

                 →
             ↗       ↘
    Higher Rank     More Upvotes
             ↖       ↙
                 ←

As a consequence, success on HN depends almost entirely on getting enough upvotes in the first hour or so to make the front page and get caught in this feedback loop. And getting these early upvotes is largely a matter of timing, luck, and moderator decisions. So the best stories don't always make the front page, and the stories on the front page are not always the best.

Our proposed solution is to use upvoteRate instead of upvotes in the ranking formula. upvoteRate is an estimate of how much more or less likely users are to upvote a story compared to the average story, taking into account how much attention the story has received, based on a history of the ranks and times at which it has been shown. You can read about how we calculate this metric in more detail here: https://github.com/social-protocols/news#readme

About 1.5 years ago, we published an article with this basic idea of counteracting the rank/upvotes feedback loop by using attention as negative feedback. We received very valuable input from the HN community (https://news.ycombinator.com/item?id=28391659). Quality News has been built largely on this feedback.

Currently, Quality News shows the upvoteRate metric for live Hacker News data, as well as charts of the rank and upvote history of each story. We have not yet implemented an alternative ranking algorithm, because we don't have access to data on flags and moderator actions, which are a major component of the HN ranking score.

We'd love to see the Hacker News team experiment with the new formula, perhaps on an alternative front page. This would allow the community to evaluate whether the new ranking formula is an improvement over the current one.

We look forward to discussing our approach with you!

Links:

Site: https://news.social-protocols.org/
Readme: https://github.com/social-protocols/news#readme
Previous blog post: https://felx.me/2021/08/29/improving-the-hacker-news-ranking-algorithm.html
Previous discussion: https://news.ycombinator.com/item?id=28391659
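As a rough, hypothetical sketch of what an upvoteRate-style calculation could look like (the attention weights below are made up for illustration; the real formula is described in the readme linked above):

    # Hypothetical upvotes-per-minute an *average* story receives at a given rank.
    EXPECTED_UPVOTES_PER_MINUTE_AT_RANK = {1: 1.0, 2: 0.6, 3: 0.4, 10: 0.1, 30: 0.02}

    def upvote_rate(upvotes, rank_history):
        """rank_history: list of (rank, minutes_spent_at_that_rank) pairs."""
        expected = sum(
            EXPECTED_UPVOTES_PER_MINUTE_AT_RANK.get(rank, 0.01) * minutes
            for rank, minutes in rank_history
        )
        # >1 means users upvoted the story more than average for the attention it received.
        return upvotes / expected if expected > 0 else 0.0

    # Example: 50 upvotes after 60 minutes at rank 1 and 120 minutes at rank 10.
    print(upvote_rate(50, [(1, 60), (10, 120)]))  # ~0.69, a below-average upvote rate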
Show HN: Can you beat my dad at Scrabble?