The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: 0xDEAD//TYPE – A fast-paced typing shooter with retro vibes
Show HN: A 'Choose Your Own Adventure' written in Emacs Org Mode
I authored and developed an interactive children's book about entrepreneurship and money management. The journey started with Twinery, the open-source tool for making interactive fiction, which I discovered right here on HN. The tool kindled memories of reading CYOA-style books when I was a kid, and I thought the format would be great for writing a story my kids could follow along with, incorporating play money to learn about transactions as they occur in the story.

Twinery is a fantastic tool, and I used it to lay out the story map. However, I really wanted to write the content of the story in Emacs and Org Mode. Thankfully, Twinery provides the ability to write custom Story Formats that define how a story is exported. I wrote a Story Format called Twiorg that exports the Twinery file to an Org file, and an Org export backend (ox-twee) to do the reverse. With these tools, I could go back and forth between Emacs and Twinery while authoring the story.

The project snowballed, and I ended up with the book in both digital and physical formats. The Web Book is created using another Org export backend.

Ten Dollar Adventure: https://tendollaradventure.com

Sample the Web Book (one complete storyline/adventure): https://tendollaradventure.com/sample/

Unfortunately, I couldn't muster the effort to write a special Org export backend for the physical books, so I used a commercial editor to format those.

Twiorg: https://github.com/danishec/twiorg

ox-twee: https://github.com/danishec/ox-twee

Previous HN post on writing the transaction logic using an LLM in Emacs: https://blog.tendollaradventure.com/automating-story-logic-with-llms/

Twinery 2: https://twinery.org/ and discussion on HN: https://news.ycombinator.com/item?id=32788965
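Twiorg and ox-twee are Emacs Lisp packages, but the core of the round trip is easy to picture: Twee passage headers map to Org headings, and link syntax gets reordered. Here is a minimal, hypothetical Python sketch of the Twee-to-Org direction (my own illustration, not the actual packages, which handle tags, metadata, and the full Twee format):

    import re

    def twee_to_org(twee_text):
        """Convert Twee source to a rough Org outline.

        ':: Passage' headers become top-level Org headings, and
        [[text->target]] links become Org-style [[target][text]] links.
        """
        out = []
        for line in twee_text.splitlines():
            if line.startswith("::"):
                # Passage header: drop optional [tags] and {metadata}.
                name = line[2:].split("[")[0].split("{")[0].strip()
                out.append("* " + name)
            else:
                # Reorder link syntax; bare [[target]] links are left
                # alone since they are valid in both formats.
                out.append(re.sub(r"\[\[([^\]>]+)->([^\]]+)\]\]",
                                  r"[[\2][\1]]", line))
        return "\n".join(out)

    print(twee_to_org(":: Lemonade Stand\nYou have $10. [[Buy lemons->Market]]"))

On a one-passage story this yields an Org heading plus a rewritten link, which is roughly the shape such a converter would hand to Emacs for editing.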
Show HN: Improving search ranking with chess Elo scores
Hello HN,

I'm Ghita, co-founder of ZeroEntropy (YC W25). We build high-accuracy search infrastructure for RAG and AI agents.

We just released two new state-of-the-art rerankers, zerank-1 and zerank-1-small. One of them is fully open source under Apache 2.0.

We trained these models using a novel Elo-score-inspired pipeline, which we describe in detail in the attached blog post. In a nutshell, here is an outline of the training steps:

* Collect soft preferences between pairs of documents using an ensemble of LLMs.
* Fit an Elo-style rating system (Bradley-Terry) to turn pairwise comparisons into absolute per-document scores (a toy sketch of this step appears after this post).
* Normalize relevance scores across queries using a bias-correction step, modeled using cross-query comparisons and solved with MLE.

You can try the models either through our API (https://docs.zeroentropy.dev/models) or via Hugging Face (https://huggingface.co/zeroentropy/zerank-1-small).

We would love this community's feedback on the models and the training approach. A full technical report is also going to be released soon.

Thank you!
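As a rough illustration of the Bradley-Terry step above (a toy sketch of my own, not ZeroEntropy's implementation), the fit below models P(doc i beats doc j) as sigmoid(s_i - s_j) and recovers per-document scores from soft preferences by gradient descent on the cross-entropy:

    import numpy as np

    def fit_bradley_terry(pairs, n_docs, lr=0.1, steps=2000):
        """Fit Bradley-Terry scores from soft pairwise preferences.

        pairs: list of (i, j, p) where p is the soft probability that
        document i is preferred over document j for the same query.
        Returns one scalar score per document; higher means more relevant.
        """
        s = np.zeros(n_docs)
        idx_i = np.array([i for i, _, _ in pairs])
        idx_j = np.array([j for _, j, _ in pairs])
        p = np.array([p for _, _, p in pairs])
        for _ in range(steps):
            pred = 1.0 / (1.0 + np.exp(-(s[idx_i] - s[idx_j])))
            grad = pred - p  # d(cross-entropy)/d(s_i - s_j)
            np.subtract.at(s, idx_i, lr * grad)
            np.add.at(s, idx_j, lr * grad)
            s -= s.mean()  # scores are only identified up to a constant
        return s

    # Three documents; doc 0 is softly preferred over docs 1 and 2.
    print(fit_bradley_terry([(0, 1, 0.8), (0, 2, 0.9), (1, 2, 0.6)], 3))

With these toy pairs, document 0 ends up with the highest score, matching the preferences that favored it; the cross-query normalization step would then make such scores comparable across different queries.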
Show HN: Beyond Z²+C, Plot Any Fractal
I've always been dissatisfied that simple Mandelbrot explorers purport to be fractal graphing calculators. Over the summer break between semesters, I started making a real graphing calculator, parsing LaTeX to WebGL to let you graph almost any combination of z and c.

Fun ones to try include:

- sin(z^2+c)
- c^z
- z^{1.7}+c

It also supports animation: just enter any other letter to turn it into a variable. Supports Mandelbrot- or Julia-set-style calculation.

Best used with a dedicated graphics card or integrated graphics.
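The site compiles the LaTeX to a WebGL shader, but the escape-time loop underneath is easy to sketch on the CPU. Here is a generic Python analogue (my own illustration, not the site's code) in which a single f(z, c) callback drives plots of any of the formulas above:

    import numpy as np

    def escape_time(f, width=400, height=300, max_iter=60, radius=8.0):
        """Generic escape-time plot for the iteration z <- f(z, c).

        f = lambda z, c: z**2 + c gives the classic Mandelbrot set;
        f = lambda z, c: np.sin(z**2 + c) gives one suggestion above.
        Returns the last iteration at which each pixel was still bounded.
        """
        xs = np.linspace(-2.5, 1.5, width)
        ys = np.linspace(-1.5, 1.5, height)
        c = xs[None, :] + 1j * ys[:, None]   # one complex c per pixel
        z = np.zeros_like(c)
        counts = np.zeros(c.shape, dtype=int)
        for n in range(max_iter):
            with np.errstate(over="ignore", invalid="ignore"):
                alive = np.abs(z) < radius   # pixels not yet escaped
                z[alive] = f(z[alive], c[alive])
            counts[alive] = n
        return counts

    counts = escape_time(lambda z, c: np.sin(z**2 + c))

A Julia-set-style plot is the same loop with the roles swapped: z starts at the pixel coordinate and c is held fixed.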
Show HN: We made our own inference engine for Apple Silicon
We wrote our inference engine in Rust; it is faster than llama.cpp in all of our use cases. It was written from scratch with the idea that you can add support for any kernel and platform. Your feedback is very welcome.
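The post doesn't show the engine's internals, so purely as a generic illustration of a pluggable-kernel design (hypothetical Python, not this project's API or its Rust code): kernels are registered per (op, platform) key, so adding a new backend means registering entries rather than touching the engine core.

    from typing import Callable, Dict, Tuple

    KernelFn = Callable[..., object]
    _REGISTRY: Dict[Tuple[str, str], KernelFn] = {}

    def register(op: str, platform: str):
        """Decorator that files a kernel under an (op, platform) key."""
        def deco(fn: KernelFn) -> KernelFn:
            _REGISTRY[(op, platform)] = fn
            return fn
        return deco

    def dispatch(op: str, platform: str, *args):
        """Look up and run the kernel registered for this op/platform."""
        return _REGISTRY[(op, platform)](*args)

    @register("matmul", "cpu")
    def matmul_cpu(a, b):
        # Naive reference kernel; a real engine would use SIMD or Metal.
        return [[sum(x * y for x, y in zip(row, col))
                 for col in zip(*b)] for row in a]

    print(dispatch("matmul", "cpu", [[1, 2]], [[3], [4]]))  # [[11]]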
Show HN: Shoggoth Mini – A soft tentacle robot powered by GPT-4o and RL
Show HN: Built a desktop app to organize photos locally with duplicate detection