The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: I indexed 10M Shopify products to build an API
Show HN: Calculate Your Revenue
Show HN: Marksmith – a GitHub-style Markdown editor for Ruby on Rails
Show HN: Mandarin Word Segmenter with Translation
I've built mandoBot, a web app that segments and translates Mandarin Chinese text. It's a Django API (using Django-Ninja and PostgreSQL) with a Next.js front-end (TypeScript and Chakra). For a sample of what the app does, head to https://mandobot.netlify.app/?share_id=e8PZ8KFE5Y. That page is my presentation of the first chapter of a classic story from the Republican era of Chinese fiction, Diary of a Madman by Lu Xun. Other chapters are in the "Reading Room" section of the app.

This app exists because reading Mandarin is very hard for learners (like me), since Mandarin text does not separate words with spaces the way Western languages do. But extensive reading is the most effective way to learn vocabulary and grammar. So learning Mandarin by reading requires first memorizing hundreds or thousands of words, before you can even tell where one word ends and the next begins.

I'm solving this problem by letting users input Mandarin text, which my server segments computationally and machine-translates, adding dictionary definitions for each word and character. The hard part is the segmentation: it turns out that "Chinese word segmentation"[0] is *the* central problem in Chinese natural language processing; no current solution reaches 100% accuracy, whether from Stanford[1], Academia Sinica[2], or Tsinghua University[3]. That includes every LLM currently available. (A toy illustration of the task follows the footnotes below.)

I could talk about this for hours, but the bottom line is that this app is a way to develop my full-stack skills: the backend should be fast, accurate, secure, well-tested, and well-documented, and the front-end should be pretty, secure, well-tested, responsive, and accessible. I am the sole developer, and I'm open to any comments and suggestions: roberto.loja+hn@gmail.com

Thanks HN!

[0] https://en.wikipedia.org/wiki/Chinese_word-segmented_writing
[1] https://nlp.stanford.edu/software/segmenter.shtml
[2] https://ckip.iis.sinica.edu.tw/project/ws
[3] http://thulac.thunlp.org/
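To see why this is hard, here is a minimal sketch of the task using the open-source jieba segmenter in Python. This is not mandoBot's actual pipeline; it just shows what any segmenter must do: take text with no spaces and infer where the word boundaries are.

```python
# Toy example of Chinese word segmentation with the jieba library.
# NOT mandoBot's segmenter; just an illustration of the task. The
# input string contains no spaces, so the segmenter must decide
# where one word ends and the next begins.
import jieba

text = "我来到北京清华大学"   # "I came to Beijing's Tsinghua University"
words = jieba.cut(text)       # returns a generator of word strings
print(" / ".join(words))      # 我 / 来到 / 北京 / 清华大学
```

Even a well-regarded segmenter like jieba mis-segments ambiguous sentences, which is exactly why no current solution reaches 100% accuracy.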
Show HN: Haystack Code Reviewer – Perform code reviews on a canvas
Hi HN!

We’re building Haystack Code Reviewer, a tool that lays out the code diffs for a GitHub pull request on an interactive canvas. Instead of scrolling through diffs line by line, you can view all changes in a more connected, visual format, similar to viewing a call graph. We hope this makes it easier and less cognitively taxing to understand how changes across different files work together.

For a quick overview, check out our short demo video: https://www.youtube.com/watch?v=QeOz70x0WPE. If you’d like to give it a spin, head over to https://haystackeditor.dev, click the “Review pull request” button in the top toolbar, and load any pull request via URL or pick one from the dropdown.

We built Haystack Code Reviewer because we found pull requests difficult to review in a purely textual format, especially when hopping between multiple files or trying to break down complex changes. Often, pull request authors have to structure their commits specifically so that reviews are easier to tackle, which is a time-consuming and error-prone process. Our goal is to make any pull request easy to understand at a glance, and to reduce the effort both reviewers and authors need to put into a good code review.

Haystack Code Reviewer works on private repositories! We use authentication to ensure that no one can open the server for your pull request without having access to that pull request on GitHub. For additional security, we plan to support self-hosting soon; please contact us if you’re interested.

Alternatively, for a completely local option, download desktop Haystack and navigate to your pull request from there. This is great for trying out the feature without exposing any data to the cloud!

In the near future, we plan to:

1. Introduce step-by-step navigation to guide reviewers through each part of the changeset

2. Allow for self-hosting

We’d love to hear your thoughts, suggestions, and any feedback on our approach or potential features!
Show HN: Check Supply – Send Checks in the Mail
When I lived in SF, my landlord required rent payments by check. For a while I just used my bank's bill pay. If you remember Simple, they eventually killed their bill-pay feature, and later shut down altogether. I didn't want to buy a checkbook, stamps, and envelopes just for this one bill.

That's why I built Check Supply with a friend: to make sending a check as simple as sending cash on Venmo. With our app, you can fill out your check details and have your payment processing within minutes of downloading.

Check writing is becoming a rarity, and many first-time senders find the process daunting. We hope Check Supply is a quick and convenient option for those moments when you're puzzled why someone is asking you to pay by check.
Show HN: Klarity – Open-source tool to analyze uncertainty/entropy in LLM output
We've open-sourced Klarity, a tool for analyzing uncertainty and decision-making in LLM token generation. It provides structured insights into how models choose tokens and where they show uncertainty.

What Klarity does:

- Real-time analysis of model uncertainty during generation
- Dual analysis combining log probabilities and semantic understanding
- Structured JSON output with actionable insights
- Fully self-hostable with customizable analysis models

The tool analyzes each step of text generation and returns structured JSON (a hypothetical example follows this list):

- uncertainty_points: array of {step, entropy, options[], type}
- high_confidence: array of {step, probability, token, context}
- risk_areas: array of {type, steps[], motivation}
- suggestions: array of {issue, improvement}
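To make the shape concrete, here is what that structure might look like in practice. The field names follow the schema above; every value is invented for illustration and is not real Klarity output.

```python
# Hypothetical example of Klarity's analysis output. Field names come
# from the schema described above; all values here are made up.
analysis = {
    "uncertainty_points": [
        {"step": 12, "entropy": 2.4, "options": ["bank", "shore", "edge"], "type": "semantic"},
    ],
    "high_confidence": [
        {"step": 3, "probability": 0.97, "token": "the", "context": "sat by"},
    ],
    "risk_areas": [
        {"type": "ambiguity", "steps": [12, 13], "motivation": "entropy spikes around a polysemous word"},
    ],
    "suggestions": [
        {"issue": "ambiguous noun at step 12", "improvement": "add disambiguating context to the prompt"},
    ],
}
```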
Klarity currently supports Hugging Face Transformers (more frameworks coming). We tested extensively with Qwen2.5 (0.5B–7B) models, but it should work with most HF LLMs.

Installation is simple:

`pip install git+https://github.com/klara-research/klarity.git`

We are building open-source interpretability/explainability tools to visualize and analyze attention maps, saliency maps, etc., and we want to understand your pain points with LLM behaviors. What insights would actually help you debug these black-box systems?

Links:

- Repo: https://github.com/klara-research/klarity
- Our website: https://klaralabs.com
Show HN: I convert videos to printed flipbooks for a living
I built this product back in 2018 as a small side project: a tool that turns short videos into physical flipbooks. After launching it, I didn't touch it for years. Life and work took over, and it sat idle. But it kept getting a few orders every month, which made it impossible to forget. So in December 2024, I decided to rebrand and revive it.

The initial version relied on various local printing offices. I kept switching from one to another, but the results were never quite right: either the quality wasn't good enough, or the turnaround times were too long. Eventually, my wife and I bought all the necessary machines and moved production in-house.

Now it's a family business. My wife and I handle everything: printing, binding, cutting, addressing, and shipping each flipbook. On the technical side, it’s powered by Next.js, with FFmpeg extracting frames and handling overlays, and ImageMagick adding trim marks and creating the final PDFs (a rough sketch of that kind of pipeline is below).

After many years of working in IT, working on something tangible feels refreshing. It's satisfying to create something that brings people joy. And it's not hard to sell (unlike dev tools, for example, haha). There are still challenges: we're experimenting with different cover papers, improving production, and testing new ideas without making things confusing. But that’s part of what keeps us moving forward.
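For the curious, here is a rough sketch of the kind of frame-extraction and PDF-assembly pipeline described above. The file names, frame rate, and flags are assumptions for illustration; the real production setup (overlays, trim marks, imposition) is more involved.

```python
# Illustrative sketch: extract frames from a clip with FFmpeg, then
# assemble them into a print-resolution PDF with ImageMagick 7's CLI.
# Paths, fps, and density are invented; the overlay and trim-mark
# steps from the real pipeline are omitted.
import glob
import os
import subprocess

os.makedirs("frames", exist_ok=True)

# 1. Pull frames out of the video (here, 12 frames per second).
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-vf", "fps=12", "frames/%04d.png"],
    check=True,
)

# 2. Combine the frames, in order, into one PDF at 300 DPI.
frames = sorted(glob.glob("frames/*.png"))
subprocess.run(
    ["magick", *frames, "-units", "PixelsPerInch", "-density", "300", "flipbook.pdf"],
    check=True,
)
```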