The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Wirequery – Full-stack session replay and more
Show HN: FXYT – Tiny, esoteric, stack-based, postfix, canvas colouring language
Show HN: Glossarie – a new, immersive way to learn a language
Hi HN, for over two years I've been working on an app to learn languages (currently French, Italian and Spanish), together with my partner, a language teacher. I think it is finally ready to share with this community!

The idea is to introduce vocabulary and grammar whilst you read eBooks in your own language. I've found that it is easier to remember vocabulary 'in context' and with regular repetition. Plus, you don't have to carve out dedicated time for language learning. Other apps require you to build a habit around various exercises or 'games', whereas lots of people already read books.

From testing with early users so far, it's proving effective for building a basic understanding of a language and quickly getting to the point where you can read and broadly understand text in the target language. It's even better in combination with other apps that help with listening/speaking, like Pimsleur.

There were lots of technical challenges in making this. It turned out to be (reassuringly) hard to get accuracy to an acceptable level, requiring a rabbit-hole into machine translation. A lot of testing was needed to optimise the engine that chooses which translations to show and to reduce friction when reading books. And the backend to support uploading books is a beast in itself. I'd love to share details if there is interest.

Roadmap:

- Accuracy: 100% accuracy is the target, but at present there can be errors. Feedback from users will be important here so that accuracy issues can be generalised and solved at scale. Errors can be reported within the app - please do so if you spot anything!

- Dynamic difficulty: rather than have a progression of difficulty levels, I'd prefer to introduce vocabulary and grammar automatically in response to user progress, balancing against the friction of seeing unfamiliar words. There's a lot 'under the hood' to manage this today, but plenty of room to improve.

- More practice features: to reinforce vocabulary/grammar and support writing, listening and speaking.

- Better eBook support: improving the formatting of eBooks within the app and providing more methods for finding good books to read.

Use of AI:

- LLMs provided a step change in accuracy and have enabled a feature that explains translations and grammar to the user, vastly improving the utility versus a year ago.

- I believe apps like this, which use AI to enhance or scale functionality rather than simply acting as a wrapper over APIs, will be the major beneficiaries as LLMs improve.

Take a look, and let me know your thoughts or questions!
Show HN: Rotary Phone Project
Show HN: Lapdev, a new open-source remote dev environment management software
Show HN: Leaping – Debug Python tests instantly with an LLM debugger
Hi HN! We're Adrien and Kanav. We met at our previous job, where we spent about a third of our lives combating a constant firehose of bugs. In the hope of reducing this pain for others, we're working on automating debugging.

We're currently building a platform that ingests logs and then automatically reproduces, root-causes and ultimately fixes production bugs as they happen. You can see some of our work on this here: https://news.ycombinator.com/item?id=39528087

As we were building the root-cause phase of our automated debugger, we realized we had developed something resembling an omniscient debugger. Like an omniscient debugger, it keeps track of variable assignments over time, but you can interact with it at a higher level than a conventional debugger, using natural language. We ended up packaging it as a pytest plugin and have been using it ourselves for development over the past few weeks.

Using this pytest plugin, you can reason at a much higher level than with conventional debuggers and ask questions like:

- Why did function x get hit?

- Why was variable y set to this value?

- What changes can I make to this code to make this test pass?

Here's a brief demo of this in action: https://www.loom.com/share/94ebe34097a343c39876d7109f2a1428

To achieve this, we first instrument the test using sys.settrace (or, on Python 3.12 and later, the far better sys.monitoring!) to keep a history of all the functions that were called, along with the calling line numbers. We then re-run the test and use AST parsing to find all the variable assignments and keep track of those changes over time. We also use AST parsing to obtain the source code for these functions. We then neatly format and pass all this context to GPT.

We'd love it if you checked the pytest plugin out, and we welcome all feedback :) If you want to chat bugs, our emails are also always open: kanav@leaping.io
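The sys.settrace half of the approach they describe can be sketched roughly like this (a minimal illustration of the technique, not Leaping's actual code; all names here are made up):

```python
import sys

call_history = []  # (function name, caller's line number) in call order

def trace_calls(frame, event, arg):
    """Record each function call along with the line number it was called from."""
    if event == "call":
        caller = frame.f_back
        call_history.append((frame.f_code.co_name,
                             caller.f_lineno if caller else None))
    return trace_calls  # keep tracing inside nested calls

def helper(x):
    return x * 2

def run_test():
    return helper(21)

sys.settrace(trace_calls)
result = run_test()
sys.settrace(None)

# call_history now records that run_test was hit, then helper from inside it,
# which is the raw material for answering "why did function x get hit?"
```

On 3.12+, sys.monitoring lets you subscribe to just the call events instead of paying for a trace callback on every line, which is why the post calls it "far better".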
Show HN: Free Plain-Text Bookmarking
Show HN: magick.css – Minimalist CSS for Wizards
Mapping almost every law, regulation and case in Australia
Hey HN,

After months of hard work, I am excited to share the first-ever semantic map of Australian law.

My map represents the first attempt to map Australian laws, cases and regulations across the Commonwealth, States and Territories semantically, that is, by their underlying meaning.

Each point on the map is a unique document in the Open Australian Legal Corpus, the largest open database of Australian law (which, full disclosure, I created). The closer any two points are on the map, the more similar they are in underlying meaning.

As I cover in my article, there's a lot you can learn by mapping Australian law. Some of the most interesting insights to come out of this initiative are that:

- Migration, family and substantive criminal law are the most isolated branches of case law on the map;

- Migration, family and substantive criminal law are the most distant branches of case law from legislation on the map;

- Development law is the closest branch of case law to legislation on the map;

- Case law is more of a continuum than a rigidly defined structure, and the borders between branches of case law can often be quite porous; and

- The map does not reveal any noticeable distinctions between Australian state and federal law, whether in style, principles of interpretation or general jurisprudence.

If you're interested in learning more about what the map has to teach us about Australian law, or if you'd like to find out how you can create semantic maps of your own, check out the full article on my blog, which provides a detailed analysis of my map and also covers the finer details of how I built it, with code examples offered along the way.
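The core idea ("closer on the map means more similar in meaning") can be illustrated with a toy version of semantic similarity. The real map presumably uses a learned text-embedding model plus dimensionality reduction; this sketch swaps in bag-of-words vectors and cosine similarity, with invented document snippets, purely to show the principle:

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy 'embedding': a bag-of-words count vector.
    A real semantic map would use a learned text-embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (1.0 = identical direction)."""
    dot = sum(a[t] * b[t] for t in a)
    norm = lambda v: sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b))

# Hypothetical snippets standing in for corpus documents.
docs = {
    "migration_case": "appeal against refusal of a protection visa",
    "migration_act": "criteria for the grant of a protection visa",
    "planning_law": "development consent for the proposed subdivision",
}

# The two migration documents share vocabulary, so they score as more similar
# (and would land closer together on a 2D projection of such vectors).
sim = cosine(embed(docs["migration_case"]), embed(docs["migration_act"]))
```

Projecting high-dimensional embeddings like these down to 2D (e.g. with UMAP or t-SNE) is what turns pairwise similarity into the kind of map described above.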
Show HN: Nebula – A network agnostic DHT crawler
Show HN: Ragas – Open-source library for evaluating RAG pipelines
Ragas is an open-source library for evaluating and testing RAG and other LLM applications. GitHub: https://github.com/explodinggradients/ragas, docs: https://docs.ragas.io/

Ragas provides different sets of metrics and methods, such as synthetic test data generation, to help you evaluate your RAG applications. Ragas started off by scratching our own itch for evaluating our RAG chatbots last year.

Problems Ragas can solve:

- How do you choose the best components for your RAG, such as the retriever, reranker and LLM?

- How do you formulate a test dataset without spending tons of money and time?

We believe there needs to be an open-source standard for evaluating and testing LLM applications, and our vision is to build it for the community. We are tackling this challenge by evolving ideas from the traditional ML lifecycle for LLM applications.

ML testing evolved for LLM applications: we built Ragas on the principles of metrics-driven development, and we aim to develop and innovate techniques inspired by state-of-the-art research to solve the problems in evaluating and testing LLM applications. We don't believe the problem of evaluating and testing applications can be solved by building a fancy tracing tool; rather, we want to solve it from a layer under the stack. For this, we are introducing methods like automated synthetic test data curation, metrics, and feedback utilisation, inspired by lessons learned from deploying stochastic models in our careers as ML engineers.

While currently focused on RAG pipelines, our goal is to extend Ragas to testing a wide array of compound systems, including those based on RAGs, agentic workflows, and various transformations.

Try out Ragas in Google Colab: https://colab.research.google.com/github/shahules786/openai-cookbook/blob/ragas/examples/evaluation/ragas/openai-ragas-eval-cookbook.ipynb and read our docs (https://docs.ragas.io/) to learn more.

We would love to hear feedback from the HN community :)
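To make the "metrics for RAG" idea concrete, here is a toy version of a faithfulness-style metric: the fraction of answer statements that are supported by the retrieved context. This is not the Ragas API; Ragas delegates the "is this statement supported?" judgement to an LLM, whereas this self-contained sketch substitutes a crude token-overlap check, and all names and example strings are invented:

```python
def support_score(statement, context, threshold=0.5):
    """Toy check: a statement counts as 'supported' if at least
    `threshold` of its tokens appear in the retrieved context."""
    tokens = set(statement.lower().split())
    ctx = set(context.lower().split())
    return len(tokens & ctx) / len(tokens) >= threshold

def faithfulness(answer_statements, context):
    """Fraction of the answer's statements supported by the context.
    (A real metric would use an LLM judge instead of token overlap.)"""
    supported = sum(support_score(s, context) for s in answer_statements)
    return supported / len(answer_statements)

context = "ragas is an open source library for evaluating rag pipelines"
statements = [
    "ragas is an open source library",  # supported by the context
    "ragas was written in haskell",     # not supported
]
score = faithfulness(statements, context)  # → 0.5
```

A score below 1.0 flags that part of the generated answer is not grounded in what the retriever returned, which is exactly the kind of signal you would use to compare retrievers, rerankers and LLMs.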