The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Latex.to – LaTeX to image converter running in the browser
I've made a website to easily share a LaTeX math formula.

- The image is created in the browser (i.e. the LaTeX is not sent to a server for rendering)
- Native share dialog (share via WhatsApp etc.)
- Extra keyboard buttons for symbols like "$" or "\" on mobile
- Share via PNG or Unicode

Demo video: https://www.youtube.com/shorts/fGuTns5Nt9Q

Please let me know any feedback on how to improve the website.
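The "share via Unicode" option can be sketched with a small command-to-symbol map. This is a minimal, hypothetical illustration of the idea, not the site's actual converter, which handles far more of LaTeX:

```python
# Minimal sketch of a LaTeX-to-Unicode converter for sharing formulas as
# plain text. Hypothetical symbol table; the real converter covers much more.

SYMBOLS = {
    r"\alpha": "α", r"\beta": "β", r"\pi": "π",
    r"\sum": "∑", r"\infty": "∞", r"\leq": "≤",
    r"\rightarrow": "→", r"\times": "×",
}

def latex_to_unicode(tex: str) -> str:
    """Replace known LaTeX commands with their Unicode equivalents."""
    # Replace longer commands first so e.g. \leq is matched before shorter prefixes.
    for cmd in sorted(SYMBOLS, key=len, reverse=True):
        tex = tex.replace(cmd, SYMBOLS[cmd])
    return tex

print(latex_to_unicode(r"\sum \alpha \rightarrow \infty"))  # ∑ α → ∞
```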
Show HN: Kasama – an IntelliJ plugin to keep track of your coding practices
Hi HN,

I want to share an IntelliJ plugin I have developed and launched.

Based on my own needs, I wanted a plugin that monitors my coding practices and gives me stats about them so I can improve. So, here is Kasama: an IDE plugin that works like a fitness tracker, gathering data on:

- your coding sessions, i.e. how long you are active in the IDE and for which project
- your activity in different modules, and how the activity is split between test code and production code
- your version control (git) interactions: how often you commit, the lifespan of your branches, and the types of branches you work on over time (feature, bugfix, etc.)
- your testing interactions: how often you run tests, how often they fail, and how large they are
- your refactoring interactions: which tool-driven refactorings you use
- the build tasks you run, and which ones you spend the most time in

The plugin runs locally and provides graph visualizations for the different stats.

It can be installed directly from the JetBrains marketplace - it works with IntelliJ IDEA as well as other JetBrains IDEs: https://plugins.jetbrains.com/plugin/24683-kasama

You can find more documentation here: https://spark-teams.github.io/kasama-intellij-support/

Coming soon, it will show even more stats, including records and achievements. I’m also exploring additional data to collect, such as the proportion of AI-generated code compared to manually written code.

I’d love your feedback and questions! You can reach me at kasama@sparkteams.de
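The session-tracking idea above boils down to aggregating active-time records per project. A toy sketch of that aggregation (hypothetical data model, not Kasama's internals):

```python
# Toy sketch of the kind of aggregation a coding-practice tracker might do:
# summing active IDE time per project from recorded sessions.
# The data model here is invented for illustration.

from collections import defaultdict
from datetime import timedelta

sessions = [
    ("web-app",  timedelta(minutes=90)),
    ("web-app",  timedelta(minutes=30)),
    ("cli-tool", timedelta(minutes=45)),
]

def active_time_per_project(sessions):
    """Sum session durations by project name."""
    totals = defaultdict(timedelta)
    for project, duration in sessions:
        totals[project] += duration
    return dict(totals)

totals = active_time_per_project(sessions)
print(totals["web-app"])  # 2:00:00
```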
Show HN: I built an app to use a QR code as my doorbell
I didn’t have a doorbell before (multiple reasons) and my house feels unwelcoming without one. So I built a doorbell app that uses a QR code - visitors scan the QR code to ring the doorbell, and I get notified on my phone.

Here is an example of the QR code I have on my door. You can scan it and say hello: https://www.thebacklog.net/img/2024/10/show-hn.png

This was also a great excuse to build my first app for Android and iPhone.

I’d love to get some feedback before I spend more time polishing the app. Please try it out and feel free to ask me any questions! No logins or accounts needed.
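The flow described - a unique URL baked into the QR code, a server-side ring handler, a push to the owner's phone - can be sketched as follows. The domain, function names, and notification hook are all hypothetical:

```python
# Sketch of a QR-doorbell flow: each doorbell gets a unique URL that is
# encoded into the QR code; opening the URL triggers a notification to the
# owner. Domain and names are placeholders, not the app's real backend.

import uuid
from urllib.parse import urlparse

def new_doorbell_url(base: str = "https://example-doorbell.app/ring") -> str:
    """Create a unique ring URL to encode into a QR code."""
    return f"{base}/{uuid.uuid4().hex}"

def handle_ring(url: str, notify) -> None:
    """Called when a visitor opens the URL from the QR code."""
    bell_id = urlparse(url).path.rsplit("/", 1)[-1]
    notify(f"Ding dong! Someone is at doorbell {bell_id[:8]}")

rings = []
handle_ring(new_doorbell_url(), rings.append)
print(rings[0])
```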
Show HN: Finstruments - Financial instrument library built with Python
finstruments is a Python library designed for modeling financial instruments. It comes with the core financial instruments, such as forwards and options, out of the box, as well as position, trade, and portfolio models. finstruments provides the basic building blocks, making it easy to extend and build new instruments for any asset class. These building blocks can also serialize and deserialize to and from JSON, so a serialized instrument can be stored in a document database. This library is ideal for quantitative researchers, traders, and developers who need a streamlined way to build and interact with financial instruments.
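The pattern described - an instrument model that round-trips through JSON for document storage - can be illustrated with a stdlib-only sketch. The class and field names here are invented; they are not finstruments' actual API:

```python
# Illustrative sketch of the serialization pattern described above, using
# only the standard library. Class and field names are hypothetical and
# do not reflect finstruments' real API.

import json
from dataclasses import dataclass, asdict

@dataclass
class EuropeanOption:
    underlying: str
    strike: float
    expiry: str       # ISO date, e.g. "2025-06-20"
    option_type: str  # "CALL" or "PUT"

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, payload: str) -> "EuropeanOption":
        return cls(**json.loads(payload))

opt = EuropeanOption("AAPL", 200.0, "2025-06-20", "CALL")
assert EuropeanOption.from_json(opt.to_json()) == opt  # lossless round trip
```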
Show HN: Marmite – Zero-config static site generator
Just run `marmite` on a folder full of markdown files and get a full website/blog running in seconds.

"I'm a big user of other SSGs but it is frequently frustrating that it takes so much setup to get started. Just having a directory of markdown files and running a single command sounds really useful." — marmite user
Show HN: Trench – Open-source analytics infrastructure
Hey HN! I want to share a new open source project I've been working on called Trench (https://trench.dev). It's open source analytics infrastructure for tracking events, page views, and identifying users, and it's built on top of ClickHouse and Kafka.

https://github.com/frigadehq/trench

I built Trench because the Postgres table we used for tracking events at our startup (http://frigade.com/) was getting expensive and becoming a performance bottleneck as we scaled to millions of end users.

Many companies run into the same problem we did (e.g. Stripe, Heroku: https://brandur.org/fragments/events). They often start by adding a basic events table to their relational database, which works at first but becomes an issue as the application scales. It's usually the biggest table in the database, the slowest one to query, and the longest one to back up.

With Trench, we've put together a single Docker image that gives you a production-ready tracking event table built for scale and speed. When we migrated our tracking table from Postgres to Trench, we saw a 42% reduction in cost to serve on our primary Postgres cluster, and all lag spikes from autoscaling under high traffic were eliminated.

Here are some of the core features:

* Fully compliant with the Segment tracking spec, e.g. track(), identify(), group(), etc.
* Can handle thousands of events per second on a single node
* Query tracking data in real time with read-after-write guarantees
* Send data anywhere with throttled and batched webhooks
* Single production-ready Docker image - no need to manage and roll your own Kafka/ClickHouse/Node.js/etc.
* Easily plugs into any cloud-hosted ClickHouse and Kafka solution, e.g. ClickHouse Cloud, Confluent

Trench can be used for a range of use cases. Here are some possibilities:

1. Real-time monitoring and alerting: Set up real-time alerts and monitoring for your services by tracking custom events like errors, usage spikes, or specific user actions, and sending that data anywhere with Trench's webhooks
2. Event replay and debugging: Capture all user interactions in real time for event replay
3. A/B testing platform: Capture events from different users and groups in real time. Segment users by querying in real time and serve the right experiences to the right users
4. Product analytics for SaaS applications: Embed Trench into your existing SaaS product to power user audit logs or tracking scripts on your end-users' websites
5. Build a custom RAG model: Easily query event data and give users answers in real time. LLMs are really good at writing SQL

The project is open-source and MIT-licensed. If there's interest, we're thinking about adding support for Elasticsearch, direct data integrations (e.g. Redshift, S3, etc.), and an admin interface for creating queries, webhooks, etc.

Have you experienced the same issues with your events tables? I'd love to hear what HN thinks about the project.
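Since Trench follows the Segment tracking spec, an event is just a JSON body with the spec's standard fields. A sketch of building a track() payload (the field names follow the public Segment spec; how the body is actually delivered to a Trench server is not shown and would depend on your deployment):

```python
# Sketch of a Segment-spec track() payload of the kind a Segment-compatible
# backend accepts. Field names follow the public Segment tracking spec;
# the delivery endpoint is omitted and deployment-specific.

import json
from datetime import datetime, timezone

def track(user_id: str, event: str, properties: dict) -> str:
    """Build a Segment-style track event as a JSON string."""
    payload = {
        "type": "track",
        "userId": user_id,
        "event": event,
        "properties": properties,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # In practice this body would be POSTed to the tracking server.
    return json.dumps(payload)

body = track("user-123", "Signed Up", {"plan": "pro"})
```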
Show HN: Ezcrypt – A file encryption tool (simple, strong, public domain)