The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: Finstruments - Financial instrument library built with Python

finstruments is a Python library for modeling financial instruments. It ships with core instruments such as forwards and options out of the box, along with position, trade, and portfolio models. Its basic building blocks make it easy to extend the library and define new instruments for any asset class, and they can serialize to and from JSON, so a serialized instrument can be stored in a document database. The library is aimed at quantitative researchers, traders, and developers who need a streamlined way to build and work with financial instruments.
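
The post doesn't show the library's API, but the serialize/round-trip workflow it describes looks roughly like the sketch below. The class and method names here are stand-ins chosen for illustration, not finstruments' documented interface.

```python
# Hypothetical sketch of the workflow described above: build an instrument,
# serialize it to JSON for a document database, and reconstruct it later.
# Class and method names are assumptions, not the finstruments API.
from datetime import date
import json


class Forward:
    """Minimal stand-in for a forward contract on a single underlier."""

    def __init__(self, underlier: str, strike: float, expiry: date):
        self.underlier = underlier
        self.strike = strike
        self.expiry = expiry

    def to_json(self) -> dict:
        # Plain dict so it can be stored as a JSON document.
        return {
            "type": "forward",
            "underlier": self.underlier,
            "strike": self.strike,
            "expiry": self.expiry.isoformat(),
        }

    @classmethod
    def from_json(cls, payload: dict) -> "Forward":
        return cls(
            underlier=payload["underlier"],
            strike=payload["strike"],
            expiry=date.fromisoformat(payload["expiry"]),
        )


fwd = Forward("AAPL", strike=250.0, expiry=date(2025, 12, 19))
doc = json.dumps(fwd.to_json())            # store this string in a document DB
restored = Forward.from_json(json.loads(doc))
```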

Show HN: Marmite – Zero-config static site generator

Just run `marmite` on a folder full of markdown files and get a full website/blog running in seconds.

"I'm a big user of other SSGs but it is frequently frustrating that it takes so much setup to get started. Just having a directory of markdown files and running a single command sounds really useful." — marmite user
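
For a sense of what "zero config" means here, the sketch below shows the general shape of the idea in Python: render every Markdown file in a directory to HTML with no configuration file at all. This is only an illustration of the concept, not marmite's Rust implementation.

```python
# Minimal sketch of a zero-config static site build: every .md file in a
# folder becomes an .html page, with no config file. Illustrative only.
from pathlib import Path
import markdown  # pip install markdown


def build(src: str, out: str = "site") -> None:
    out_dir = Path(out)
    out_dir.mkdir(exist_ok=True)
    for md_file in Path(src).glob("*.md"):
        html = markdown.markdown(md_file.read_text(encoding="utf-8"))
        page = f"<!doctype html><html><body>{html}</body></html>"
        (out_dir / f"{md_file.stem}.html").write_text(page, encoding="utf-8")


if __name__ == "__main__":
    build(".")  # run inside a directory full of markdown files
```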

Show HN: Trench – Open-source analytics infrastructure

Hey HN! I want to share a new open source project I've been working on called Trench (https://trench.dev). It's open source analytics infrastructure for tracking events, page views, and identifying users, and it's built on top of ClickHouse and Kafka.

https://github.com/frigadehq/trench

I built Trench because the Postgres table we used for tracking events at our startup (http://frigade.com/) was getting expensive and becoming a performance bottleneck as we scaled to millions of end users.

Many companies run into the same problem as us (e.g. Stripe, Heroku: https://brandur.org/fragments/events). They often start by adding a basic events table to their relational database, which works at first but can become an issue as the application scales. It's usually the biggest table in the database, the slowest one to query, and the longest one to back up.

With Trench, we've put together a single Docker image that gives you a production-ready tracking event table built for scale and speed. When we migrated our tracking table from Postgres to Trench, we saw a 42% reduction in cost to serve on our primary Postgres cluster, and the lag spikes from autoscaling under high traffic were eliminated.

Here are some of the core features:

* Fully compliant with the Segment tracking spec, e.g. track(), identify(), group(), etc.
* Can handle thousands of events per second on a single node
* Query tracking data in real time with read-after-write guarantees
* Send data anywhere with throttled and batched webhooks
* Single production-ready Docker image; no need to manage and roll your own Kafka/ClickHouse/Node.js/etc.
* Easily plugs into any cloud-hosted ClickHouse and Kafka solutions, e.g. ClickHouse Cloud, Confluent

Trench can be used for a range of use cases. Here are some possibilities:

1. Real-time monitoring and alerting: set up real-time alerts and monitoring for your services by tracking custom events like errors, usage spikes, or specific user actions, and send that data anywhere with Trench's webhooks
2. Event replay and debugging: capture all user interactions in real time for event replay
3. A/B testing platform: capture events from different users and groups in real time, segment users by querying in real time, and serve the right experiences to the right users
4. Product analytics for SaaS applications: embed Trench into your existing SaaS product to power user audit logs or tracking scripts on your end users' websites
5. Build a custom RAG model: easily query event data and give users answers in real time; LLMs are really good at writing SQL

The project is open source and MIT-licensed. If there's interest, we're thinking about adding support for Elasticsearch, direct data integrations (e.g. Redshift, S3, etc.), and an admin interface for creating queries, webhooks, etc.

Have you experienced the same issues with your events tables? I'd love to hear what HN thinks about the project.
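
Because Trench follows the Segment tracking spec, recording an event is essentially one HTTP call with a Segment-style payload. Below is a rough sketch of what that might look like against a local deployment; the endpoint path, port, and auth header are assumptions for illustration, not Trench's documented API.

```python
# Rough sketch of sending a Segment-style track() event to a Trench instance.
# Endpoint path, port, and auth scheme are assumptions, not Trench's docs.
import json
import urllib.request

TRENCH_URL = "http://localhost:4000/events"  # assumed local deployment
API_KEY = "public-key"                       # assumed auth scheme

event = {
    "type": "track",
    "userId": "user-123",
    "event": "Signup Completed",
    "properties": {"plan": "pro"},
}

req = urllib.request.Request(
    TRENCH_URL,
    data=json.dumps({"events": [event]}).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```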

Show HN: Ezcrypt – A file encryption tool (simple, strong, public domain)

Show HN: A text-only blog engine using Cloudflare workers and KV store

Show HN: Mdx – Execute your Markdown code blocks, now in Go

Hey HN! I recently came across makedown here on HN and loved the concept. Wanting to learn Go, I thought this could be a great starter project, so I started working on my own Go implementation, which I'm calling mdx (https://github.com/dim0x69/mdx).

Key features:

- Define dependencies between commands
- Supports shebangs
- Ability to pass arguments to code blocks

Would love feedback and thoughts!

Ref. makedown: https://github.com/tzador/makedown. Thanks for the idea! :)
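
To make the core idea concrete, here is a minimal Python sketch of what tools like mdx and makedown do at heart: find fenced code blocks in a Markdown file and run them with the right interpreter. This is only an illustration of the technique, not mdx's actual Go implementation, and it skips features like dependencies and argument passing.

```python
# Illustration of the general idea: extract fenced code blocks from a
# Markdown file and execute them. Not mdx's Go code; no dependency handling.
import re
import subprocess
import sys
import tempfile

FENCE = "`" * 3  # a triple-backtick code-fence delimiter
BLOCK = re.compile(FENCE + r"(\w+)\n(.*?)" + FENCE, re.DOTALL)


def run_blocks(md_path: str) -> None:
    text = open(md_path, encoding="utf-8").read()
    for lang, body in BLOCK.findall(text):
        if lang in ("sh", "bash"):
            interpreter = ["bash"]
        elif lang == "python":
            interpreter = [sys.executable]
        else:
            continue  # skip languages this sketch doesn't know how to run
        with tempfile.NamedTemporaryFile("w", suffix="." + lang, delete=False) as f:
            f.write(body)
        subprocess.run(interpreter + [f.name], check=True)


if __name__ == "__main__":
    run_blocks(sys.argv[1])
```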

Show HN: Desktop Sandbox for Secure Cloud Computer Use

Show HN: Rust based AWS Lambda Logs Viewer (TUI)

My first Rust program, written with help from Cursor :). A few days for the idea, and one night of "talk and fight" with Cursor to get it working.
