The best Show HN stories from Hacker News from the past day


Latest posts:

Show HN: Build a personal knowledge graph from the content you consume online

I've been working on a tool that lets you build a personal knowledge graph from articles, blog posts, podcasts, YouTube videos, and other content you find interesting online. You can safely forget everything and trust that Recall will resurface it when something related comes up. Looking forward to your thoughts and feedback on how it could be improved!

The original version of Recall was posted on HN last November: https://news.ycombinator.com/item?id=33425947

Since then I have pivoted to a browser extension.

Show HN: Tremor 3.0 – Open-source library to build dashboards fast

Time to remove the Beta label. Tremor v3 is here, adding:

- Global theming via tailwind.config.js
- An out-of-the-box dark mode
- A new Tremor CLI helping you set up projects faster

Show HN: Private, text to entity-relationship diagram tool

Show HN: Keep – Create production alerts from plain English

Hi Hacker News! Shahar and Tal from Keep here.

We were tired of creating alerts for our applications, so we've built an open-source GitHub bot that lets you write application alerts in plain English. The code is open source, so you can review it yourself: https://github.com/keephq/keep

Every developer and DevOps professional knows that to make sure your application works in production, you need to open your observability tool's user interface (Grafana, Datadog, New Relic, etc.) and carefully work out how to create alerts that effectively monitor your application.

Instead, with Keep installed, every time you open a PR the bot combines your alert descriptions (alerts under the .keep directory) with the tool's context (mostly the configuration of the alerts you already have) and uses GPT to generate new alerts that keep you monitored.

So, for example, if you create a .keep/db-timeout.yaml and open a PR, the bot will comment on the PR with the actual alert you can deploy to your tool:

  # The alert text in plain English
  alert: |
    Alert when the connections to the database are slower
    than 5 seconds for more than 5 minutes
  provider: grafana

You can install the bot and connect your providers via https://platform.keephq.dev (after login, you'll start the installation flow), or just clone the repository and use docker-compose to start the web app and the installation flow.

Demo video: https://www.loom.com/share/23541a03944c4dca99b0504a1753d1b4

Show HN: Ezno, a TypeScript checker written in Rust, is now open source

Show HN: Arroyo – Write SQL on streaming data

Hey HN,

Arroyo is a modern, open-source stream processing engine that lets anyone run complex queries on event streams just by writing SQL: windowing, aggregating, and joining events with sub-second latency.

Today, data processing typically happens in batch data warehouses like BigQuery and Snowflake, even though most of the data arrives as streams. Data teams have to build complex orchestration systems to handle late-arriving data and job failures while trying to minimize latency. Stream processing offers an alternative approach: the query is compiled into a streaming program that updates continuously as new data comes in, providing low-latency results as soon as the data is available.

I started the Arroyo project after spending the past five years building real-time platforms at Lyft and Splunk. I saw firsthand how hard it is for users to build correct, reliable pipelines on top of existing systems like Flink and Spark Streaming, and how hard those pipelines are for infra teams to operate. I saw the need for a new system that would be easy enough for any data team to adopt, built on modern foundations and with the lessons of the past decade of research and industry development.

Arroyo works by compiling SQL queries into an optimized streaming dataflow program: a distributed DAG of computation whose nodes read from sources (like Kafka), perform stateful computations, and eventually write results to sinks. That state is consistently snapshotted using a variation of the Chandy-Lamport checkpointing algorithm, for fault tolerance and to enable fast rescaling and updates of the pipelines. The entire system is easy to self-host on Kubernetes and Nomad.

See it in action here: https://www.youtube.com/watch?v=X1Nv0gQy9TA, or follow the getting-started guide (https://doc.arroyo.dev/getting-started) to run it locally.
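To make the windowing idea concrete, here is a sketch of the kind of query such an engine runs. The table and column names are hypothetical, and window syntax varies by engine; check Arroyo's docs for its exact form.

```sql
-- Hypothetical example: count page views per user in 1-minute
-- tumbling windows over a Kafka-backed stream of events.
SELECT
    user_id,
    tumble(interval '1 minute') AS window,
    count(*) AS views
FROM page_views
GROUP BY user_id, window;
```

The engine turns a query like this into a long-running dataflow that emits one row per user per window as each window closes, rather than recomputing over the whole table in batch.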
