The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: SHAllenge – Compete to get the lowest hash

I've always had an appreciation for the properties of hashing. I especially like the concept of Proof of Work, so I tried to make a little competition out of it.
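
The post doesn't include code, but the proof-of-work idea it describes can be sketched with nothing more than hashlib: brute-force nonces and keep the lowest SHA-256 digest found. The username/nonce submission format below is an assumption for illustration, not necessarily the site's exact rules.

    import hashlib
    import itertools

    # Minimal proof-of-work style search: try nonce after nonce and keep the
    # candidate whose SHA-256 digest is lexicographically lowest, stopping once
    # it starts with the requested number of leading zero hex digits.
    # The "username/nonce" input format is an assumption, not SHAllenge's spec.
    def search_low_hash(username: str, target_zero_hex: int = 5) -> tuple[str, str]:
        best_digest = "f" * 64
        best_input = ""
        for nonce in itertools.count():
            candidate = f"{username}/{nonce}"
            digest = hashlib.sha256(candidate.encode()).hexdigest()
            if digest < best_digest:
                best_digest, best_input = digest, candidate
                if digest.startswith("0" * target_zero_hex):
                    return best_input, best_digest

    if __name__ == "__main__":
        winner, digest = search_low_hash("hn_user")
        print(winner, digest)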

Show HN: Pathway – Build Mission Critical ETL and RAG in Python (NATO, F1 Used)

Hi HN data folks,

I am excited to share Pathway, a Python data processing framework we built for ETL and RAG pipelines.

https://github.com/pathwaycom/pathway

We started Pathway to solve event processing for IoT and geospatial indexing. Think freight-train operations in unmapped depots bringing key merchandise from China to Europe. This was not something we could use Flink or Elastic for.

Then we added more connectors for streaming ETL (Kafka, Postgres CDC…), data indexing (yay vectors!), and LLM wrappers for RAG. Today Pathway provides a data indexing layer for live data updates, stateless and stateful data transformations over streams, and retrieval of structured and unstructured data.

Pathway ships with a Python API and a Rust runtime based on Differential Dataflow to perform incremental computation. The entire pipeline is kept in memory and can be easily deployed with Docker and Kubernetes (pipelines-as-code).

We built Pathway to help enterprises like F1 teams and NATO build mission-critical data pipelines. We do this by putting security and performance first. For example, you can build and deploy self-hosted RAG pipelines with local LLM models and Pathway’s in-memory vector index, so no data ever leaves your infrastructure. Pathway connectors and transformations work with live data by default, so you can avoid expensive reprocessing and rely on fresh data.

You can install Pathway with pip and Docker, and get started with templates and notebooks: https://pathway.com/developers/showcases

We also host demo RAG pipelines implemented 100% in Pathway; feel free to interact with their API endpoints: https://pathway.com/solutions/rag-pipelines#try-it-out

We'd love to hear what you think of Pathway!
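
As a rough sketch of what the streaming ETL side of this looks like in Pathway's Python API, something along the following lines should be close; the connector names, schema style, and signatures here are assumptions recalled from the project's public docs and should be verified against the links above.

    # Rough sketch of a streaming ETL pipeline in Pathway's Python API.
    # Connector names, schema style, and signatures are assumptions drawn from
    # the project's public docs; verify against https://pathway.com/developers/.
    import pathway as pw

    class Reading(pw.Schema):
        sensor_id: str
        value: float

    # Read rows as they arrive, aggregate per sensor, and write the results out.
    readings = pw.io.csv.read("./readings/", schema=Reading, mode="streaming")
    totals = readings.groupby(pw.this.sensor_id).reduce(
        pw.this.sensor_id,
        total=pw.reducers.sum(pw.this.value),
    )
    pw.io.csv.write(totals, "./totals.csv")

    # Launches the Rust runtime; outputs update incrementally as new data arrives.
    pw.run()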

Show HN: Paramount – Human Evals of AI Customer Support

Hey HN, Hakim here from Fini (YC S22), a startup focused on providing automated customer support bots for enterprises that have a high volume of support requests.

Today, one of the largest use cases of LLMs is automating support. As the space has evolved over the past year, the need for evaluations of LLM outputs has grown, and a sea of LLM evals packages has been released. "LLM evals" refers to the evaluation of large language models: assessing how well these AI systems understand and generate human-like text. These packages have mostly relied on "automatic evals," where algorithms (usually another LLM) automatically test and score AI responses without human intervention.

In our day-to-day work, we have found that automatic evals are not enough to reach the 95% accuracy our enterprise customers require. Automatic evals are efficient, but they often miss nuances that only human expertise can catch. They can never replace the feedback of a trained human who is deeply knowledgeable about an organization's latest product releases, knowledge base, policies, and support issues. The key to solving this is to stop ignoring the business side of the problem and start involving knowledgeable experts in the evaluation process.

That is why we are releasing Paramount, an open-source package that incorporates human feedback directly into the evaluation process. By simplifying the step of gathering feedback, ML engineers can pinpoint and fix accuracy issues (prompts, knowledge-base issues) much faster. Paramount provides a framework for recording LLM function outputs (ground-truth data) and facilitates human agent evaluations through a simple UI, reducing the time to identify and correct errors.

Developers integrate Paramount with a Python decorator that logs LLM interactions into a database, followed by a straightforward UI for expert review. This process aids the debugging and validation phase of launching accurate support bots. We'd love to hear what you think!
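
The decorator-plus-database workflow described above is easy to picture with a generic sketch; the names below (record_llm_call, the SQLite table) are hypothetical illustrations of the pattern, not Paramount's actual API.

    # Generic sketch of the decorator-logging pattern described above: wrap an
    # LLM-calling function so every call's inputs and output are recorded for
    # later human review. Names here are hypothetical, not Paramount's API.
    import functools
    import json
    import sqlite3
    import time

    conn = sqlite3.connect("llm_ground_truth.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS llm_calls "
        "(ts REAL, function TEXT, inputs TEXT, output TEXT, human_verdict TEXT)"
    )

    def record_llm_call(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            output = fn(*args, **kwargs)
            conn.execute(
                "INSERT INTO llm_calls VALUES (?, ?, ?, ?, NULL)",
                (
                    time.time(),
                    fn.__name__,
                    json.dumps({"args": args, "kwargs": kwargs}, default=str),
                    str(output),
                ),
            )
            conn.commit()
            return output
        return wrapper

    @record_llm_call
    def answer_support_ticket(question: str) -> str:
        # Call your LLM of choice here; stubbed out for the sketch.
        return f"(model answer to: {question})"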

Show HN: XDeck – An ad-blocking client app for macOS, like TweetDeck

Hi everyone,

XDeck is a client app for macOS, a TweetDeck alternative with ad blocking!

I developed this for myself after feeling disappointed that TweetDeck has become a paid service.

I hope you find it useful too.

Freenet 2024 – a drop-in decentralized replacement for the web [video]

Show HN: Shpool, a Lightweight Tmux Alternative

shpool is a terminal session persistence tool developed internally at Google to support remote workflows, which we have now open-sourced.

Show HN: Collaborative ASCII Drawing with Telnet

Show HN: Semantic clusters and embeddings for 500k Hacker News comments
