The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: R2R V2 – An open source RAG engine with prod features
Hi HN! We're building R2R (https://github.com/SciPhi-AI/R2R), an open source RAG answer engine built on top of Postgres + Neo4j. The best way to get started is with the docs: https://r2r-docs.sciphi.ai/introduction

This is a major update from our V1, which we have spent the last 3 months intensely building after getting a ton of great feedback from our first Show HN (https://news.ycombinator.com/item?id=39510874). We changed our focus to building a RAG engine instead of a framework, because this is what developers asked for the most. To us this distinction meant working on an opinionated system instead of layers of abstractions over providers. We built features for multimodal data ingestion, hybrid search with reranking, advanced RAG techniques (e.g. HyDE), and automatic knowledge graph construction, alongside the original goal of an observable RAG system built on top of a RESTful API that we shared back in February.

What's the problem? Developers are struggling to build accurate, reliable RAG solutions. Popular tools like LangChain are complex and overly abstracted, and lack crucial production features such as user/document management, observability, and a default API. There was a big thread about this a few days ago: "Why we no longer use LangChain for building our AI agents" (https://news.ycombinator.com/item?id=40739982)

We experienced these challenges firsthand while building a large-scale semantic search engine, where users reported numerous hallucinations and inaccuracies. This highlighted that search+RAG is a difficult problem. We're convinced that these missing features, and more, are essential to effectively monitor and improve such systems over time.

Teams have been using R2R to develop custom AI agents with their own data, with applications ranging from B2B lead generation to research assistants. Best of all, the developer experience is much improved. For example, we have recently seen multiple teams use R2R to deploy a user-facing RAG engine for their application within a day. By day 2, some of these same teams were using their generated logs to tune the system with advanced features like hybrid search and HyDE.

Here are a few examples of how R2R can outperform classic RAG with semantic search only:

1. "What were the UK's top exports in 2023?" R2R with hybrid search can identify documents mentioning "UK exports" and "2023", whereas semantic search alone finds related concepts like trade balance and economic reports.

2. "List all YC founders that worked at Google and now have an AI startup." Our knowledge graph feature allows R2R to understand relationships between employees and projects, answering a query that would be challenging for simple vector search.

The built-in observability and customizability of R2R help you tune and improve your system long after launching. Our plan is to keep the API ~fixed while we iterate on the internal system logic, making it easier for developers to trust R2R for production from day 1.

We are currently working on the following: (1) improving semantic chunking through third-party providers or our own custom LLMs; (2) training a custom model for knowledge graph triple extraction that will allow KG construction to be 10x more efficient (this is in private beta, please reach out if interested!); (3) handling permissions at a more granular level than just a single user; (4) adding LLM-powered online evaluation of system performance, plus enhanced analytics and metrics.

Getting started is easy. R2R is a lightweight repository that you can install locally with `pip install r2r`, or run with Docker. Check out our quickstart guide: https://r2r-docs.sciphi.ai/quickstart. Lastly, if it interests you, we are also working on a cloud solution at https://sciphi.ai

Thanks a lot for taking the time to read! The feedback from the first Show HN was invaluable and gave us our direction for the last three months, so we'd love to hear any more comments you have!
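For readers curious what hybrid search buys over pure semantic search in the "UK exports in 2023" example above, here is a minimal, generic sketch of reciprocal rank fusion, one common way to combine a keyword ranking with a vector-similarity ranking. This is an illustration of the general technique only, not R2R's actual implementation; the placeholder search results below are made up.

```
# Illustrative sketch of hybrid search via reciprocal rank fusion (RRF).
# The two input rankings stand in for a keyword (BM25-style) pass and a
# semantic (embedding) pass; neither function nor any name here is an R2R API.

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of doc ids into one ranking.

    A document's fused score is the sum of 1 / (k + rank) over every list
    it appears in, so hits that rank well under either keyword or semantic
    search float to the top.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# "UK exports 2023" benefits because the keyword pass pins the exact terms
# while the semantic pass still surfaces related trade/economy documents.
keyword_hits = ["doc_uk_exports_2023", "doc_trade_stats_2023", "doc_budget_2023"]
semantic_hits = ["doc_trade_balance", "doc_uk_exports_2023", "doc_economy_overview"]
print(reciprocal_rank_fusion([keyword_hits, semantic_hits]))
```

Fused results would then typically be passed through a reranker before generation; the exact fusion and reranking R2R uses may differ from plain RRF.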
Show HN: FiddleCube – Generate Q&A to test your LLM
Convert your vector embeddings into a set of questions and their ideal responses. Use this dataset to test your LLM and catch failures caused by prompt or RAG updates.

Get started in 3 lines of code:

```
pip3 install fiddlecube
```

```
from fiddlecube import FiddleCube

fc = FiddleCube(api_key="<api-key>")
dataset = fc.generate(
    [
        "The cat did not want to be petted.",
        "The cat was not happy with the owner's behavior.",
    ],
    10,
)
dataset
```

Generate your API key: https://dashboard.fiddlecube.ai/api-key

# Ideal QnA datasets for testing, eval and training LLMs

Testing, evaluating, or training LLMs requires an ideal QnA dataset, aka the golden dataset.

This dataset needs to be diverse, covering a wide range of queries with accurate responses.

Creating such a dataset takes significant manual effort.

As the prompt or RAG context is updated (which is nearly all the time for early applications), the dataset needs to be updated to match.

# FiddleCube generates ideal QnA from vector embeddings

- The questions cover the entire RAG knowledge corpus.
- Complex reasoning, safety alignment, and 5 other question types are generated.
- Filtered for correctness, context relevance, and style.
- Auto-updated with prompt and RAG updates.
Show HN: Qq: like jq, but can transcode between many formats
qq is a jq-inspired, interoperable config-format transcoder with interactive querying. It features an optional interactive editor with autocomplete for structured data, and supports inputs and outputs for JSON, XML, INI, TOML, YAML, HCL, TF, and CSV to varying degrees of capability.
Show HN: Triplit – Open-source syncing database that runs on server and client
Hey HN, we're Matt and Will, the co-founders of Triplit (https://www.triplit.dev). Triplit is an open-source database (https://github.com/aspen-cloud/triplit) that combines a server-side database, client-side cache, and a sync engine into one cohesive product. You can try it out with a new project by running:

```
(npm|bun|yarn) create triplit-app
```

As a team, we've worked on several projects that aspired to the user experience of Linear or Superhuman, where every interaction feels instant like a native app while still having the collaborative and syncing features we expect from the web. Delivering this level of UX was incredibly challenging. In each app we built, we had to implement a local caching strategy, keep the cache up to date with optimistic writes, individually handle retries and rollbacks from failures, and do a lot of codegen to get TypeScript to work properly. This was spread across multiple libraries and infrastructure providers and required constant maintenance.

We finally decided to build the system we always wanted. Triplit enables your app to work offline and sync in real time over WebSockets, with an enjoyable developer experience.

Triplit lets you (1) define your schema in TypeScript and simply push to the server without writing migration files; (2) write queries that automatically update in real time to both remote changes from the server and optimistic local mutations on the client, with complete TypeScript types; (3) run the whole stack locally without having to run a bunch of Docker containers.

One interesting challenge of building a system like this is enabling partial replication and incremental query evaluation. In order to make loading times as fast as possible, Triplit will only fetch the minimal required data from the server to fulfill a specific query and then send granular updates to that client. This differs from other systems, which either sync all of a user's data (too slow for web apps) or repeatedly fetch the query to simulate a subscription (which bogs down your database and network bandwidth).

If you're familiar with the complexity of cache invalidation and syncing, you'll know that Triplit is operating firmly in the distributed systems space. We did a lot of research and settled on a local-first approach that uses a fairly simple CRDT (conflict-free replicated data type) that allows each client to work offline and guarantees that clients will converge to a consistent state when syncing. It works by treating each attribute of an entity as a last-writer-wins register. Compared to more complex strategies, this approach ends up being faster and doesn't require additional logic to handle conflicting edits between concurrent writers. It's similar to the strategy Figma uses for their collaborative editor.

You can add Triplit to an existing project by installing the client NPM package. You may self-host the Triplit server or pay us to manage an instance for you. One cool part is that whether you choose to self-host or deploy on Triplit Cloud, you can still use our dashboard to configure your database or interactively manage your data in the Triplit Console, a spreadsheet-like GUI.

In the future, we plan to add APIs for authentication, file uploads, and presence to create a Supabase/Firebase-like experience.

You can get started by going to https://triplit.dev or find us on GitHub at https://github.com/aspen-cloud/triplit. Thanks for checking us out, and we are looking forward to your feedback in the comments!
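For readers unfamiliar with last-writer-wins registers, here is a tiny, generic Python sketch of the per-attribute merge rule described above. It only illustrates the general CRDT idea; Triplit's actual storage format, clocks, and tie-breaking are more involved, and none of the names below are Triplit APIs.

```
# Minimal last-writer-wins (LWW) register merge, one register per attribute.
# Each write carries a logical timestamp plus a replica id used to break ties,
# so any two replicas that exchange writes converge to the same value.

from dataclasses import dataclass


@dataclass(frozen=True)
class Write:
    value: object
    timestamp: int      # e.g. a Lamport / hybrid logical clock tick
    replica_id: str     # deterministic tie-breaker for concurrent writes


def merge(a: Write, b: Write) -> Write:
    """Pick the 'later' write; ties on timestamp fall back to replica id."""
    return max(a, b, key=lambda w: (w.timestamp, w.replica_id))


def merge_entity(local: dict[str, Write], remote: dict[str, Write]) -> dict[str, Write]:
    """Merge two copies of an entity attribute by attribute."""
    merged = dict(local)
    for attr, write in remote.items():
        merged[attr] = merge(merged[attr], write) if attr in merged else write
    return merged


# Two offline clients edit the same todo item; after syncing, both sides
# converge on the same merged entity regardless of merge order.
client_a = {"title": Write("Buy milk", 3, "a"), "done": Write(False, 1, "a")}
client_b = {"done": Write(True, 4, "b")}
print(merge_entity(client_a, client_b))  # title from A, done=True from B
```

Because each attribute merges independently, concurrent edits to different fields of the same entity never clobber each other, which is the property the post relies on for offline-first sync.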
Show HN: Glasskube – Open Source Kubernetes Package Manager, alternative to Helm
Hello HN, we're Philip and Louis from Glasskube (https://github.com/glasskube/glasskube). We're working on an open-source package manager for Kubernetes. It's an alternative to tools like Helm or Kustomize, primarily focused on making deploying, updating, and configuring Kubernetes packages simpler and a lot faster. Here is a demo video (https://www.youtube.com/watch?v=aIeTHGWsG2c#t=17s) with quick start instructions.

Most developers working with Kubernetes use Helm, an open-source tool created during a hackathon nine years ago. However, with the rapid growth of Kubernetes packages to over 800 packages on the CNCF landscape today, the prerequisites have changed, and we believe it's time for a new package manager. Every engineer we talked to has a love-hate relationship with Helm, and we also found ourselves defaulting to Helm despite its shortcomings due to a lack of alternatives.

We have spent enough time trying to get Helm to do what we need. From looking for the correct chart, trying to learn how each value affects the components, and hand-crafting a schemaless values.yaml file, to debugging the final release when it inevitably fails to install, the experience of using Helm is, for the most part, time-consuming and cumbersome.

Charts often become more complex, requiring the use of sub-charts. These umbrella charts tend to be even harder to maintain and upgrade, because so many different components are bundled into a single release.

We talked to over 100 developers and found that everyone developed their own little workarounds, some working better than others. We collected the feedback and poured everything we learned into a new package manager. We want to build something that is as easy to use as Homebrew or npm and make package management on Kubernetes as easy as on every other platform.

Some of the features Glasskube already supports:

- Type-safe package configuration via UI or interactive CLI to inject values from other packages, ConfigMaps, and Secrets.
- Browse our central package repository, so there is no need to look for a Helm repository to find a specific package.
- All packages are dependency-aware, so they can be used and referenced by multiple other packages, even across namespaces. We validate the complete dependency tree, so packages get installed in the correct namespace.
- Preview and perform pending updates to your desired version with a single click of a button. All updates have been tested in the Glasskube test suite before being made available in the public repository.
- Use multiple repositories and publish your own private packages (e.g., your company's internal services packages, so all developers have up-to-date and easily configured internal services).
- All features are available via UI or interactive CLI. You can also manage all packages via GitOps.

Currently, we are focused on enhancing the user experience, aiming to save engineers as much time as possible. We are still using Helm and manifests under the hood. However, together with the community, we plan to develop an entirely new packaging and bundling format for all cloud-native packages. This will provide package developers with a straightforward way to define how to install and configure packages, offer simple upgrade paths, and enable us to provide feedback, crash reports, and analytics to every developer working on Kubernetes packages.

We also started working on a cloud version. You can pre-sign up here in case you are interested: https://glasskube.cloud

We'd greatly appreciate any feedback you have and hope you get the chance to try out Glasskube.
Show HN: From dotenv to dotenvx – better config management
Show HN: I built a JavaScript-powered flipdisc display
Show HN: I made an AI finance tracker that lets you chat with your wallet
Show HN: I Built a Tool to Break Free from YouTube's Addictive Algorithm
I built Watchlist to solve a problem that's been nagging at me (and I suspect many others): YouTube's addictive nature and its impact on productivity.

The Problem:

YouTube is an incredible source of knowledge, but its homepage is an unending scroll of rabbit holes. The algorithm is designed to maximize watch time, often at the expense of our intentions and productivity. I found myself wasting hours, jumping from video to video, and then blaming myself for the lack of self-control.

The Solution:

Watchlist! Watchlist is essentially YouTube playlists on steroids. Here's how it works:

1. Create custom "lists" for different topics (e.g., AI, Science, Programming)
2. Add relevant channels to each list
3. Watchlist automatically adds new uploads from these channels to your lists
4. Set custom notification schedules for each list (e.g., every morning at 8 AM, every Sunday at 5 PM)
5. Receive email or push notifications (your choice) when there are new unwatched videos

The result? You stay updated on the content you care about without falling into the YouTube homepage trap.

Tech Stack:

Built with Python, JavaScript, Supabase, and Google Cloud Run. (Took six weeks.)

Why I Built This:

As an electrical engineer turned software developer, I've always been fascinated by programming. This project combines my love for coding with a real-world problem I've experienced firsthand.

Try It Out:

To make it as easy as possible for you to try Watchlist, I've set up a 1-click dummy login: http://watchlist.so/login/dummy

This generates a dummy email and password for you, valid for 14 days. Although this account has full access, please note some restrictions:

- You cannot reset the password
- Email notifications are disabled
- Data cannot be migrated to another account
- The account will be deactivated after 14 days

For those who prefer a proper account, you can register normally at: https://watchlist.so/signup

Feedback:

I've seen many people struggle with YouTube addiction without realizing the root cause. If you've faced similar issues or have thoughts on this approach, I'd love to hear your feedback. Your thoughts and suggestions are crucial for improving Watchlist. There's a dedicated feedback page at: https://watchlist.so/settings/feedback

I'm eager to hear your experiences and ideas! Thanks!

PS: This is my first Show HN, so please excuse me if I made any mistakes. I tried my best to follow the guidelines.
Show HN: Store Text in Minesweeper
Show HN: Feedback on Sketch Colourisation
Hi,

I am looking for some feedback on our new project, "Sketch Colourisation". The envisioned UI and objectives are:

* An artist should have greater control over how to colour a sketch. While a text-to-image model lacks this fine-grained control, a per-pixel colourisation pipeline makes sketch colourisation a laborious process with a high entry barrier.

* What if an artist only draws a mask for a local region and specifies the colour palette for that local region? Then a neural network figures out how to colour the overall sketch while respecting those local colour palettes.

[I would really like feedback on whether the above UI (i.e., mask and local colour palette) makes sense to users/designers. As researchers, we often have the wrong idea of what end users actually want.]

* On the exact implementation of the above concept, we designed a training-free neural network framework and made sure it runs on an Nvidia 4090. In other words, we try to avoid any expensive training or inference, which would defeat the purpose of being useful to people (not just some research labs).

* Note, I am not so bothered about the exact implementation (or whether it is "novel"), as long as it is useful.

* A shameless advertisement: the codebase (https://github.com/CHAITron/sketchdeco-code.git) is MIT licensed. It is nowhere near being useful to people yet, but I would really like to pursue this direction, and your feedback/criticism will be immensely helpful.

Thanks