The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: Voice bots with 500ms response times

Last year when GPT-4 was released I started making lots of little voice + LLM experiments. Voice interfaces are fun; there are several interesting new problem spaces to explore.

I'm convinced that voice is going to be a bigger and bigger part of how we all interact with generative AI. But one thing that's hard, today, is building voice bots that respond as quickly as humans do in conversation. A 500ms voice-to-voice response time is just *barely* possible with today's AI models.

You can get down to 500ms if you: host transcription, LLM inference, and voice generation all together in one place; are careful about how you route and pipeline all the data; and the gods of both wifi and vram caching smile on you.

Here's a demo of a 500ms-capable voice bot, plus a container you can deploy to run it yourself on an A10/A100/H100 if you want to: https://fastvoiceagent.cerebrium.ai/

We've been collecting lots of metrics. Here are typical numbers (in milliseconds) for all the easily measurable parts of the voice-to-voice response cycle:

    macOS mic input                  40
    opus encoding                    30
    network stack and transit        10
    packet handling                   2
    jitter buffer                    40
    opus decoding                    30
    transcription and endpointing   200
    llm ttfb                        100
    sentence aggregation            100
    tts ttfb                         80
    opus encoding                    30
    packet handling                   2
    network stack and transit        10
    jitter buffer                    40
    opus decoding                    30
    macOS speaker output             15
    ----------------------------------
    total ms                        759

Everything in AI is changing all the time. LLMs with native audio input and output capabilities will likely make it easier to build fast-responding voice bots soon. But for the moment, I think this is the fastest possible approach/tech stack.
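The budget above can be tallied programmatically. A minimal sketch (the stage names and numbers come from the post's table; everything else is illustrative):

```python
# Typical per-stage latencies (ms) for one voice-to-voice turn,
# as listed in the post. Summing them shows where the time goes.
PIPELINE_MS = [
    ("macOS mic input", 40),
    ("opus encoding", 30),
    ("network stack and transit", 10),
    ("packet handling", 2),
    ("jitter buffer", 40),
    ("opus decoding", 30),
    ("transcription and endpointing", 200),
    ("llm ttfb", 100),
    ("sentence aggregation", 100),
    ("tts ttfb", 80),
    ("opus encoding", 30),
    ("packet handling", 2),
    ("network stack and transit", 10),
    ("jitter buffer", 40),
    ("opus decoding", 30),
    ("macOS speaker output", 15),
]

total = sum(ms for _, ms in PIPELINE_MS)
print(total)  # 759

# The three model stages dominate the budget; shaving them is where
# colocating transcription, LLM, and TTS earns its keep.
model_ms = sum(
    ms for name, ms in PIPELINE_MS
    if name in ("transcription and endpointing", "llm ttfb", "tts ttfb")
)
print(model_ms)  # 380
```

Half the 759ms total sits in just three model stages, which is why the post focuses on hosting them together.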

Show HN: I built an indie, browser-based MMORPG

I've been working on an MMORPG that is now in alpha as a solo developer.

Here are the major open source technologies that I use:

Blender - 3D modeling software for creating the overall environment and every game object. I've gotten a lot of CC and Public Domain assets from https://poly.pizza

GLTF - I export assets from Blender to the GLTF asset format.

JSON - I write a JSON config for every game object that describes things like its name, its interactions, its collisions, etc.

Node.js exporter - I iterate over the environment and every asset to create a scene hierarchy. I use gltf-transform for processing all GLTF files, compressing them, removing redundancies, etc.

Node.js server - Uses express and socket.io to process game state updates. It keeps track of every client's game state and issues deltas at each game tick (currently 600ms). The client can send interactions with different objects. The server validates those and updates the game state accordingly.

HTML/CSS/JavaScript/Three.js client - I use regular web technologies for the UI elements and three.js for the 3D rendering in the browser. The client is responsible for rendering the world state and presenting the available interactions to the player. All code is written in JavaScript, which means less context switching. Performance seems to be good enough, and I figure I can always optimize the server code in C++ if necessary.

I am currently running two cheap shared instances, but based on my testing they can likely support about 200 users each. This is a low-poly, browser-based game, so it should be compatible across many devices. The data a user needs to download to play, including all 3D assets, is approximately 2 MB, even though there are hundreds of assets.

Overall, it's been a fun project. Web development and open source software have progressed to the point that this is no longer an incredibly difficult feat. I feel like development is going pretty well, and in a year or so there will be plenty of good content to play.
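The per-tick delta scheme the server uses can be sketched as follows. This is a toy illustration of the idea, not the game's actual code; the field names are invented:

```python
def state_delta(previous, current):
    """Return only the fields of `current` that changed since `previous`.

    Sending this delta each tick, instead of the full game state,
    is what keeps per-tick bandwidth small.
    """
    delta = {}
    for key, value in current.items():
        if previous.get(key) != value:
            delta[key] = value
    # Explicitly mark entities that disappeared this tick.
    for key in previous:
        if key not in current:
            delta[key] = None
    return delta

# One tick: the player moved, the torch is unchanged, the rat despawned.
prev = {"player_pos": (10, 4), "torch_lit": True, "rat_hp": 3}
curr = {"player_pos": (11, 4), "torch_lit": True}
print(state_delta(prev, curr))  # {'player_pos': (11, 4), 'rat_hp': None}
```

At a 600ms tick, each client receives only a dictionary like the printed one rather than the whole world state.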

Show HN: Rubbrband – A hosted ComfyUI alternative for image generation

Hey HN! My friends and I built a new platform for generating images. The app is easy to use for people who find ComfyUI hard to use, or who simply don't have a GPU to run it on.

For those not familiar, ComfyUI is a great tool for using open-source models like Stable Diffusion. It's primarily great because it's a node-based tool, which means you can chain together models, upscalers, prompting nodes, etc., which lets you create images in the exact aesthetic you want. There's also a vibrant dev community behind ComfyUI, which means that there are a ton of nodes and customizability.

We're users of Comfy, but there are some major problems we've had with it. First, it runs primarily on your own hardware, so if you don't have beefy GPUs it's not possible to use on your machine. Second, we found that the interface is rather clunky. Lastly, the ecosystem is very fragmented. All extensions and workflows are scattered around GitHub/Reddit/Discord, which means new tools are hard to find and often incompatible with your local installation of Comfy, which is super frustrating.

We built Rubbrband as our own take on an image-generation tool, combining the customizability of ComfyUI with the ease-of-use of something like Midjourney.

Here are the key features:

- Fully hosted as a website
- Use any Stable Diffusion checkpoint or LoRA from CivitAI
- Unlimited image storage
- Over 20 nodes, including SD, ControlNet, masking nodes, GPT-4V, etc.
- Color palette control, image references (IP-Adapter), etc.
- A Playground page, for using workflows in a much simpler interface
- A Community page, for sharing workflows with others

Would love to get your thoughts! You can use the app here: https://rubbrband.com

We're looking to also create an API so that you can create nodes on our platform as well! If you're interested in getting early access, please let me know!
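The node-graph idea behind Comfy-style tools can be sketched in a few lines. This is a generic illustration of chaining nodes into a workflow, not Rubbrband's or ComfyUI's actual API:

```python
# A workflow is a DAG of named nodes; each node is a function plus the
# names of the nodes whose outputs it consumes. Evaluation is memoized
# so a node shared by several downstream nodes runs only once.
def run_workflow(nodes, target):
    cache = {}

    def evaluate(name):
        if name not in cache:
            fn, deps = nodes[name]
            cache[name] = fn(*[evaluate(d) for d in deps])
        return cache[name]

    return evaluate(target)

# Toy "image" pipeline: prompt -> generate -> upscale.
nodes = {
    "prompt":   (lambda: "a red fox", []),
    "generate": (lambda p: f"image({p})", ["prompt"]),
    "upscale":  (lambda img: f"2x({img})", ["generate"]),
}
print(run_workflow(nodes, "upscale"))  # 2x(image(a red fox))
```

Swapping a node (say, a different checkpoint or an extra ControlNet stage) means editing one entry in the graph, which is what makes the node-based approach so customizable.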

Show HN: The Tomb of Nefertari [QV 66] Guided Virtual Tour

I 3D scanned the Tomb of Nefertari and am building this guided virtual tour, trying to bring in photogrammetry of artifacts that I've made at various museums. It still sometimes crashes on mobile devices.

I wasn't able to take photogrammetry photos of the artifacts from the tomb in the Museo Egizio in Turin because they were traveling while I was there (and now the museum is closed to install a new roof anyhow), so I tried to include comparanda from other museums where I had scanned artifacts.

I tested the same dataset with 3D Gaussian Splatting, but that produced lower-resolution results in exchange for greater complexity in frontend code and reduced performance on older devices. [3DGS tour: https://mused.com/en/tours/860/learn-about-3d-gaussian-splatting]

Moving forward, if anyone's working on the same idea: I didn't find a good path to monetization through web-based 3D content, so I'll take the high-resolution photogrammetry of spaces into Unreal and switch to desktop and headset builds.

Show HN: SmokeScanner – Using cigarette price arbitrage to find free flights

Show HN: Find AI – Perplexity Meets LinkedIn

As a founder, finding early customers is always a challenge. I'd come up with specific guesses for people to talk to, such as "VCs that used to be startup founders" or "former lawyers who are now CTOs." Running those types of searches typically involves opening dozens of LinkedIn profiles in tabs and looking at them one by one. And, it turns out, going through LinkedIn profiles one by one is a daily job for many people.

I started building Find AI to make it easier to search for people. I initially started by just having GPT review people's LinkedIn profiles and websites, but it cost thousands of dollars per search (!). The product we're launching today can run the same searches in seconds for pennies.

Find AI is Perplexity-style search over LinkedIn-type data. Ask vague questions, and the AI will go find and analyze people to get you matches.

The results are really impressive. Here are some questions I've used:

- Find potential future founders by looking for tech company PMs who previously started a company
- Find potential chief science officers by looking for PhDs with industry experience who now work at a startup but have never founded a company before
- Find other founders who have a dog and might want my vet app product

The database currently consists of tech companies and people, but we're working to scale up to more people. The data is all first-party and retrieved from public sources.

Our first customers have been VCs, who are using Find AI to keep track of new AI companies. We just launched email alerts on searches, so you can get updates as new companies match your criteria.

Try it out and let me know what you think.

Show HN: R2R V2 – An open source RAG engine with prod features

Hi HN! We're building R2R [https://github.com/SciPhi-AI/R2R], an open source RAG answer engine built on top of Postgres+Neo4j. The best way to get started is with the docs: https://r2r-docs.sciphi.ai/introduction

This is a major update from our V1, which we have spent the last 3 months intensely building after getting a ton of great feedback from our first Show HN (https://news.ycombinator.com/item?id=39510874). We changed our focus to building a RAG engine instead of a framework, because this is what developers asked for the most. To us this distinction meant working on an opinionated system instead of layers of abstractions over providers. We built features for multimodal data ingestion, hybrid search with reranking, advanced RAG techniques (e.g. HyDE), and automatic knowledge graph construction, alongside the original goal of an observable RAG system built on top of a RESTful API that we shared back in February.

What's the problem? Developers are struggling to build accurate, reliable RAG solutions. Popular tools like LangChain are complex, overly abstracted, and lack crucial production features such as user/document management, observability, and a default API. There was a big thread about this a few days ago: "Why we no longer use LangChain for building our AI agents" (https://news.ycombinator.com/item?id=40739982)

We experienced these challenges firsthand while building a large-scale semantic search engine, with users reporting numerous hallucinations and inaccuracies. This highlighted that search+RAG is a difficult problem. We're convinced that these missing features, and more, are essential to effectively monitor and improve such systems over time.

Teams have been using R2R to develop custom AI agents with their own data, with applications ranging from B2B lead generation to research assistants. Best of all, the developer experience is much improved. For example, we have recently seen multiple teams use R2R to deploy a user-facing RAG engine for their application within a day. By day 2, some of these same teams were using their generated logs to tune the system with advanced features like hybrid search and HyDE.

Here are a few examples of how R2R can outperform classic RAG with semantic search only:

1. "What were the UK's top exports in 2023?" R2R with hybrid search can identify documents mentioning "UK exports" and "2023", whereas semantic search alone finds related concepts like trade balance and economic reports.

2. "List all YC founders that worked at Google and now have an AI startup." Our knowledge graph feature allows R2R to understand relationships between employees and projects, answering a query that would be challenging for simple vector search.

The built-in observability and customizability of R2R help you tune and improve your system long after launching. Our plan is to keep the API ~fixed while we iterate on the internal system logic, making it easier for developers to trust R2R for production from day 1.

We are currently working on the following: (1) improving semantic chunking through third-party providers or our own custom LLMs; (2) training a custom model for knowledge graph triples extraction that will allow KG construction to be 10x more efficient (this is in private beta; please reach out if interested!); (3) the ability to handle permissions at a more granular level than just a single user; (4) LLM-powered online evaluation of system performance, plus enhanced analytics and metrics.

Getting started is easy. R2R is a lightweight repository that you can install locally with `pip install r2r`, or run with Docker. Check out our quickstart guide: https://r2r-docs.sciphi.ai/quickstart. Lastly, if it interests you, we are also working on a cloud solution at https://sciphi.ai

Thanks a lot for taking the time to read! The feedback from the first Show HN was invaluable and gave us our direction for the last three months, so we'd love to hear any more comments you have!
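The hybrid-search idea in example 1 is usually implemented by fusing a keyword ranking with a semantic ranking. A minimal sketch using reciprocal rank fusion (RRF), a common fusion method; this is a generic illustration, not R2R's actual implementation:

```python
# Reciprocal rank fusion: each document's fused score is the sum of
# 1/(k + rank) over the rankings it appears in. Documents that rank
# well in either list, and especially in both, float to the top.
def rrf_merge(keyword_ranked, semantic_ranked, k=60):
    scores = {}
    for ranking in (keyword_ranked, semantic_ranked):
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# "UK exports 2023": doc A tops the keyword list and also appears in
# the semantic list, so it wins; C, present in both lists, beats B and
# D, which each appear in only one.
keyword = ["A", "C", "D"]
semantic = ["B", "C", "A"]
print(rrf_merge(keyword, semantic))  # ['A', 'C', 'B', 'D']
```

A reranker, as mentioned in the post, would then rescore this fused shortlist with a cross-encoder before generation.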

Show HN: FiddleCube – Generate Q&A to test your LLM

Convert your vector embeddings into a set of questions and their ideal responses. Use this dataset to test your LLM and catch failures caused by prompt or RAG updates.

Get started in 3 lines of code:

```
pip3 install fiddlecube
```

```
from fiddlecube import FiddleCube

fc = FiddleCube(api_key="<api-key>")
dataset = fc.generate(
    [
        "The cat did not want to be petted.",
        "The cat was not happy with the owner's behavior.",
    ],
    10,
)
dataset
```

Generate your API key: https://dashboard.fiddlecube.ai/api-key

# Ideal QnA datasets for testing, eval and training LLMs

Testing, evaluating, or training LLMs requires an ideal QnA dataset, aka the golden dataset. This dataset needs to be diverse, covering a wide range of queries with accurate responses. Creating such a dataset takes significant manual effort. And as the prompt or RAG contexts are updated, which is nearly all the time for early applications, the dataset needs to be updated to match.

# FiddleCube generates ideal QnA from vector embeddings

- The questions cover the entire RAG knowledge corpus.
- Complex reasoning, safety alignment, and 5 other question types are generated.
- Filtered for correctness, context relevance, and style.
- Auto-updated with prompt and RAG updates.

Show HN: Qq: like jq, but can transcode between many formats

qq is a jq-inspired, interoperable config-format transcoder with interactive querying. It features an optional interactive editor with autocomplete for structured data, and supports inputs and outputs for JSON, XML, INI, TOML, YAML, HCL, TF, and CSV to varying degrees of capability.

Show HN: Triplit – Open-source syncing database that runs on server and client

Hey HN, we’re Matt and Will, the co-founders of Triplit (<a href="https://www.triplit.dev">https://www.triplit.dev</a>). Triplit is an open-source database (<a href="https://github.com/aspen-cloud/triplit">https://github.com/aspen-cloud/triplit</a>) that combines a server-side database, client-side cache, and a sync engine into one cohesive product. You can try it out with a new project by running:<p><pre><code> (npm|bun|yarn) create triplit-app </code></pre> As a team, we’ve worked on several projects that aspired to the user experience of Linear or Superhuman, where every interaction feels instant like a native app while still having the collaborative and syncing features we expect from the web. Delivering this level of UX was incredibly challenging. In each app we built, we had to implement a local caching strategy, keep the cache up to date with optimistic writes, individually handle retries and rollbacks from failures, and do a lot of codegen to get Typescript to work properly. This was spread across multiple libraries and infrastructure providers and required constant maintenance.<p>We finally decided to build the system we always wanted. Triplit enables your app to work offline and sync in real-time over websockets with an enjoyable developer experience.<p>Triplit lets you (1) define your schema in Typescript and simply push to the server without writing migration files; (2) write queries that automatically update in real-time to both remote changes from the server and optimistic local mutations on the client—with complete Typescript types; (3) run the whole stack locally without having to run a bunch of Docker containers.<p>One interesting challenge of building a system like this is enabling partial replication and incremental query evaluation. In order to make loading times as fast as possible, Triplit will only fetch the minimal required data from the server to fulfill a specific query and then send granular updates to that client. 
This differs from other systems that either sync all of a user’s data (too slow for web apps) or repeatedly fetch the query to simulate a subscription (which bogs down your database and network bandwidth).<p>If you’re familiar with the complexity of cache invalidation and syncing, you’ll know that Triplit is operating firmly in the distributed systems space. We did a lot of research and settled on a local-first approach that uses a fairly simple CRDT (conflict-free replicated data type) that allows each client to work offline and guarantees that they will converge to a consistent state when syncing. It works by treating each attribute of an entity as a last-writer-wins register. Compared to more complex strategies, this approach ends up being faster and doesn’t require additional logic to handle conflicting edits between concurrent writers. It’s similar to the strategy Figma uses for their collaborative editor.<p>You can add Triplit to an existing project by installing the client NPM package. You may self-host the Triplit Server or pay us to manage an instance for you. One cool part is that whether you choose to self-host or deploy on Triplit Cloud, you can still use our Dashboard to configure your database or interactively manage your data in the Triplit Console, a spreadsheet-like GUI.<p>In the future, we plan to add APIs for authentication, file uploads, and presence to create a Supabase/Firebase-like experience.<p>You can get started by going to <a href="https://triplit.dev">https://triplit.dev</a> or find us on GitHub at <a href="https://github.com/aspen-cloud/triplit">https://github.com/aspen-cloud/triplit</a>. Thanks for checking us out; we look forward to your feedback in the comments!
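The last-writer-wins register idea described above can be sketched in a few lines of TypeScript. This is an illustrative model, not Triplit's actual implementation: the type and function names are hypothetical, and real systems typically use hybrid logical clocks rather than the plain numeric timestamps shown here.

```typescript
// Minimal last-writer-wins (LWW) register sketch: each attribute of an
// entity carries the timestamp of its latest write, and merging two
// replicas keeps the value with the higher timestamp. Ties are broken
// deterministically by replica id so every replica converges to the
// same state regardless of merge order.
type LwwRegister<T> = { value: T; timestamp: number; replicaId: string };

function write<T>(value: T, timestamp: number, replicaId: string): LwwRegister<T> {
  return { value, timestamp, replicaId };
}

function merge<T>(a: LwwRegister<T>, b: LwwRegister<T>): LwwRegister<T> {
  if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
  return a.replicaId > b.replicaId ? a : b; // deterministic tie-break
}

// Two offline clients edit the same attribute; merging in either order
// converges to the same result.
const fromPhone = write("Buy milk", 2, "phone");
const fromLaptop = write("Buy oat milk", 3, "laptop");

console.log(merge(fromPhone, fromLaptop).value); // "Buy oat milk"
console.log(merge(fromLaptop, fromPhone).value); // same result, order-independent
```

Because merge is commutative, associative, and idempotent, clients can exchange writes in any order and still agree, which is what makes the offline-then-sync flow safe without conflict-resolution code.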

Show HN: Glasskube – Open Source Kubernetes Package Manager, alternative to Helm

Hello HN, we're Philip and Louis from Glasskube (<a href="https://github.com/glasskube/glasskube">https://github.com/glasskube/glasskube</a>). We're working on an open-source package manager for Kubernetes. It's an alternative to tools like Helm or Kustomize, primarily focused on making deploying, updating, and configuring Kubernetes packages simpler and a lot faster. Here is a demo video (<a href="https://www.youtube.com/watch?v=aIeTHGWsG2c#t=17s" rel="nofollow">https://www.youtube.com/watch?v=aIeTHGWsG2c#t=17s</a>) with quick start instructions.<p>Most developers working with Kubernetes use Helm, an open-source tool created during a hackathon nine years ago. However, with the rapid growth of Kubernetes packages to over 800 packages on the CNCF landscape today, the prerequisites have changed, and we believe it’s time for a new package manager. Every engineer we talked to has a love-hate relationship with Helm, and we also found ourselves defaulting to Helm despite its shortcomings due to a lack of alternatives.<p>We have spent enough time trying to get Helm to do what we need. From looking for the correct chart, trying to learn how each value affects the components, and hand-crafting a schemaless values.yaml file, to debugging the final release when it inevitably fails to install, the experience of using Helm is, for the most part, time-consuming and cumbersome.<p>Charts often become more complex, requiring the use of sub-charts. These umbrella charts tend to be even harder to maintain and upgrade, because so many different components are bundled into a single release.<p>We talked to over 100 developers and found that everyone developed their own little workarounds, with some working better than others. We collected that feedback and poured everything we learned into a new package manager.
We want to build something that is as easy to use as Homebrew or npm and make package management on Kubernetes as easy as on every other platform.<p>Some of the features Glasskube already supports:<p>Typesafe package configuration via UI or interactive CLI to inject values from other packages, ConfigMaps, and Secrets.<p>Browse our central package repository so there is no need to look for a Helm repository to find a specific package.<p>All packages are dependency-aware, so they can be used and referenced by multiple other packages, even across namespaces. We validate the complete dependency tree, so packages get installed in the correct namespaces.<p>Preview and perform pending updates to your desired version with a single click. All updates have been tested in the Glasskube test suite before being available in the public repository.<p>Use multiple repositories and publish your own private packages (e.g., your company's internal services, so all developers have up-to-date, easily configured internal services).<p>All features are available via UI or interactive CLI. You can also manage all packages via GitOps.<p>Currently, we are focused on enhancing the user experience, aiming to save engineers as much time as possible. We are still using Helm and manifests under the hood. However, together with the community, we plan to develop an entirely new packaging and bundling format for all cloud-native packages. This will provide package developers with a straightforward way to define how to install and configure packages, offer simple upgrade paths, and enable us to provide feedback, crash reports, and analytics to every developer working on Kubernetes packages.<p>We also started working on a cloud version. You can pre-sign up here if you are interested: <a href="https://glasskube.cloud" rel="nofollow">https://glasskube.cloud</a><p>We'd greatly appreciate any feedback you have and hope you get the chance to try out Glasskube.
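Dependency-aware installation of the kind described above boils down to validating the dependency graph and installing each package after its dependencies. Here is a generic sketch of that idea (not Glasskube's actual code; the package names and graph shape are made up for illustration): a depth-first topological sort that also detects dependency cycles.

```typescript
// Illustrative dependency resolution: given a map from each package to
// the packages it depends on, produce an install order in which every
// dependency appears before its dependents, and fail on cycles.
type PackageGraph = Map<string, string[]>; // package -> its dependencies

function installOrder(graph: PackageGraph): string[] {
  const order: string[] = [];
  const state = new Map<string, "visiting" | "done">();

  const visit = (pkg: string): void => {
    const s = state.get(pkg);
    if (s === "done") return; // already placed in the order
    if (s === "visiting") throw new Error(`dependency cycle at ${pkg}`);
    state.set(pkg, "visiting");
    for (const dep of graph.get(pkg) ?? []) visit(dep); // dependencies first
    state.set(pkg, "done");
    order.push(pkg);
  };

  for (const pkg of graph.keys()) visit(pkg);
  return order;
}

// Example: a hypothetical app package that depends on two other packages.
const graph: PackageGraph = new Map([
  ["cert-manager", []],
  ["ingress-nginx", []],
  ["my-app", ["cert-manager", "ingress-nginx"]],
]);

console.log(installOrder(graph)); // dependencies appear before "my-app"
```

A real package manager layers version constraints and namespace placement on top of this, but the core guarantee is the same: the resolved order is only emitted if the whole tree validates.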
