The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: Textify – Replace gibberish in AI-generated images

Show HN: Marqo – Vectorless Vector Search

Marqo is an end-to-end vector search engine. It contains everything required to integrate vector search into an application, in a single API. Here is a minimal example of vector search with Marqo:

    mq = marqo.Client()
    mq.create_index("my-first-index")
    mq.index("my-first-index").add_documents([{"title": "The Travels of Marco Polo"}])
    results = mq.index("my-first-index").search(q="Marqo Polo")

Why Marqo? Vector similarity alone is not enough for vector search. Vector search requires more than a vector database: it also requires machine learning (ML) deployment and management, preprocessing and transformation of inputs, and the ability to modify search behavior without retraining a model. Marqo contains all of these pieces, enabling developers to build vector search into their applications with minimal effort.

Why not X, Y, or Z vector database? Vector databases are specialized components for vector similarity. They are "vectors in, vectors out": they still require producing the vectors, managing the ML models, and orchestrating and preprocessing the inputs. Marqo makes this easy by being "documents in, documents out". Preprocessing of text and images, embedding the content, storing metadata, and deploying inference and storage are all handled by Marqo. We have been running Marqo for production workloads with both low-latency and large-index requirements.

Marqo features:

- Low latency (tens of milliseconds, configuration dependent) at large scale (tens to hundreds of millions of vectors).
- Easy integration with LLMs and other generative AI: augmented generation using a knowledge base.
- Pre-configured open-source embedding models: SBERT, Hugging Face, CLIP/OpenCLIP.
- Pre-filtering and lexical search.
- Multimodal model support: search text and/or images.
- Custom models: load models fine-tuned on your own data.
- Ranking with document metadata: bias the similarity with properties like popularity.
- Multi-term multimodal queries: per-query personalization and topic avoidance.
- Multimodal representations: search over documents that contain both text and images.
- GPU/CPU/ONNX/PyTorch inference support.

See some examples here:

Multimodal search: [1] https://www.marqo.ai/blog/context-is-all-you-need-multimodal-vector-search-with-personalization

Refining image quality and identifying unwanted content: [2] https://www.marqo.ai/blog/refining-image-quality-and-eliminating-nsfw-content-with-marqo

Question answering over transcripts of speech: [3] https://www.marqo.ai/blog/speech-processing

Question answering over technical documents and augmenting NPCs with a backstory: [4] https://www.marqo.ai/blog/from-iron-manual-to-ironman-augmenting-gpt-with-marqo-for-fast-editable-memory-to-enable-context-aware-question-answering
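The "vectors in, vectors out" contrast above can be made concrete: a bare vector store only ranks precomputed embeddings by similarity, leaving the embedding and preprocessing steps to you. Here is a minimal sketch in plain Python; the documents and 3-dimensional "embeddings" are invented purely for illustration:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# A bare vector store: precomputed embeddings in, ranked documents out.
store = {
    "The Travels of Marco Polo": [0.9, 0.1, 0.0],
    "A Field Guide to Mushrooms": [0.0, 0.2, 0.9],
}

def search(query_vec, store):
    # Rank stored vectors by cosine similarity to the query vector.
    return sorted(store, key=lambda doc: cosine(query_vec, store[doc]), reverse=True)

results = search([1.0, 0.0, 0.0], store)
```

An engine like Marqo wraps this ranking step plus everything the sketch leaves out: turning the raw query text into `query_vec` with a deployed embedding model, preprocessing documents, and storing metadata.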

Show HN: Shadeform – Single Platform and API for Provisioning GPUs

Hi HN, we are Ed, Zach, and Ronald, creators of Shadeform (https://www.shadeform.ai/), a GPU marketplace for seeing live availability and prices across the GPU market, and for deploying and reserving on-demand instances. We have aggregated 8+ GPU providers into a single platform and API, so you can easily provision instances like A100s and H100s wherever they are available.

From our experience working at AWS and Azure, we believe the cloud could evolve from all-encompassing hyperscalers (AWS, Azure, GCP) to specialized clouds for high-performance use cases. After the launch of ChatGPT, we noticed GPU capacity thinning across both major providers and emerging GPU and HPC clouds, so we decided it was the right time to build a single interface for IaaS across clouds.

With the explosion of Llama 2 and open-source models, we are seeing individuals, startups, and organizations struggling to access A100s and H100s for model fine-tuning, training, and inference. This encouraged us to help everyone access compute and gain flexibility in their cloud infrastructure. Right now, we've built a platform that lets users find GPU availability and launch instances from a unified interface. Our long-term goal is to build a hardwareless GPU cloud where you can use managed ML services to train and infer across clouds, reducing vendor lock-in.

We shipped a few features to help teams access GPUs today:

- a "single pane of glass" for GPU availability and prices;
- a "single control plane" for provisioning GPUs in any cloud through our platform and API;
- a reservation system that monitors real-time availability and launches GPUs as soon as they become available.

Next up, we're building multi-cloud load-balanced inference, streamlined self-hosting of open-source models, and more.

You can try our platform at https://platform.shadeform.ai. You can provision instances in your own accounts by adding your cloud credentials and API keys, or you can use "ShadeCloud" and provision GPUs in our accounts. Deploying in your own account is free; deploying in our accounts carries a 5% platform fee.

We'd love your feedback on how we're approaching this problem. What do you think?
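The "single pane of glass" idea amounts to normalizing per-provider offer feeds into one schema and querying them together. A minimal sketch of that aggregation step; the provider names, prices, and field names below are all invented for illustration, not Shadeform's actual API:

```python
# Hypothetical offer records, already normalized to one schema.
offers = [
    {"provider": "cloud-a", "gpu": "H100", "available": 4, "usd_per_hr": 3.90},
    {"provider": "cloud-b", "gpu": "H100", "available": 0, "usd_per_hr": 3.50},
    {"provider": "cloud-c", "gpu": "A100", "available": 8, "usd_per_hr": 1.80},
]

def cheapest_available(offers, gpu):
    # Keep only offers with live capacity for the requested GPU,
    # then sort cheapest first.
    live = [o for o in offers if o["gpu"] == gpu and o["available"] > 0]
    return sorted(live, key=lambda o: o["usd_per_hr"])

best = cheapest_available(offers, "H100")
```

Note that the nominally cheaper H100 offer loses here because it has no capacity: aggregating availability, not just price, is the point of the marketplace.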

Show HN: A website for remote workers to find Airbnbs with good Internet

I created this website about a month ago to solve a problem I was facing myself as an aspiring digital nomad: it is very important to find accommodation with fast, reliable Internet. I also specifically wanted places with Ethernet access, to minimize latency as much as possible, since I (and many others) use a VPN hosted back at home.

The database is in its infancy but covers 11 countries so far. I realize the UX is very basic, a minimum viable product. I intend to have someone help me overhaul the design (with ReactJS perhaps) to make it mobile-friendly and more appealing.

Show HN: Saf – simple, reliable, rsync-based, battle tested, rounded backup

I had this backup code working reliably for years, using the local file system, a VPS/dedicated server, or remote storage as the backup target. I finally got time to write up the README, iron out a few missing switches, and publish it. It should be production-ready and reliable, so it could be useful to others. Contributors are welcome.

https://github.com/dusanx/saf

Show HN: Run globally distributed full-stack apps on high-performance MicroVMs

Hi HN! We're Yann, Edouard, and Bastien from Koyeb (https://www.koyeb.com/). We're building a platform that lets you deploy full-stack apps on high-performance hardware around the world, with zero configuration. We provide a "global serverless feeling" without the hassle of rewriting all your apps or managing Kubernetes complexity [1].

We built Scaleway, a cloud service provider where we designed ARM servers and offered them as cloud servers. During our time there, we saw customers struggle with the same issues while trying to deploy full-stack applications and APIs resiliently. As it turns out, deploying applications and managing networking across a multi-data-center fleet of machines (virtual or physical) requires an overwhelming amount of orchestration and configuration. At the time, that complexity meant multi-region deployments were simply out of reach for most businesses.

When thinking about how to solve those problems, we tried several approaches. We briefly explored offering a FaaS experience [2], but from our first steps, user feedback made us reconsider whether it was the right abstraction. In most cases, functions simply added complexity and required learning to engineer with provider-specific primitives. In many ways, developing with functions felt like abandoning all the benefits of frameworks.

Another popular option these days is Kubernetes. From an engineering perspective, Kubernetes is extremely powerful, but it also involves massive overhead. Building software, managing networking, and deploying across regions means integrating many different components and maintaining them over time. It can be tough to justify that level of effort and investment rather than working on building out your product.

We believe you should be able to write your apps and run them without modification, with simple scaling, global distribution transparently managed by the provider, and no infrastructure or orchestration management.

Koyeb is a cloud platform where you come with a git repository or a Docker image; we build the code into a container (when needed), run the container inside Firecracker microVMs, and deploy it to multiple regions on top of bare-metal servers. An edge network in front accelerates delivery, and a global networking layer handles inter-service communication (service mesh/discovery) [3].

We took a few steps to get the Koyeb platform to where it is today: we built our own serverless engine [4], using Nomad and Firecracker for orchestration and Kuma for the networking layer. In the last year, we launched six regions (Washington, DC; San Francisco; Singapore; Paris; Frankfurt; Tokyo) and added support for native workers, gRPC, HTTP/2 [5], WebSockets, and custom health checks. We are working next on autoscaling, databases, and preview environments.

We're super excited to show you Koyeb today, and we'd love to hear your thoughts on the platform and what we are building in the comments. To make getting started easy, we provide $5.50 in free credits every month, so you can run up to two services for free.

P.S. A payment method is required to access the platform, to prevent abuse (we had some hard months last year dealing with that). If you'd like to try the platform without adding a card, reach out at support@koyeb.com or @gokoyeb on Twitter.

[1] https://www.koyeb.com/blog/the-true-cost-of-kubernetes-people-time-and-productivity
[2] https://www.koyeb.com/blog/the-koyeb-serverless-engine-docker-containers-and-continuous-deployment-of-functions
[3] https://www.koyeb.com/blog/building-a-multi-region-service-mesh-with-kuma-envoy-anycast-bgp-and-mtls
[4] https://www.koyeb.com/blog/the-koyeb-serverless-engine-from-kubernetes-to-nomad-firecracker-and-kuma
[5] https://www.koyeb.com/blog/enabling-grpc-and-http2-support-at-edge-with-kuma-and-envoy
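Custom health checks, mentioned above, generally mean the platform probes an endpoint your app exposes and only routes traffic to instances that answer successfully. A minimal sketch of such an endpoint using only the Python standard library; the `/health` path is an illustrative choice, not a path Koyeb requires, and the probe at the bottom stands in for the platform's checker:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # A real app would verify its dependencies (DB, queues) here.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Port 0 lets the OS pick a free port; run the server in the background.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stand-in for the platform's health prober.
port = server.server_address[1]
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/health").status
server.shutdown()
```

The useful property is that health is an application-level answer, not just "the process is up": the handler can report failure when a dependency is down, and the platform stops routing to that instance.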

Show HN: Poozle – open-source Plaid for LLMs

Hi HN, we're Harshith, Manoj, and Manik.

Poozle (https://github.com/poozlehq/poozle) provides a single API that helps businesses get accurate LLM responses by supplying real-time customer data from different SaaS tools (e.g., Notion, Salesforce, Jira, Shopify, Google Ads).

Why we built Poozle: as we talked to more AI companies that need to integrate with their customers' data, we realized that managing data from all these SaaS tools and keeping it up to date requires a huge amount of infrastructure — ETL, auth management, webhooks, and much more — before you can take it to production. It struck us: why not streamline this process and let companies prioritize their core product?

How it works: Poozle makes user authorization seamless with our drop-in component (Poozle Link), which handles both API keys and the OAuth dance. After authentication, developers can use our unified model to fetch data into their LLMs (no need to sync data separately and then normalize it on your end). Poozle keeps data updated in real time while letting you choose sync intervals; even if the source doesn't support webhooks, we've got you covered.

Currently, we support a unified API for three categories: Ticketing, Documentation, and Email. You can watch a demo of Poozle at https://www.loom.com/share/30650e4d1fac41e3a7debc212b1c7c2d

We just got started a month ago and we're eager to get feedback and keep building. Let us know what you think in the comments : )
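Poozle's actual unified schema isn't reproduced here, but the "normalize once, fetch uniformly" idea behind a unified model looks roughly like this: each source's payload shape (the two shapes below are invented for illustration) is mapped onto one common record, so downstream LLM code only ever sees the unified shape:

```python
# Hypothetical raw payloads from two different ticketing sources.
jira_issue = {
    "key": "PROJ-1",
    "fields": {"summary": "Login fails", "status": {"name": "Open"}},
}
github_issue = {"number": 42, "title": "Crash on save", "state": "open"}

def to_unified(source, raw):
    # Map each source's native shape onto one unified ticket record.
    if source == "jira":
        return {
            "id": raw["key"],
            "title": raw["fields"]["summary"],
            "status": raw["fields"]["status"]["name"].lower(),
        }
    if source == "github":
        return {
            "id": str(raw["number"]),
            "title": raw["title"],
            "status": raw["state"].lower(),
        }
    raise ValueError(f"unknown source: {source}")
```

With every source funneled through one mapping like this, the consuming application queries a single shape regardless of which SaaS tool the data came from.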

Show HN: Not My Cows – Save your cows. Blast the meteors. Giddy up

I made this straight vanilla JS game for a game jam a few years ago. I'm considering coming back to fix the bugs and polish the gameplay.<p><a href="https://github.com/jonfranco224/not-my-cows">https://github.com/jonfranco224/not-my-cows</a> if anyone wants to check the source.<p>Edit: y'all seem to be enjoying this! I spun up a quick Twitter/X for game updates if anyone is interested - <a href="https://twitter.com/notmycowsgame" rel="nofollow noreferrer">https://twitter.com/notmycowsgame</a>

Show HN: Strich – Barcode scanning for web apps

Hi, I'm Alex - the creator of STRICH (<a href="https://strich.io" rel="nofollow noreferrer">https://strich.io</a>), a barcode scanning library for web apps. Barcode scanning in web apps is nothing new. In my previous work, I had the opportunity to use both high-end commercial offerings (e.g. Scandit) and OSS libraries like QuaggaJS or ZXing-JS in a wide range of customer projects, mainly in logistics.<p>I became dissatisfied with both. The established commercial offerings carried five- to six-figure license fees, and the developer experience was not always optimal. The web browser as a platform also seemed not to be a priority for these players. The open-source libraries are essentially unmaintained and not suitable for commercial use due to the lack of support. Their recognition performance also falls short in some cases - for a detailed comparison, see <a href="https://strich.io/comparison-with-oss.html" rel="nofollow noreferrer">https://strich.io/comparison-with-oss.html</a><p>Having dabbled in computer vision before, and armed with an understanding of the market situation, I set out to build an alternative that fills the gap between the two worlds. After almost two years of on-and-off development and six months of piloting with a key customer, STRICH launched at the beginning of this year.<p>STRICH is built exclusively for web browsers running on smartphones. I believe the vast majority of barcode scanning apps are in-house line-of-business apps that benefit from distribution outside of app stores and a single codebase with abundant developer resources. Barcode scanning in web apps is efficient and avoids the platform risk and unnecessary costs of developing and publishing native apps.
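Recognition quality aside, one step any scanning pipeline (STRICH or otherwise) should include is symbology-level validation of the decoded result. For EAN-13, the standard retail barcode, the 13th digit is a checksum over the first twelve. This standalone sketch is not part of STRICH's API, just the standard GS1 check-digit rule:

```python
def ean13_check_digit(digits12: str) -> int:
    """Compute the EAN-13 check digit for the first 12 digits.

    Digits in odd positions (1st, 3rd, ...) weigh 1, even positions
    weigh 3; the check digit brings the total to a multiple of 10.
    """
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(digits12))
    return (10 - total % 10) % 10

def is_valid_ean13(code: str) -> bool:
    """Validate a full 13-digit EAN-13 code against its check digit."""
    if len(code) != 13 or not code.isdigit():
        return False
    return ean13_check_digit(code[:12]) == int(code[12])
```

Rejecting codes that fail the checksum filters out most single-digit misreads before they reach the rest of the application.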
