The best Hacker News stories from Show from the past day


Latest posts:

Show HN: Accept Payments with fees up to 70% lower than Stripe

Hello HN,

I'm Youssef, the 26-year-old CEO of LeetPay. I created LeetPay after recognizing the financial strain that transaction fees place on businesses, big or small, in the United States. As the saying goes, "Necessity is the mother of invention", and so began the journey of LeetPay.

LeetPay facilitates direct payments from your client's bank to yours, bypassing the traditional card payment systems entirely. The idea behind LeetPay was to simplify payment processing while minimizing associated costs. Our service charges a flat 1% fee per transaction, far lower than credit card payment providers, which could save you thousands of dollars each year.

While LeetPay is a general solution for payment processing, we've particularly homed in on a niche that we feel has been underserved: B2B SaaS companies. In this business model, where recurring payments form the backbone of revenue, the persistent drain of transaction fees can significantly impact the bottom line over time. The high volume of transactions these companies handle makes them an excellent fit for our flat 1% fee. By using LeetPay, these companies can keep more of their earnings and reinvest them in growing their business.

The beauty of this system is its simplicity and security. Your customers authorize the payment directly through their bank, as straightforward as using a credit card, but more secure thanks to direct bank authentication. To ensure seamless adoption, we've developed a user-friendly API that integrates easily into your existing workflows and website. What's more, we currently support 92% of US banks, making our solution widely accessible. For your peace of mind, we also prioritize transaction protection and offer robust security measures.

I'm aware that there's always room for improvement. LeetPay isn't perfect, but it's been a labor of love. As a fellow builder, I understand the value of feedback and constant iteration, and I would genuinely appreciate any insights or suggestions you might have.

You can check out our website at www.leetpay.me. Thank you for taking the time to read about LeetPay; I'm excited to hear your thoughts and feedback.

Thank you HN :)

Youssef
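The "up to 70% lower" figure is easy to sanity-check with arithmetic. As an illustration, take a typical US card-processing price of 2.9% + $0.30 per transaction (an assumed benchmark, not a number from the post) against a flat 1% fee:

```python
def card_fee(amount, pct=0.029, fixed=0.30):
    """Fee under a typical card-processor price (2.9% + $0.30 assumed)."""
    return amount * pct + fixed

def flat_fee(amount, pct=0.01):
    """Flat 1% bank-transfer fee."""
    return amount * pct

def savings(amount):
    """Fractional fee reduction from switching to the flat 1% fee."""
    return 1 - flat_fee(amount) / card_fee(amount)

# On a $100 invoice: $3.20 in card fees vs $1.00 flat, roughly 69% lower.
print(f"{savings(100):.1%}")
```

Note that the fixed $0.30 component means small-ticket transactions benefit the most; for very large tickets the saving trends toward the pure percentage gap (about 66% under these assumptions).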

Show HN: PDF Differ

Show HN: RAGstack – private ChatGPT for enterprise VPCs, built with Llama 2

Hey Hacker News,

We're the cofounders at Psychic.dev (http://psychic.dev), where we help companies connect LLMs to private data. With the launch of Llama 2, we think it's finally viable to self-host an internal application that's on par with ChatGPT, so we did exactly that and made it an open source project.

We also included a vector DB and API server so you can upload files and connect Llama 2 to your own data.

The RAG in RAGstack stands for Retrieval Augmented Generation, a technique where the capabilities of a large language model (LLM) are augmented by retrieving information from other systems and inserting it into the LLM's context window via a prompt. This gives LLMs information beyond what was provided in their training data, which is necessary for almost every enterprise application. Examples include data from current web pages, data from SaaS apps like Confluence or Salesforce, and data from documents like sales contracts and PDFs.

RAG works better than fine-tuning the model because it's cheaper, it's faster, and it's more reliable, since the provenance of information is attached to each response.

While there are quite a few "chat with your data" apps at this point, most depend on external APIs like OpenAI or Pinecone. RAGstack, on the other hand, has only open-source dependencies and lets you run the entire stack locally or on your cloud provider. This includes:

- Containerizing LLMs like Falcon, Llama 2, and GPT4All with Truss
- Vector search with Qdrant
- File parsing and ingestion with LangChain, PyMuPDF, and Unstructured.io
- Cloud deployment with Terraform

If you want to dive into it yourself, we also published a couple of tutorials on how to deploy open source LLMs for your organization, and optionally give them access to internal documents without any data ever leaving your VPC:

- How to deploy Llama 2 to Google Cloud (GCP): https://www.psychic.dev/post/how-to-deploy-llama-2-to-google-cloud-gcp
- How to connect Llama 2 to your own data using RAGstack: https://www.psychic.dev/post/how-to-self-host-llama-2-and-connect-it-to-your-private-data

Let a thousand private corporate oracles bloom!
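The retrieval-then-prompt loop described above is simple to sketch. This is a stdlib-only illustration of the RAG pattern: a toy bag-of-words similarity stands in for a real vector database like Qdrant, and a placeholder marks where the self-hosted model would be called. It is not RAGstack's actual code.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts (a real stack uses a sentence-embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Insert the retrieved context into the LLM's prompt window."""
    context = "\n".join(retrieve(query, docs))
    return f"Use the following context to answer.\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The sales contract renews annually on March 1.",
]
prompt = build_prompt("When does the contract renew?", docs)
# `prompt` would then be sent to a self-hosted model such as Llama 2.
```

The provenance benefit mentioned above falls out naturally: because the retrieved passages are explicit strings in the prompt, they can be attached to the response as citations.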

Show HN: Ollama – Run LLMs on your Mac

Hi HN,

A few folks and I have been working on this project for a couple of weeks now. After previously working on the Docker project for a number of years (on both the container runtime and image registry side), the recent rise in open source language models made us think something similar needed to exist for large language models too.

While not exactly the same as running Linux containers, running LLMs shares quite a few of the same challenges. There are "base layers" (e.g. models like Llama 2) and specific configuration needed to run correctly (parameters, temperature, context window size, etc.). There are also embeddings that a model can use at runtime to look up data; we don't support this yet, but it's something we're looking at doing soon.

It's an early project, and there's still lots to do!
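Once a model and its configuration "layer" are pulled, Ollama exposes a local HTTP API (port 11434 by default). As a sketch of driving it from Python: the endpoint and field names below follow Ollama's documented `/api/generate` request shape, but treat this as an illustration rather than the project's own client code.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def make_request(model, prompt, temperature=0.8):
    """Build a generate request; runtime parameters such as temperature ride along as options."""
    return {
        "model": model,
        "prompt": prompt,
        "options": {"temperature": temperature},
        "stream": False,  # ask for a single JSON reply instead of a token stream
    }

def generate(model, prompt):
    """POST to the local Ollama server and return the model's response text."""
    body = json.dumps(make_request(model, prompt)).encode()
    req = request.Request(OLLAMA_URL, data=body, headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With a local Ollama server running and a model pulled, e.g.:
# print(generate("llama2", "Why is the sky blue?"))
```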

Show HN: Snapify – open-source Loom alternative

Show HN: Gemini web client in 100 lines of C

The Gemini protocol documentation claims that it is possible to write a basic client in 100 lines of code, proving the protocol's simplicity. That's easy in a modern scripting language, but can it be done in ANSI C? Let the source code decide.

Someone suggested I share this silly project of mine with the HN community, so here it is. Enjoy!
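The author's C source isn't reproduced here, but the protocol itself is small enough to show the whole request flow in a few lines: open TLS to port 1965, send the absolute URL followed by CRLF, then read a one-line `<status> <meta>` header and the body. A Python sketch of that flow (certificate checks are relaxed because Gemini servers commonly use self-signed, trust-on-first-use certificates):

```python
import socket
import ssl
from urllib.parse import urlparse

def parse_header(header):
    """A Gemini response header is '<two-digit status> <meta>', e.g. '20 text/gemini'."""
    status, _, meta = header.partition(" ")
    return int(status), meta

def gemini_fetch(url, timeout=10):
    """Fetch a gemini:// URL: TLS to port 1965, send the URL + CRLF, read the reply."""
    host = urlparse(url).hostname
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # self-signed certs are the norm on Gemini,
    ctx.verify_mode = ssl.CERT_NONE     # so this sketch skips verification (TOFU)
    with socket.create_connection((host, 1965), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(url.encode() + b"\r\n")
            data = b""
            while chunk := tls.recv(4096):
                data += chunk
    header, _, body = data.partition(b"\r\n")
    status, meta = parse_header(header.decode())
    return status, meta, body

# e.g. gemini_fetch("gemini://gemini.circumlunar.space/") against a live server
```

The same steps map directly onto ANSI C with OpenSSL, which is essentially what fits in the 100-line budget.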

Show HN: Infisical – open-source secret management platform

Hi HN, we're the founders of Infisical, the open source secret management platform: it provides an end-to-end set of tools to manage your secrets across your team and infrastructure (https://infisical.com/).

We're excited to show you all the progress that we've made in the past few months after our Launch HN in February (https://news.ycombinator.com/item?id=34955699) and Show HN in December (https://news.ycombinator.com/item?id=34055132). During those launches we received a ton of feedback which helped us improve Infisical. We've since released:

- Secret scanning: a new toolset to block commits with hardcoded secrets and continuously monitor your code.
- Folders: deeper organizational structure within projects to accommodate microservice architectures and storage of more secret types, like user API keys and OAuth tokens.
- Node and Python SDKs, plus webhooks: more ways to integrate and start syncing secrets with Infisical across your infrastructure.
- Integrations with Terraform, Supabase, Railway, Checkly, Cloudflare Pages, Azure Key Vault, Laravel Forge, and more.
- Secret referencing and importing: to create a proper single source of truth.
- 1-click deployments to AWS EC2, DigitalOcean, Render, and Fly.io: more ways to self-host Infisical on your own infrastructure.

In addition, the platform has become more stable and undergone a full-coverage penetration test; we've also begun the SOC 2 (Type II) certification process.

Overall, we're really lucky to have the support of the developer community: Infisical has gathered over 7k GitHub stars and now processes over 200 million secrets per month for everyone from solo developers to public enterprises.

Our repo is published under the MIT license, so any developer can use Infisical. The goal is to not charge individual developers; we make money by charging a license fee for some enterprise features as well as providing a hosted version and support.

Check out Infisical Cloud (https://infisical.com/) or self-host Infisical on your own infrastructure (https://github.com/Infisical/infisical). We'd love to hear what you think!

We're excited to continue building Infisical and keep shipping features for you. Please let us know if you have any thoughts, feedback, or feature suggestions!
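Secret referencing, mentioned above, lets one secret's value point at another so there is a single source of truth. A generic sketch of how such expansion can work; the `${KEY}` syntax and the recursion/cycle rules here are illustrative assumptions, not Infisical's exact semantics:

```python
import re

# Hypothetical reference syntax: ${UPPER_SNAKE_CASE_KEY} inside a secret's value.
REF = re.compile(r"\$\{([A-Z0-9_]+)\}")

def expand(key, secrets, seen=None):
    """Resolve ${OTHER_KEY} references inside a secret's value, refusing cycles."""
    seen = seen or set()
    if key in seen:
        raise ValueError(f"circular reference via {key}")
    value = secrets[key]
    return REF.sub(lambda m: expand(m.group(1), secrets, seen | {key}), value)

secrets = {
    "DB_HOST": "db.internal",
    "DB_PORT": "5432",
    "DB_URL": "postgres://app@${DB_HOST}:${DB_PORT}/prod",
}
print(expand("DB_URL", secrets))  # postgres://app@db.internal:5432/prod
```

With this shape, updating `DB_HOST` in one place automatically propagates into every secret that references it.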
