The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: LibreScroll – enable flywheel-scrolling on any generic mouse

Based on the framerate-independent momentum simulation [0] that I used in my TPMouse script [1].

If you've ever used a mouse with an infinite scroll wheel, such as Logitech's, this Windows utility recreates that functionality for any generic mouse.

Actually, it's even better than that: it allows simultaneous horizontal and vertical scrolling, so it essentially combines two of the best features of the Logitech MX Master -- the horizontal wheel and unlocked momentum scrolling -- into one intuitive control scheme.

To enable horizontal scrolling, set the X-sensitivity to a value you prefer.

[0] https://old.reddit.com/r/Trackballs/comments/ym9q2t/tpmouse_a_virtual_trackball_for_windows/ivl0sr8/

[1] https://github.com/EsportToys/TPMouse
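A minimal sketch of what "framerate-independent" momentum means (an illustration, not the actual TPMouse or LibreScroll code): if velocity decays exponentially with elapsed time, the state after a given interval is the same no matter how many frames the interval is split across. The `damping` constant here is an assumed example value.

```python
import math

def step(velocity, dt, damping=4.0):
    """Advance scroll velocity by dt seconds with exponential decay.

    Integrating dv/dt = -damping * v gives v(t) = v0 * exp(-damping * t),
    so the result depends only on elapsed time, not on frame count.
    """
    return velocity * math.exp(-damping * dt)

# One 0.1 s frame and two 0.05 s frames land on the same velocity,
# which is what makes the simulation framerate-independent.
v_one_frame = step(100.0, 0.1)
v_two_frames = step(step(100.0, 0.05), 0.05)
```

A naive `velocity *= 0.96` per frame, by contrast, decays faster at higher framerates.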

Show HN: A non-VC backed content creation and social media platform

Hey HN, I'm soft-launching my MVP today and would love to hear your honest feedback. For the past few months I've been working extremely hard on this side hustle in my spare time (I have a day job as a CTO).

I'm building a platform for writers, bloggers and content creators that's built for them, rather than for the investors and advertisers that most similar products and VC-backed social media platforms serve.

I wrote about why I built it here: https://persumi.com/c/persumi/u/fredwu/p/welcome-to-persumi-a-modern-platform-for-content-creation

And the landing page is here: https://persumi.com/

I would really love your honest feedback, if you care to share it with me. :)

In the next week or two I'll write about how I built this MVP in three months -- the tech, the architecture, the experiments and the missteps... If you're curious, stay tuned!

Show HN: Primo – a visual CMS with Svelte blocks, a code editor, and SSG

Show HN: Netdata's impressive new dashboard

Show HN: ChatGPT Alternative with LLaMA Models

Hey folks, we created a ChatGPT alternative for anyone to try out LLaMA models and benchmark responses across OpenAI's models and other open source models.

Would love some feedback! No data is being used for retraining models.

Show HN: Accept payments with fees up to 70% lower than Stripe

Hello HN,

I'm Youssef, the 26-year-old CEO of LeetPay. I created LeetPay after recognizing the financial strain that transaction fees place on businesses in the United States, big or small. As the saying goes, "Necessity is the mother of invention" -- and so began the journey of LeetPay.

LeetPay facilitates direct payments from your client's bank to yours, effectively bypassing the traditional card payment systems. The idea behind LeetPay was to simplify payment processing while minimizing the associated costs. Our service operates at a flat 1% fee per transaction -- much lower than credit card payment providers -- which could potentially save you thousands of dollars each year.

While LeetPay is a general solution for payment processing, we've particularly honed in on a niche we feel has been underserved: B2B SaaS companies. In this business model, where recurring payments form the backbone of revenue, the persistent drain of transaction fees can significantly impact the bottom line over time. The high volume of transactions these companies handle makes them well placed to benefit from our flat 1% fee. By using LeetPay, they can preserve more of their earnings and reinvest them into growing their business.

The beauty of this system is its simplicity and security. Your customers authorize the payment directly through their bank -- as straightforward as using a credit card, but more secure thanks to direct bank authentication. To ensure seamless adoption, we've developed a user-friendly API that integrates easily into your existing workflows and website. What's more, we currently support 92% of US banks, making our solution widely accessible. For your peace of mind, we also prioritize transaction protection and offer robust security measures.

I'm aware that there's always room for improvement. LeetPay isn't perfect, but it's been a labor of love. As a fellow builder, I understand the value of feedback and constant iteration, and I would genuinely appreciate any insights or suggestions you might have.

You can check out our website at www.leetpay.me

Thank you for taking the time to read about LeetPay. I'm excited to hear your thoughts and feedback.

Thank you HN :)

Youssef

Show HN: PDF Differ

Show HN: RAGstack – private ChatGPT for enterprise VPCs, built with Llama 2

Hey Hacker News,

We're the cofounders at Psychic.dev (http://psychic.dev), where we help companies connect LLMs to private data. With the launch of Llama 2, we think it's finally viable to self-host an internal application that's on par with ChatGPT, so we did exactly that and made it an open source project.

We also included a vector DB and API server so you can upload files and connect Llama 2 to your own data.

The RAG in RAGstack stands for Retrieval Augmented Generation, a technique where the capabilities of a large language model (LLM) are augmented by retrieving information from other systems and inserting it into the LLM's context window via a prompt. This gives LLMs information beyond what was provided in their training data, which is necessary for almost every enterprise application. Examples include data from current web pages, data from SaaS apps like Confluence or Salesforce, and data from documents like sales contracts and PDFs.

RAG works better than fine-tuning the model because it's cheaper, it's faster, and it's more reliable, since the provenance of information is attached to each response.

While there are quite a few "chat with your data" apps at this point, most have external dependencies on APIs like OpenAI or Pinecone. RAGstack, on the other hand, has only open-source dependencies and lets you run the entire stack locally or on your cloud provider. This includes:

- Containerizing LLMs like Falcon, Llama 2, and GPT4All with Truss
- Vector search with Qdrant
- File parsing and ingestion with Langchain, PyMuPDF, and Unstructured.io
- Cloud deployment with Terraform

If you want to dive into it yourself, we also published a couple of tutorials on how to deploy open source LLMs for your organization, and optionally give them access to internal documents without any data ever leaving your VPC.

- How to deploy Llama 2 to Google Cloud (GCP): https://www.psychic.dev/post/how-to-deploy-llama-2-to-google-cloud-gcp
- How to connect Llama 2 to your own data using RAGstack: https://www.psychic.dev/post/how-to-self-host-llama-2-and-connect-it-to-your-private-data

Let a thousand private corporate oracles bloom!
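The retrieve-then-prompt loop described above can be sketched in a few lines. This is a toy illustration, not RAGstack's actual code: it uses bag-of-words cosine similarity in place of real embeddings and Qdrant, and the prompt template is invented for the example.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": word-count vector. A real stack would use a
    # learned embedding model and a vector DB such as Qdrant.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question, documents, k=2):
    """Rank documents by similarity to the question and insert the
    top-k into the LLM's context window via a prompt."""
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The prompt returned by `build_prompt` is what gets sent to the self-hosted LLM, which is why the provenance of each answer is traceable to the retrieved documents.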

Show HN: Ollama – Run LLMs on your Mac

Hi HN,

A few folks and I have been working on this project for a couple of weeks now. After previously working on the Docker project for a number of years (both on the container runtime and image registry side), the recent rise of open source language models made us think something similar needed to exist for large language models too.

While not exactly the same as running Linux containers, running LLMs shares quite a few of the same challenges. There are "base layers" (e.g. models like Llama 2) and specific configuration needed to run correctly (parameters, temperature, context window sizes, etc.). There are also embeddings that a model can use at runtime to look up data -- we don't support this yet, but it's something we're looking at doing soon.

It's an early project, and there's still lots to do!
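The "base layer plus configuration" idea above maps onto Ollama's Modelfile, which deliberately echoes a Dockerfile. A rough sketch (directive details may differ between Ollama versions, and the model name and values here are just examples):

```
# Base layer: a published model
FROM llama2

# Runtime configuration
PARAMETER temperature 0.7

# Behavior baked into the packaged model
SYSTEM "You are a concise assistant."
```

Such a file would typically be built with `ollama create mymodel -f Modelfile` and then started with `ollama run mymodel`, much like building and running a container image.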
