The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: Ollama – Run LLMs on your Mac

Hi HN,

A few folks and I have been working on this project for a couple of weeks now. After previously working on the Docker project for a number of years (on both the container runtime and image registry side), the recent rise of open source language models made us think something similar needed to exist for large language models too.

While not exactly the same as running Linux containers, running LLMs shares quite a few of the same challenges. There are "base layers" (e.g. models like Llama 2) and specific configuration needed to run correctly (parameters, temperature, context window sizes, etc.). There are also embeddings that a model can use at runtime to look up data – we don't support this yet, but it's something we're looking at doing soon.

It's an early project, and there's still lots to do!
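
For a sense of how that configuration layer works in practice, here is a minimal Python sketch that asks a locally running Ollama server for a completion while overriding a couple of the parameters mentioned above. The default address, the /api/generate endpoint, and the option names (temperature, num_ctx) are assumptions about the local API rather than a guaranteed interface.

    import json
    import requests

    # Ask the local Ollama server (assumed default address) for a completion,
    # overriding a couple of model parameters for this request.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama2",
            "prompt": "Why is the sky blue?",
            "options": {"temperature": 0.7, "num_ctx": 2048},
        },
        stream=True,
    )

    # The server streams one JSON object per line; print tokens as they arrive.
    for line in resp.iter_lines():
        if line:
            chunk = json.loads(line)
            print(chunk.get("response", ""), end="", flush=True)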

Show HN: Snapify – open-source Loom alternative

Show HN: Gemini web client in 100 lines of C

The Gemini protocol documentation claims that it is possible to write a basic web client in 100 lines of code, proving the protocol's simplicity. That's easy in a modern scripting language, but can it be done in ANSI C? Let the source code decide.

Someone suggested sharing this silly project of mine with the HN community, so here it is. Enjoy!
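
For context on why such a client can be so small: a Gemini request is just the URL terminated by CRLF, sent over TLS to port 1965, and the response is a status line followed by the body. The sketch below shows the same exchange in Python rather than the author's ANSI C; the host is only an example capsule.

    import socket
    import ssl

    def gemini_fetch(host, path="/", port=1965):
        # Gemini capsules commonly use self-signed certificates (trust on first
        # use), so certificate verification is skipped in this sketch.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE

        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # A request is simply the absolute URL followed by CRLF.
                tls.sendall(f"gemini://{host}{path}\r\n".encode("utf-8"))
                data = b""
                while chunk := tls.recv(4096):
                    data += chunk

        # The first line is "<status> <meta>"; everything after it is the body.
        header, _, body = data.partition(b"\r\n")
        return header.decode("utf-8"), body

    header, body = gemini_fetch("gemini.circumlunar.space")
    print(header)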

Show HN: Infisical – open-source secret management platform

Hi HN, we’re the founders of Infisical, the open source secret management platform – it provides an end-to-end set of tools to manage your secrets across your team and infrastructure (https://infisical.com/).

We’re excited to show you all the progress we’ve made in the past few months since our Launch HN in February (https://news.ycombinator.com/item?id=34955699) and Show HN in December (https://news.ycombinator.com/item?id=34055132).

During the previous Show HN and Launch HN, we received a ton of feedback that helped us improve Infisical. We’ve since released:

- Secret scanning: a new toolset to block commits with hardcoded secrets and continuously monitor your code.
- Folders: deeper organizational structure within projects to accommodate microservice architectures and store more secret types, like user API keys and OAuth tokens.
- Node and Python SDKs, webhooks: more ways to integrate and start syncing secrets with Infisical across your infrastructure.
- Integrations with Terraform, Supabase, Railway, Checkly, Cloudflare Pages, Azure Key Vault, Laravel Forge, and more.
- Secret referencing and importing: to create a proper single source of truth.
- 1-click deployments to AWS EC2, Digital Ocean, Render, and Fly.io: more ways to self-host Infisical on your own infrastructure.

In addition, the platform has become more stable and has undergone a full-coverage penetration test; we’ve also begun the SOC 2 (Type II) certification process.

Overall, we’re really lucky to have the support of the developer community. Infisical has gathered over 7k GitHub stars and now processes over 200 million secrets per month for everyone from solo developers to public enterprises.

Our repo is published under the MIT license, so any developer can use Infisical. Again, the goal is not to charge individual developers. We make money by charging a license fee for some enterprise features, as well as by providing a hosted version and support.

Check out Infisical Cloud (https://infisical.com/) or self-host Infisical on your own infrastructure (https://github.com/Infisical/infisical). We’d love to hear what you think!

We’re excited to continue building Infisical and keep shipping features for you. Please let us know if you have any thoughts, feedback, or feature suggestions!
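
To illustrate what the Python SDK mentioned above is for, here is a rough sketch of pulling secrets at application startup instead of reading them from a .env file. The client class and method names are assumptions about the SDK's general shape, not a verbatim copy of its API, so check the SDK docs for the exact interface.

    # Rough sketch of fetching secrets with Infisical's Python SDK at startup.
    # NOTE: the import path, client class, and method/attribute names below are
    # assumptions for illustration only.
    import os

    from infisical import InfisicalClient  # assumed import path

    client = InfisicalClient(token=os.environ["INFISICAL_TOKEN"])

    # Fetch every secret for the configured project/environment and expose
    # them to the rest of the app through environment variables.
    for secret in client.get_all_secrets():
        os.environ.setdefault(secret.secret_name, secret.secret_value)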

Show HN: Logwise – AI Powered Log Analysis with context from all your apps

Hey HN! We're excited to introduce Logwise, our new AI-powered log analysis tool (built by two devs who hate logs).

Product page: https://logwise.framer.website/

Logwise makes debugging and incident response faster for developers. It uses natural language processing to automatically parse log data, surface insights, and detect anomalies.

We built Logwise to eliminate the manual sifting of log analysis. Key features:

- Search logs in plain English - no complex queries needed
- Auto-generated alerts highlight potential issues
- Contextual debugging advice speeds incident response
- Centralized access to all your log data sources
- Continuous learning improves analysis over time

Logwise saves developers hours or even days wasted on manual log searches. We want to:

- Help resolve incidents 2x faster with accelerated insights
- Reduce context switching by aggregating all log data
- Let developers focus on building, not log mining
- Get ahead of problems with predictive anomaly detection

We're currently working on:

- Customizable log parsing for different data formats
- Integrations with PagerDuty, Datadog, and other tools
- An API for accessing analysis results

Try out the Logwise beta today! We'd love your feedback on how we can improve. Let us know if you have any feature requests.

Our goal is to make AI-powered log analysis seamless and maximize developer productivity. Thanks for any feedback and support!

Show HN: Rectfillcurve – generate rectangle-filling curves

How do you visit every coordinate in an NxM grid exactly once? The easiest way is to process it line by line, from the first column to the last, but if you want better caching you might try alternating the column direction for each row. The Morton/Z-order and Hilbert orders give even better cache coherency for some tasks, although the classic versions only work on squares with power-of-two side lengths.

Luckily for me, people have developed generalized versions of those algorithms that can handle arbitrarily sized rectangles.

I've taken those and packaged all of the curves into "rectfillcurve", with an iterator API for generating them, plus a bonus "mlcg curve" with a pseudo-random visit order that should have poor cache behavior. It's implemented in stand-alone C and also available as a Python module.
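
As a quick illustration of the two simplest orders described above (this is not the package's own API), here is a small Python sketch: a plain row-major scan and a serpentine scan that alternates column direction on each row, both yielding every (x, y) in an NxM grid exactly once.

    def row_major(width, height):
        """Visit each cell left to right, top to bottom."""
        for y in range(height):
            for x in range(width):
                yield x, y

    def serpentine(width, height):
        """Alternate the column direction on each row for better locality."""
        for y in range(height):
            cols = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
            for x in cols:
                yield x, y

    # Both orders cover a 3x2 grid exactly once, in different sequences.
    print(list(row_major(3, 2)))   # [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
    print(list(serpentine(3, 2)))  # [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]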

Show HN: Weaviate – Build your own generative health search engine

We are super excited to release our latest open-source demo, Healthsearch. The demo decodes user reviews of supplements and performs semantic and generative search on them, retrieving the most relevant products for specific health effects and leveraging large language models to generate product and review summaries. It can understand natural language queries and derive all search filters directly from the context of your query.
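
For a rough idea of what a semantic-plus-generative query like this looks like with Weaviate's Python client, here is a sketch. The server address, the "Product" class, its properties, and the prompt are hypothetical placeholders for a Healthsearch-style schema, not the demo's actual setup.

    import weaviate

    # Connect to a local Weaviate instance (address is an assumption).
    client = weaviate.Client("http://localhost:8080")

    # Semantic search for supplements related to a health effect, then ask an
    # LLM to summarize the matching reviews. "Product", "name", and "reviews"
    # are hypothetical names for a Healthsearch-style schema.
    result = (
        client.query
        .get("Product", ["name", "reviews"])
        .with_near_text({"concepts": ["helps with sleep"]})
        .with_generate(single_prompt="Summarize these reviews of {name}: {reviews}")
        .with_limit(3)
        .do()
    )

    print(result)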

Show HN: Juno – Code Interpreter in Your Jupyter Notebook

ChatGPT Code Interpreter is a game changer for data cleaning, analysis, and plotting, but as early users my friend @amauboussin and I were frustrated that there was no easy way to work on top of its results. You can’t edit code, install packages, work on large datasets, collaborate with teammates, or use it for privacy-sensitive workloads.

So we built Juno to bring the power of Code Interpreter to your local Jupyter notebook. It understands your data, generates code directly in your notebook, and can fix its own errors. We’ve found ourselves using it for tons of analysis tasks at our startups, so we decided to release it to everyone!
