The best Show HN stories on Hacker News from the past day

Latest posts:

Show HN: Litellm – Simple library to standardize OpenAI, Cohere, Azure LLM I/O

I built this library because LangChain was too bloated and I needed a simple abstraction for calling multiple LLM APIs. litellm has two functions: completion() and embedding().
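
The post only names the two entry points; as a rough sketch of how the unified interface might look, assuming the OpenAI-style model/messages arguments and environment-variable API keys (check the litellm README for the exact signatures):

```python
# Rough sketch, not copied from the litellm docs: argument names assume the
# OpenAI-style convention, and provider keys (e.g. OPENAI_API_KEY) are assumed
# to be set as environment variables.
from litellm import completion, embedding

messages = [{"role": "user", "content": "Summarize WebRTC in one sentence."}]

# The call shape stays the same regardless of which provider serves the model.
openai_response = completion(model="gpt-3.5-turbo", messages=messages)
cohere_response = completion(model="command-nightly", messages=messages)

# Embeddings follow the same pattern.
vectors = embedding(model="text-embedding-ada-002", input=["hello world"])
```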

Show HN: I built a multiplayer Gameboy

Still very much a work in progress, but really wanted to share this even in its early state. Had heaps of fun building it to learn more about WebRTC.
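
Not the project's code, but as a minimal illustration of the WebRTC idea (relaying one player's button presses to the other over a data channel), here is a Python sketch using the aiortc library; the channel name and message format are made up for the example:

```python
# Illustrative only: a peer creates a data channel and exchanges button state.
# Channel name and JSON payload are hypothetical, not the project's protocol.
import asyncio
import json
from aiortc import RTCPeerConnection

async def main():
    pc = RTCPeerConnection()
    channel = pc.createDataChannel("gameboy-input")

    @channel.on("open")
    def on_open():
        # Send the local player's button state once the channel opens.
        channel.send(json.dumps({"player": 1, "buttons": {"a": True, "b": False}}))

    @channel.on("message")
    def on_message(message):
        # Feed the remote player's input into the emulator (placeholder).
        print("remote input:", json.loads(message))

    # The offer SDP would be exchanged with the other peer via a signaling server.
    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    print(pc.localDescription.sdp[:60], "...")
    await pc.close()

asyncio.run(main())
```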

Show HN: Shell AI – My Aggressively Minimal Open Source Assistant

Show HN: Experiment with Hugging Face models in a single notebook interface

Customize Django Admin Interface

Show HN: Continue – Open-source coding autopilot

Hi HN, we’re Nate and Ty, co-founders of Continue, an open-source autopilot for software development built to be deeply customizable and to continuously learn from development data. It consists of an extended language server and (to start) a VS Code extension.

Our GitHub is https://github.com/continuedev/continue. You can watch a demo of Continue and download the extension at https://continue.dev.

— — —

A growing number of developers are replacing Google + Stack Overflow with Large Language Models (LLMs) as their primary way to get help, similar to how developers previously replaced reference manuals with Google + Stack Overflow.

However, existing LLM developer tools are cumbersome black boxes. Developers are stuck copy/pasting from ChatGPT and guessing what context Copilot uses to make a suggestion. As we use these products, we expose how we build software and give implicit feedback that is used to improve their LLMs, yet we neither benefit from this data nor get to keep it.

The solution is to give developers what they need: transparency, hackability, and control. Every one of us should be able to reason about what’s going on, tinker, and have control over our own development data. This is why we created Continue.

— — —

At its most basic, Continue removes the need to copy/paste from ChatGPT: instead, you collect context by highlighting code and then ask questions in the sidebar or have an edit streamed directly to your editor.

But Continue also provides powerful tools for managing context. For example, type ‘@issue’ to quickly reference a GitHub issue as you are prompting the LLM, ‘@README.md’ to reference such a file, or ‘@google’ to include the results of a Google search.

And there’s a ton of room for further customization. Today, you can write your own:

- slash commands (e.g. ‘/commit’ to write a summary and commit message for staged changes, ‘/docs’ to grab the contents of a file and update documentation pages that depend on it, ‘/ticket’ to generate a full-featured ticket with relevant files and high-level instructions from a short description)
- context sources (e.g. GitHub issues, Jira, local files, Stack Overflow, documentation pages)
- templated system messages (e.g. “Always give maximally concise answers. Adhere to the following style guide whenever writing code: {{ /Users/nate/repo/styleguide.md }}”)
- tools (e.g. add a file, run unit tests, build and watch for errors)
- policies (e.g. define a goal-oriented agent that works in a write code, run code, read errors, fix code, repeat loop)

Continue works with any LLM, including local models using ggml or open-source models hosted on your own cloud infrastructure, allowing you to remain 100% private. While OpenAI and Anthropic models perform best today, we are excited to support the progress of open source as it catches up (https://continue.dev/docs/customization#change-the-default-llm).

When you use Continue, you automatically collect data on how you build software. By default, this development data is saved to `.continue/dev_data` on your local machine. When combined with the code that you ultimately commit, it can be used to improve the LLM that you or your team use (if you allow it).

You can read more about how development data is generated as a byproduct of LLM-aided development and why we believe you should start collecting it now: https://medium.com/@continuedev/its-time-to-collect-data-on-how-you-build-software-197d12a020d5

Continue is licensed under Apache 2.0. We plan to make money by offering organizations a paid development data engine: a continuous feedback loop that ensures the LLMs always have fresh information and code in their preferred style.

— — —

We’d love for you to try out Continue and give us feedback! Let us know what you think in the comments : )
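
Continue's actual extension API lives in its docs; purely as an illustration of the kind of slash command described above, a ‘/commit’ command might be sketched like this in Python (the function shape and the llm.complete call are hypothetical, not Continue's real interface):

```python
# Hypothetical sketch of a custom "/commit" slash command; names and the
# `llm` interface are illustrative, NOT Continue's actual extension API.
import subprocess

def commit_command(llm) -> str:
    """Summarize staged changes and draft a commit message."""
    diff = subprocess.run(
        ["git", "diff", "--staged"], capture_output=True, text=True
    ).stdout
    prompt = (
        "Write a one-paragraph summary and a conventional commit message "
        f"for the following staged changes:\n\n{diff}"
    )
    # `llm` stands in for whichever model the user configured
    # (OpenAI, Anthropic, or a local ggml model).
    return llm.complete(prompt)
```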

RealAboutInstagram – a replica highlighting harmful strategies

Hello HN! I'm a creative technologist and recently decided to develop RealAboutInstagram, a replica of Instagram's current About page with its content replaced by the harmful strategies currently used on the platform and the negative impacts of social media.

The information on the website is drawn from resources such as Cal Newport's book Digital Minimalism, TED Talks, and many others, all listed in the footer.

This is one of many projects for my career, and I appreciate anyone taking the time to read this and check out the website.

You can check out my other projects at https://santiagoaguirre.netlify.app/

Thank you for your time!

Show HN: Marsha – An LLM-Based Programming Language

Show HN: Invoice Dragon – An open source app to create PDF invoices
