The best Show HN stories from Hacker News from the past week
Latest posts:
Postgres Language Server
Hey HN, this is a Language Server [0] designed specifically for Postgres. A language server adds features to IDEs (VS Code, Neovim, etc.): auto-complete, go-to-definition, documentation on hover, and so on.

There have been previous attempts at adding Postgres support to code editors. Usually these attempts implement a generic SQL parser and then offer various "flavours" of SQL.

This attempt is different because it uses the actual Postgres parser to do the heavy lifting. This is done via libpg_query, an excellent C library for accessing the PostgreSQL parser outside of the server. We feel this is a better approach because it gives developers 100% confidence in the parser, and it allows us to keep up with the rapid development of Postgres.

This is still in early development, and mostly useful for testers/collaborators. The majority of the work is still ahead, but we've verified that the approach works. We're making it public now so that we can develop it in the open with input from the community.

A lot of the credit belongs to pganalyze [1] for their work on libpg_query, and to psteinroe (https://github.com/psteinroe), who is the creator and maintainer.

[0] LSP: https://microsoft.github.io/language-server-protocol/

[1] pganalyze: https://pganalyze.com/
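To make the "actual Postgres parser" point concrete, here is a minimal sketch of parsing SQL through libpg_query from Python, using the third-party pglast bindings. The language server consumes libpg_query directly; pglast is used here only for illustration, and the exact shape of the returned nodes is an assumption to check against pglast's docs:

    # Sketch: parse SQL with Postgres's own parser via libpg_query.
    # Assumes the pglast package (Python bindings to libpg_query) is
    # installed: pip install pglast
    from pglast import parse_sql

    # parse_sql builds the same parse tree Postgres itself would, so
    # anything Postgres accepts is accepted here -- no SQL "flavours".
    statements = parse_sql("SELECT id, name FROM users WHERE active")
    for raw in statements:
        print(raw.stmt)  # the parsed statement node

Because the parse tree comes from the real parser, features like go-to-definition and auto-complete can be built on exactly the grammar your database runs.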
Show HN: Hydra 1.0 – open-source column-oriented Postgres
Hi HN, Hydra CEO here.

Hydra is an open-source, column-oriented Postgres. You can set up remarkably fast aggregates on your project in minutes to query billions of rows instantly.

Postgres is great, but aggregates can take minutes to hours to return results on large data sets. Long-running analytical queries hog database resources and degrade performance. Use Hydra to run much faster analytics on Postgres without making code changes. Data is automatically loaded into columnar format and compressed. Connect to Hydra with your preferred Postgres client (psql, DBeaver, etc.).

Following 4 months of development on Hydra v0.3.0-alpha, our team is proud to share our first major version release. Hydra 1.0 is under active development, but ready for use and feedback. We're aiming to release 1.0 into general availability (GA) soon.

For testing, try the Hydra free tier to create a column-oriented Postgres instance in the cloud: https://dashboard.hydra.so/signup
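For a sense of what "without making code changes" means in practice, here is a minimal sketch using a standard Postgres client library from Python. The USING columnar clause is an assumption about Hydra's columnar table syntax, and the connection string is a placeholder; verify both against Hydra's docs:

    # Sketch: talk to a Hydra instance with an ordinary Postgres driver.
    # Assumes psycopg2 is installed and that Hydra exposes a `columnar`
    # table access method (the USING columnar clause is an assumption).
    import psycopg2

    conn = psycopg2.connect("postgresql://user:password@your-hydra-host:5432/postgres")
    with conn, conn.cursor() as cur:
        # Columnar storage keeps data by column and compressed, which is
        # what makes large aggregates fast.
        cur.execute("""
            CREATE TABLE events (
                ts      timestamptz,
                user_id bigint,
                amount  numeric
            ) USING columnar;
        """)
        cur.execute("SELECT count(*), sum(amount) FROM events;")
        print(cur.fetchone())
    conn.close()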
Show HN: Learn a language quickly by practising speaking with AI
Hi guys,

Hope everyone is well. This app was born out of my own frustration. I thought that I was terrible at learning languages at school, since I didn't become conversational in French after 5 years of study. However, I later traveled with some French friends and, in just under 3 weeks, I was able to hold a reasonable conversation. I realized that there's no substitute for speaking to native speakers.

I tried to adopt this approach for other languages, but it's much harder to find people to practise with when you aren't travelling. I started using iTalki to meet people from different countries and chat to them. It quickly became very expensive and time-consuming to schedule the calls, so I gave up.

I made PrettyPolly so that anyone can easily practise speaking 26 languages. The app uses ChatGPT (amongst other tools) to let you practise speaking whenever you want. It also generates a fluency score for each conversation so that you have an objective way of tracking progress.

It's free to use (up to 15 conversations per month). I've found that using it once or twice per day is plenty, and you'll be amazed at how much you pick up in a week. I've added some FAQs in case they're useful: https://www.prettypolly.app/learn

Would really appreciate any feedback. Let me know if you have any questions, issues or suggestions.

Thanks,
Chris
Show HN: Magic Loops – Combine LLMs and code to create simple automations
Howdy! We built this as an experiment in personal programming, combining the best of LLMs and code to help automate tasks around you. I personally use it to track the tides and get notified when certain conditions are met, something that pure LLMs had trouble dealing with and pure code was often too brittle for.

We created it after getting frustrated with the inability of LLMs to deal with numbers, and with the various hoops we had to jump through to make ChatGPT output repeatable.

At the core, Magic Loops are just a series of "blocks" (JSON) that can be triggered with different inputs (email, time, webhook), then operate on those inputs using a combination of LLMs and code, and then output the results (email, text, webhook); see the sketch after this post. Under the hood, the LLM calls use GPT-4 via OpenAI, and the code runs in sandboxed (no internet) Docker containers in AWS.

You have full control over each step of the loop, but you can also create (or attempt to create) a Magic Loop by simply describing what you want. We use GPT-4 to break that request into feasible steps and then create a Magic Loop scaffold. Of course, you should still validate the loop before publishing it!

We've seen some neat use cases already:

- "Text me when the tide is less than 1ft between 7am and 7pm at Fort Funston"

- "Summarize an email using this format and forward it to this address"

- "Text me every time our store does more than $1000/day in volume on Shopify"

- "Take specific data from Cloudflare, format it, and send it to Mixpanel every hour"

We hope you enjoy what's essentially an experiment at this point. If folks like the concept, we're thinking about open-sourcing it so you can run the loops locally with the code runtimes you wish (rather than in our code runners).

Let us know what you think, and more importantly, what you wish to build or automate!

Cheers,
Adam & Mihai
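To make the "series of blocks" idea concrete, here is a sketch of what the tide-alert loop might look like as data, written as a Python structure. Every field name below is invented for illustration; the post does not document the actual JSON schema:

    # Hypothetical sketch of a Magic Loop as a series of JSON "blocks".
    # All keys and values here are illustrative, not the real schema:
    # the post only says a loop is JSON blocks wired trigger -> steps -> output.
    tide_alert_loop = [
        {"type": "trigger", "kind": "time", "schedule": "*/30 7-19 * * *"},
        {"type": "code", "language": "javascript",
         "source": "return fetchTideFeet('fort-funston');"},
        {"type": "llm", "model": "gpt-4",
         "prompt": "Given tide height {input} ft, reply SEND if below 1, else SKIP."},
        {"type": "output", "kind": "sms", "to": "+1555XXXXXXX",
         "message": "Tide is below 1 ft at Fort Funston"},
    ]

The split matters: the deterministic steps (schedule, data fetch, delivery) stay in code, while the LLM only handles the fuzzy judgment in the middle.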
Show HN: LearnLingo – Converse with an AI-powered language tutor
Hey folks! I'm Callum, and I'm working on a way to practice a new language with an AI-powered tutor.

I've always found that the hardest part of learning a new language is finding someone to actually converse with. Even when a partner can be found, the pressure can mean that you are more focused on not making mistakes than on actually learning new grammar or vocabulary.

The service I have been working on lets you practice with a language tutor via online chat messages, or have a turn-based voice conversation.

I'm working on a number of other features that will be coming out shortly, including a few games for practising pronunciation and listening skills, as well as a plan to release lesson plans for specific languages later on.

Give it a try, and let me know if you have any feedback!
Show HN: Linkwarden – An open source collaborative bookmark manager
Hey there HN!
Meet Linkwarden, a fully self-hostable, open-source collaborative bookmark manager to collect, organize and archive webpages.

Please also visit/star our GitHub repo [1].

Linkwarden was built using TypeScript and Next.js, backed by a PostgreSQL database for the lighter-weight data. The rest of the data can be stored either on the filesystem or in the cloud on DigitalOcean Spaces/AWS S3. The reason for the cloud storage option was our Cloud offering [2]: we realized that the preserved webpages (archives) take up space pretty quickly, and S3 was much more efficient for this task. On the front end we used Tailwind CSS for styling and Zustand for state management.

You can either use our Cloud offering (with a 14-day free trial) to directly support this project and experience Linkwarden, or self-host it on your own machine for maximum flexibility.

Feel free to ask if you have any questions; we'll do our best to answer them.

[1]: https://github.com/linkwarden/linkwarden

[2]: https://cloud.linkwarden.app/register - Hosted in DigitalOcean's datacenter in Toronto, ON.
Show HN: Markwhen: Markdown for Timelines
I've been working on markwhen for a bit as a way to create timelines and calendars from plain text, like markdown.<p>I personally like tools that let you immediately start using them, and I set out to do that here with markwhen.<p>Let me know if you have any questions or feedback!
Show HN: Khoj – Chat offline with your second brain using Llama 2
Hi folks, we're Debanjum and Saba. We created Khoj as a hobby project 2+ years ago because: (1) search on the desktop sucked; we just had keyword search on the desktop vs. Google for the internet; and (2) natural language search models had become good and easy to run on consumer hardware by that point.

Once we made Khoj search incremental, I completely stopped using the default incremental search (C-s) in Emacs. Since then Khoj has grown to support more content types, deeper integrations and chat (using ChatGPT). With Llama 2 released last week, chat models are finally good and easy enough to use on consumer hardware for the chat-with-docs scenario.

Khoj is a desktop application to search and chat with your personal notes, documents and images. It is accessible from within Emacs, Obsidian or your web browser. It works with org-mode, Markdown, PDF and JPEG files, as well as Notion and GitHub repositories. It is open source and can work without internet access (e.g. on a plane).

Our chat feature allows you to extract answers and create content from your existing knowledge base. Example: "What was that book Trillian mentioned at Zaphod's birthday last week". We personally use the chat feature regularly to find links, names and addresses (especially on mobile) and to collate content across multiple, messy notes. It works online or offline: you can chat without internet using Llama 2, or with internet using GPT-3.5+, depending on your requirements.

Our search feature lets you quickly find relevant notes, documents or images using natural language. It does not use the internet. Example: searching for "bought flowers at grocery store" will find notes about "roses at wholefoods".

Quickstart:

    pip install khoj-assistant && khoj

See https://docs.khoj.dev/#/setup for detailed instructions.

We also have desktop apps (in beta) at https://github.com/khoj-ai/khoj/releases/tag/0.10.0 if you want to try them out.

Please do try out Khoj and let us know if it works for your use cases. Looking forward to the feedback!
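Once the server is running, search is also reachable over a local HTTP API. A minimal sketch, assuming the server listens on its default port 42110 and exposes a /api/search route; both the port and the route are assumptions to verify against https://docs.khoj.dev for your version:

    # Sketch: query a locally running Khoj server over HTTP.
    # Assumptions: default port 42110 and a /api/search endpoint with
    # `q` (query) and `n` (max results) parameters -- check the docs.
    import requests

    resp = requests.get(
        "http://localhost:42110/api/search",
        params={"q": "bought flowers at grocery store", "n": 5},
    )
    resp.raise_for_status()
    for hit in resp.json():
        print(hit)  # each hit is a matching note/document entry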
Show HN: San Francisco Compute – 512 H100s at <$2/hr for research and startups
Hey folks! We're Alex and Evan, and we're working on putting together a 512 H100 compute cluster for startups and researchers to train large generative models on.
- it runs at the lowest possible margins (<$2.00/hr per H100)
- designed for bursty training runs, so you can take, say, 128 H100s for a week
- you don't need to commit to multiple years of compute or pay for a year upfront

Big labs like OpenAI and DeepMind have big clusters that support this kind of bursty allocation for their researchers, but startups so far have had to get very small clusters on very long-term contracts, wait through months of lead time, and try to keep them busy all the time.

Our goal is to make it about 10-20x cheaper to do an AI startup than it is right now. Stable Diffusion only costs about $100k to train -- in theory every YC company could get up to that scale. It's just that no cloud provider in the world will give you $100k of compute for just a couple of weeks, so startups have to raise 20x that much to buy a whole year of compute.

Once the cluster is online, we're going to be pretty much the only option for startups to do big training runs like that.
Show HN: Gogit – Just enough Git (in Go) to push itself to GitHub
Show HN: Continue – Open-source coding autopilot
Hi HN, we're Nate and Ty, co-founders of Continue, an open-source autopilot for software development built to be deeply customizable and to continuously learn from development data. It consists of an extended language server and (to start) a VS Code extension.

Our GitHub is https://github.com/continuedev/continue. You can watch a demo of Continue and download the extension at https://continue.dev

---

A growing number of developers are replacing Google + Stack Overflow with Large Language Models (LLMs) as their primary way to get help, similar to how developers previously replaced reference manuals with Google + Stack Overflow.

However, existing LLM developer tools are cumbersome black boxes. Developers are stuck copy/pasting from ChatGPT and guessing what context Copilot uses to make a suggestion. As we use these products, we expose how we build software and give implicit feedback that is used to improve their LLMs, yet we don't benefit from this data or get to keep it.

The solution is to give developers what they need: transparency, hackability, and control. Every one of us should be able to reason about what's going on, tinker, and have control over our own development data. This is why we created Continue.

---

At its most basic, Continue removes the need for copy/pasting from ChatGPT: instead, you collect context by highlighting, then ask questions in the sidebar or have an edit streamed directly to your editor.

But Continue also provides powerful tools for managing context. For example, type '@issue' to quickly reference a GitHub issue as you are prompting the LLM, '@README.md' to reference such a file, or '@google' to include the results of a Google search.

And there's a ton of room for further customization. Today, you can write your own

- slash commands (e.g. '/commit' to write a summary and commit message for staged changes, '/docs' to grab the contents of a file and update documentation pages that depend on it, '/ticket' to generate a full-featured ticket with relevant files and high-level instructions from a short description)

- context sources (e.g. GitHub issues, Jira, local files, Stack Overflow, documentation pages)

- templated system messages (e.g. "Always give maximally concise answers. Adhere to the following style guide whenever writing code: {{ /Users/nate/repo/styleguide.md }}"; see the sketch after this post)

- tools (e.g. add a file, run unit tests, build and watch for errors)

- policies (e.g. define a goal-oriented agent that works in a write code, run code, read errors, fix code, repeat loop)

Continue works with any LLM, including local models using ggml or open-source models hosted on your own cloud infrastructure, allowing you to remain 100% private. While OpenAI and Anthropic perform best today, we are excited to support the progress of open source as it catches up (https://continue.dev/docs/customization#change-the-default-llm).

When you use Continue, you automatically collect data on how you build software. By default, this development data is saved to `.continue/dev_data` on your local machine. When combined with the code that you ultimately commit, it can be used to improve the LLM that you or your team use (if you allow).

You can read more about how development data is generated as a byproduct of LLM-aided development, and why we believe you should start collecting it now: https://medium.com/@continuedev/its-time-to-collect-data-on-how-you-build-software-197d12a020d5

Continue is licensed under Apache 2.0. We plan to make money by offering organizations a paid development data engine: a continuous feedback loop that ensures the LLMs always have fresh information and code in their preferred style.

---

We'd love for you to try out Continue and give us feedback! Let us know what you think in the comments : )
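To make the templated system message idea concrete, here is a minimal sketch of the {{ /path }} interpolation mechanism in Python. This illustrates the concept only; it is not Continue's actual implementation or config API:

    # Sketch of the {{ /path/to/file }} templating described above.
    # Illustrative only -- not Continue's real code or configuration.
    import re
    from pathlib import Path

    def render_system_message(template: str) -> str:
        """Replace each {{ /path }} placeholder with that file's contents."""
        def substitute(match: re.Match) -> str:
            return Path(match.group(1)).read_text()
        return re.sub(r"\{\{\s*(\S+)\s*\}\}", substitute, template)

    system_message = render_system_message(
        "Always give maximally concise answers. Adhere to the following "
        "style guide whenever writing code: {{ /Users/nate/repo/styleguide.md }}"
    )

The payoff is that the style guide lives in one file in your repo, and every prompt the LLM sees is rendered from it fresh, so updating the guide updates the assistant.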
Show HN: Invoice Dragon – An open source app to create PDF invoices