The best Show HN stories on Hacker News from the past day
Latest posts:
Show HN: Markwhen: Markdown for Timelines
I've been working on markwhen for a bit as a way to create timelines and calendars from plain text, like markdown.

I personally like tools that let you immediately start using them, and I set out to do that here with markwhen.

Let me know if you have any questions or feedback!
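As a rough illustration only (this approximates the kind of plain-text input markwhen takes; it is not copied from its docs, so the exact grammar may differ), a timeline could look something like:

    2022-01-01: Kicked off the project
    2022-03: Prototyping and design
    2022-08-10: Show HN launch

Each line pairs a date with a description, and markwhen turns the list into a rendered timeline or calendar.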
Show HN: Single-Instruction (Subleq) Programming Game
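Background note: subleq is the classic one-instruction computer. The single instruction "subleq a, b, c" subtracts mem[a] from mem[b] and jumps to c if the result is less than or equal to zero; everything else is built from that. Purely as an illustration (this is not the game's implementation), a minimal interpreter fits in a few lines of Python:

    # Minimal subleq interpreter sketch (illustrative only).
    # subleq a, b, c: mem[b] -= mem[a]; jump to c if mem[b] <= 0, else continue.
    def run_subleq(mem, pc=0):
        while 0 <= pc <= len(mem) - 3:
            a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
            mem[b] -= mem[a]
            pc = c if mem[b] <= 0 else pc + 3  # a negative c halts the machine
        return mem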
Show HN: Pyflo – a free, interactive guide to learning Python
TL;DR: https://pyflo.net is a free, interactive guide to learning Python.

Hi everyone, I am a CS educator who has taught a variety of university courses, including many on introductory Python programming. Over the past few years, I've written down a number of Python programming lessons, and they have culminated in Pyflo.net. This tool is a completely free, introductory guide to learning Python. It is more or less an intro programming textbook, but with a few twists, including:

* It is totally free. You don't even have to give me your email to use it.

* The lessons are short and modularized.

* It's interactive, containing embedded questions that provide instant feedback throughout.

My hope is that this can be a useful resource for those looking to learn Python. Feel free to use it yourself, or share it with those you think would be interested.

Feedback is very much welcome and appreciated.
Show HN: Khoj – Chat offline with your second brain using Llama 2
Hi folks, we're Debanjum and Saba. We created Khoj as a hobby project 2+ years ago because: (1) search on the desktop sucked; we just had keyword search on the desktop vs. Google for the internet; and (2) natural language search models had become good and easy enough to run on consumer hardware by that point.

Once we made Khoj search incremental, I completely stopped using the default incremental search (C-s) in Emacs. Since then Khoj has grown to support more content types, deeper integrations and chat (using ChatGPT). With Llama 2 released last week, chat models are finally good and easy enough to use on consumer hardware for the chat-with-docs scenario.

Khoj is a desktop application to search and chat with your personal notes, documents and images. It is accessible from within Emacs, Obsidian or your web browser. It works with org-mode, markdown, PDF and JPEG files, as well as Notion and GitHub repositories. It is open source and can work without internet access (e.g. on a plane).

Our chat feature allows you to extract answers and create content from your existing knowledge base. Example: "What was that book Trillian mentioned at Zaphod's birthday last week?" We personally use the chat feature regularly to find links, names and addresses (especially on mobile) and to collate content across multiple, messy notes. It works online or offline: you can chat without internet using Llama 2, or with internet using GPT-3.5+, depending on your requirements.

Our search feature lets you quickly find relevant notes, documents or images using natural language. It does not use the internet. Example: searching for "bought flowers at grocery store" will find notes about "roses at wholefoods".

Quickstart:

    pip install khoj-assistant && khoj

See https://docs.khoj.dev/#/setup for detailed instructions. We also have desktop apps (in beta) at https://github.com/khoj-ai/khoj/releases/tag/0.10.0 if you want to try them out.

Please do try out Khoj and let us know if it works for your use cases. Looking forward to the feedback!
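For readers curious how natural-language search like this works in principle, here is a minimal, generic sketch of embedding-based retrieval over notes. It uses sentence-transformers as an assumed stand-in and is not Khoj's actual code or models:

    # Generic illustration of semantic search over notes (not Khoj internals).
    # Assumes: pip install sentence-transformers
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")
    notes = [
        "Roses at Wholefoods were on sale",
        "Meeting notes: quarterly planning",
        "Gift ideas for Trillian's birthday",
    ]
    note_vecs = model.encode(notes, convert_to_tensor=True)

    query = "bought flowers at grocery store"
    query_vec = model.encode(query, convert_to_tensor=True)

    # Cosine similarity ranks the notes; the flowers/grocery note should win.
    scores = util.cos_sim(query_vec, note_vecs)[0]
    print(notes[int(scores.argmax())])

The point is just that queries and notes are matched by meaning rather than keywords, which is why "flowers at grocery store" can surface a note about "roses at wholefoods".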
Show HN: San Francisco Compute – 512 H100s at <$2/hr for research and startups
Hey folks! We're Alex and Evan, and we're working on putting together a 512 H100 compute cluster for startups and researchers to train large generative models on.
- it runs at the lowest possible margins (<$2.00/hr per H100)
- designed for bursty training runs, so you can take, say, 128 H100s for a week
- you don't need to commit to multiple years of compute or pay for a year upfront

Big labs like OpenAI and DeepMind have big clusters that support this kind of bursty allocation for their researchers, but startups so far have had to get very small clusters on very long-term contracts, wait out months of lead time, and try to keep them busy all the time.

Our goal is to make it about 10-20x cheaper to do an AI startup than it is right now. Stable Diffusion only costs about $100k to train; in theory, every YC company could get up to that scale. It's just that no cloud provider in the world will give you $100k of compute for just a couple of weeks, so startups have to raise 20x that much to buy a whole year of compute.

Once the cluster is online, we're going to be pretty much the only option for startups that want to do big training runs like that.
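For a rough sense of what bursty pricing means in practice, here is a back-of-envelope calculation using only the numbers quoted above (the $2.00/hr figure is the stated upper bound):

    # Back-of-envelope burst cost at the quoted <$2.00/hr per H100.
    rate = 2.00                  # $/hr per H100 (upper bound)
    week = 24 * 7                # hours in one week

    burst = 128 * week * rate    # 128 H100s for one week
    full = 512 * week * rate     # the entire cluster for one week
    print(burst, full)           # ~$43,008 and ~$172,032

So at that rate, a week-long 128-GPU burst costs on the order of $43k, and the whole 512-GPU cluster for a week about $172k, with no year-long commitment.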
Show HN: LLMFlows – LangChain alternative for explicit and transparent apps
Hi HN! Over the last several weekends, I've been building LLMFlows as an alternative to LangChain.

There's been a lot of discussion about the shortcomings of LangChain in the past few weeks, but when I first tried it in March, I thought there were three main problems:

1. Too many abstractions

2. Hidden prompts and opinionated logic in chains, which make it hard to customize

3. Hard to debug

This inspired me to try to build a framework that solves these three issues, so I started building LLMFlows with the "philosophy" of being "simple, explicit, and transparent."

A few weekends later, I think I've finally reached a state where it's ready to be shared.

I would love to hear your feedback! Thank you!
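To make "simple, explicit, and transparent" concrete, here is a hypothetical sketch of an explicit two-step flow where every prompt lives in your own code. This is not the LLMFlows API (see the repo for its actual classes); it only illustrates the philosophy, and it assumes the pre-1.0 openai Python package with OPENAI_API_KEY set in the environment:

    # Hypothetical illustration of an explicit, transparent flow (not LLMFlows' API).
    import openai  # assumes openai<1.0 and OPENAI_API_KEY in the environment

    def complete(prompt: str) -> str:
        # The exact prompt sent to the model is the string you see here;
        # nothing is prepended or rewritten behind the scenes.
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    title = complete("Suggest a short title for a post about prompt engineering.")
    body = complete(f"Write a two-sentence summary for a post titled: {title}")
    print(title, body, sep="\n\n")

Because each step is an ordinary function call with a visible prompt, there are no hidden templates to fight, and a failing step can be debugged in isolation.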
Show HN: ssh-tpm-agent – SSH agent for TPMs
Show HN: Gogit – Just enough Git (in Go) to push itself to GitHub
Show HN: Worst programming language written in less than an hour
Unfinished side project inspired by JavaScript.
It's just a stupid interpreter for my poor language.