The best Show HN stories from Hacker News from the past week
Latest posts:
Show HN: Apple Notes Liberator – Extract Notes.app Data and Save It as JSON
Hey there!

I just released the first version of a project I've been working on that solves a very specific problem that perhaps only I have. I welcome any and all feedback, even if you just want to drop in to say that this is a hot piece of garbage!
Show HN: Generate styled web pages with just Python
There are a lot of Python-to-web-app frameworks going around these days, but I wanted something a little more lightweight: a library that just generates HTML pages and can be embedded in Flask or other Python web servers incrementally.

PyVibe uses Python components to construct a page with styling that you can use in Flask, in a static site, or even in Pyodide.
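To give a feel for the component-based approach described, here is a minimal sketch. The exact class and method names (Page, add_header, add_text, to_html) are assumptions based on the announcement, not the library's confirmed API; check the PyVibe README for the real interface.

```python
import pyvibe as pv  # assumed import name

# Build a page from Python components instead of writing HTML by hand.
page = pv.Page()
page.add_header("Hello from Python")
page.add_text("This paragraph was generated without touching HTML.")

# Render to a styled HTML string, e.g. to return from a Flask route.
print(page.to_html())
```

In a Flask app, the rendered string could simply be returned from a route handler, which matches the "embed incrementally" idea above.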
Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
I've been playing around with https://github.com/zphang/minimal-llama/ and https://github.com/tloen/alpaca-lora/blob/main/finetune.py, and wanted to create a simple UI where you can just paste text, tweak the parameters, and finetune the model quickly using a modern GPU.

To prepare the data, simply separate your text with two blank lines.

There's an inference tab, so you can test how the tuned model behaves.

This is my first foray into the world of LLM finetuning, Python, Torch, Transformers, LoRA, PEFT, and Gradio.

Enjoy!
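As a quick illustration of the data preparation step described above (splitting raw text into samples wherever two blank lines appear), here is a small self-contained Python sketch; the file name is hypothetical:

```python
from pathlib import Path

# Read the raw training text (hypothetical file name).
raw = Path("my_text.txt").read_text(encoding="utf-8")

# Samples are separated by two blank lines, i.e. three
# consecutive newlines in the raw file.
samples = [chunk.strip() for chunk in raw.split("\n\n\n") if chunk.strip()]

print(f"{len(samples)} training samples prepared")
```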
Show HN: ChatLLaMA – A ChatGPT style chatbot for Facebook's LLaMA
ChatLLaMA is an experimental chatbot interface for interacting with variants of Facebook's LLaMA. Currently, we support the 7 billion parameter variant that was fine-tuned on the Alpaca dataset. This early version isn't as conversational as we'd like, but over the next week or so we're planning to add support for the 30 billion parameter variant, another variant fine-tuned on LAION's OpenAssistant dataset, and more as we explore what this model is capable of.

If you want to deploy your own instance of the model powering the chatbot and build something similar, we've open sourced the Truss here: https://github.com/basetenlabs/alpaca-7b-truss

We'd love to hear any feedback you have. You can reach me on Twitter @aaronrelph or Abu (the engineer behind this) @aqaderb.

Disclaimer: We both work at Baseten. This was a weekend project. Not trying to shill anything; just want to build and share cool stuff.
Show HN: Get a Professional Headshot in Minutes with AI
After playing with AI avatars (like many of us around here, I guess), I started to wonder if we could instead bring real value to people by producing affordable professional headshots using a combination of Dreambooth and ControlNet.

Obviously it's only the beginning and there are still many imperfections, but the foundational technologies behind this (Dreambooth and ControlNet) are only six months and 1.5 months old respectively, and already deliver pretty amazing results.

I came up with this little service, "Virtual Face", and I'm looking for feedback if some of you are willing to try it (you can use the HUNTER50 coupon to get 50% off; I can't make it free to try yet since the running costs are still non-negligible).

Cheers,
Pierre
Show HN: Yaksha Programming Language
I have been working on this for a while. The main goal was to build a usable programming language. I even ended up building a few tools for it, such as an IntelliJ plugin.

I also plan on building some games with it in the future.

The main use cases are: small games (raylib), tools (static Linux binaries with musl-libc), and recreational programming (wasm4). It works on Windows too. If you have Emscripten on your PATH, you can even build these games/tools (raylib) to WASM.

Please have a look. Thank you.

-------------------------------------

Main repo: https://github.com/YakshaLang/Yaksha
Docs: https://yakshalang.github.io/documentation.html
Library: https://yakshalang.github.io/library-docs.html
Tutorials: https://github.com/orgs/YakshaLang/discussions/categories/tutorials

----------------------------------------

Started after a comment from WalterBright here: https://news.ycombinator.com/item?id=28929840
Show HN: Chatblade – A CLI Swiss Army Knife for ChatGPT
Integrate ChatGPT into your scripts or terminal workflow. Chatblade supports piping text, saving prompts, estimating costs, and some basic JSON/YAML extraction.

I've added some elaborate examples with pictures to the README, which may provide a better overview.
Show HN: GPT Repo Loader – load entire code repos into GPT prompts
I was getting tired of copy/pasting reams of code into GPT-4 to give it context before I asked it to help me, so I started this small tool. In a nutshell, gpt-repository-loader will spit out file paths and file contents in a prompt-friendly format. You can also use .gptignore to ignore files/folders that are irrelevant to your prompt.

gpt-repository-loader as-is works pretty well in helping me achieve better responses. Eventually, I thought it would be cute to load itself into GPT-4 and have GPT-4 improve it. I was honestly surprised by PR #17. GPT-4 was able to write a valid example repo and an expected output, and threw in a small curveball by adjusting .gptignore. I did tell GPT the output file format in two places: 1) in the preamble when I prompted it to make a PR for issue #16, and 2) as a string in gpt_repository_loader.py, both of which are indirect ways to infer how to build a functional test. However, I don't think I explained to GPT in English anywhere how .gptignore works at all!

I wonder how far GPT-4 can take this repo. Here is the process I'm following for development:

- Open an issue describing the improvement to make.
- Construct a prompt: start by using gpt_repository_loader.py on this repo to generate the repository context, then append the text of the opened issue after the --END-- line.
- Try not to edit any code GPT-4 generates. If there is something wrong, continue to prompt GPT to fix whatever it is.
- Create a feature branch for the issue and create a pull request based on GPT's response.
- Have a maintainer review, approve, and merge.

I am going to try to automate the steps above as much as possible. Really curious how tight the feedback loop will eventually get before something breaks!
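For readers wondering what a "prompt-friendly format" might look like, here is a minimal Python sketch of a repo-to-prompt dump in the spirit described above. The separator and --END-- sentinel follow the description in this post; the glob-based ignore handling is a simplified stand-in for real .gptignore semantics, not the tool's actual implementation:

```python
import fnmatch
from pathlib import Path

def load_ignore_patterns(repo: Path) -> list[str]:
    """Read .gptignore patterns, one glob per line (simplified)."""
    ignore_file = repo / ".gptignore"
    if not ignore_file.exists():
        return []
    return [ln.strip() for ln in ignore_file.read_text().splitlines() if ln.strip()]

def dump_repo(repo: Path) -> str:
    """Emit file paths and contents in a prompt-friendly format."""
    patterns = load_ignore_patterns(repo)
    parts = []
    for path in sorted(repo.rglob("*")):
        rel = path.relative_to(repo).as_posix()
        if not path.is_file() or any(fnmatch.fnmatch(rel, p) for p in patterns):
            continue
        parts.append(f"----\n{rel}\n{path.read_text(errors='replace')}")
    parts.append("--END--")  # sentinel; the issue text gets appended after this
    return "\n".join(parts)

print(dump_repo(Path(".")))
```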
Show HN: Can you beat my dad at Scrabble?
My Failure Resume
Show HN: Alpaca.cpp – Run an Instruction-Tuned Chat-Style LLM on a MacBook
Show HN: AI explanations for other people’s code
Hi HN,

As a developer, I much prefer writing code to reading other people's code. But most projects I work on involve other developers, so it's hard to avoid. I often find it really hard to quickly parse other people's code, so I thought maybe ChatGPT could help.

ChatGPT does a really nice job of giving clear explanations when you paste in code and ask it to explain it. But the interface is a bit of a pain to use if you're using it many times a day. It's also hard to share explanations with co-workers. So I built whatdoesthiscodedo.com. Just paste your code and get your ChatGPT explanation, plus a shareable link you can give to coworkers.

I'm hopeful it can also be helpful to folks who aren't professional developers. My co-founder at my main company, Beam Analytics, is more of a product guy, so he only needs to write scripts etc. Before, he'd often just find code online to copy and then struggle to make it work. But with this tool, he says he's getting better intuition for understanding the code he's using and how to modify it.

We'd love your feedback. Email us at hi (at) beamanalytics.io or DM me on Twitter @TheBuilderJR
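The service's internals aren't published, but the core idea (wrapping the ChatGPT API with a code-explanation prompt) can be sketched in a few lines of Python using the openai package's pre-1.0 chat API; the prompt wording here is made up for illustration:

```python
import openai

openai.api_key = "sk-..."  # your API key

def explain(code: str) -> str:
    """Ask ChatGPT for a plain-English explanation of a code snippet."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Explain what the given code does, step by step."},
            {"role": "user", "content": code},
        ],
    )
    return resp.choices[0].message.content

print(explain("print(sum(x * x for x in range(10)))"))
```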
Show HN: Modern Font Stacks – New system font stack CSS for modern OSs
Show HN: Using GPT-3 and Whisper to save doctors’ time
Hey HN,

We're Alex, Martin and Laurent. We previously founded Wit.ai (W14), which we sold to Facebook in 2015. Since 2019, we've been working on Nabla (https://nabla.com), an intelligent assistant for health practitioners.

When GPT-3 was released in 2020, we investigated its use in a medical context[0], to mixed results.

Since then we've kept exploring opportunities at the intersection of healthcare and AI, and noticed that doctors spend an awful lot of time on medical documentation (writing clinical notes, updating their EHR, etc.).

Today, we're releasing Nabla Copilot, a Chrome extension that generates clinical notes from video consultations, to address this problem.

You can try it out, without installation or sign-up, on our demo page: https://nabla.com/copilot-demo/

Here's how it works under the hood:

- When a doctor starts a video consultation, our Chrome extension auto-starts itself and listens to the active tab as well as the doctor's microphone.
- We then transcribe the consultation using a fine-tuned version of Whisper. We've trained Whisper with tens of thousands of hours of medical consultation and medical-term recordings, and we have now reached an error rate that is 3× lower than Google's Speech-to-Text.
- Once we have the transcript, we feed it to a heavily trained GPT-3, which generates a clinical note.
- We finally return the clinical note to the doctor through our Chrome extension; the doctor can copy it to their EHR and send a version to the patient.

This allows doctors to be fully focused on their consultation, and saves them a lot of time.

Next, we want to make this work for in-person consultations.

We also want to extract structured data (in the FHIR standard) from the clinical note and feed it to the doctor's EHR so that it is automatically added to the patient's record.

Happy to further discuss technical details in the comments!

---

[0]: https://nabla.com/blog/gpt-3/
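The transcribe-then-summarize pipeline in the list above can be sketched in Python. Nabla's fine-tuned models aren't public, so this uses the open-source whisper package and OpenAI's legacy completion API purely as stand-ins for the shape of the pipeline:

```python
import whisper   # open-source Whisper, standing in for Nabla's fine-tuned model
import openai    # GPT-3 completion API, standing in for their tuned model

def consultation_to_note(audio_path: str) -> str:
    # Step 1: speech-to-text on the recorded consultation.
    transcript = whisper.load_model("base").transcribe(audio_path)["text"]

    # Step 2: turn the transcript into a draft clinical note.
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt="Write a clinical note summarizing this consultation:\n\n" + transcript,
        max_tokens=500,
    )
    return resp.choices[0].text.strip()

print(consultation_to_note("consultation.wav"))
```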
Show HN: Counter – Simple and free web analytics