The best Hacker News stories from Show from the past day

Latest posts:

Show HN: Dungeon Map Doodler Beta - Free online map drawing tool

This is a D&D map-making tool I've been working on for a while now, and I just added some new features to the beta that I think HN users might find neat. When building a world map, you can use "Dynamic Brushes" to draw organic-looking terrain. This is achieved entirely with SVG filters and the JavaScript canvas API, no fancy libraries or anything. It came with a fairly large rewrite of some of the underlying code, so I'm sure there are a number of bugs I haven't come across yet, but I'd love to hear your opinions on it!
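The post doesn't show its filter setup, but a common way to get organic, hand-drawn-looking edges with plain SVG is to distort the drawn shape with fractal noise. A minimal sketch of that general technique (the tool's actual filters may differ):

```xml
<svg width="0" height="0">
  <filter id="organic-edge">
    <!-- Fractal noise provides the distortion field -->
    <feTurbulence type="fractalNoise" baseFrequency="0.05"
                  numOctaves="3" result="noise"/>
    <!-- Push the source pixels around using the noise -->
    <feDisplacementMap in="SourceGraphic" in2="noise" scale="15"/>
  </filter>
</svg>
```

A canvas element can then pick this up with the CSS rule `filter: url(#organic-edge)`, so straight brush strokes come out with wobbly, terrain-like borders.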

Show HN: Zapier's first API

Hey HN! We launched Zapier way back in 2012 on HN (https://news.ycombinator.com/item?id=4138415) and thought we'd return home to announce something special and hopefully exciting :) We are finally trying to live up to the "API" in our name with Zapier's first universal API:

Natural Language Actions – https://zapier.com/l/natural-language-actions

API docs – https://nla.zapier.com/api/v1/docs

(To be fair, we have published APIs before that can access Zapier data, but never one devs can use to directly call the 5k+ apps / 20k+ actions on our platform.)

For example, you can use the API to:

* Send messages in Slack
* Retrieve a row in a Google Sheet
* Draft a reply in Gmail
* ... and thousands more actions with one universal API

We optimized NLA for use cases that receive user input in natural language (think chatbots, assistants, or any product/feature using LLMs) -- but that's not strictly required!

Folks have asked for an API for 10 years, and I've always been slightly embarrassed we didn't have one. We hesitated because we did not want to pass along our universe of complexity to end devs. With the help of LLMs we found some cool patterns to deliver the API we always wanted.

My co-founder/CTO Bryan did an interview with Garry on the YC blog with more details: https://www.ycombinator.com/blog/building-apis-for-ai-an-interview-with-zapiers-bryan-helmig

We also published a LangChain integration to show off some possibilities:

* Demo: https://www.youtube.com/watch?v=EEK_9wLYEHU
* Jupyter notebook: https://github.com/hwchase17/langchain/blob/master/docs/modules/utils/examples/zapier.ipynb

We know the API is not perfect, but we're excited and eager for feedback to help shape it.
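A rough sketch of what invoking one of these natural-language actions might look like from Python (stdlib only). The endpoint path, payload field, and auth header here are assumptions for illustration; the API docs linked above define the real contract. Nothing is sent over the network — this only constructs the request to show its shape.

```python
import json
import urllib.request

# Hypothetical action id -- in practice you'd list your exposed actions first.
ACTION_ID = "ACTION_ID_PLACEHOLDER"
url = f"https://nla.zapier.com/api/v1/exposed/{ACTION_ID}/execute/"

# NLA is optimized for natural-language input, so the payload is just an
# instruction string rather than per-app structured fields.
payload = {"instructions": "Send a Slack message to #general saying hello"}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder credential
        "Content-Type": "application/json",
    },
    method="POST",
)

# Inspect the request without sending it.
print(req.get_method(), req.full_url)
```

The interesting design choice is that the same request shape works for any of the 20k+ actions: only the action id and the instruction text change.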

Show HN: Generate styled web pages with just Python

There are a lot of Python-to-web-app frameworks going around these days, but I wanted something a little more lightweight that just generates HTML pages and can be embedded in Flask or other Python web servers incrementally.

PyVibe uses Python components to construct a page with styling that you can use in Flask, in a static site, or even in Pyodide.
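To make the "Python components to HTML" idea concrete, here is a generic, self-contained sketch of that pattern. The class and method names are invented for illustration and are not PyVibe's actual API — see the project for its real components.

```python
# Generic component-to-HTML sketch (not PyVibe's real API).
class Element:
    """A tiny composable component that renders itself to an HTML string."""

    def __init__(self, tag, text="", children=None, cls=""):
        self.tag = tag
        self.text = text
        self.children = children or []
        self.cls = cls

    def render(self):
        attrs = f' class="{self.cls}"' if self.cls else ""
        inner = self.text + "".join(c.render() for c in self.children)
        return f"<{self.tag}{attrs}>{inner}</{self.tag}>"

# Compose a page in pure Python, then render it to a string that a Flask
# view could return or a static-site script could write to disk.
page = Element("div", cls="container", children=[
    Element("h1", "Hello from Python"),
    Element("p", "Rendered without writing HTML by hand."),
])
html = page.render()
print(html)
```

The appeal of this style is incremental adoption: because the output is just a string, it drops into any existing server or build step without taking over the whole app.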

Show HN: Finetune LLaMA-7B on commodity GPUs using your own text

I've been playing around with https://github.com/zphang/minimal-llama/ and https://github.com/tloen/alpaca-lora/blob/main/finetune.py, and wanted to create a simple UI where you can just paste text, tweak the parameters, and finetune the model quickly using a modern GPU.

To prepare the data, simply separate your text with two blank lines.

There's an inference tab, so you can test how the tuned model behaves.

This is my first foray into the world of LLM finetuning, Python, Torch, Transformers, LoRA, PEFT, and Gradio.

Enjoy!
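The "separate your text with two blank lines" convention is easy to picture as a preprocessing step: two blank lines means three consecutive newlines between samples. A minimal sketch of that split (the tool's actual parsing code may differ):

```python
# Pasted training text, with samples separated by two blank lines.
raw = """First sample, possibly
spanning several lines.


Second sample.


Third sample."""

# Two blank lines == three consecutive newlines acting as the delimiter.
samples = [chunk.strip() for chunk in raw.split("\n\n\n") if chunk.strip()]
print(len(samples))  # 3
```

Each resulting chunk would then be tokenized and fed to the finetuning loop as an independent training example.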

Show HN: ChatLLaMA – A ChatGPT style chatbot for Facebook's LLaMA

ChatLLaMA is an experimental chatbot interface for interacting with variants of Facebook's LLaMA. Currently, we support the 7-billion-parameter variant that was fine-tuned on the Alpaca dataset. This early version isn't as conversational as we'd like, but over the next week or so we're planning on adding support for the 30-billion-parameter variant, another variant fine-tuned on LAION's OpenAssistant dataset, and more as we explore what this model is capable of.

If you want to deploy your own instance of the model powering the chatbot and build something similar, we've open-sourced the Truss here: https://github.com/basetenlabs/alpaca-7b-truss

We'd love to hear any feedback you have. You can reach me on Twitter @aaronrelph or Abu (the engineer behind this) @aqaderb.

Disclaimer: We both work at Baseten. This was a weekend project. Not trying to shill anything; just want to build and share cool stuff.

Show HN: Pair: Open Tool for Coding with GPTs, Built by Coding with GPTs

GitHub Copilot is a great tool for leveraging GPTs while coding, but I find that it is too "open loop" for more complex tasks that require Q&A, feedback to guide it in a particular direction, iteration on code execution errors, etc. There is a large class of tasks that are better accomplished in an iterative, stateful chat-like interface.

I have been experimenting with a local command-line chat interface to GPT-4, and my mind was blown once again a few days ago when I copied documentation for a pretty involved API into the model context and managed to chat-guide GPT-4 to implement the API in under 30 minutes, complete with a ridiculous amount of unit test coverage.

This involved a lot of manual copying and pasting back and forth and other friction points that could be removed by a streamlined REPL interface optimized for code interactions. It occurred to me that it would be fun to build such a tool, and, as the ultimate act of dogfooding, try to build it with GPT!

So PAIR is the starting point here. You can see that a recent commit message has a log of my interactions with the model that produced that commit.

The next step is to add better mechanisms to manage the model input context (e.g. make it easy for the model to see the latest version of a source file when needed), followed by mechanisms for allowing the model to suggest changes via diffs that are quickly reviewed and accepted by the human in the loop before being applied to the file and tested.

I would love to hear from others who have experimented with GPT pair programming in a chat-style interface, and any feedback you might have on your experience with it.

Show HN: Watermelon – GPT-powered code contextualizer

Hey there HN! We're Esteban and Esteban, and we're looking for feedback on the new version of our GPT-powered, open-source code contextualizer.

We're starting with a VS Code extension that indexes information from git (GitHub, GitLab, or Bitbucket integrations available), Slack, and Jira to explain the context around a file or block of code. Finally, we summarize that aggregated context using the power of GPT.

As devs, we know it's very annoying to dive into a new codebase and have to work out all the nuances, particularly when the person who wrote the code has already left the company. With this problem in mind, we decided to build this solution. You'll be able to get into "the ghost" of the person who left.

Soon, we will also be building a GitHub Action that does the same thing as the VS Code extension but at the time of creating a PR: index the most relevant information related to the new PR and add it as a comment. This way we provide context at one more moment, and we'll also be making the IDE extension better.

Here's our open-source repo if you want to check it out: https://github.com/watermelontools/watermelon-extension

Please give us your feedback! Thanks.

Show HN: Public transportation signage based on bloom filters (rough mockup)

Hello! I was running around Germany, hectically navigating public transportation, and getting lost all the time. I noticed that every station had i platforms, each with a list of n buses (trains, whatever) arriving, and each bus had its own list of m destinations. That means I would be scanning i × n × m items just to see if I was at the correct stop. Because I was nervous, for every bus that arrived I would rescan the list of stops to double-check. I began thinking about how I could make a better system.

Linked is a very rough mockup of how Bloom filters could be used to give passengers O(1) lookup time for which platform + bus is the correct one. I believe public transportation is likely to grow increasingly complex in the future as population grows, and under the current list-based system that will make the signage ever more complex. I think some Bloom-filter mechanism could reduce that complexity.

So, here is my fantasy, my daydream. What do you think?
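For readers unfamiliar with the data structure: a Bloom filter hashes each item k ways into a fixed-size bit array, so membership checks touch only k bits no matter how many destinations a platform serves — at the cost of occasional false positives. A minimal self-contained sketch of the idea (parameters and destination names are made up for illustration; the mockup's actual encoding may differ):

```python
import hashlib

M, K = 256, 4  # bit-array size and number of hashes (tuning knobs)

def _positions(item: str):
    """Derive K bit positions for an item from salted SHA-256 digests."""
    for i in range(K):
        digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
        yield int.from_bytes(digest[:4], "big") % M

def add(bits: int, item: str) -> int:
    """Set the item's K bits in the filter (stored as one integer)."""
    for pos in _positions(item):
        bits |= 1 << pos
    return bits

def might_contain(bits: int, item: str) -> bool:
    """True if all K bits are set -- i.e. the item is probably present."""
    return all(bits >> pos & 1 for pos in _positions(item))

# One small integer per platform encodes every destination it serves,
# so a passenger's check is O(1) regardless of route-list length.
platform = 0
for dest in ["Hauptbahnhof", "Flughafen", "Messe"]:
    platform = add(platform, dest)

print(might_contain(platform, "Hauptbahnhof"))  # True
print(might_contain(platform, "Ostkreuz"))      # almost certainly False
```

The trade-off worth noting for signage: a Bloom filter can occasionally say "yes" for a destination the platform does not serve, so a real deployment would need to size M and K for an acceptably tiny false-positive rate.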
