The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: We built the fastest spreadsheet

Show HN: Pgs.sh – A zero-install static site hosting service for hackers

Show HN: I made an app to use local AI as daily driver

Hi Hackers,

Excited to share a macOS app I've been working on: https://recurse.chat/ for chatting with local AI. While it's amazing that you can run AI models locally quite easily these days (through llama.cpp, llamafile, ollama, the llm CLI, etc.), I found myself missing a feature-complete chat interface. Tools like LM Studio are super powerful, but there's a learning curve to them. I'd like to hit a middle ground of simplicity and customizability for advanced users.

Here's what sets RecurseChat apart from similar apps:

- UX designed for using local AI as a daily driver. Zero-config setup; supports multi-modal chat, chatting with multiple models in the same session, and linking your own gguf file.

- Import ChatGPT history. This is probably my favorite feature. Import your hundreds of messages, search them, and even continue previous chats using local AI offline. (A rough sketch of what such an import involves follows below.)

- Full-text search. Search hundreds of messages and see results instantly.

- Private and capable of working completely offline.

Thanks to the amazing work of @ggerganov on llama.cpp, which made this possible. If there is anything you wish existed in an ideal local AI app, I'd love to hear about it.
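
For context, here is a minimal sketch of what importing a ChatGPT history can look like. It assumes ChatGPT's data export format (a conversations.json file where each conversation stores its messages as a graph of nodes under "mapping"); this is an illustration only, not RecurseChat's actual importer, and the format details may differ.

    import json

    def load_chatgpt_export(path):
        # Parse conversations.json from a ChatGPT data export (assumed format).
        with open(path, encoding="utf-8") as f:
            conversations = json.load(f)
        for conv in conversations:
            messages = []
            for node in conv.get("mapping", {}).values():
                msg = node.get("message")
                if not msg or not msg.get("content"):
                    continue
                # Message text is assumed to live in content.parts as strings.
                parts = msg["content"].get("parts") or []
                text = " ".join(p for p in parts if isinstance(p, str)).strip()
                if text:
                    messages.append((msg["author"]["role"], text))
            yield conv.get("title") or "Untitled", messages

    # Usage: feed each conversation into a local store / full-text index.
    for title, messages in load_chatgpt_export("conversations.json"):
        print(title, "-", len(messages), "messages")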

Show HN: Sqlbind a Python library to compose raw SQL

Show HN: Mountaineer – Webapps in Python and React

Hey HN, I'm Pierce. Today I'm open sourcing a beta of Mountaineer, an integrated framework for building webapps in React and Python.

I've written a good 25+ webapps over the last few years in almost every major framework under the sun. Python and React remain my favorites. They let you get started quickly and grow to scale. But the developer experience of linking these two worlds remains less than optimal:

- Sharing typehints and schemas across frontend and backend code
- Scattered fetch() calls to template data and modify server objects
- Server-side rendering / gateway support
- Error handling on frontend fetches

Mountaineer is an attempt to solve those problems. I didn't want to re-invent the wheel of what Python and React are good at, so it's relatively light on syntax. It provides one frontend hook for React apps and introduces an MVC convention on the backend for managing views. Support files are generated progressively through a local watcher, so IDE type hints and function calls work out of the box.

It's more intuitive to explain with some code, so pop over to the GitHub repo if you're interested in this stack and want to take a look:

GitHub: https://github.com/piercefreeman/mountaineer

More context: https://freeman.vc/notes/mountaineer-v01-webapps-in-python-and-react

Would love to hear your thoughts!
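
To make the backend convention above concrete, here is an illustrative sketch of the pattern: a controller declares the data its React view receives as a typed model, from which a generator (the local watcher, per the post) can emit matching TypeScript types and a typed frontend hook. The class and method names below are guesses for illustration, not necessarily Mountaineer's actual API; see the repo for the real syntax.

    from pydantic import BaseModel

    class DashboardRender(BaseModel):
        # The single schema shared by backend and frontend: TypeScript
        # interfaces can be generated from this model.
        username: str
        unread_count: int

    class DashboardController:
        url = "/dashboard"

        async def render(self) -> DashboardRender:
            # Server-side data fetch; in an MVC setup like this, the
            # framework hands the result to the React view (and to SSR).
            return DashboardRender(username="demo", unread_count=3)

    # On the frontend, a generated hook would expose this data with full
    # type information, avoiding hand-written fetch() calls.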

Show HN: I built an open-source data copy tool called ingestr

Hi there, Burak here. I built an open-source data copy tool called ingestr (https://github.com/bruin-data/ingestr).

I've built quite a few data warehouses, both for the companies I worked at and for consultancy projects. One of the more common pain points I observed was that everyone had to rebuild the same data ingestion bit over and over again, each in different ways:

- some wrote code for the ingestion from scratch, to various degrees
- some used off-the-shelf data ingestion tools like Fivetran / Airbyte

I have always disliked both of these approaches, for different reasons, but never got around to working on what I'd imagine to be the better way forward.

The solutions that required writing code for copying the data carried quite a bit of overhead: how to generalize them, what language/library to use, where to deploy, how to monitor, how to schedule, etc. I ended up figuring out solutions for each of these matters, but the process always felt suboptimal. I like coding, but for more novel stuff than copying a table from Postgres to BigQuery. There are libraries like dlt (awesome lib btw, and awesome folks!), but that still required me to write, deploy, and maintain the code.

Then there are solutions like Fivetran or Airbyte, where there's a UI and everything is managed through there. While it is nice that I didn't have to write code for copying the data, I still had to either pay some unknown, hard-to-predict amount of money to these vendors or host Airbyte myself, which is roughly back to square one (for me, since I want to maintain the least amount of tech myself). Nothing was versioned, people were changing things in the UI and breaking the connectors, and what worked yesterday didn't work today.

I had a bit of spare time a couple of weeks ago and wanted to take a stab at the problem. I had been thinking about standardizing the process for quite some time, and dlt had some abstractions that allowed me to quickly prototype a CLI that copies data from one place to another. I made a few decisions (that I hope I won't regret in the future):

- everything is a URI: every source and every destination is represented as a URI
- only one thing is copied at a time: a single table within a single command, not a full database with an unknown number of tables
- incremental loading is a must, but doesn't have to be super flexible: I decided to support full-refresh, append-only, merge, and delete+insert incremental strategies, because I believe this covers 90% of the use cases out there (the delete+insert strategy is sketched below)
- it is CLI-only, and can be configured with flags & env variables so that it can be automated quickly, e.g. dropped into GitHub Actions and run daily

The result ended up being `ingestr` (https://github.com/bruin-data/ingestr).

I am pretty happy with how the first version turned out, and I plan to add support for more sources & destinations. ingestr is built to be flexible with various source and destination combinations, and I plan to introduce more non-DB sources such as Notion, GSheets, and custom APIs that return JSON (which I am not sure how exactly I'll do, but I'm open to suggestions!).

To be perfectly clear: I don't think ingestr covers 100% of data ingestion/copying needs out there, and it doesn't aim to. My goal is to cover most scenarios with a decent set of trade-offs, so that common cases can be solved easily without writing code or managing infra. There will be more complex needs that require engineering effort by others, and that's fine.

I'd love to hear your feedback on how ingestr can serve data-copying needs better. Looking forward to hearing your thoughts!

Best, Burak
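
For readers unfamiliar with the strategies named above, here is a minimal, self-contained sketch of the delete+insert strategy against a SQLite destination: reloading a partition deletes the destination rows for that partition, then inserts the fresh batch, so re-runs don't duplicate data. This illustrates the general technique only; ingestr's real implementation (built on dlt) differs, and the table and columns are made up.

    import sqlite3

    def delete_insert(conn, rows, day):
        # Delete+insert: remove the destination rows for the reloaded
        # partition (one day here), then insert the new batch.
        cur = conn.cursor()
        cur.execute("DELETE FROM events WHERE day = ?", (day,))
        cur.executemany(
            "INSERT INTO events (day, user_id, value) VALUES (?, ?, ?)",
            rows,
        )
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (day TEXT, user_id INTEGER, value REAL)")
    # First load of 2024-03-01, then a corrected re-load of the same day.
    delete_insert(conn, [("2024-03-01", 1, 9.5)], "2024-03-01")
    delete_insert(conn, [("2024-03-01", 1, 10.0), ("2024-03-01", 2, 4.2)], "2024-03-01")
    print(conn.execute("SELECT COUNT(*) FROM events").fetchone())  # -> (2,)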

Show HN: AI dub tool I made to watch foreign language videos with my 7-year-old

Hey HN!

I love watching YouTube with my 7-year-old daughter. Unfortunately, the best stuff is often in English (we're German). So I made an AI tool that translates videos directly, using the original voices. All other sounds, as well as background music, are preserved, too.

It turns out to work for many other language pairs, too. So far, it can create dubs in English, Mandarin Chinese, Spanish, Arabic, French, Russian, German, Italian, Korean, Polish and Dutch.

The main challenge in building this was getting the balance right between staying faithful to the original meaning and getting the timing right, especially for language pairs like English -> German, where the target is often longer than the source ("bat" -> "Fle-der-maus", "speed" -> "Ge-schwin-dig-keit"). A sketch of that trade-off follows below.

Let me know what you think! :)
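
One common way to handle that trade-off is to time-stretch the synthesized speech so it fits the original segment, up to a cap, and to request a tighter translation when even that isn't enough. The sketch below is hypothetical: the durations and the 1.3x cap are made-up illustration values, not what this tool actually uses.

    MAX_SPEEDUP = 1.3  # assumed threshold beyond which sped-up speech sounds bad

    def fit_dub(original_secs: float, dubbed_secs: float):
        # Decide how to fit dubbed audio into the original segment's slot.
        ratio = dubbed_secs / original_secs
        if ratio <= 1.0:
            return ("pad", 1.0)             # dub is short enough: pad with silence
        if ratio <= MAX_SPEEDUP:
            return ("time_stretch", ratio)  # speed up speech to fit the slot
        return ("retranslate", ratio)       # too long: ask for a shorter translation

    # "speed" (1 syllable) -> "Ge-schwin-dig-keit" (4 syllables) is likely too long:
    print(fit_dub(original_secs=0.4, dubbed_secs=0.9))  # -> ('retranslate', 2.25)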

Show HN: Darwin – Automate Your GitHub Project with AI

Hey HN! I've been working on a project called Darwin that I'm thrilled to share with you.

Darwin is essentially your GitHub agent powered by large language models (LLMs). It checks out your projects, understands them through natural language prompts, and automates tasks such as fixing issues, documenting code, reviewing pull requests, and more.

What drove me to create Darwin was a desire to harness the power of LLMs in a way that's seamlessly integrated with the tools I use daily. The motivation came from my curiosity about what could be possible when writing code that understands code. Darwin stands out because it's designed for developers who want to leverage AI without needing deep expertise in LLMs or prompt engineering. It offers:

- A hands-off approach to automating routine development tasks
- Novel and creative ways of making LLMs work for you
- A unique API for each project, allowing for customized automation tools

Currently, Darwin is in alpha. It's functional, with users able to connect their repositories, define tools, and run tasks. I'm especially interested in feedback at this stage, on everything from output quality to user experience. Every project starts with a $5 free budget to try it out, and while payment isn't implemented yet, I'm keen to hear your thoughts.

The vision for Darwin is not just automation, but a more productive, creative, and enjoyable development experience. I believe we're just scratching the surface of what's possible with AI in software development, and I'm excited to see where we can take this.

For those interested, I'm looking for alpha testers and feedback. If you're curious about automating your GitHub workflow or want to push the limits of what AI can do for development, Darwin might be for you. Check it out and let me know what you think!
