The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: Mountaineer – Webapps in Python and React

Hey HN, I'm Pierce. Today I'm open sourcing a beta of Mountaineer, an integrated framework for building webapps in React and Python.

I've written a good 25+ webapps over the last few years in almost every major framework under the sun. Python and React remain my favorites. They let you get started quickly and grow to scale. But the developer experience of linking these two worlds remains less than optimal:

— Sharing typehints and schemas across frontend and backend code

— Scattered fetch() calls to template data and modify server objects

— Server-side rendering / gateway support

— Error handling on frontend fetches

Mountaineer is an attempt to solve those problems. I didn't want to re-invent the wheel of what Python and React are good at, so it's relatively light on syntax. It provides one frontend hook for React apps and introduces an MVC convention on the backend for managing views. Support files are generated progressively by a local watcher, so IDE type-hints and function calls work out of the box.

It's more intuitive to explain with some code, so pop over to GitHub if you're interested in this stack and want to take a look:

GitHub: https://github.com/piercefreeman/mountaineer

More context: https://freeman.vc/notes/mountaineer-v01-webapps-in-python-and-react

Would love to hear your thoughts!
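
For a concrete feel, here is a minimal sketch of the backend half of that MVC convention, based on the project's README at the time of writing; treat the exact class and attribute names as approximate and check the repo for the current API.

    from mountaineer import ControllerBase, RenderBase

    class HomeRender(RenderBase):
        # Typed payload that the linked React view receives.
        todos: list[str]

    class HomeController(ControllerBase):
        url = "/"
        view_path = "/app/home/page.tsx"  # React component rendered for this route

        async def render(self) -> HomeRender:
            # The return value is serialized for the frontend, with matching
            # TypeScript types generated by the local watcher.
            return HomeRender(todos=["walk the dog"])

On the React side, the post's "one frontend hook" then exposes this payload (and typed server actions) inside the page component.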

Show HN: I built an open-source data copy tool called ingestr

Hi there, Burak here. I built an open-source data copy tool called ingestr (https://github.com/bruin-data/ingestr).

I've built quite a few data warehouses, both for the companies I worked at and for consultancy projects. One of the more common pain points I observed was that everyone had to rebuild the same data ingestion bit over and over again, each in their own way:

- some wrote code for the ingestion from scratch, to various degrees

- some used off-the-shelf data ingestion tools like Fivetran / Airbyte

I have always disliked both of these approaches, for different reasons, but never got around to working on what I'd imagine to be the better way forward.

The solutions that required writing code for copying the data had quite a bit of overhead: how to generalize them, what language/library to use, where to deploy, how to monitor, how to schedule, etc. I ended up figuring out solutions for each of these matters, but the process always felt suboptimal. I like coding, but for more novel stuff than copying a table from Postgres to BigQuery. There are libraries like dlt (awesome lib btw, and awesome folks!), but those still required me to write, deploy, and maintain the code.

Then there are solutions like Fivetran or Airbyte, where there's a UI and everything is managed through it. While it was nice not having to write code for copying the data, I still had to either pay an unknown, hard-to-predict amount of money to these vendors or host Airbyte myself, which is roughly back to square one (for me, since I want to maintain the least amount of tech myself). Nothing was versioned, people were changing things in the UI and breaking the connectors, and what worked yesterday didn't work today.

I had a bit of spare time a couple of weeks ago and wanted to take a stab at the problem. I had been thinking about standardizing the process for quite some time, and dlt had some abstractions that allowed me to quickly prototype a CLI that copies data from one place to another. I made a few decisions (that I hope I won't regret in the future):

- everything is a URI: every source and every destination is represented as a URI

- only one thing is copied at a time: a single table within a single command, not a full database with an unknown number of tables

- incremental loading is a must, but doesn't have to be super flexible: I decided to support full-refresh, append-only, merge, and delete+insert incremental strategies, because I believe this covers 90% of the use cases out there

- it is CLI-only, and can be configured with flags & env variables so that it can be automated quickly, e.g. dropped into GitHub Actions and run daily (see the sample invocation below)

The result ended up being `ingestr` (https://github.com/bruin-data/ingestr).

I'm pretty happy with how the first version turned out, and I plan to add support for more sources & destinations. ingestr is built to be flexible with various source and destination combinations, and I plan to introduce more non-DB sources such as Notion, GSheets, and custom APIs that return JSON (which I'm not sure exactly how I'll do, but I'm open to suggestions!).

To be perfectly clear: I don't think ingestr covers 100% of data ingestion/copying needs out there, and it doesn't aim to. My goal is to cover most scenarios with a decent set of trade-offs, so that common scenarios can be solved easily without having to write code or manage infra. There will be more complex needs that require engineering effort beyond it, and that's fine.

I'd love to hear your feedback on how ingestr can serve data-copying needs better; looking forward to hearing your thoughts!

Best, Burak
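
As a sample invocation, a Postgres-to-BigQuery copy looks roughly like this (modeled on the project's README; double-check the repo for the exact flags and URI formats):

    ingestr ingest \
        --source-uri 'postgresql://admin:admin@localhost:5432/web' \
        --source-table 'public.some_data' \
        --dest-uri 'bigquery://<project-name>?credentials_path=/path/to/service_account.json' \
        --dest-table 'ingestr.some_data'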

Show HN: AI dub tool I made to watch foreign language videos with my 7-year-old

Hey HN!

I love watching YouTube with my 7-year-old daughter. Unfortunately, the best stuff is often in English (we're German). So I made an AI tool that translates videos directly, using the original voices. All other sounds, as well as background music, are preserved too.

It turns out that it works for many other language pairs as well. So far, it can create dubs in English, Mandarin Chinese, Spanish, Arabic, French, Russian, German, Italian, Korean, Polish and Dutch.

The main challenge in building this was getting the balance right between preserving the original meaning and getting the timing right, especially for language pairs like English -> German, where the target is often longer than the source ("bat" -> "Fle-der-maus", "speed" -> "Ge-schwin-dig-keit").

Let me know what you think! :)
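
One way to picture that timing problem: given several candidate translations for a clip, prefer the one whose estimated spoken duration best fits the clip's length. The sketch below is purely illustrative; the syllable heuristic and speaking rate are assumptions, not the tool's actual internals.

    import re

    SYLLABLES_PER_SECOND = 5.5  # rough average speaking rate (assumption)

    def estimate_duration(text: str) -> float:
        """Crudely estimate spoken duration via vowel-group counts."""
        syllables = len(re.findall(r"[aeiouyäöü]+", text.lower()))
        return max(syllables, 1) / SYLLABLES_PER_SECOND

    def pick_best_fit(candidates: list[str], slot_seconds: float) -> str:
        """Choose the candidate translation closest to the source clip's length."""
        return min(candidates, key=lambda t: abs(estimate_duration(t) - slot_seconds))

    # A 0.5s English clip saying "bat": prefer the shorter German rendering.
    print(pick_best_fit(["Die Fledermaus dort", "Fledermaus"], 0.5))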

Show HN: Darwin – Automate Your GitHub Project with AI

Hey HN! I've been working on a project called Darwin that I'm thrilled to share with you.

Darwin is essentially your GitHub agent powered by large language models (LLMs). It checks out your projects, understands them through natural language prompts, and automates tasks such as fixing issues, documenting code, reviewing pull requests, and more.

What drove me to create Darwin was a desire to harness the power of LLMs in a way that's seamlessly integrated with the tools I use daily. The motivation came from my curiosity about what could be possible when writing code that understands code. Darwin stands out because it's designed for developers who want to leverage AI without needing deep expertise in LLMs or prompt engineering. It offers:

- A hands-off approach to automating routine development tasks

- Novel and creative ways of making LLMs work for you

- A unique API for each project, allowing for customized automation tools

Currently, Darwin is in alpha. It's functional: users can connect their repositories, define tools, and run tasks. I'm especially interested in feedback at this stage, everything from output quality to user experience. Every project starts with a $5 free budget to try it out, and while payment isn't implemented yet, I'm keen to hear your thoughts.

The vision for Darwin is not just automation but a more productive, creative, and enjoyable development experience. I believe we're just scratching the surface of what's possible with AI in software development, and I'm excited to see where we can take this.

For those interested, I'm looking for alpha testers and feedback. If you're curious about automating your GitHub workflow or want to push the limits of what AI can do for development, Darwin might be for you. Check it out and let me know what you think!

Show HN: Free Certificate Monitoring via RSS

Show HN: R2R – Open-source framework for production-grade RAG

Hello HN, I'm Owen from SciPhi (https://www.sciphi.ai/), a startup working on simplifying Retrieval-Augmented Generation (RAG). Today we're excited to share R2R (https://github.com/SciPhi-AI/R2R), an open-source framework that makes it simpler to develop and deploy production-grade RAG systems.

A quick reminder: RAG helps Large Language Models (LLMs) use current information and specific knowledge. For example, it allows a programming assistant to use your latest documents to answer questions. The idea is to gather all the relevant information ("retrieval") and present it to the LLM along with a question ("augmentation"). This way, the LLM can provide answers ("generation") as though it had been trained directly on your data.

The R2R framework addresses key challenges in deploying RAG systems while avoiding the complex abstractions common in other projects. Through conversations with numerous developers, we discovered that many were independently building similar solutions. R2R distinguishes itself by taking a straightforward approach to streamlining the setup, monitoring, and upgrading of RAG systems; specifically, it focuses on reducing unnecessary complexity and improving the visibility and tracking of system performance.

The key parts of R2R: an Ingestion Pipeline that transforms different data types (like json, txt, pdf, html) into 'Documents' ready for embedding; an Embedding Pipeline that turns text into vector embeddings through a series of steps (extracting text, transforming it, chunking, and embedding); and a RAG Pipeline that follows the steps of the embedding pipeline but adds an LLM provider to create text completions.

R2R is currently in use at several companies building applications ranging from B2B lead generation to educational tools for consumers.

Our GitHub repo (https://github.com/SciPhi-AI/R2R) includes basic examples of application deployment and standalone use, demonstrating the framework's adaptability in a simple way.

We'd love for you to give R2R a try, and we welcome your feedback and comments as we refine and develop it further!
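
To make those three stages concrete, here is a toy end-to-end sketch of the same ingest / embed / retrieve-and-generate flow. This is not R2R's actual API; the bag-of-characters embedding stands in for a real embedding provider, and the final prompt would be sent to an LLM.

    import math

    def ingest(raw_docs: list[str], chunk_size: int = 200) -> list[str]:
        """Ingestion: split raw documents into fixed-size text chunks."""
        return [doc[i:i + chunk_size]
                for doc in raw_docs
                for i in range(0, len(doc), chunk_size)]

    def embed(text: str) -> list[float]:
        """Embedding: toy normalized letter-frequency vector (stand-in)."""
        vec = [0.0] * 26
        for ch in text.lower():
            if "a" <= ch <= "z":
                vec[ord(ch) - 97] += 1.0
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
        """Retrieval: rank chunks by cosine similarity to the query."""
        q = embed(query)
        ranked = sorted(chunks, key=lambda c: -sum(a * b for a, b in zip(q, embed(c))))
        return ranked[:k]

    def rag_prompt(query: str, chunks: list[str]) -> str:
        """Augmentation: assemble the prompt the LLM provider would complete."""
        context = "\n".join(retrieve(query, chunks))
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

    docs = ingest(["RAG grounds LLM answers in retrieved documents."])
    print(rag_prompt("What does RAG do?", docs))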

Show HN: AboutIdeasNow – search /about, /ideas, /now pages of 7k+ personal sites

Hi HN!

It's hard to find interesting people to work with on your ideas.

Our solution: index the /about, /ideas, /now pages of thousands of personal websites. There are thousands of cool personal sites out there with amazing ideas on them, but there's nowhere to easily search through them. So we built a simple site that indexes 7k+ personal sites [0]. We were inspired by Derek Sivers' Now page movement [1] and other IndieWeb directories [2], but we figured it would be more useful if we:

* Let you search directly across personal sites without having to visit them

* Take the content from 3 specific pages, /about, /now and /ideas, to structure everything

* Define /ideas pages as a space to articulate things you want to work on

We hope this'll be a cool place for people to find others to collaborate with; we'd love your feedback. If you'd like your site to appear at the top, add it via the form and add a last-updated date of today (any format). It's completely open source (MIT) and open to contributions [3]!

Peter & Louis

[0] Gathered from: 1) https://nownownow.com and similar sites, 2) checking all HN posts since 2020 with more than 100 upvotes

[1] https://nownownow.com

[2] https://personalsit.es

[3] https://github.com/lindylearn/aboutideasnow
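
As a sketch of the indexing idea (not the site's actual crawler), fetching the three conventional pages per domain can be as simple as the following; the domain in the usage line is just an example.

    import urllib.request

    PAGES = ("/about", "/now", "/ideas")

    def crawl(domain: str) -> dict[str, str]:
        """Fetch the three conventional pages from one site, skipping failures."""
        found = {}
        for path in PAGES:
            url = f"https://{domain}{path}"
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    found[path] = resp.read().decode("utf-8", errors="replace")
            except Exception:
                continue  # page missing or site down; move on
        return found

    print(list(crawl("sive.rs").keys()))  # Derek Sivers' site publishes a /now page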
