The best Hacker News stories from Show from the past day

Latest posts:

Show HN: Ferris, social network for IRL activities with your closest friends

Show HN: Pathfinding Visualizer

Decided to remake my old pathfinding project with hexagonal tiles. Pretty happy with how it turned out.

Source code: https://github.com/honzaap/Pathfinding
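
The project's source isn't excerpted here, but the usual approach to pathfinding on hexagonal tiles (a sketch, not the author's actual code) represents each tile in axial (q, r) coordinates, where every tile has exactly six neighbors, and then runs a standard search such as BFS over those neighbors:

```python
from collections import deque

# The six neighbor offsets of a hex tile in axial (q, r) coordinates.
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def bfs_path(start, goal, passable):
    """Shortest path on an unweighted hex grid via breadth-first search.

    `passable` is a predicate deciding whether a (q, r) tile is walkable;
    it stands in for whatever grid/obstacle representation the project uses.
    """
    frontier = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Walk the parent links back to the start, then reverse.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for dq, dr in HEX_DIRS:
            nxt = (current[0] + dq, current[1] + dr)
            if nxt not in came_from and passable(nxt):
                came_from[nxt] = current
                frontier.append(nxt)
    return None  # goal unreachable

# Hexagonal board of radius 3 centered on the origin, no obstacles.
in_board = lambda t: max(abs(t[0]), abs(t[1]), abs(t[0] + t[1])) <= 3
print(bfs_path((0, 0), (2, -2), in_board))
```

Swapping BFS for A* with the hex distance `(|dq| + |dr| + |dq + dr|) / 2` as the heuristic is the common next step for weighted grids.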

Show HN: Feather – 90 percent of Bloomberg terminal, for 5 percent of the price

Hey,

Wanted to share what my friend and I built: Feather. It provides investors with all imaginable financial data without breaking the bank; effectively 90 percent of the Bloomberg Terminal at 5 percent of the price.

We just opened sign-ups for early access; all you need to sign up is your email address. We'll open access to the software in order of sign-up, and we'd love to have you onboard.

Check it out!

https://try-feather.com

Show HN: Domfetch.com - free tool to find expired domains with history

We have finally launched Domfetch!

Domfetch is a free platform for finding expired domains. Users can search through domains that are (almost) available for registration. We enrich these domains with extra data to help users find valuable ones.

We created this tool because we found the (free) alternatives lacking certain data, such as Moz metrics, Alexa history (we check 5 years of data), and search volume history over a period of 1 year.

Let us know what you think! More features and TLDs will be added in the near future.

Show HN: I developed a fast general purpose sorting algorithm

Show HN: OpsFlow – Low-code DevOps – Webflow for infrastructure

Opsflow cofounder here! We launched a bunch of things recently, some of them well received (Terragen, AWS Bootstrap). It's no secret that AWS is hard, and our mission is to make it simple.

What the HN crowd helped us realise is that the UI was still not simple enough; rather confusing, in fact.

So in the last couple of weeks we have radically simplified it based on your feedback. We have removed unnecessarily complex concepts like Services and Environments; it's just Apps now. All options are now on the new Settings screen. Instead of separate Infrastructure and Software deployments, there is now a single sequence of steps.

Check it out, and tell us what you think!

Here's our launch on Product Hunt: https://www.producthunt.com/posts/opsflow

Show HN: Easily Convert WARC (Web Archive) into Parquet, Then Query with DuckDB

Show HN: In-depth photographic look at all the golf courses I play

I'm an avid golfer; it's my main hobby. I decided to start taking pictures of all the courses I play. While there are a lot of golf websites out there, none of them really try to document the courses in depth and look at each hole, along with course facilities like the practice areas. I live in Chicago and am starting with the courses in this area (of which there are dozens of public ones to play).

While I play golf, I take photos with my phone of every (relevant) aspect of the course I can think of. Then they're processed and organized on the website.

Obviously I'm starting this journey on my own, and in that sense it's not scalable. I won't be able to visit all the courses in the US, let alone the world. I hope to find others who would like to contribute to the effort.

At some point I'd like to add course news and histories to the site. Many golf courses in the US are over 100 years old and have rich histories. And of course many older courses exist in Europe.

I've also started adding descriptions/commentary for each hole. For example, see:

https://www.golfscout.net/golf-hole/wilmette-golf-club-wilmette-illinois/5

And maybe I went a little overboard on this one:

https://www.golfscout.net/golf-hole/billy-caldwell-golf-course-chicago-illinois/8

Anyway, it's a fun project and could go in a lot of directions. PS: I'm always looking to expand my golfing circle. If you're in Chicago and want to play sometime, hit me up; contact details are on the website.

Show HN: Nerd Crawler – monitoring original comic art sites so you don't have to

I've been a fan of comics since I watched the X-Men Animated Series in the 90s, and I fell in love with collecting original comic art when I got my first Jim Lee sketch in high school.

But after missing out on some original comic art pieces because I didn't know when they were added for sale on websites, I decided to take it upon myself to make an app that monitors original comic art sites and emails/texts you when new art drops.

It's called Nerd Crawler. I'm building it myself, so there might be some bugs, but I'm hoping it helps comic art collectors. It works with over 40 original comic art websites like Albert Moy (Jim Lee's art dealer), Cadence Comic Art, Artcoholics, a bunch of Big Cartel sites (Jim Cheung, Jason Fabok, Dustin Nguyen), Greg Capullo Art, Skottie Young, and more.

It's free to try at https://www.nerdcrawler.com/, and you can upgrade to a paid plan if you want text message alerts or want to check sites every 10 minutes or every minute.

From a technical standpoint, my tech stack is:

- Ruby on Rails
- Hosted on Heroku
- Emails sent by Mailgun
- Texts sent by Twilio
- Images hosted on Cloudinary
- Credit card charging handled by Stripe and the new, low-code Stripe Checkout

The minimum viable product was built in about a week, with minor bug fixes and new features added weekly.

If you have any feedback, want art sites added, or have questions, let me know!

Show HN: Vimified – Master Vim by Developing Muscle Memory

Show HN: A simple website to show how NFTs are stored

Show HN: PTerm – Go module to beautify terminal output with interactive menus

Show HN: Data Diff – compare tables of any size across databases

Gleb, Alex, Erez and Simon here; we are building an open-source tool for comparing data within and across databases at any scale. The repo is at https://github.com/datafold/data-diff, and our home page is https://datafold.com/.

As a company, Datafold builds tools for data engineers to automate the most tedious and error-prone tasks that fall through the cracks of the modern data stack, such as data testing and lineage. We launched two years ago with a tool for regression-testing changes to ETL code: https://news.ycombinator.com/item?id=24071955. It compares the produced data before and after the code change and shows the impact on values, aggregate metrics, and downstream data applications.

While working with many customers on improving their data engineering experience, we kept hearing that they needed to diff their data across databases to validate data replication between systems.

There were three main use cases for such replication:

(1) To perform analytics on transactional data in an OLAP engine (e.g. PostgreSQL > Snowflake)
(2) To migrate between transactional stores (e.g. MySQL > PostgreSQL)
(3) To leverage data in a specialized engine (e.g. PostgreSQL > Elasticsearch)

Despite multiple vendors (e.g., Fivetran, Stitch) and open-source products (Airbyte, Debezium) solving data replication, there was no tooling for validating the correctness of such replication. When we researched how teams were going about this, we found that most have been either:

- Running manual checks: e.g., starting with COUNT(*) and then digging into the discrepancies, which often took hours to pinpoint the inconsistencies.
- Using distributed MPP engines such as Spark or Trino to download the complete datasets from both databases and then comparing them in memory; an expensive process requiring complex infrastructure.

Our users wanted a tool that could:

(1) Compare datasets quickly (seconds/minutes) at a large (millions/billions of rows) scale across different databases
(2) Have minimal network IO and database workload overhead
(3) Provide straightforward output: basic stats and which rows are different
(4) Be embedded into a data orchestrator such as Airflow to run right after the replication process

So we built Data Diff as an open-source package available through pip. Data Diff can be run in a CLI or wrapped into any data orchestrator such as Airflow, Dagster, etc.

To solve for speed at scale with minimal overhead, Data Diff relies on checksumming the data in both databases and uses binary search to identify diverging records. That way, it can compare arbitrarily large datasets in logarithmic time and IO, only transferring a tiny fraction of the data over the network. For example, it can diff tables with 25M rows in ~10s and 1B+ rows in ~5m across two physically separate PostgreSQL databases while running on a typical laptop.

We've launched this tool under the MIT license so that any developer can use it, and to encourage contributions of other database connectors. We didn't want to charge engineers for such a fundamental use case. We make money by charging a license fee for advanced solutions such as column-level data lineage, CI workflow automation, and ML-powered alerts.
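
The checksum-plus-bisection idea described above can be sketched in a few lines. This is an illustrative toy, not Data Diff's actual implementation: it assumes both tables are already sorted by key over the same key range, and it works on in-memory lists, whereas the real tool pushes the checksum aggregation down into each database so only hashes cross the network:

```python
import hashlib

def checksum(rows):
    """Hash a segment of (key, value) rows. In the real tool this runs
    inside each database, so only the digest leaves the server."""
    h = hashlib.md5()
    for key, value in rows:
        h.update(f"{key}:{value}".encode())
    return h.hexdigest()

def diff_segments(a, b, threshold=4):
    """Return keys whose rows differ, via checksum bisection.

    Assumes a and b cover the same key range in the same sorted order.
    Matching checksums prune whole segments; only mismatching segments
    are split and recursed into, giving roughly logarithmic work when
    few rows differ.
    """
    if checksum(a) == checksum(b):
        return []                      # segment identical: prune it
    if len(a) <= threshold:
        # Small enough: fetch the rows themselves and compare directly.
        return sorted({key for key, _ in set(a) ^ set(b)})
    mid = len(a) // 2
    return (diff_segments(a[:mid], b[:mid], threshold)
            + diff_segments(a[mid:], b[mid:], threshold))

table_a = [(i, f"v{i}") for i in range(1000)]
table_b = [(i, "CHANGED" if i == 617 else f"v{i}") for i in range(1000)]
print(diff_segments(table_a, table_b))  # the single diverging key
```

The `threshold` cutoff mirrors the point at which transferring actual rows becomes cheaper than another round of checksums; the real tool also segments by key range rather than list index so the two sides never need equal row counts.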
