The best Hacker News stories from Show from the past day

Latest posts:

Show HN: Fosscord – open-source and selfhostable alternative to discord

Show HN: Atto – BASIC computer that runs in the browser

Show HN: Hetchr, a Developer's Homepage

Show HN: Napkin – Build back-end functions in the browser

Show HN: Captain Stack – a parody version of GitHub Copilot

Show HN: VisaWhen – Data on US visa issuance backlogs

Heya! Not the usual sort of thing to post here, but I wanted to show off what I made yesterday. Here's a sample page about H-1B visas issued in Bogota:

https://visawhen.com/consulates/bogota/h1b

The code is source-available (not open source) at https://github.com/underyx/visawhen. It's my first time choosing a source-available license over MIT, mainly out of fear of existing immigration startups gobbling this data and code up; frankly, I didn't think the implications through, I just threw a safe license on there.

How the project works:

- Use requests-html to find publicly available PDFs on government pages.
- Use camelot to OCR the PDFs and extract data tables from them.
- Since the previous step takes too long for my tastes (around 8,000 pages at around 5 seconds each), use dask to split the work into chunks and process them in parallel across my laptop's CPU cores.
- Clean and process the data with pandas, and save all of it to a SQLite file.
- Read the SQLite file with Next.js and generate a static HTML page for each embassy/visa-type combination.
- The pages use ECharts to visualize the data and Bulma as the CSS framework.
- Build and host each commit via Netlify.
- Proxy to Netlify through Cloudflare, which I believe has more edge locations on the free plan.
- Collect donations via Ko-fi.
- Use Google Analytics for a general idea of visitor counts.
- Use FullStory session recordings to find bugs – I've fixed quite a few, and I'll probably remove this tracking after a while.

…and that's where I'm at now. I'm pretty happy with the results. Most pages load in under 300 ms, which is something I care about all too much. More importantly, I've shared the site with some immigration communities I'm part of, and the response has been very positive! Let me know what y'all think.
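The chunk-and-parallelize-then-save pattern the author describes can be sketched in plain Python. This is a minimal stand-in, not the real pipeline: `extract_rows` is a hypothetical placeholder for the camelot OCR step, and a thread pool stands in for dask (which the author uses because OCR is CPU-bound and benefits from multiple cores).

```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor

def extract_rows(pdf_path):
    """Hypothetical stand-in for the OCR/table-extraction step
    (the real project uses camelot). Fabricates one row per file."""
    return [(pdf_path, len(pdf_path))]

def chunked(items, size):
    """Split the work list into chunks before fanning it out,
    mirroring the dask chunking step."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def process_chunk(paths):
    rows = []
    for path in paths:
        rows.extend(extract_rows(path))
    return rows

def run(pdf_paths, db_path=":memory:", chunk_size=2):
    # Fan the chunks out in parallel (threads here for simplicity;
    # the real pipeline uses dask across CPU cores).
    with ThreadPoolExecutor() as pool:
        chunk_results = list(pool.map(process_chunk, chunked(pdf_paths, chunk_size)))
    # Persist everything, mirroring the pandas -> SQLite step.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS stats (source TEXT, value INTEGER)")
    with conn:
        for rows in chunk_results:
            conn.executemany("INSERT INTO stats VALUES (?, ?)", rows)
    return conn

conn = run(["jan.pdf", "feb.pdf", "mar.pdf"])
print(conn.execute("SELECT COUNT(*) FROM stats").fetchone()[0])  # → 3
```

The SQLite file then serves as the single hand-off point to the static-site build, which is what lets every page be pre-rendered and load fast.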
