The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: WASM-powered codespaces for Python notebooks on GitHub
Hi HN!

Last year, we shared marimo [1], an open-source reactive notebook for Python with support for execution through WebAssembly [2].

We wanted to share something new: you can now run marimo and Jupyter notebooks directly from GitHub in a Wasm-powered, codespace-like environment. What makes this powerful is that we mount the GitHub repository's contents as a filesystem in the notebook, making it really easy to share notebooks with data.

All you need to do is prepend 'marimo.app' to any Python notebook on GitHub. Some examples:

- Jupyter notebook: https://marimo.app/github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/02.08-Sorting.ipynb

- marimo notebook: https://marimo.app/github.com/marimo-team/marimo/blob/07e8d14109f7312f19916fd13e4046a561a740f8/examples/third_party/polars/polars_example.py

Jupyter notebooks are automatically converted into marimo notebooks using basic static analysis and source code transformations. Our conversion logic assumes the notebook was meant to be run top-down, which is usually but not always true [3]. It can convert many notebooks, but there are still some edge cases.

We implemented the filesystem mount using our own FUSE-like adapter that links the GitHub repository's contents to the Python filesystem, leveraging Emscripten's filesystem API. The file tree is loaded on startup to avoid waterfall requests when reading many directories deep, but file contents are loaded lazily. For example, when you write Python that looks like

```python
with open("./data/cars.csv") as f:
    print(f.read())

# or
import pandas as pd
pd.read_csv("./data/cars.csv")
```

behind the scenes, you make a request [4] to https://raw.githubusercontent.com/<org>/<repo>/main/data/cars.csv.

Docs: https://docs.marimo.io/guides/publishing/playground/#open-notebooks-hosted-on-github

[1] https://github.com/marimo-team/marimo

[2] https://news.ycombinator.com/item?id=39552882

[3] https://blog.jetbrains.com/datalore/2020/12/17/we-downloaded-10-000-000-jupyter-notebooks-from-github-this-is-what-we-learned/

[4] We technically proxy it through the playground (https://marimo.app) to fix CORS issues and GitHub rate-limiting.
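The eager-tree, lazy-contents scheme described above can be sketched in plain Python. This is a hypothetical illustration, not marimo's actual code: `LazyRepoFile` and its injectable `fetch` parameter are invented names, but the idea is the same -- the raw.githubusercontent.com URL is known when the tree is mounted, while the bytes are only downloaded on the first read and cached afterward.

```python
import urllib.request


class LazyRepoFile:
    """Hypothetical sketch of a lazily fetched file backed by GitHub raw URLs.

    The path is known up front (the whole tree is listed at startup),
    but the contents are only downloaded on first read, then cached.
    """

    def __init__(self, org, repo, ref, path, fetch=None):
        self.url = f"https://raw.githubusercontent.com/{org}/{repo}/{ref}/{path}"
        self._fetch = fetch or self._http_fetch  # injectable for testing
        self._buf = None  # nothing downloaded yet

    @staticmethod
    def _http_fetch(url):
        with urllib.request.urlopen(url) as resp:
            return resp.read()

    def read(self):
        if self._buf is None:  # lazy: fetch only on first access
            self._buf = self._fetch(self.url)
        return self._buf
```

A real adapter would also have to surface this through Emscripten's filesystem API so that ordinary `open()` calls hit it, but the lazy-fetch-and-cache core is the part sketched here.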
Show HN: A blocklist to remove spam and bad websites from search results
Hi HN!

I've been so fed up with search results that I decided to make a giant blocklist, for use with uBlacklist, to remove garbage links.

I browsed other blocklists and wasn't very satisfied with what exists now; the goal of this one is to be super organized and transparent, explaining via issues why each site was blocked. Contributions welcome!

Even though only around 100 domains are blocked so far, I've already noticed a big improvement in casual searches. You'd be surprised how some AI-generated websites can dominate the #1 page on DuckDuckGo.
Show HN: A daily digest for reMarkable
Show HN: Werk, a simple build tool and command runner
I made this for my personal workflow, but I'd love to get feedback from the community.
Show HN: New search engine and free-FOIA-by-fax-via-web for US veteran records
Hi HN. I'm the president and founder of a small non-profit called Reclaim The Records that identifies historical and genealogical materials and data sets held by government agencies, archives, and libraries -- and then returns them to the public domain, for free public use.

Back in September 2017, our organization made a Freedom of Information Act (FOIA) request to the US Department of Veterans Affairs (the VA) asking for a copy of a database they maintain called "BIRLS", which stands for the Beneficiary Identification Records Locator Subsystem. While it's not exactly an index of every single post-Civil-War veteran of every branch of the US military, it's possibly the closest thing that exists to it.

BIRLS is a database that indexes all the known-to-the-VA-in-or-after-the-1970s *veterans' benefits claims files*, also called C-Files or sometimes XC-Files. Older veterans' claims files have been moved to the National Archives (NARA), such as the famous Civil War pension files. But 95% of the later benefits claims files, from the late nineteenth century up to today, are still held at the VA, in their warehouses, and still haven't been sent to NARA.

And even if you know these files exist, the VA really doesn't make it easy to get them. The Veterans Benefits Administration (VBA) group within the VA only seems to accept FOIA requests for copies of C-Files by fax (!) and also seems to have made up a whole new rule whereby you have to have an actual wet-ink signature on your FOIA request, not just a typed letter.

Well, seven years and one very successful FOIA lawsuit in SDNY against the VA later, we at Reclaim The Records are very proud to announce the acquisition and first-ever free public release of the BIRLS database, AND that we built a new website to make the data freely and easily searchable, AND that we even built a free FOIA-by-fax API system (with a signature widget, to get around the dumb new not-FOIA rules!) built into our website's search results, which makes it much, much easier for people to finally get these files out of the VA warehouses and into your mailbox. :-)

We also added the ability to search the data for soundalike names, abbreviated names, common nicknames, and wildcards, or to search by date of birth or death, by ranges of birth and death years, by SSN, by branch(es) of service, or by gender...

For a lot more information about our FOIA lawsuit against the VA for the database, including copies of our court papers and the SDNY judge's order: https://mailchi.mp/reclaimtherecords/the-birls-database-goes-online-with-eighteen-million-us-veteran-records-and-free-foia-by-fax-system

As for the tech stuff -- actually building the website, the search engine, and its FOIAing capability -- well, it has been a pretty fun project to build.

The BIRLS dataset was eventually provided to us by the VA (several years after we originally asked for it...) as a large zip file which, when decompressed via the command line, yielded the hilarious file name of *Redacted_Full.csv*. I then loaded the cleaned CSV data into a MySQL database, and used a modified version of the Apache Solr search engine to index the data, making it searchable by soundalike names (using Beider-Morse Phonetic Matching), nicknames (using Solr's synonyms feature), and partial names (using wildcards), with dates converted to ISO 8601 format to enable both exact-date and date-range searches, along with various other search criteria.

The front-end of the website is built with Nuxt and hosted on DigitalOcean's App Platform, with backups of the FOIA request data on the cloud storage service Wasabi. The fax interface for submitting FOIA requests is powered by the Notifyre API. We use Mailchimp to send e-mail newsletters, and their product Mandrill for programmatic e-mail sending. We use Sentry for error monitoring, Better Stack for server logging, and Tinybird to collect FOIA submission analytics.

Enjoy!
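A nice property of ISO 8601 (YYYY-MM-DD), and the reason it helps a text index like Solr, is that lexicographic string order matches chronological order, so both exact-date and date-range queries become simple string comparisons. As a rough illustration of the normalization step, here is a hypothetical helper (`to_iso8601` and its list of input formats are invented for this sketch; the project's actual conversion code and source formats are unknown):

```python
from datetime import datetime

# Assumed input formats for illustration only; the real BIRLS
# export may use entirely different date representations.
_FORMATS = ("%m/%d/%Y", "%m-%d-%Y", "%Y%m%d", "%B %d, %Y")


def to_iso8601(raw: str) -> str:
    """Normalize a date string to ISO 8601 (YYYY-MM-DD).

    Tries each known format in turn; raises ValueError if none match,
    so bad rows can be flagged rather than silently mis-indexed.
    """
    for fmt in _FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {raw!r}")
```

Once every date is in this form, a range search like "born between 1918 and 1925" is just a string-range query over the indexed field.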
Show HN: Doom (1993) in a PDF
I made a Doom source port that runs inside a PDF file.

I was inspired by the recent HN post about Tetris in a PDF (https://news.ycombinator.com/item?id=42645218) and wondered if I could get Doom to run using a similar method.

It turns out that old versions of Emscripten can compile C to asm.js code that will happily run inside the limited JavaScript runtime of the PDF engine. I used the doomgeneric fork (https://github.com/ozkl/doomgeneric) of the original Doom source, as that made writing the I/O fairly easy. All I had to do was implement a framebuffer and keyboard input.

Unlike previous interactive PDF demos, DoomPDF's output is achieved by creating a text field for each row of pixels on the screen, then setting their contents to various ASCII characters. This gives me a six-color monochrome display that can be updated reasonably quickly (80 ms per frame).

The source code is available at https://github.com/ading2210/doompdf

Note that this PDF can only run in Chromium-based browsers that use the PDFium engine.
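The text-field framebuffer idea can be sketched outside the PDF. The snippet below is a hypothetical Python illustration (the real port does this in the PDF's embedded JavaScript, and DoomPDF's actual character ramp may differ): each screen row of grayscale pixels is quantized to one of six ASCII characters, producing one string per text field.

```python
# Hypothetical six-character brightness ramp, dark to light;
# the characters DoomPDF actually uses may be different.
RAMP = " .:-=#"


def row_to_text(pixels):
    """Quantize one row of 0-255 grayscale pixels to a six-level ASCII string.

    Each bucket of ~42 gray values maps to one ramp character; the
    resulting string would be assigned to that row's PDF text field.
    """
    step = 256 // len(RAMP)  # 256 / 6 = 42 gray values per bucket
    return "".join(RAMP[min(p // step, len(RAMP) - 1)] for p in pixels)
```

Updating the display then amounts to recomputing these strings from the game's framebuffer each frame and writing them into the stacked text fields.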
Show HN: Ultra-portable Gantt chart tool for very regulated environments
I work for a government agency with a lot of security considerations. We can't install anything, and using public web apps is out of the question. Going through clearance or procurement to buy or install something is a pain.

I needed a project management tool, and what we had on offer was too clunky and old. I built SimpleGantt to be ultra-lightweight and portable: it's one HTML, one JavaScript, and one CSS file. Each project is saved into a single .yaml file.

If you have a SharePoint environment, you can "host" it by uploading the repo to SharePoint after renaming simplegantt.html to simplegantt.aspx. That allows anyone with access to open the tool simply by having the URL.

Try it at: https://aerugo.github.io/simplegantt/simplegantt

This is a couple of days of tinkering, and it mostly exists to keep me from going crazy while managing projects with lots of deadlines and dependencies, so don't expect much. But if another person in the same position finds it, it might lead to calmer days.
Show HN: TubePen – My attempt to get more out of YouTube learning
Hi HN! I made this because I always forget what I'm trying to learn from YouTube.

Test yourself: can you remember the main concepts from the last (educational) video you watched?

So, why not highlight and take notes on YouTube videos, just like in books? That's TubePen.

Sign in, replace "youtube" with "tubepen" in your YouTube URL, and you're ready to retain more from your videos.

I'd love your feedback! What do you think of my landing page? Use the 10-day free trial and see if it's useful for you.

Thanks!