The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: AI Peer Reviewer – Multiagent system for scientific manuscript analysis
After waiting eight months for a journal response, or two months for co-author feedback that consisted of "looks good" and a single comma change, we built an AI-powered peer review system that helps researchers improve their manuscripts rapidly before submission.

The system uses multiple specialized agents to analyze different aspects of scientific papers, from methodology to writing quality.

Key features:
- 24 specialized agents analyzing sections, scientific rigor, and writing quality
- Detailed feedback with actionable recommendations
- PDF report generation
- Support for custom review criteria and target journals

Two ways to use it:

1. Cloud version (free during testing): https://www.rigorous.company
- Upload your manuscript
- Get a comprehensive PDF report within 1–2 working days
- No setup required

2. Self-hosted version (GitHub): https://github.com/robertjakob/rigorous
- Use your own OpenAI API keys
- Full control over the review process
- Customize agents and criteria
- MIT licensed

The system is particularly useful for researchers who want to polish a manuscript before sharing it with co-authors or submitting it to a target journal.

Would love to get feedback from the HN community, especially from PhDs and researchers across all academic fields. The project is open source and we welcome contributions!

GitHub: https://github.com/robertjakob/rigorous
Cloud version: https://www.rigorous.company
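To make the self-hosted "multiple specialized agents" idea concrete, here is a minimal sketch of a multi-agent review pass using the OpenAI Python SDK. It is not the Rigorous project's actual code: the agent roles, prompts, and model name are invented for illustration, and the real system runs 24 such agents with far richer criteria.

```python
# Minimal sketch of a multi-agent manuscript review pass (NOT the Rigorous codebase).
# Each "agent" is just a system prompt focused on one aspect of the paper; the roles,
# prompts, and model name below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AGENTS = {
    "methodology": "You review the methodology of a scientific manuscript. "
                   "List concrete weaknesses and actionable fixes.",
    "writing": "You review clarity and writing quality. "
               "Point out vague claims, jargon, and structural issues.",
    "rigor": "You check statistical and scientific rigor. "
             "Flag unsupported conclusions and missing controls.",
}

def review(manuscript_text: str) -> dict[str, str]:
    """Run every agent over the manuscript and collect its feedback."""
    feedback = {}
    for name, system_prompt in AGENTS.items():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": manuscript_text},
            ],
        )
        feedback[name] = resp.choices[0].message.content
    return feedback
```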
Web dev is still fun if you want it to be
I wrote a silly toy website and went out of my way to enjoy it, rather than endure it.

I wrote up my thoughts. Maybe they'll resonate with you. Maybe they'll infuriate you. As long as they make you feel something more than a cosmic shrug, I'll be pleased.
Show HN: I made a Zero-config tool to visualize your code
I built Staying – a tool that instantly turns your code into interactive animations with no setup required. Just write or paste your code and hit "Visualize". No installs, no accounts, no configuration.
Supports: Python, JavaScript, and experimental C++.
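To give a concrete sense of the kind of input such a tool animates, here is a small, self-contained Python snippet of the sort you might paste in. The snippet is just an ordinary program; there is nothing Staying-specific about it.

```python
# An ordinary Python function of the kind a step-through visualizer can animate:
# each recursive call and return becomes a frame in the animation.
def fib(n: int) -> int:
    """Return the n-th Fibonacci number via naive recursion (deliberately branchy)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

if __name__ == "__main__":
    print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```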
Show HN: Icepi Zero – The FPGA Raspberry Pi Zero Equivalent
I've been hacking away lately, and I'm now proud to show off my newest project - the Icepi Zero!

In case you don't know what an FPGA is, this phrase summarizes it perfectly:

"FPGAs work like this. You don't tell them what to do, you tell them what to BE."

You don't program them; you rewrite the circuits they contain!

So I've made a PCB that carries an ECP5 FPGA and has a Raspberry Pi Zero footprint. It also has a few improvements! Notably, the two USB-B ports are replaced with three USB-C ports, and it has multiple LEDs.

This board can output HDMI, read from a microSD card, use SDRAM, and much more. I'm very proud of the product of multiple weeks of work. (Thanks for the PCB reviews on r/PrintedCircuitBoard.)

All the sources are on GitHub under an open-source license :D

PS: See some more pics on Reddit: https://www.reddit.com/r/FPGA/comments/1kwxvk8/ive_made_my_first_fpga_board_the_icepi_zero/
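To make the "you describe circuits, you don't program them" point concrete, here is a minimal hardware description written in Amaranth, a Python-embedded HDL. This is purely illustrative and not from the Icepi Zero repo; ECP5 boards like this are more commonly targeted with Verilog/VHDL or the open-source Yosys/nextpnr flow, and Amaranth is used here only to keep this digest's examples in Python.

```python
# A sketch, not Icepi Zero code: a free-running counter whose top bit drives an LED.
# Nothing here "executes" at runtime; elaborate() describes registers and wires that
# the toolchain turns into a circuit configuration for the FPGA.
from amaranth import Elaboratable, Module, Signal

class Blinky(Elaboratable):
    def __init__(self):
        self.led = Signal()  # output wire that would be routed to an LED pin

    def elaborate(self, platform):
        m = Module()
        counter = Signal(24)                      # a 24-bit register
        m.d.sync += counter.eq(counter + 1)       # increments on every clock edge
        m.d.comb += self.led.eq(counter[-1])      # LED follows the counter's top bit
        return m
```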
Show HN: Every problem and solution in Beyond Cracking the Coding Interview
Hey HN, I'm Aline, founder of interviewing.io and one of the authors of Beyond Cracking the Coding Interview (the official sequel to CTCI).

We just compiled every problem (and solution) in the book and made them available for free. There are ~230 problems in total. Some of them are classics like n-queens, but almost all are new and not found in the original CTCI.

You can read through the problems and solutions, or you can work them with our AI Interviewer, which is also free. I'd recommend doing AI Interviewer before you read the solutions, but you can do it in whichever order you like. (When you first get into AI Interviewer, you can configure which topics you want problems on, and at what difficulty level, and you can add topics and change difficulty levels as you go.)

Here's the link: https://start.interviewing.io/beyond-ctci/all-problems/technical-topics (You'll have to create an account if you don't already have one, but there's nothing else you need to do to access all the things.)
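Since n-queens is called out above as one of the classic problems, here is the standard backtracking solution as a refresher. It is not taken from the book; it simply places one queen per row, pruning columns and diagonals that are already attacked.

```python
def count_n_queens(n: int) -> int:
    """Count ways to place n non-attacking queens on an n x n board, one queen per row."""
    def place(row: int, cols: frozenset, diag1: frozenset, diag2: frozenset) -> int:
        if row == n:
            return 1                                   # every row received a queen
        total = 0
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                               # square is attacked, prune
            total += place(row + 1,
                           cols | {col},
                           diag1 | {row - col},
                           diag2 | {row + col})
        return total
    return place(0, frozenset(), frozenset(), frozenset())

print(count_n_queens(8))  # 92, the classic answer for the 8x8 board
```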
I'm starting a social club to solve the male loneliness epidemic
The other day I saw a post here on HN that featured a NYT article called "Where Have All My Deep Male Friendships Gone?" (https://news.ycombinator.com/item?id=44098369) and it definitely hit home. As a guy in my early 30s, it made me realize how I've let many of my most meaningful friendships fade. I have a good group of friends - and my wife - but it doesn't feel like when I was in college and hung out with a crew of 10+ people on a weekly basis.

So, I decided to do something about it. I've launched wave3.social - a platform to help guys build in-person social circles with actual depth. Think parlor.social or timeleft for guys: curated events and meaningful connections for men who don't want their friendships to atrophy post-college.

It started as a Boston-based idea (where I live), but I built it with flexibility in mind so it could scale to other cities if there's interest. It's intentionally not on Meetup or Facebook - I wanted something that feels more intentional, with a better UX and less noise.

Right now, I'm in the "see if this resonates with anyone" stage. If this sounds interesting to you and you're in Boston or another city where this type of thing might be needed, drop a comment or shoot me an email. I'd love to hear any feedback on the site and ideas on how we can fix the male loneliness epidemic in the work-from-home era.
Show HN: Onlook – Open-source, visual-first Cursor for designers
Hey HN, I'm Kiet - one half of the two-person team building Onlook (https://beta.onlook.com/), an open-source (https://github.com/onlook-dev/onlook/) visual editor that lets you edit and create React apps live on an infinite canvas.

We launched Onlook [1][2] as a local-first Electron app almost a year ago. Since then, "prompt-to-build" tools have blown up, but none let you design and iterate visually. We fixed that by taking a visual-first, AI-powered approach where you can prompt, style, and directly manipulate elements in your app like in a design tool.

Two months ago, we decided to move away from Electron and rewrite everything for the browser. We wanted to remove the friction of downloading hundreds of MBs and setting up a development environment just to use the app. I wrote more here [3] about how we did it, but here are some learnings from the whole migration:

1. While most of the React UI code can be reused, mapping from Electron's SPA experience to a Next.js app with routes is non-trivial on the state management side.

2. We were storing most of the data locally as large JSON objects. Moving that to a remote database required major refactoring into tables and more loading states. We didn't have to think as hard about querying and load time before.

3. Iframes in the browser enforce many more restrictions than Electron's webview. Working around this required us to inject code directly into the user's project in order to do cross-iframe communication.

4. Keeping API keys secure is much easier in a web application than in an Electron app. In Electron, every key we leave on the client can be statically accessed, so we had to proxy any SDK that required an API key through a server call. In the web app, we can just keep the keys on the server.

5. Pushing a release bundle in Electron can take 1+ hours, and some users may never update. If we had a bug in the autoupdater itself, certain users could be "stranded" on an old version forever, and we'd have to email them to update. Though this is still better than mobile apps that go through an app store, it's very poor DX.

How does Onlook for web work?

We start by connecting to a remote "sandbox" [4]. The visual editing component happens through an iframe. We map the HTML element in the iframe to its location in code. Then, when an edit is made, we simulate the change in the iframe and edit the code at the same time. This way, visual changes always feel instant.

While we're still ironing out the experience, you can already:

- Select elements and prompt changes
- Update TailwindCSS classes via the styling UI
- Draw in new divs and elements
- Preview on multiple screen sizes
- Edit your code through an in-browser IDE

We want to make it trivial for anyone to create, style, and edit codebases. We're still porting over functionality from the desktop app - layers, fonts, hosting, git, etc. Once that is done, we plan on adding support for back-end functionality such as auth, database, and API calls.

A special thank you to the 70+ contributors who have helped create the Onlook experience! I think there's still a lot to be solved for in the design and dev workflow, and I think the tech is almost there.

You can clone the project and run it from our repo (linked to this post) or try it out at https://beta.onlook.com, where we're letting people try it out for free.

I'd love to hear what you think and where we should take it next :)

[1] https://news.ycombinator.com/item?id=41390449
[2] https://news.ycombinator.com/item?id=40904862
[3] https://docs.onlook.com/docs/developer/electron-to-web-migration
[4] Currently, the sandbox is through CodeSandbox, but we plan to add support for connecting to a locally running server as well.
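Point 4 above (proxying SDK calls so API keys stay server-side) is a pattern worth spelling out. Onlook itself is a Next.js app, so the following is only a generic sketch of the idea, written in Python with FastAPI and the OpenAI SDK for consistency with the other examples in this digest; the endpoint path and model name are made up for illustration.

```python
# Generic sketch of the "keep the key on the server" pattern, not Onlook's code.
# The browser posts a prompt to /api/generate; only the server ever reads the key.
import os

from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # never shipped to the client

class GenerateRequest(BaseModel):
    prompt: str

@app.post("/api/generate")
def generate(req: GenerateRequest) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": req.prompt}],
    )
    return {"text": resp.choices[0].message.content}
```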
Show HN: Typed-FFmpeg 3.0 – Typed Interface to FFmpeg and Visual Filter Editor
Hi HN,

I built typed-ffmpeg, a Python package that lets you build FFmpeg filter graphs with full type safety, autocomplete, and validation. It's inspired by ffmpeg-python, but addresses some long-standing issues like lack of IDE support and fragile CLI strings.

What's New in v3.0:
• Source filter support (e.g. color, testsrc, etc.)
• Input stream selection (e.g. [0:a], [1:v])
• A new interactive playground where you can:
  • Build filter graphs visually
  • Generate both FFmpeg CLI and typed Python code
  • Paste existing FFmpeg commands and reverse-parse them into graphs

Playground link: https://livingbio.github.io/typed-ffmpeg-playground/
(It's open source and runs fully in-browser.)

The internal core also supports converting CLI → graph → typed Python code. This is useful for building educational tools, FFmpeg IDEs, or visual editors.

I'd love feedback, bug reports, or ideas for next steps. If you've ever struggled with FFmpeg's CLI or tried to teach it, this might help.

Thanks!
— David (maintainer)
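For readers who haven't used the fluent filter-graph style, here is what it looks like with ffmpeg-python, the library the author cites as inspiration; typed-ffmpeg's own function names and signatures may differ, so treat this only as a sketch of the general idea.

```python
# Sketch using ffmpeg-python (the stated inspiration), not typed-ffmpeg's exact API.
# Each chained call adds a node to the filter graph; compile() shows the CLI it builds.
import ffmpeg

stream = (
    ffmpeg
    .input("input.mp4")        # source node
    .hflip()                   # filter node: horizontal flip
    .output("flipped.mp4")     # sink node
)

print(stream.compile())        # the ffmpeg command-line arguments this graph compiles to
# stream.run()                 # uncomment to actually invoke ffmpeg
```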
Show HN: Porting Terraria and Celeste to WebAssembly
Show HN: I wrote a modern Command Line Handbook
TLDR: I wrote a handbook for the Linux command line. 120 pages in PDF. Updated for 2025. Pay what you want.

A few years back I wrote an ebook about the Linux command line. Instead of focusing on a specific shell, paraphrasing manual pages, or providing long repetitive explanations, the idea was to create a modern guide that would help readers understand the command line in a practical sense, cover the most common things people use the command line for, and do so without wasting the readers' time.

The book contains material on terminals, shells (compatible with both Bash and Zsh), configuration, command line programs for typical use cases, shell scripting, and many tips and tricks to make working on the command line more convenient. I still consider it "an introduction" and it is not necessarily a book for the HN crowd that lives in the terminal, but I believe it will easily cover 80% of the things most people want or need to do in the terminal.

I made a couple of updates to the book over the years and just finished a significant one for 2025. The book is not perfect - I still see a lot of room for improvement - but I think it is good enough and I truly want to share it with everyone. Hence, pay what you want.

EXAMPLE PAGES: https://drive.google.com/file/d/1PkUcLv83Ib6nKYF88n3OBqeeVffAs3Sp/view?usp=sharing

https://commandline.stribny.name/