The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: I'm an airline pilot – I built interactive graphs/globes of my flights
Hey HN!

Pilots everywhere are required to keep a logbook of all their flying hours, aircraft, airports, and so on. Since I track everything digitally (some people still just use paper logbooks!), I put together some data visualizations and a few 3D globes to show my flying history.

This globe is probably my favourite so far: https://jameshard.ing/pilot/globes/all

If you’ve got ideas for other graphs or ways to show this kind of data, I’d love to hear them!
Show HN: PLJS – JavaScript for Postgres
PLJS is a new, modern JavaScript trusted language extension that bundles QuickJS, a small and fast JavaScript runtime, with Postgres. It provides fast type conversion between Postgres and JavaScript, fast execution, and a very light footprint.

Here are benchmarks that show how it compares to PLV8: https://github.com/plv8/pljs/blob/main/docs/BENCHMARKS.md

This is the first step toward a truly lightweight, fast, and extensible JavaScript runtime embedded inside of Postgres. The initial roadmap has been published at https://github.com/plv8/pljs/blob/main/docs/ROADMAP.md

You can join the discussion by joining the PLV8 Discord: https://discord.gg/XYGSCfVNBC

You can find PLJS at https://github.com/plv8/pljs
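For readers unfamiliar with trusted language extensions: once installed, PLJS functions are defined like any other Postgres function, with a JavaScript body. A minimal sketch (the function name and body here are my own illustration, assuming the `LANGUAGE pljs` form described in the project's README):

```sql
-- Enable the extension (requires PLJS to be built and installed first)
CREATE EXTENSION IF NOT EXISTS pljs;

-- A trivial JavaScript function; arguments arrive as converted JS values
CREATE OR REPLACE FUNCTION add_one(n integer) RETURNS integer AS $$
  return n + 1;
$$ LANGUAGE pljs;

SELECT add_one(41);
```

The appeal of bundling QuickJS rather than V8 (as PLV8 does) is a much smaller binary and faster startup, at the cost of raw JIT throughput.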
Show HN: Autohive – Build AI agents the easy way for everyday teams
Show HN: AI Phone Interviewer – get a call in 30 seconds
Enter your phone number, get called in 30 seconds for a 2–3 minute AI-powered screening interview.
https://prepin.ai/aiphonecall

Current MVP scope: right now it handles general screening questions and generates simple reports. We’re validating demand before building:

- Technical screening libraries
- ATS integrations
- Custom question sets per role or company
- Multi-language support

Who we’re looking for: we’d love feedback from recruiters and startup founders who are (or soon will be) running hiring processes.

Request for feedback: please actually try the call first—I know it sounds gimmicky, but the voice quality will surprise you. Then let us know:

- Did it feel natural?
- Would you be comfortable being screened this way?
- If you hire, could you see your team using this?
- What needs improvement?

To see the full recruiter dashboard, leave your email on the page and we’ll send you the demo.

This is just an MVP to test the concept. Curious what HN thinks—future of recruiting or unnecessary automation?
Show HN: PRSS Site Creator – Create Blogs and Websites from Your Desktop
Show HN: Magnitude – Open-source AI browser automation framework
Hey HN, Anders and Tom here. We had a post about our AI test automation framework 2 months ago that got a decent amount of traction (https://news.ycombinator.com/item?id=43796003).

We got some great feedback from the community, with the most positive response being about the vision-first approach used in our browser agent. However, many wanted to use the underlying agent outside the testing domain. So today, we're releasing our fully featured AI browser automation framework.

You can use it to automate tasks on the web, integrate between apps without APIs, extract data, test your web apps, or as a building block for your own browser agents.

Traditionally, browser automation could only be done via the DOM, even though that’s not how humans use browsers. Most browser agents are still stuck in this paradigm. With a vision-first approach, we avoid relying on flaky DOM navigation and perform better on complex interactions found in a broad variety of sites, for example:

- Drag and drop interactions
- Data visualizations, charts, and tables
- Legacy apps with nested iframes
- Canvas and WebGL-heavy sites (like design tools or photo editing)
- Remote desktops streamed into the browser

To interact accurately with the browser, we use visually grounded models to execute precise actions based on pixel coordinates. The model used by Magnitude must be smart enough to plan out actions but also able to execute them. Not many models are both smart *and* visually grounded. We highly recommend Claude Sonnet 4 for the best performance, but if you prefer open source, we also support Qwen-2.5-VL 72B.

Most browser agents never make it to production. This is because of (1) the flaky DOM navigation mentioned above, and (2) the lack of control most browser agents offer. The dominant paradigm is to give the agent a high-level task plus tools and hope for the best. This quickly falls apart for production automations that need to be reliable and specific. With Magnitude, you have fine-grained control over the agent with our `act()` and `extract()` syntax, and can mix it with your own code as needed. You also have full control of the prompts at both the action and agent level.

```ts
// Magnitude can handle high-level tasks
await agent.act('Create an issue', {
  // Optionally pass data that the agent will use where appropriate
  data: {
    title: 'Use Magnitude',
    description: 'Run "npx create-magnitude-app" and follow the instructions',
  },
});

// It can also handle low-level actions
await agent.act('Drag "Use Magnitude" to the top of the in progress column');

// Intelligently extract data based on the DOM content matching a provided zod schema
const tasks = await agent.extract(
  'List in progress issues',
  z.array(z.object({
    title: z.string(),
    description: z.string(),
    // Agent can extract existing data or new insights
    difficulty: z.number().describe('Rate the difficulty between 1-5'),
  })),
);
```

We have a setup script that makes it trivial to get started with an example; just run "npx create-magnitude-app". We’d love to hear what you think!

Repo: https://github.com/magnitudedev/magnitude
Show HN: I built an AI dataset generator
Show HN: Elelem, a tool-calling CLI for Ollama and DeepSeek in C
Show HN: VSCan - Detect Malicious VSCode Extensions
Did you know that VSCode extensions run with full access to your system—including the file system, network, and credentials? Worse, dozens of malicious extensions have already made it into the marketplace, silently compromising devices.

I am a security researcher and student developer who ran into this problem myself. To help tackle this, I built a 100% free tool (no login required) that scans VSCode (and Cursor/Windsurf) extensions for:

- Hidden malware and obfuscated code
- Dangerous permissions and API misuse
- Vulnerable dependencies and suspicious network connections

Users have already found hundreds of vulnerabilities in extensions. VSCan generates a clean, developer-friendly security report to help you understand what you're installing.

Try it out: https://www.vscan.dev

I have also developed a custom sandboxing security architecture to restrict extensions from malicious activity during runtime. There is no existing technology that does this, so if you would be interested in trying it out or learning more, please reach out!

I would greatly appreciate any feedback. Thanks for your help!

Here are some numbers from a sample of 1077 extensions available on the Marketplace:

- 3 extensions are marked as malicious by VirusTotal
- 7 extensions use malicious network connections (verified by VirusTotal)
- 33 extensions have dependencies with critical vulnerabilities
- 39 extensions contain sensitive information (I have seen API keys, usernames, passwords, etc.)
- 204 extensions have poor development practices as flagged by OSSF
- 71 extensions request very high permissions (not inherently bad, but a potential indicator of malicious activity)

As an example, here is a link to an extension analysis with malicious network endpoints: https://vscan.dev/?analysisId=9e6c1849-3973-402b-a4ff-3b4023508fb8
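To make the static-analysis idea concrete: one of the simpler checks a scanner like this can run is pattern-matching extension source for known-risky constructs. The sketch below is my own illustration of that category of check, not VSCan's actual rules or detection engine; the pattern names and regexes are hypothetical examples.

```python
import re

# Illustrative patterns for the kinds of static checks an extension scanner
# might run. These are examples only, not VSCan's real rule set.
SUSPICIOUS_PATTERNS = {
    "hardcoded AWS key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "eval of dynamic code": re.compile(r"\beval\s*\("),
    "child process spawn": re.compile(r"child_process"),
    "raw IP endpoint": re.compile(r"https?://\d{1,3}(?:\.\d{1,3}){3}"),
}

def scan_source(text: str) -> list[str]:
    """Return the names of suspicious patterns found in extension source text."""
    return [name for name, pat in SUSPICIOUS_PATTERNS.items() if pat.search(text)]
```

A real scanner layers many more signals on top (dependency CVE lookups, obfuscation heuristics, network-endpoint reputation), but simple lexical rules like these already catch things such as committed credentials.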