The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: Explore what the browser exposes about you

I built a tool that reveals the data your browser exposes automatically every time you visit a website.

GitHub: https://github.com/neberej/exposedbydefault
Demo: https://neberej.github.io/exposedbydefault/

Note: No data is sent anywhere. Everything runs in your browser.
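
For a sense of what "exposed by default" means, here is a minimal TypeScript sketch (not the project's actual code) of the kind of fields any page can read without a permission prompt:

    // sketch: a few fields every page can read with no prompt at all
    const exposed = {
      userAgent: navigator.userAgent,
      languages: navigator.languages,                // preferred UI languages
      cores: navigator.hardwareConcurrency,          // logical CPU cores
      screen: `${screen.width}x${screen.height} @ ${window.devicePixelRatio}x`,
      timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
      touchPoints: navigator.maxTouchPoints,         // hints at device type
      cookiesEnabled: navigator.cookieEnabled,
    };
    console.log(exposed); // combined, these make a decent fingerprint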

Anthony Bourdain's Lost Li.sts

Over the years I had read about Bourdain's content on the defunct li.st service, but was never able to find an archive of it. A more thorough perusal of archive.org and a pointer from an Internet stranger led me to create this site. Cheers

Show HN: Pulse 2.0 – Live co-listening rooms where anyone can be a DJ

I wanted to listen to music with friends who live far away. Not "watch a YouTube video together" - actually share what I'm hearing in real time, like we're in the same room.

Pulse is what came out of that. Anyone can host a live audio stream from their browser tab or system audio. Listeners join, music recognition identifies tracks automatically, and there's chat with 7TV emotes. No account required - you get an anonymous code and you're in.

We're running demo rooms that stream NTS Radio and SomaFM 24/7 (indie project, not affiliated - we backlink to the original stations). There's also a "Money For Nothing 24/7" room if you want to loop that Dire Straits instrumental forever.

Think of it as co-listening infrastructure: bedroom DJs, listening parties, or just sharing your current vibe.
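
For the curious, a minimal TypeScript sketch of how hosting tab audio can work in a browser - this uses the standard screen-capture API and is not necessarily what Pulse does internally:

    // sketch: capture tab/system audio with the standard screen-capture API
    // (Chrome only offers tab audio when video is also requested)
    async function startHosting(): Promise<MediaStream> {
      const stream = await navigator.mediaDevices.getDisplayMedia({
        video: true, // required by Chrome; we drop the track below
        audio: true, // listener must tick "share tab audio" in the picker
      });
      stream.getVideoTracks().forEach((t) => t.stop()); // audio-only from here on
      return stream; // hand this to WebRTC or an encoder to fan out to listeners
    }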

Show HN: Glasses to detect smart-glasses that have cameras

Hi! Recently, smart glasses with cameras like the Meta Ray-Bans seem to be getting more popular - as does some people's desire to remove or cover up the recording-indicator LED. I wanted to see if there's a way to detect when people are recording with these types of glasses, so a little while ago I started working on this project. I've hit a bit of a wall, though, so I'm very much open to ideas!

I've written a bunch more at the link (photos are there too), but essentially this uses two fingerprinting approaches:

- Retro-reflectivity of the camera sensor, by looking at IR reflections. Mixed results here.
- Wireless traffic (primarily BLE; also looking into Bluetooth Classic and Wi-Fi).

For the latter, I'm currently just using an ESP32, and I can consistently detect when the Meta Ray-Bans are 1) pairing, 2) first powered on, and 3) (less consistently) taken out of the charging case. When the detector does pick something up, it plays a little jingle next to your ear.

Ideally I want to be able to detect the glasses while they're in use, not just at boot. I've come across the nRF52840, which seems like it can follow directed BLE traffic beyond the initial broadcast, but from my understanding it would still need to catch the first CONNECT_REQ event regardless. On the Bluetooth Classic side of things, all the hardware looks really expensive! Any ideas are appreciated. Thanks!
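
To make the BLE idea concrete, here's the same advertisement-matching approach sketched in TypeScript against the experimental, flag-gated Web Bluetooth scanning API (the ESP32 firmware would do the equivalent in C++). The company ID below is a placeholder, not Meta's actual identifier:

    // sketch: passive BLE advertisement scanning via the experimental
    // Web Bluetooth API (behind a flag in Chromium; typings via @types/web-bluetooth)
    const COMPANY_ID = 0x0000; // placeholder - substitute the real manufacturer ID
    const bluetooth = (navigator as any).bluetooth;
    await bluetooth.requestLEScan({ acceptAllAdvertisements: true });
    bluetooth.addEventListener("advertisementreceived", (event: any) => {
      if (event.manufacturerData.has(COMPANY_ID)) {
        console.log(`candidate device ${event.device.id}, RSSI ${event.rssi}`);
        // this is the point where the detector glasses would play their jingle
      }
    });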

Show HN: Era – Open-source local sandbox for AI agents

Just watched this video by ThePrimeagen (https://www.youtube.com/watch?v=efwDZw7l2Nk) about attackers jailbreaking Claude to run cyber attacks. The core issue: AI agents need isolation.

We built ERA to fix this - local microVM-based sandboxing for AI-generated code with hardware-level security. Think containers, but safer. Attacks like those wouldn't touch your host if the code ran inside ERA.

GitHub: https://github.com/BinSquare/ERA

Quick start: https://github.com/BinSquare/ERA/tree/main/era-agent/tutorials

Would love your thoughts and feedback!

Show HN: MkSlides – Markdown to slides with a similar workflow to MkDocs

As teachers, we keep our slides as Markdown files in git repos and want to build them automatically so they can be viewed online (or offline if needed). To achieve this, I created MkSlides. The tool converts all Markdown files in a folder to slides generated with Reveal.js. The workflow is very similar to MkDocs.

Install: `pip install mkslides`

Build slides: `mkslides build`

Live preview during editing: `mkslides serve`

Comparison with other tools like Marp, Slidev, ...:

- It's a single command and easy to integrate into CI/CD pipelines.
- It only needs Python.
- The workflow is very similar to MkDocs, which makes it easy to combine the two in a single GitHub/GitLab repo.
- It generates an index landing page for multiple slideshows in a folder, which is really convenient if you have e.g. a slideshow per chapter.
- It is lightweight.
- Everything is IaC.
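
If you haven't used a Markdown-to-slides tool before, a deck is just one Markdown file per slideshow; assuming the common Reveal.js tooling convention of `---` between slides (check the MkSlides docs for the exact delimiters), a file might look like:

    # Chapter 1: Version control

    ---

    ## Why git?

    - snapshots, not diffs
    - cheap branching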

Show HN: SyncKit – Offline-first sync engine (Rust/WASM and TypeScript)

Show HN: Runprompt – run .prompt files from the command line

I built a single-file Python script that lets you run LLM prompts from the command line with templating, structured outputs, and the ability to chain prompts together.

When I discovered Google's Dotprompt format (frontmatter + Handlebars templates), I realized it was perfect for something I'd been wanting: treating prompts as first-class programs you can pipe together Unix-style. Google uses Dotprompt in Firebase Genkit, and I wanted something simpler - just run a .prompt file directly on the command line.

Here's what it looks like:

    ---
    model: anthropic/claude-sonnet-4-20250514
    output:
      format: json
      schema:
        sentiment: string, positive/negative/neutral
        confidence: number, 0-1 score
    ---
    Analyze the sentiment of: {{STDIN}}

Running it:

    cat reviews.txt | ./runprompt sentiment.prompt | jq '.sentiment'

The things I think are interesting:

* Structured output schemas: define JSON schemas in the frontmatter using a simple `field: type, description` syntax. The LLM reliably returns valid JSON you can pipe to other tools.

* Prompt chaining: pipe JSON output from one prompt as template variables into the next. This makes it easy to build multi-step agentic workflows as simple shell pipelines.

* Zero dependencies: it's a single Python file that uses only the stdlib. Just curl it down and run it.

* Provider agnostic: works with Anthropic, OpenAI, Google AI, and OpenRouter (which gives you access to dozens of models through one API key).

You can use it to automate things like extracting structured data from unstructured text, generating reports from logs, and building small agentic workflows without spinning up a whole framework.

Would love your feedback, and PRs are most welcome!
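
For example, a two-step pipeline might look like this (extract.prompt and summarize.prompt are hypothetical files; per the chaining behavior described above, the JSON fields the first prompt emits become template variables in the second):

    cat tickets.txt | ./runprompt extract.prompt | ./runprompt summarize.prompt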

Show HN: Yolodex – real-time customer enrichment API

hey hn, i've been working on an api to make it easy to know who your customers are. i would love your feedback.

what it does

send an email address, and the api returns a json profile built from public data: name, country, age, occupation, company, social handles, and interests.

it's a single endpoint (you can hit it without auth to get a demo of what it looks like):

    curl https://api.yolodex.ai/api/v1/email-enrichment \
      --request POST \
      --header 'Content-Type: application/json' \
      --data '{"email": "john.smith@example.com"}'

everyone gets 100 free. pricing is per enriched profile: 1 email ~ $0.03, but if i don't find anything i won't charge you.

why i built it / what's different

i once built open-source-intelligence tooling to investigate financial crime, but for a recent project i needed to find out more about some customers. i tried apollo, clearbit, lusha, clay, etc., but i found:

1. outdated data - the data was out of date and misleading, emails didn't work, etc.
2. dubious data - i found lots of data, like personal mobile numbers, that i'm pretty sure no one shared publicly or knowingly opted into being sold on
3. aggressive pricing - monthly/annual commitments, large gaps between plans, paying the same for empty profiles
4. painful setup - hard to find the right api, set it up, test it out, etc.

i used knowledge from criminal investigations to build an api that uses some of the same research patterns and entity resolution to find standardized information about people that is:

1. real-time
2. public info only (osint)
3. transparent, simple pricing
4. 1 min to set up

what i'd love feedback on

* speed: are responses fast enough? would you trade off speed for better data coverage?
* coverage: which fields will you use (or which others do you need)?
* pricing: is the pricing model sane?
* use cases: what would you need this type of data for (i.e., example use cases)?
* accuracy: any examples where i got it badly wrong?

happy to answer technical questions in the thread and give more free credits to help anyone test
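
the same call from code, as a minimal TypeScript sketch (assumes Node 18+ for the global fetch; the response fields are illustrative, based on the list above):

    // sketch: call the enrichment endpoint shown above (Node 18+ global fetch)
    async function enrich(email: string): Promise<unknown> {
      const res = await fetch("https://api.yolodex.ai/api/v1/email-enrichment", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ email }),
      });
      if (!res.ok) throw new Error(`enrichment failed: HTTP ${res.status}`);
      return res.json(); // profile: name, country, age, occupation, company, socials, interests
    }

    enrich("john.smith@example.com").then(console.log);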
