The best Hacker News stories from Show from the past day

Latest posts:

Show HN: AI game animation sprite generator

I tried to build an AI game animation generator last year (<a href="https://news.ycombinator.com/item?id=40395221">https://news.ycombinator.com/item?id=40395221</a>). A lot of people were interested, but it failed, mainly because the technology was not good enough.<p>A year has passed, and there have been a lot of developments in video/image generation. I tried again, and I think it works very well now, actually beyond my expectations.<p>You can generate all kinds of game character animation sprites from just one image:<p>1. upload an image of your character 2. choose the action you want 3. generate!<p>It supports basic actions like Run, Jump, and Punch, as well as complicated ones like Shoryuken, Spinning Kick, etc.<p>A high-quality sprite sheet is generated directly, ready to use in Unity or any other game engine.<p>If you are an indie game developer, you don't need to hire an artist or animator to develop your game.<p>For studios, it's a 10x cost and efficiency saving: no more creating animations for 100 NPCs 100 times.<p>Please check it out, I'm looking forward to your feedback!

Show HN: iOS Screen Time from a REST API

We're Oliver and Royce and we're the founders of Clearspace. We build tools to help people reduce their screen time (here’s us two years ago: <a href="https://news.ycombinator.com/item?id=35888644">https://news.ycombinator.com/item?id=35888644</a>)<p>We get all kinds of requests from users for ways they'd like to use their screen time data.<p>- “Auto-donate $x to charity every time I exceed a limit or try to bypass it”<p>- “My 75 Hard group has a screen time requirement, can we set up group visibility?”<p>- “Let my personal agent know if it’s a good time to tackle things on my todo list”<p>- “Auto-report large deviations in my screen time to my therapist “<p>We aren't able to build for all of them, so we're releasing this API.<p>This is the first time iOS Screen Time is accessible on the web. Apple doesn’t expose it, but since we measure it ourselves, we can - via UI or API. We're launching this API so developers can build all these tools and more. Our goal is to enable more solutions to what we believe is the biggest problem in the world - the misalignment of human attention and intention in the digital world.<p>Here's a quick demo of setting up and using the API: <a href="https://drive.google.com/file/d/1QahETj3xaaIsn0JiNbuqvTaSLdxo-eTu/view?usp=sharing" rel="nofollow">https://drive.google.com/file/d/1QahETj3xaaIsn0JiNbuqvTaSLdx...</a>
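The post doesn't document the API's endpoints or response shape, so the following is a hypothetical sketch of what consuming such a screen-time REST API might look like. The host, path, auth header, and JSON field names are all assumptions for illustration, not Clearspace's actual API; only the "check usage against a limit" logic (the basis of the auto-donate idea above) is concrete.

```python
# Hypothetical sketch: consuming a screen-time REST API.
# The endpoint path, auth header, and JSON field names are assumptions,
# not Clearspace's documented API.
import json
from urllib import request

API_BASE = "https://api.example.com/v1"  # placeholder host, not the real service


def fetch_daily_screen_time(user_id: str, token: str) -> dict:
    """Fetch one day of screen-time data from a hypothetical endpoint."""
    req = request.Request(
        f"{API_BASE}/users/{user_id}/screen-time/today",
        headers={"Authorization": f"Bearer {token}"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


def total_minutes(payload: dict) -> int:
    """Sum per-app usage from a payload like {"apps": [{"name": ..., "minutes": ...}]}."""
    return sum(app["minutes"] for app in payload.get("apps", []))


def over_limit(payload: dict, limit_minutes: int) -> bool:
    """The 'auto-donate when I exceed a limit' idea reduces to a check like this."""
    return total_minutes(payload) > limit_minutes
```

For example, a payload reporting 42 minutes in Safari and 18 in Mail totals 60 minutes, so a 45-minute limit would trip `over_limit` and could trigger the donation webhook.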

Show HN: Ask-human-mcp – zero-config human-in-loop hatch to stop hallucinations

While building my startup I kept running into the issue where AI agents in Cursor create endpoints or code that shouldn't exist, hallucinate strings, or just don't understand the code.<p>ask-human-mcp pauses your agent whenever it's stuck, logs a question into ask_human.md in your root directory with answer: PENDING, and then resumes as soon as you fill in the correct answer.<p>the pain:<p>your agent invents an endpoint that never existed, makes confident assumptions, and you spend hours debugging false leads.<p>the fix:<p>ask-human-mcp gives your agent an escape hatch. when it's unsure, it calls ask_human(), writes a question into ask_human.md, and waits. you swap answer: PENDING for the real answer and it keeps going.<p>some features:<p>- zero config: pip install ask-human-mcp + one line in .cursor/mcp.json → boom, you're live<p>- cross-platform: works on macOS, Linux, and Windows, no extra servers or webhooks<p>- markdown Q&A: the agent calls await ask_human(), the question lands in ask_human.md with answer: PENDING; you write the answer and the agent picks back up<p>- file locking & rotation: prevents corrupt files, limits pending questions, auto-rotates when ask_human.md hits ~50 MB<p>the quickstart:<p>pip install ask-human-mcp<p>ask-human-mcp --help<p>add to .cursor/mcp.json and restart:<p>{ "mcpServers": { "ask-human": { "command": "ask-human-mcp" } } }<p>now any call like:<p>answer = await ask_human( "which auth endpoint do we use?", "building login form in auth.js" )<p>creates:<p>### Q8c4f1e2a ts: 2025-01-15 14:30 q: which auth endpoint do we use? ctx: building login form in auth.js answer: PENDING<p>just replace answer: PENDING with the real endpoint (e.g., `POST /api/v2/auth/login`) and your agent continues.<p>link:<p>github -> <a href="https://github.com/Masony817/ask-human-mcp">https://github.com/Masony817/ask-human-mcp</a><p>feedback:<p>I'm Mason, a 19-year-old solo founder at Kallro.
Happy to hear about any bugs, feature requests, or weird edge cases you uncover - drop a comment or open an issue! buy me a coffee -> coff.ee/masonyarbrough

Show HN: Claude Composer

The central feature is something like a "yolo mode", but with fine-grained controls over how yolo you're feeling. It also makes it easy to use "presets" of tools and permissions.<p>Let me know if you have any questions and feel free to contact me on X at <a href="https://x.com/possibilities" rel="nofollow">https://x.com/possibilities</a>

Show HN: I made a 3D SVG Renderer that projects textures without rasterization

Show HN: ClickStack – Open-source Datadog alternative by ClickHouse and HyperDX

Hey HN! Mike & Warren here from HyperDX (now part of ClickHouse)! We’ve been building ClickStack, an open source observability stack that helps you collect, centralize, and search/viz/alert on your telemetry (logs, metrics, traces) in just a few minutes - all powered by ClickHouse (Apache2) for storage, HyperDX (MIT) for visualization, and OpenTelemetry (Apache2) for ingestion.<p>You can check out the quick start for spinning things up in the repo here: <a href="https://github.com/hyperdxio/hyperdx">https://github.com/hyperdxio/hyperdx</a><p>ClickStack makes it really easy to instrument your application so you can go from a bug report of “my checkout didn’t go through” to a session replay of the user, the backend API calls, and the DB queries and infrastructure metrics related to that specific request, all in a single view.<p>For those migrating from Very Expensive Observability Vendor (TM) to something open source, more performant, and free of aggressive retention limits and sampling rates - ClickStack gives a batteries-included way of starting that migration journey.<p>For those not familiar with ClickHouse, it’s a high-performance database already used by companies such as Anthropic, Cloudflare, and DoorDash to power their core observability at scale due to its flexibility, ease of use, and cost effectiveness. However, this used to require teams to dedicate engineers to building a custom observability stack, where it was difficult not only to get telemetry data into ClickHouse easily, but also to work without a native UI experience.<p>That’s why we’re building ClickStack - we wanted to bundle an easy way to start ingesting your telemetry data, whether it’s logs & traces from Node.js or Ruby or metrics from Kubernetes or your bare-metal infrastructure.
Just as important, we wanted our users to enjoy a visualization experience that lets them quickly search using a familiar Lucene-like search syntax (similar to what you’d use in Google!). We recognise, though, that a SQL mode is needed for the most complex of queries. We've also added high-cardinality outlier analysis by charting the delta between outlier and inlier events - which we've found really helpful in narrowing down causes of regressions/anomalies in our traces - as well as log patterns to condense clusters of similar logs.<p>We’re really excited about the roadmap ahead, both for ClickStack as a product and for the ClickHouse core database's observability capabilities. Would love to hear everyone’s feedback and what they think!<p>Spinning up a container is pretty simple: `docker run -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one` In-browser live demo (no sign-ups or anything silly, it runs fully in your browser!): <a href="https://play.hyperdx.io/" rel="nofollow">https://play.hyperdx.io/</a> Landing page: <a href="https://clickhouse.com/o11y" rel="nofollow">https://clickhouse.com/o11y</a> GitHub repo: <a href="https://github.com/hyperdxio/hyperdx">https://github.com/hyperdxio/hyperdx</a> Discord community: <a href="https://hyperdx.io/discord" rel="nofollow">https://hyperdx.io/discord</a> Docs: <a href="https://clickhouse.com/docs/use-cases/observability/clickstack/getting-started" rel="nofollow">https://clickhouse.com/docs/use-cases/observability/clicksta...</a>

Show HN: Air Lab – A portable and open air quality measuring device

Hi HN!<p>I’ve been working on an air quality measuring device called Air Lab for the past three years. It measures CO2, temperature, relative humidity, air pollutants (VOC, NOx), and atmospheric pressure. You can log and analyze the data directly on the device — no smartphone or laptop needed.<p>To better show what the device can do and what it feels like to use, I spent the past week developing a web-based simulator using Emscripten. It runs the stock firmware with most features available except for networking. Check it out and let me know what you think!<p>The firmware will be open-source and available once the first batch of devices ships. We’re currently finishing up our crowdfunding campaign on Crowd Supply. If you want to get one, now is the time to support the project: <a href="https://www.crowdsupply.com/networked-artifacts/air-lab" rel="nofollow">https://www.crowdsupply.com/networked-artifacts/air-lab</a><p>We started building the Air Lab because most air quality measuring devices we found were locked down or hard to tinker with. Air quality is a growing concern, and we’re hoping a more open, playful approach can help make the topic more accessible. It is important to us that there is a low bar for customizing and extending the Air Lab. Until we ship, we plan to create rich documentation and further tools, like the simulator, to make this as easy as possible.<p>The technical: the device is powered by the popular ESP32-S3 microcontroller, equipped with a precise CO2, temperature, and relative humidity sensor (SCD41) as well as a VOC/NOx sensor (SGP41) and an atmospheric pressure sensor (LPS22). The support circuitry provides built-in battery charging, a real-time clock, an RGB LED, a buzzer, an accelerometer, and capacitive touch, which makes Air Lab a powerful stand-alone device.
The firmware itself is written on top of esp-idf and uses LVGL for rendering the UI.<p>If you want more high-level info, here are some videos covering the project: - <a href="https://www.youtube.com/watch?v=oBltdMLjUyg" rel="nofollow">https://www.youtube.com/watch?v=oBltdMLjUyg</a> (Introduction) - <a href="https://www.youtube.com/watch?v=_tzjVYPm_MU" rel="nofollow">https://www.youtube.com/watch?v=_tzjVYPm_MU</a> (Product Update)<p>Would love your feedback — on the device, hardware choices, potential use cases, or anything else worth improving. If you want to get notified of project updates, subscribe on Crowd Supply.<p>Happy to answer any questions!
