The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: I made a cheap alternative to college-level math & physics tutoring
Hi everyone! I’m the founder of Explanations (https://explanations.app). I’m building a website where students can get college-level math & physics help for 1/10th the cost of private tutoring. You type a question, and your teacher replies by drawing a YouTube/KhanAcademy-style video; this happens asynchronously throughout the week.

When I was studying at MIT, I often had to wait 40-60 minutes in line just to get 5 minutes of “help” from a TA, when I needed 1-2 hours. I understood that TAs can’t spend all their time helping me. That’s understandable. But what made me bitter was that the school went the extra mile to ensure I didn’t have the resources to learn on my own:

1. Blocking access to solutions for past problems (to prevent cheating)

2. Purposely not recording explanations, to increase attendance: https://piazza.com/class/ky0jj3k89mz5d2/post/9

3. Insisting that Office Hours be a 1-on-1 format even when crowded (to prevent solutions from leaking)

These policies have good intentions: to encourage a synchronous, in-person learning experience. But in practice, they had side effects:

1. Help resources become inefficient. Because so much material is restricted, and so much time is spent delivering live lectures, there’d often be 40 students competing for help from 2 TAs in a 2-hour Office Hours session.

2. Because help resources are inefficient, it’s very hard to catch up: once you fall behind, you have no way to review past material efficiently enough to make up the difference, like credit card debt.

3. Every day, I’d wake up, go to a lecture I didn’t understand, go to Office Hours hoping to ask for a review (which would take a few hours), realize the TAs weren’t willing to do that, then realize there was nothing I could do to recover. I fell into a depression for many years, and my bitterness fueled me to work on the early versions of explanations.app.

It turns out that universities succeed by being prestigious, not by teaching well. To win at prestige, be highly selective (keep supply low), keep a huge endowment (because it affects school rankings), and hire the best researchers (not the best teachers). This is the fundamental reason for the odd incentives in higher education, and something felt wrong.

So explanations.app is completely inspired by KhanAcademy and YouTube. The mystery to me was: why weren’t there more YouTube teachers & KhanAcademy videos? I believe it’s a combination of:

1. People who teach college subjects well often have better opportunities, e.g. work, research

2. Lack of rewards: even YouTubers with 100K views and 10K subscribers would have at most 1-5 paying members on Patreon

On the one hand, there are all these free resources, where teachers changed the world far more than they were ever rewarded for. On the other hand, there is private tutoring: very effective, but very expensive, e.g. $100/hour for college-level subjects.

I believe the balanced solution is a system where lots of students pay $10/week to a few teachers who make videos, like a paid Q&A YouTube/KhanAcademy, so it’s personalized, effective, but still affordable.

There are currently 2 teachers on explanations.app, Ben & Esther, both MIT grads, teaching physics & math subjects like linear algebra and electromagnetism. 3 students (Laquazia, Lidija and Chandra, from the US, Serbia and Korea) joined this month following the r/PhysicsStudents launch: https://www.reddit.com/r/PhysicsStudents/comments/1b2t5u6/i_started_a_program_where_mit_grads_do_physics/

While explanations.app is focused on college-level math and physics, the platform is completely open for anyone to learn and/or teach. I hope you can try it :^) and give me the chance to work with you.
Show HN: Flatito, grep for YAML and JSON files
It is a kind of grep for YAML and JSON files: you search for a key and get the value and the line number where it is located.

I created this tool because I sometimes struggle to find specific keys in typical Rails i18n YAML files. In the views, you don't always have the whole key; part of it is extrapolated from the context. I am sure there are other use cases; I hope you find it useful.

Cheers!
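I haven't matched Flatito's exact CLI or output format, but the underlying idea (flatten nested keys into dot-separated paths, then match against them) can be sketched in a few lines of Python for the JSON case:

```python
import json

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts/lists into dot-separated key paths."""
    items = {}
    if isinstance(obj, dict):
        for k, v in obj.items():
            items.update(flatten(v, f"{prefix}.{k}" if prefix else k))
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            items.update(flatten(v, f"{prefix}[{i}]"))
    else:
        items[prefix] = obj
    return items

# A typical i18n-style document
doc = json.loads('{"en": {"views": {"title": "Home", "cta": "Sign up"}}}')
flat = flatten(doc)
matches = {k: v for k, v in flat.items() if "title" in k}
print(matches)  # {'en.views.title': 'Home'}
```

Flatito additionally tracks source line numbers while parsing, which a naive post-parse flatten like this cannot recover.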
Show HN: Charcoal – Faster utf8.Valid using multi-byte processing without SIMD
Show HN: WhatTheDuck – open-source, in-browser SQL on CSV files
WhatTheDuck is an in-browser SQL tool for analyzing CSV files that uses duckdb-wasm under the hood.

Please provide feedback/issues/pull requests.
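WhatTheDuck itself runs DuckDB compiled to WebAssembly inside the browser; as a rough stand-in for the workflow it enables (load a CSV, query it with SQL), here is the same idea using Python's stdlib sqlite3 instead of DuckDB:

```python
import csv
import io
import sqlite3

csv_text = "name,amount\nalice,10\nbob,30\nalice,5\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

# Load the CSV into an in-memory SQL engine, then query it
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (name TEXT, amount INTEGER)")
conn.executemany("INSERT INTO data VALUES (?, ?)",
                 [(r["name"], int(r["amount"])) for r in rows])
results = list(conn.execute(
    "SELECT name, SUM(amount) FROM data GROUP BY name ORDER BY name"))
print(results)  # [('alice', 15), ('bob', 30)]
```

DuckDB can additionally query CSV files directly (without an explicit load step) and is column-oriented, which is why it suits analytics workloads like this.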
Show HN: I made a books recommendation app based on your mood
Hello HN,

I noticed that I often look for new books depending on my mood (e.g., if I'm feeling tired, I want books that'll help me fix that and improve my sleep).

So I created my first indie project, BooksByMood.

BooksByMood helps you find your next read based on your mood, with:

- Books averaging 4.09/5 on Goodreads

- An explanation of why each book was selected for your mood

- 18 moods to explore

I hope you'll enjoy using the website.

Cheers!
Show HN: Invertornot.com – API to enhance your images in dark-mode
Hi HN, I built https://invertornot.com, an API that predicts whether an image will look good or bad when inverted. This is particularly useful in dark mode, as you can now safely invert suitable images.

The conservative way to adapt images for dark mode is to dim them; however, many images (graphs, for example) can simply be inverted. Using deep learning, we can avoid heuristics and get a much more reliable solution.

The API uses a pre-trained EfficientNet model fine-tuned on a custom dataset (1.1k samples). EfficientNet was chosen because it was pre-trained and offered the best performance for its size.

The trained model is very small (16MB), which means you can easily run your own instance. This problem is very simple for deep learning, as it's a plain binary classification.

For this project, training the model wasn't the challenge; most of the time was spent constructing the dataset.

For the API, I'm using FastAPI, Redis, and ONNX Runtime to run the model. You can use the API by posting images directly, by URL, or by SHA-1 for already-processed images.

The API is free and open source (http://github.com/mattismegevand/invertornot).
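To see why inversion suits diagrams but not photos, here is the pixel-level operation the API predicts suitability for (a sketch of plain 8-bit RGB inversion, not invertornot's code):

```python
def invert_pixel(rgb):
    """Invert an 8-bit RGB pixel: dark becomes light and vice versa."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

# A near-white diagram background becomes near-black, ideal for dark mode
print(invert_pixel((250, 250, 250)))  # (5, 5, 5)

# But a saturated photo color flips hue entirely (warm skin tone -> cyan),
# which is why photos generally should be dimmed rather than inverted
print(invert_pixel((220, 150, 110)))  # (35, 105, 145)
```

The classifier's job is exactly to separate these two cases.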
Show HN: Auto-generate an OpenAPI spec by listening to localhost
Hey HN! We've developed OpenAPI AutoSpec, a tool for automatically generating OpenAPI specifications from localhost network traffic. It's designed to simplify the creation of API documentation by just using your website or service, which is especially useful when you're pressed for time.

Documenting endpoints one by one sucks. This project originated from us needing it at our past jobs when building third-party integrations.

It acts as a local server proxy that listens to your application's HTTP traffic and automatically translates it into OpenAPI 3.0 specs, documenting endpoints, requests, and responses without much effort.

Installation is straightforward with npm, and starting the server only requires a few command-line arguments to specify how and where you want your documentation generated, e.g. npx autospec --portTo PORT --portFrom PORT --filePath openapi.json

It's designed to work with any local website or application without extensive setup or interference with your existing code, making it flexible across frameworks. We previously tried capturing network traffic with a Chrome extension, but it didn't capture the full picture of backend and frontend interactions.

In future updates we aim to introduce features like HTTPS and OpenAPI 3.1 specification support.

For more details and to get started, visit our GitHub page (https://github.com/Adawg4/openapi-autospec). We also have a Discord community (https://discord.com/invite/CRnxg7uduH) for support and discussions around using OpenAPI AutoSpec effectively.

We're excited to hear what you all think!
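As an illustration of the core translation (not AutoSpec's actual implementation), here is a sketch that folds observed traffic, reduced to hypothetical (method, path, status) tuples, into an OpenAPI 3.0 paths skeleton:

```python
import json

def to_openapi(captured):
    """Fold observed (method, path, status) tuples into an OpenAPI 3.0 skeleton."""
    paths = {}
    for method, path, status in captured:
        op = paths.setdefault(path, {}).setdefault(
            method.lower(), {"responses": {}})
        # Each distinct status code observed becomes a documented response
        op["responses"].setdefault(str(status), {"description": ""})
    return {"openapi": "3.0.0",
            "info": {"title": "Generated", "version": "0.1.0"},
            "paths": paths}

spec = to_openapi([("GET", "/users", 200),
                   ("POST", "/users", 201),
                   ("GET", "/users", 404)])
print(json.dumps(spec["paths"]["/users"], indent=2))
```

A real implementation would also infer request/response schemas from the observed JSON bodies, which is where most of the value (and most of the work) lies.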
Show HN: Nano-web – a low latency one binary webserver designed for serving SPAs
Whilst there are a lot of good options out there for webservers, I was looking for something that is a single deployable binary for use with containers or unikernels, and that solves some of the problems you get with other setups like nginx.

It's a single compiled binary with extremely low latency, on account of caching all files in memory at runtime. The really useful feature for SPAs (e.g. Vite) or things like Astro is that you can inject configuration variables at runtime and access them from within your frontend code, so you don't have to rebuild any images for different environments as part of your CI.

Whilst I'm sure this problem has been solved time and time again, I could never get a solution to be quite right.

Also, serving things like Astro from S3 gets tricky because CloudFront doesn't support index pages in subdirectories, so the routing breaks. Nano-web fixes this.

Use it or don't; I think it's a cool little project, and it's been a while since I worked on and released something :)
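The runtime-injection trick can be sketched as follows; the %%NAME%% placeholder syntax and the window.__ENV__ global here are hypothetical stand-ins, not nano-web's actual convention:

```python
import os

INDEX_HTML = """<!doctype html>
<html><head>
<script>window.__ENV__ = { API_URL: "%%API_URL%%" };</script>
</head><body></body></html>"""

def inject_env(html, env):
    """Substitute %%NAME%% placeholders with environment values at serve time."""
    for name, value in env.items():
        html = html.replace(f"%%{name}%%", value)
    return html

# The server reads the environment at startup and rewrites the page it caches,
# so the same image works in staging and production without a rebuild
os.environ["API_URL"] = "https://api.staging.example.com"
page = inject_env(INDEX_HTML, {"API_URL": os.environ["API_URL"]})
print("staging" in page)  # True
```

Frontend code then reads window.__ENV__.API_URL instead of a build-time constant.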
Show HN: Detecting adblock, without JavaScript, by abusing HTTP 103 responses
Show HN: Tracecat – Open-source security alert automation / SOAR alternative
Hi HN, we are building Tracecat (https://tracecat.com/), an open source automation platform for security alerts. Tracecat automates the tasks a security analyst has to do when responding to a security alert: e.g. contact victims, investigate security logs, report the vulnerability.

The average security analyst deals with 100 alerts per day. As soon as an alert comes in, you have to investigate and respond. An average alert takes ~30 minutes to analyze (and 100 x 30 min = 50 hours, far more than one working day). Lots of things get dropped, and this creates vulnerabilities. Many breaches can be traced back to week-old alerts that didn't get properly investigated.

Since the risks and costs are so high, top security teams currently pay Splunk SOAR $100,000/year to help automate alert processing. It's a click-and-drag workflow builder with webhooks, REST API integrations, and JSON processors. A security engineer would use it to build alert automations that look like this: (1) a webhook receives an alert (e.g. an unusual PowerShell command) from Microsoft Defender; (2) a yes/no Slackbot asks the employee about the alert; (3) if confirmed as suspicious, the malware sample is sent to VirusTotal for a report; (4) evidence from the previous steps is collected and dumped into a ticket.

If $100k a year seems wildly expensive for a Zapier-like platform, you'd be half right. Splunk SOAR is actually Zapier + log search + a Jira-like ticketing system.

Log storage is how Splunk turns a $99/month workflow automation tool into a pricey enterprise product. Every piece of evidence collected (e.g. Slackbot response, malware report, GeoIP enrichment) and every past workflow trail has to be searchable by a human incident responder or auditor. Security teams need to know why every alert did or did not escalate to a SEV1.

My cofounder and I are data engineers who fell into this space. We heard our security friends constantly complain about being priced out of a SOAR (security orchestration, automation, and response) platform like Splunk SOAR.

We both wrote a lot of event-driven code at school (Master's thesis) and at work (Meta / PwC). We're also early adopters of Quickwit / Tantivy, an OSS alternative to Elasticsearch / Apache Lucene that is cheaper and faster. It didn't seem that difficult to build a cheaper open source SOAR, so we decided to do it.

Tracecat is also different in that it can run on a single VM or laptop. Splunk SOAR and Tines are built for Fortune 10 needs, which means expensive Kubernetes clusters. Most security teams don't need that scale, but are forced to pay the K8s "premium" (high complexity, hard to maintain). Tracecat uses OSS embedded databases (SQLite) and an event processing engine we built using Python 3.12 asyncio.

So far we've just got a bare-bones alpha, but you can already do quite a few things with it: trigger event-driven workflows from webhooks; use REST API integrations; parse responses using JSONPath; control flow using conditional blocks; store logs cheaply in Tantivy; open cases directly from workflows; prioritize and manage cases in a Jira-like table.

Tracecat uses Pydantic V2 for fast input/output validation and Zod for fast form validation. We care a lot about data quality! It's also Apache-2.0 licensed, so anyone can self-host the platform.

On our roadmap: integrations with popular security tools (CrowdStrike, Microsoft Defender); pre-built workflows (e.g. investigating a phishing email); better docs; more AI features like auto-labeling tickets, extracting data from unstructured text, etc.

We're still early, so we'd love your feedback and opinions. Feel free to try us out or share it with your security friends. We have a cloud version up and running: https://platform.tracecat.com.

Dear HN readers, we'd love to hear your incident response stories and the software you use (or not) to automate the work. Stories from security, site reliability engineering, or even physical systems like critical infrastructure monitoring are all very welcome!
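The four-step alert automation described above (webhook, Slackbot prompt, VirusTotal lookup, ticket) can be sketched as a toy asyncio pipeline; these are stand-in stubs under invented names, not Tracecat's actual engine:

```python
import asyncio

async def receive_alert():
    # Stand-in for a webhook payload from an EDR tool
    return {"type": "unusual_powershell", "host": "ws-042"}

async def ask_employee(alert):
    # Stand-in for a yes/no Slackbot prompt to the affected employee
    return "suspicious"

async def enrich(alert):
    # Stand-in for submitting the sample to a malware-scanning service
    return {"verdict": "malicious"}

async def open_case(alert, answer, report):
    # Stand-in for dumping collected evidence into a case/ticket
    return {"alert": alert, "answer": answer, "report": report}

async def workflow():
    alert = await receive_alert()
    answer = await ask_employee(alert)
    if answer == "suspicious":           # conditional block
        report = await enrich(alert)
        return await open_case(alert, answer, report)
    return None

case = asyncio.run(workflow())
print(case["report"]["verdict"])  # malicious
```

A real engine adds the parts that matter operationally: retries, persistence of every intermediate result for auditing, and fan-out across many concurrent alerts.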