The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: K8s Cleaner – Roomba for Kubernetes
Hello HN community!

I'm excited to share K8s Cleaner, a tool designed to help you clean up your Kubernetes clusters.

As Kubernetes environments grow, they often accumulate unused resources, leading to confusion, waste, and clutter. K8s-cleaner simplifies the process of identifying and removing unnecessary components.

The tool scans your Kubernetes clusters for unused or orphaned resources, including pods, services, ingresses, and secrets, and removes them safely. You can fully customize which resources to scan and delete, maintaining complete control over what stays and what goes.

Getting Started:

Visit https://sveltos.projectsveltos.io/k8sCleaner.html and click the "Getting Started" button to try K8s-cleaner.
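To give a feel for the configuration, here is a rough sketch of what a cleanup rule might look like, loosely based on the docs. The field names and values are assumptions on the editor's part; the Getting Started guide above is authoritative:

    kubectl apply -f - <<'EOF'
    # Hypothetical Cleaner rule: every night, delete stale Secrets in "test".
    # The schema here is assumed; check the k8s-cleaner docs for the real fields.
    apiVersion: apps.projectsveltos.io/v1alpha1
    kind: Cleaner
    metadata:
      name: nightly-secret-cleanup
    spec:
      schedule: "0 0 * * *"        # run at midnight
      action: Delete               # assumed; a report-only mode may also exist
      resourcePolicySet:
        resourceSelectors:
        - kind: Secret
          group: ""
          version: v1
          namespace: test
    EOF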
Key Features:

- Easy to Use: No complex setup or configuration required, perfect for developers and operators alike
- Open Source: Modify the code to better fit your specific needs
- Community Driven: We welcome your feedback, feature ideas, and bug reports to help improve K8s-cleaner for everyone

I'm here to answer questions, address feedback, and discuss ideas for future improvements.

Looking forward to your thoughts! And make sure all your Kubernetes clusters are sparkling clean for the holidays. :-)

Simone
Show HN: Brisk – Cross-Platform C++ GUI Framework: Declarative, Reactive, Fast
Brisk is an open-source C++ GUI framework with a declarative approach, offering powerful data bindings, GPU-accelerated graphics, and dynamic widget management. It supports macOS, Linux, and Windows, and simplifies UI creation with modern paradigms and CSS-like layouts.
Initially developed for a graphics-intensive project with a complex and dynamic GUI, the framework is currently under active development.
Show HN: ImPlot3D – A 3D Plotting Library for Dear ImGui
Show HN: Savvy – Capture and Share CLI Workflows in Seconds
Ever solved a tricky problem at the command line, only to struggle to document it later? I built Savvy so you can capture and share your CLI solutions in seconds.

With a simple savvy record history command, you can:

- Go back in time and cherry-pick commands from your shell history
- Automatically expand aliases
- Redact sensitive information locally
- Convert hard-coded values into runtime placeholders
- Export locally to Markdown files, or create shareable team workflows using Savvy
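For example, a typical session might look like this (the commands are the ones described below; the flow and comments are an illustrative sketch, and the exact prompts may differ):

    $ savvy record history   # cherry-pick the relevant commands from your history
    # Savvy expands aliases, redacts secrets locally, and saves a workflow.
    $ savvy run              # later: replay a saved workflow step by step in a sub-shell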
Savvy never reads command outputs - you explicitly choose what to share. When running workflows, Savvy guides you through each step in a new sub-shell, handling all runtime values automatically.

Some FAQs:

Q) How does Savvy's CLI work?

    Savvy's CLI uses shell hooks (full support for bash and zsh; Fish support is in beta) to capture commands. Savvy's CLI never looks at any command outputs or active keystrokes. Users have to explicitly opt in (by typing savvy record or savvy record history) to capture commands, every single time.

    Once you select a particular workflow with savvy run, Savvy starts a new sub-shell and walks you through each step of the way. No manual copy-paste or editing commands to provide runtime values.

    Demo: https://getsavvy.so/demo
Q) How does Savvy auto-generate workflows?

    Savvy takes your redacted commands as input and uses Llama 3.x hosted on Groq to create a first draft of your workflow.

Q) What other LLMs do you use?

    Savvy ask/explain are powered by GPT-4/GPT-4o to convert natural language to shell commands and vice versa.
Try it out:

    GitHub: https://github.com/getsavvyinc/savvy-cli
    Demo: https://getsavvy.so/demo
    Docs: https://docs.getsavvy.so/guides/quick_start/
    Example workflows: https://getsavvy.so/#examples
Drop a comment below if you have any questions.
Show HN: Adventures in OCR
Hello HN!

In a recent "Ask HN: What are you working on?" thread, I mentioned I was working on OCRing a large book: https://news.ycombinator.com/item?id=41971614

The post generated some interest, so I thought I would keep HN posted.

The book is Saint-Simon's Memoirs, an invaluable historical account of the French court under Louis XIV, full of wit, sharp observations, and of incredible literary value. I'm OCRing the reference edition, published between 1879 and 1930, which contains a lot of comments and footnotes: 45 volumes, ~27,000 pages.

Here's a link to a blog post that describes the techniques used so far (the project is still ongoing): https://blog.medusis.com/38_Adventures+in+OCR.html

But you may also directly access the result here: https://divers.medusis.net/boislisle/pub

This web app (not optimized for mobile, sorry) solves a tricky problem: preloading images efficiently. In short, preloading the next image isn't enough, since browsers will repaint if an image is moved or scaled. And browsers won't paint at all if visibility is hidden or opacity is zero; they will paint only when those values change. On an average slow machine, this takes visible time. But if an image is simply behind another element, it will be painted, and removing the covering element or changing the z-index will not trigger a repaint.

(Preloading is important because it lets one review results fast; having to wait 150-200 ms between images is simply discouraging.)

Would love to hear feedback; happy to answer any questions!
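P.S. For the curious, here is the layering trick in minimal form. This is an illustration of the idea as described above, not the app's actual code:

    <!-- The next page sits behind the current one, so the browser paints it
         right away; swapping z-index later reveals it without a fresh paint. -->
    <img id="current" src="page1.jpg" style="position:absolute; z-index:1">
    <img id="next"    src="page2.jpg" style="position:absolute; z-index:0">
    <script>
      function showNext() {
        // No visibility/opacity change, no move, no scale: just reorder layers.
        document.getElementById('current').style.zIndex = '0';
        document.getElementById('next').style.zIndex = '1';
      }
    </script>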
Show HN: I built an open-source data pipeline tool in Go
Every data pipeline job I had to tackle required quite a few components to set up:

- One tool to ingest data
- Another one to transform it
- If you wanted to run Python, an orchestrator
- If you needed to check the data, a data quality tool

Besides being hard and time-consuming to set up, this is also pretty high-maintenance. I had to do a lot of infra work, and while these were billable hours for me, I didn't enjoy the work at all. For some parts of it there were nice solutions like dbt, but in the end, for an end-to-end workflow, it didn't work. That's why I decided to build an end-to-end solution that could take care of data ingestion, transformation, and Python stuff. Initially it was just for our own usage, but in the end we thought this could be a useful tool for everyone.

At its core, Bruin is a data framework that consists of a CLI application written in Go, and a VS Code extension that supports it with a local UI.

Bruin supports quite a few things:

- Data ingestion using ingestr (https://github.com/bruin-data/ingestr)
- Data transformation in SQL & Python, similar to dbt
- Python environment management using uv
- Built-in data quality checks
- Secrets management
- Query validation & SQL parsing
- Built-in templates for common scenarios, e.g. Shopify, Notion, Gorgias, BigQuery, etc.

This means that you can write end-to-end pipelines within the same framework and get them running with a single command. You can run it on your own computer, on GitHub Actions, or on an EC2 instance somewhere. Using the templates, you can also have ready-to-go pipelines with modeled data for your data warehouse in seconds.

It includes an open-source VS Code extension as well, which allows working with the data pipelines locally, in a more visual way. The resulting changes are all in code, which means everything is version-controlled regardless; it just adds a nice layer.

Bruin can run SQL, Python, and data ingestion workflows, as well as quality checks. For the Python stuff, we use the awesome (and it really is awesome!) uv under the hood, install dependencies in an isolated environment, and install and manage the Python versions locally, all in a cross-platform way. To manage data uploads to the data warehouse, it uses dlt under the hood to upload the data to the destination. It also uses Arrow's memory-mapped files to easily access the data between processes before uploading it to the destination.

We went with Go because of its speed and strong concurrency primitives, but more importantly, I knew Go better than the other languages available to me and I enjoy writing Go, so there's also that.

We had a small pool of beta testers for quite some time, and I am really excited to launch Bruin CLI to the rest of the world and get feedback from you all. I know it's not common to build data tooling in Go, but I believe we found ourselves in a nice spot in terms of features, speed, and stability.

https://github.com/bruin-data/bruin

I'd love to hear your feedback and learn more about how we can make data pipelines easier and better to work with. Looking forward to your thoughts!

Best, Burak
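P.S. To make "a single command" concrete, a hypothetical quickstart. The subcommand and template names here are assumptions; bruin --help and the README have the real ones:

    $ bruin init shopify my-pipeline   # scaffold from a built-in template (names assumed)
    $ bruin run ./my-pipeline          # ingest, transform (SQL/Python), and run quality checks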
Show HN: NCompass Technologies – yet another AI Inference API, but hear us out
Hello HackerNews!

I'm excited to share what we've been working on at nCompass Technologies: an AI inference* platform that gives you a scalable and reliable API to access any open-source AI model, with no rate limits. We don't have rate limits because the optimizations we made to our AI model serving software enable us to support a high number of concurrent requests without degrading quality of service for you as a user.

If you're thinking, "well, aren't there a bunch of these already?", so were we when we started nCompass. When using other APIs, we found that they weren't reliable enough to be able to use open-source models in production environments. To resolve this, we're building an AI inference engine that enables you, as an end user, to reliably use open-source models in production.

Underlying this API, we're building optimizations at the hosting, scheduling, and kernel levels with the single goal of minimizing the number of GPUs required to maximize the number of concurrent requests you can serve, without degrading quality of service.

We're still building a lot of our optimizations, but we've released what we have so far via our API. We currently keep time-to-first-token (TTFT) 2-4x lower than vLLM at the equivalent concurrent request rate. You can check out a demo of our API here: https://www.loom.com/share/c92f825ac0af4ab18296a16546a75be3

As a result of the optimizations we've rolled out so far, we're releasing a few unique features on our API:

1. Rate limits: we don't have any.

Most other APIs out there have strict rate limits and can be rather unreliable. We don't want APIs for open-source models to remain a solution for prototypes only. We want people to use these APIs like they do OpenAI's or Anthropic's and actually make production-grade products on top of open-source models.

2. Underserved models: we have them.

There are a ton of models out there, but not all of them are readily available for people to use if they don't have access to GPUs. We envision our API becoming a system where anyone can launch any custom model of their choice with minimal cold starts and run the model as a simple API call. Our cold starts for any 8B or 70B model are only 40s, and we'll keep improving this.

Towards this goal, we already have models like ai4bharat/hercule-hi hosted on our API to support non-English language use cases, and models like Qwen/QwQ-32B-Preview to support reasoning-based use cases. You can find the other models that we host here: https://console.ncompass.tech/public-models for public ones, and https://console.ncompass.tech/models for private ones that work once you've created an account.

We'd love for you to try out our API by following the steps here: https://www.ncompass.tech/docs/llm_inference/quickstart. We provide $100 of free credit on sign-up to run models, and like we said, go crazy with your requests; we'd love to see if you can break our system :)

We're still actively building out features and optimizations, and your input can help shape the future of nCompass.
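For a flavor of what a call might look like, here is a hypothetical request, assuming an OpenAI-compatible chat-completions endpoint. The base URL, path, and payload shape are placeholders; the quickstart linked above has the real details:

    # Hypothetical request; the endpoint is an assumption, the model name is one we host
    curl https://api.ncompass.tech/v1/chat/completions \
      -H "Authorization: Bearer $NCOMPASS_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "Qwen/QwQ-32B-Preview",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'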
If you have thoughts on our platform or want us to host a specific model, let us know at hello@ncompass.tech.

Happy Hacking!

* It's called inference because the process of taking a query, running it through the model, and providing a result is referred to as "inference" in the AI/machine learning world, as opposed to "training" or "fine-tuning", which are the processes used to actually develop the AI models that you then run "inference" on.
Show HN: Autonomous AI agents that monitor the stock market for you
We created autonomous AI Agents that monitor the stock market for you while you go about your day.

How it works:

Tell our AI Assistant what you want to monitor, and it creates a project for our team of autonomous AI Agents. You'll get notifications (email + app) when significant events matching your criteria are detected. For short-term projects, you'll be notified when your analysis is ready.

Behind the scenes:

When you give the AI Assistant a request to monitor an entity (like a stock or group of stocks), an AI Project Manager plans the project and breaks it down into manageable tasks. These tasks run asynchronously - some recurring (hourly/daily/weekly/monthly/quarterly/yearly), others one-time.

Example prompts you can try:

Long-term monitoring:

- "Monitor Apple stock and notify me of any important events and red flags"
- "Monitor Apple, Google, Microsoft, and Meta stock. Notify me if any of them start trending toward being undervalued"

Short-term analysis:

- "Create a project to analyze the last 30 earnings calls for Tesla, spot trends, and see how the business has evolved over time"

You can track the progress of all tasks as the AI Agents work in the background.

Try it here: https://decodeinvesting.com/chat

This is still an early version - we're actively improving it based on feedback. Would love to hear what you think and what features you'd want to see next!

Previously shared, our AI-powered Stock Market Research Analyst: https://news.ycombinator.com/item?id=41156478
Show HN: I built an embeddable Unicode library with MISRA C conformance
Hello, everyone. I built Unicorn: an embeddable, MISRA C:2012-conformant implementation of essential Unicode algorithms.

Unicorn is designed to be fully customizable: you can select which Unicode algorithms and character properties are included in or excluded from compilation. You can also exclude Unicode character blocks wholesale for scripts your application does not support. It's perfect for resource-constrained devices like microcontrollers and IoT devices.

About me: I quit my Big Corp job a few years back to pursue my passion for software development, and this is one of my first commercial releases.