The best Hacker News stories from Show HN from the past week
Latest posts:
Show HN: Remote-Controlled IKEA Deathstar Lamp
Repainting the iconic IKEA PS 2014 lamp into the Deathstar from Star Wars has been a popular IKEA hack for quite some time.

This variant additionally replaces the manual, rope-operated mechanism for opening and closing the lamp with a remote-controlled motor.

The firmware is based on ESPHome, and its excellent Home Assistant integration makes it possible to implement higher-level features, like a "sundial" mode where the aperture of the Deathstar follows the sun's elevation throughout the day (see the timelapse video).

That said, I will not consider this project complete until the Imperial March can be played on the stepper motor (just like the legendary Floppotron) ;-)
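As a rough illustration of the "sundial" idea, here is a minimal Python sketch that maps sun elevation to a stepper target. The step count and elevation range are made-up parameters, and this is not the author's actual ESPHome/Home Assistant configuration.

```python
# Hypothetical parameters: full travel of the lamp's stepper and the
# elevation range over which the aperture should track the sun.
STEPS_FULLY_OPEN = 2000      # made-up stepper position for "fully open"
MAX_ELEVATION_DEG = 60.0     # treat the lamp as fully open at/above this elevation

def aperture_steps(sun_elevation_deg: float) -> int:
    """Map the sun's elevation to a stepper target: closed at/below the
    horizon, fully open at MAX_ELEVATION_DEG, linear in between."""
    clamped = max(0.0, min(sun_elevation_deg, MAX_ELEVATION_DEG))
    return round(STEPS_FULLY_OPEN * clamped / MAX_ELEVATION_DEG)

# Example: mid-morning sun at 25 degrees -> partially open lamp
print(aperture_steps(25.0))   # ~833 steps
```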
Show HN: My self-written hobby OS is finally running on my vintage IBM ThinkPad
Finally got my hobby OS up and running on real hardware. I love the old IBM ThinkPads, so I thought this was the perfect machine to get it working on. I've been working on it for quite some time now, and this has been a big milestone!
Show HN: I used OpenAI's new image API for a personalized coloring book service
I've had an idea for a long time to generate a cute coloring book based on family photos, send it to a printing service, and then deliver it to people.

Last month, when OpenAI's Sora was released for public use, I (foolishly) thought I'd manually drag-and-drop each order's photos into Sora's UI and copy the resulting images back into my system. This took way too much time (about an hour for each of the few books I made and tested with family and friends). It clearly wasn't possible to release this version because I'd be losing a huge amount of time on every order. So instead, I decided I'd finish off the project as best I could, put it "on ice," and wait for the API release.

The API is now released (quicker than I thought it'd be, too!) and I integrated it last night. I'd love your feedback on any and all aspects.

The market is mostly family-based, but from my testing of the physical book I've found that both adults and kids enjoy coloring them in (it's surprisingly cathartic and creative). If you would like to order one, you can get 10% off by tapping the total price line item five times.
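For readers curious what an image-API call like this can look like, here is a rough Python sketch using the OpenAI images endpoint. The model choice, prompt, and file names are my assumptions for illustration; this is not the author's actual pipeline.

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Turn a family photo into a coloring-book-style line drawing.
# The prompt wording and model choice here are illustrative guesses.
with open("family_photo.png", "rb") as photo:
    result = client.images.edit(
        model="gpt-image-1",
        image=photo,
        prompt="Convert this photo into a clean black-and-white coloring book "
               "page with bold outlines and no shading.",
    )

# gpt-image-1 returns base64-encoded image data.
page_bytes = base64.b64decode(result.data[0].b64_json)
with open("coloring_page.png", "wb") as f:
    f.write(page_bytes)
```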
Show HN: My from-scratch OS kernel that runs DOOM
Hi there! I've been on-and-off working on TacOS for a few months. It follows some Unix-derived concepts (exec/fork, Unix-style VFS, etc.) and is now able to run a port of Doom, with a fairly small number of modifications, using my from-scratch libc. The performance is actually decent compared to what I expected. Very interested to hear your thoughts. Thank you!
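For anyone unfamiliar with the exec/fork model mentioned above, here is a tiny illustration of the classic Unix pattern, written in Python on a POSIX host. It is a generic example of the concept, not TacOS code.

```python
import os

# Classic Unix process creation: fork() duplicates the process,
# then the child replaces itself with a new program via exec*().
pid = os.fork()

if pid == 0:
    # Child: replace this process image with the "echo" program.
    os.execvp("echo", ["echo", "hello from the child process"])
else:
    # Parent: wait for the child to finish and report its exit code.
    _, status = os.waitpid(pid, 0)
    print(f"child {pid} exited with code {os.waitstatus_to_exitcode(status)}")
```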
Show HN: Dia, an open-weights TTS model for generating realistic dialogue
Show HN: JuryNow – Get an anonymous instant verdict from 12 real people
After 16 years, I have just launched my game JuryNow. Imagine having a truly diverse panel of 12 real people of all ages, far removed from your peer group, around the world, who can give you an instant decision on your question 24/7. No commentary, just a verdict between two choices. You can ask about a moral dilemma or a fashion dilemma (you can upload 2 images), use JuryNow to get an independent perspective on a family argument, a workplace problem, or even a trivial thought. You can also run a mini political poll and receive a global verdict in real time.

It's anonymous, fast (under 3 minutes), and... when there are more than 13 people playing simultaneously, completely AI-free.

How do you pay for this priceless fun? With JuryDuty. While you wait 3 minutes for your verdict, you answer other people's questions. There is no commentary, just a binary choice.

You can ask things like:

"Do I have a moral duty to go to my brother's third wedding? We have no parents."

"Do you feel guilty when you kill mosquitoes?"

"Should I take away my mother's car keys? She is 84 and had two near misses this month."

As a 58F, I built JuryNow because I wanted to create a truly objective place to get outside opinions that were not from my peer group, but from 12 people in 12 different countries, of different ages, professions, and cultures: a truly diverse, objective global jury with no algorithms.

Would love your feedback! It's totally free, no sign-up needed for a first play.
<a href="https://jurynow.app/" rel="nofollow">https://jurynow.app/</a><p>if there are fewer than 13 people playing (and it only just launched last week and that was just on Reddit!) then a popup will appear saying your verdict is simulated by AI. But this is just a TEMPORARY feature with the MVP. As soon as there are regular players, it will be permanently dismantled and we will celebrate the power of collective human intelligence!
Show HN: Undercutf1 – F1 Live Timing TUI with Driver Tracker, Variable Delay
undercutf1 is an F1 live timing app, built as a TUI. It contains traditional timing pages like a Driver Tracker, Timing Tower, and Race Control, along with some more detailed analysis like lap and gap history, so that you can see strategies unfolding.

I started to build undercutf1 almost two years ago, after becoming increasingly frustrated with the TV direction and the lack of detailed information coming out of the live feed. Overtakes were often missed and strategies were often ill-explained or missed entirely. I discovered that F1 live timing data is available over a simple SignalR stream, so I set out building an app that would let me see all the information I could dream of. Now undercutf1 serves as the perfect companion (like a second Martin Brundle) when I'm watching the sessions live.

If you want to test it out, you can replay the Suzuka race easily by downloading the timing data, then starting a simulated session:

1. Download undercutf1 using the installation instructions in the README.

2. Import the Suzuka race session data using `undercutf1 import 2025 -m 1256 -s 10006`.

3. Start the app (`undercutf1`), then press S (Session) then F (Simulated Session), then select Suzuka then Race using the arrow keys, then press Enter.

4. Use the arrow keys to navigate between the timing pages, and use N / Shift+N to fast-forward through the session.

If you want to test it out during this weekend's Jeddah GP, simply install as in the README, then start a live session by pressing S (Session) then L (Live Session).

The app is built for a terminal of roughly 110x30 cells, which probably seems an odd size but just so happens to be the size of a fullscreen terminal on a MBP zoomed in far enough that the text is easily glanceable when the laptop is placed on a coffee table some distance away from me :) Other terminal sizes will work fine, but information density/scaling may not be ideal.

If you're using the TUI during a live session, you'll want to synchronise the delay of the timing feed to your TV feed. Use the N/M keys to increase/decrease the delay. During non-race sessions, I find it fairly easy to sync the session clock on TV with the session clock on the bottom left of the timing screen. For race sessions, synchronisation is a little harder. I usually aim to sync the start of the race time (e.g. 13:00 on the timing screen clock) with the start of the formation lap, where the live feed helpfully shows the clock tick over to 0 minutes. I usually delay the feed by 30 to 60 seconds.
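To illustrate the variable-delay idea in the abstract, here is a small Python sketch of a buffer that holds live timing messages and only releases them after a configurable delay. This is a generic illustration of the concept, not the project's actual implementation.

```python
import time
from collections import deque

class DelayedFeed:
    """Buffer live timing messages and release them after `delay` seconds,
    so the data stream can be lined up with a delayed TV broadcast."""

    def __init__(self, delay_seconds: float):
        self.delay = delay_seconds
        self._queue = deque()  # (arrival_time, message)

    def push(self, message: str) -> None:
        self._queue.append((time.monotonic(), message))

    def adjust(self, delta_seconds: float) -> None:
        # e.g. bound to keys that nudge the delay up or down
        self.delay = max(0.0, self.delay + delta_seconds)

    def pop_ready(self) -> list[str]:
        now = time.monotonic()
        ready = []
        while self._queue and now - self._queue[0][0] >= self.delay:
            ready.append(self._queue.popleft()[1])
        return ready

# Usage: push messages as they arrive from the stream, and periodically
# drain pop_ready() to drive the UI.
feed = DelayedFeed(delay_seconds=45.0)
feed.push("LAP 12: VER 1:32.415")
print(feed.pop_ready())  # [] until 45 seconds have elapsed
```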
Show HN: Goldbach Conjecture up to 4*10^18+7*10^13
Achieved a new world record in verifying the Goldbach Conjecture using grid computing, by extending the verification up to 4 quintillion (4×10¹⁸) + 70 trillion (7×10¹³).

My grid computing system, Gridbach, is a cloud-based distributed computing system accessible from any PC or smartphone. It requires no login or app installation. The high-performance WASM (WebAssembly) binary is downloaded as browser content, enabling computation in the user's browser.

[Website]
<a href="https://gridbach.com/" rel="nofollow">https://gridbach.com/</a><p>[Medium]
<a href="https://medium.com/@jay_gridbach/grid-computing-shatters-world-record-for-goldbach-conjecture-verification-1ef3dc58a38d" rel="nofollow">https://medium.com/@jay_gridbach/grid-computing-shatters-wor...</a>
Show HN: I built an AI that turns GitHub codebases into easy tutorials
<a href="https://the-pocket.github.io/Tutorial-Codebase-Knowledge/" rel="nofollow">https://the-pocket.github.io/Tutorial-Codebase-Knowledge/</a>
Less Slow C++
Earlier this year, I took a month to reexamine my coding habits and rethink some past design choices. I hope to rewrite and improve my FOSS libraries this year, and I needed answers to a few questions first. Perhaps some of these questions will resonate with others in the community, too.

- Are coroutines viable for high-performance work?
- Should I use SIMD intrinsics for clarity or drop to assembly for easier library distribution?
- Has hardware caught up with vectorized scatter/gather in AVX-512 & SVE?
- How do secure enclaves & pointer tagging differ on Intel, Arm, & AMD?
- What's the throughput gap between CPU and GPU Tensor Cores (TCs)?
- How costly are misaligned memory accesses & split-loads, and what gains do non-temporal loads/stores offer?
- Which parts of the standard library hit performance hardest?
- How do error-handling strategies compare overhead-wise?
- What's the compile-time vs. run-time trade-off for lazily evaluated ranges?
- What practical, non-trivial use cases exist for meta-programming?
- How challenging is Linux Kernel bypass with io_uring vs. POSIX sockets?
- How close are we to effectively using Networking TS or heterogeneous Executors in C++?
- What are best practices for propagating stateful allocators in nested containers, and which libraries support them?
These questions span from micro-kernel optimizations (nanoseconds) to distributed systems (micro/millisecond latencies). Rather than tackling them all in one post, I compiled my explorations into a repository, extending my previous Google Benchmark tutorial (https://ashvardanian.com/posts/google-benchmark), to serve as a sandbox for performance experimentation.

Some fun observations:

- Compilers now vectorize 3x3x3 and 4x4x4 single/double-precision multiplications well! The smaller one is ~60% slower despite 70% fewer operations, outperforming my vanilla SSE/AVX and coming within 10% of AVX-512.
- Nvidia TCs vary dramatically across generations in numeric types, throughput, tile shapes, thread synchronization (thread/quad-pair/warp/warp-groups), and operand storage. Post-Volta, manual PTX is often needed (as intrinsics lag), though the new TileIR (introduced at GTC) promises improvements for dense linear algebra kernels.
- The AI wave drives CPUs and GPUs to converge in mat-mul throughput & programming complexity. It took me a day to debug TMM register initialization, and SME is equally odd. Sierra Forest packs 288 cores/socket, and AVX10.2 drops 256-bit support in favor of 512-bit... I wonder if discrete Intel GPUs are even needed, given CPU advances?
- In common floating-point ranges, scalar sine approximations can be up to 40x faster than standard implementations, even without SIMD (see the toy sketch after this list). It's a bit hand-wavy, though; I wish more projects documented error bounds and had 1 & 3.5 ULP variants like Sleef.
- Meta-programming tools like CTRE can outperform typical RegEx engines by 5x and simplify building parsers compared to hand-crafted FSMs.
- The gap between kernel-bypass frameworks (DPDK/SPDK) and io_uring, once clearly distinct in complexity and performance, is narrowing. While pre-5.5 io_uring can boost UDP throughput by 4x on loopback IO, newer zero-copy and concurrency optimizations remain challenging.
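To make the sine-approximation point concrete, here is a toy Python sketch of a degree-7 polynomial approximation on a reduced range. It only illustrates the idea; it says nothing about the 40x figure, ULP guarantees, or the repository's actual C++ kernels, which use tuned minimax coefficients and proper range reduction.

```python
import math

def sin_poly(x: float) -> float:
    """Degree-7 Taylor-style approximation of sin(x), valid for |x| <= pi/2.
    Absolute error stays below ~2e-4 on that range."""
    x2 = x * x
    return x * (1.0 - x2 / 6.0 * (1.0 - x2 / 20.0 * (1.0 - x2 / 42.0)))

# Quick sanity check against the standard library.
worst = max(abs(sin_poly(x) - math.sin(x))
            for x in (i * math.pi / 2000 - math.pi / 2 for i in range(2001)))
print(f"max abs error on [-pi/2, pi/2]: {worst:.2e}")
```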
The repository is loaded with links to favorite CppCon lectures, GitHub snippets, and tech blog posts. Recognizing that many high-level concepts are handled differently across languages, I've also started porting examples to Rust & Python in separate repos. Coroutines look bad everywhere :(

Overall, this research project was rewarding! Most questions found answers in code, except pointer tagging and secure enclaves, which still elude me in the public cloud. I'd love to hear from others, especially on comparing High-Level Synthesis for small matrix multiplications on FPGAs versus hand-written VHDL/Verilog for integral types. Let me know if you have ideas for other cool, obscure topics to cover!
Show HN: I made a Doom-like game fit inside a QR code
I sometimes pick up random projects just because I can; this was one of those times. I made it as a week-long project a while back this year but never shared it here, so I thought I'd go for it haha.

I created a game inspired by Doom and the Backrooms called The Backdooms, in under 2.4 kB of minified HTML (for reference, this entire post would be around 1.8 kB haha).
I had to use an uncommon approach of GZip compression with zlib headers (I had to write my own script for compressing it, also in the repo) to eventually fit it into a version 40 QR code that works right in your browser using the DecompressionStream API.

This is of course a very oversimplified description. It uses a lot of the same techniques that DOOM did, but combining them with infinite seed-based map generation in 2.4 kB (QR codes can only store about 3 kB, which includes changing formats) was pretty hard.

Here are some links about it if you want to nerd out and read more:

Repository link (MIT License): https://github.com/Kuberwastaken/backdooms

A hosted (slightly improved) version of The Backdooms: https://kuberwastaken.github.io/backdooms/

Game trailer: https://www.youtube.com/shorts/QWPr10cAuGc

My LinkedIn post about it: https://www.linkedin.com/feed/update/urn:li:activity:7295667546089799681/

(PS: You'd need something like https://qrscanner.org/ or another scanner that can handle bigger QR codes and put the text data into your browser to play it.)

My blogs documenting the process and development in detail:

https://kuberwastaken.github.io/blog/Projects/How-I-Managed-To-Get-Doom-In-A-QR-Code
<a href="https://kuberwastaken.github.io/blog/Projects/How-I-Managed-To-Make-HTML-Game-Compression-So-Much-Better" rel="nofollow">https://kuberwastaken.github.io/blog/Projects/How-I-Managed-...</a>
Show HN: Plandex v2 – open source AI coding agent for large projects and tasks
Hey HN! I'm Dane, the creator of Plandex (https://github.com/plandex-ai/plandex), an open source AI coding agent focused especially on tackling large tasks in real-world software projects.

You can watch a 2-minute demo of Plandex in action here: https://www.youtube.com/watch?v=SFSu2vNmlLk

And here's more of a tutorial-style demo showing how Plandex can automatically debug a browser application: https://www.youtube.com/watch?v=g-_76U_nK0Y

I launched Plandex v1 here on HN a little less than a year ago (https://news.ycombinator.com/item?id=39918500).

Now I'm launching a major update, Plandex v2, which is the result of 8 months of heads-down work and is in effect a whole new project/product.

In short, Plandex is now a top-tier coding agent with fully autonomous capabilities. It combines models from Anthropic, OpenAI, and Google to achieve better results, more reliable agent behavior, better cost efficiency, and better performance than is possible by using only a single provider's models.

I believe it is now one of the best tools available for working on large tasks in real-world codebases with AI. It has an effective context window of 2M tokens, and can index projects of 20M tokens and beyond using tree-sitter project maps (30+ languages are supported). It can effectively find relevant context in massive million-line projects like SQLite, Redis, and Git.

A bit more on some of Plandex's key features:

- Plandex has a built-in diff review sandbox that helps you get the benefits of AI without leaving behind a mess in your project. By default, all changes accumulate in the sandbox until you approve them. The sandbox is version-controlled. You can rewind it to any previous point, and you can also create branches to try out alternative approaches.

- It offers a 'full auto mode' that can complete large tasks autonomously end-to-end, including high-level planning, context loading, detailed planning, implementation, command execution (for dependencies, builds, tests, etc.), and debugging.

- The autonomy level is highly configurable. You can move up and down the ladder of autonomy depending on the task, your comfort level, and how you weigh cost optimization vs. effort and results.

- Models and model settings are also very configurable. There are built-in models and model packs for different use cases. You can also add custom models and model packs, and customize model settings like temperature or top-p. All model changes are version-controlled, so you can use branches to try out the same task with different models. The newly released OpenAI models and the paid Gemini 2.5 Pro model will be integrated into the default model pack soon.

- It can be easily self-hosted, including a 'local mode' for a very fast local single-user setup with Docker.

- Cloud hosting is also available for added convenience, with a couple of subscription tiers: an 'Integrated Models' mode that requires no other accounts or API keys and lets you manage billing/budgeting/spending alerts and track usage centrally, and a 'BYO API Key' mode that lets you use your own OpenAI/OpenRouter accounts.

I'd love to get more HNers in the Plandex Discord (https://discord.gg/plandex-ai). Please join and say hi!

And of course I'd love to hear your feedback, whether positive or negative. Thanks so much!
Show HN: Unsure Calculator – back-of-a-napkin probabilistic calculator
Show HN: I made a free tool that analyzes SEC filings and posts detailed reports
(* within a few minutes of the SEC filing)

Currently does it for 1,000+ US companies, specifically for earnings-related filings.
By US companies, I mean the ones that are obliged to file SEC filings.

This was the result of an almost year-long effort and hundreds of prototypes :)

It currently auto-publishes for roughly the top 1,000 US companies by market cap, and relies on an 8-K filing as the trigger.

E.g. https://www.signalbloom.ai/news/NVDA will take you to NVDA earnings.

Would be grateful for some feedback. Especially if you follow a company, check its reports out. Thank you!

Some examples:
<a href="https://www.signalbloom.ai/news/AAPL/apple-q1-eps-beats-despite-revenue-miss-china-woes" rel="nofollow">https://www.signalbloom.ai/news/AAPL/apple-q1-eps-beats-desp...</a><p><a href="https://www.signalbloom.ai/news/NVDA/nvidia-revenue-soars-margin-headwinds-emerge" rel="nofollow">https://www.signalbloom.ai/news/NVDA/nvidia-revenue-soars-ma...</a><p><a href="https://www.signalbloom.ai/news/JPM/jpm-beats-estimates-on-cib-strength" rel="nofollow">https://www.signalbloom.ai/news/JPM/jpm-beats-estimates-on-c...</a> (JPM earnings from Friday)<p>Hallucination note: <a href="https://www.signalbloom.ai/hallucination-benchmark" rel="nofollow">https://www.signalbloom.ai/hallucination-benchmark</a>