The best Show HN stories from Hacker News from the past week
Latest posts:
Show HN: I made a 3D SVG Renderer that projects textures without rasterization
Show HN: ClickStack – Open-source Datadog alternative by ClickHouse and HyperDX
Hey HN! Mike & Warren here from HyperDX (now part of ClickHouse)! We’ve been building ClickStack, an open source observability stack that helps you collect, centralize, and search/viz/alert on your telemetry (logs, metrics, traces) in just a few minutes - all powered by ClickHouse (Apache 2.0) for storage, HyperDX (MIT) for visualization, and OpenTelemetry (Apache 2.0) for ingestion.

You can check out the quick start for spinning things up in the repo here: https://github.com/hyperdxio/hyperdx

ClickStack makes it really easy to instrument your application, so you can go from a bug report of “my checkout didn’t go through” to the user’s session replay, the backend API calls, the DB queries, and the infrastructure metrics related to that specific request, all in a single view.

For those migrating from Very Expensive Observability Vendor (TM) to something open source and more performant that doesn’t require aggressively cutting retention or sampling rates, ClickStack gives a batteries-included way to start that migration journey.

For those who aren’t familiar with ClickHouse, it’s a high-performance database already used by companies such as Anthropic, Cloudflare, and DoorDash to power their core observability at scale, thanks to its flexibility, ease of use, and cost effectiveness. Until now, though, that meant dedicating engineers to building a custom observability stack: getting telemetry data into ClickHouse easily was hard, and there was no native UI experience.

That’s why we’re building ClickStack - we wanted to bundle an easy way to start ingesting your telemetry, whether that’s logs and traces from Node.js or Ruby or metrics from Kubernetes or bare-metal infrastructure. Just as important, we wanted a visualization experience that lets users search quickly with a familiar Lucene-like syntax (similar to what you’d use in Google!), while recognising that a SQL mode is needed for the most complex queries. We’ve also added high-cardinality outlier analysis, which charts the delta between outlier and inlier events (we’ve found it really helpful for narrowing down the causes of regressions and anomalies in our traces), as well as log patterns, which condense clusters of similar logs.

We’re really excited about the roadmap ahead, both for ClickStack as a product and for the ClickHouse core database as an observability engine. Would love to hear everyone’s feedback and what they think!

Spinning up a container is pretty simple: `docker run -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one`
In-browser live demo (no sign-ups or anything silly, it runs fully in your browser!): https://play.hyperdx.io/
Landing page: https://clickhouse.com/o11y
GitHub repo: https://github.com/hyperdxio/hyperdx
Discord community: https://hyperdx.io/discord
Docs: https://clickhouse.com/docs/use-cases/observability/clickstack/getting-started
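If you want to see what sending data to it looks like, here is a minimal, hedged sketch of pointing an OpenTelemetry-instrumented Python service at the all-in-one container above over OTLP/HTTP (port 4318 from the docker command). The service and span names are made up for illustration, and your exporter setup may differ:

    # Hedged example: export a trace to the local ClickStack all-in-one container.
    # Assumes the standard OpenTelemetry Python SDK and the OTLP/HTTP endpoint on
    # port 4318; names/attributes below are illustrative, not part of ClickStack.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
    )
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")  # hypothetical service name
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("cart.items", 3)  # an attribute you could later search on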
Show HN: Air Lab – A portable and open air quality measuring device
Hi HN!

I’ve been working on an air quality measuring device called Air Lab for the past three years. It measures CO2, temperature, relative humidity, air pollutants (VOC, NOx), and atmospheric pressure. You can log and analyze the data directly on the device - no smartphone or laptop needed.

To better show what the device can do and what it feels like, I spent the past week developing a web-based simulator using Emscripten. It runs the stock firmware with most features available except for networking. Check it out and let me know what you think!

The firmware will be open-source and available once the first batch of devices ships. We’re currently finishing up our crowdfunding campaign on Crowd Supply. If you want to get one, now is the time to support the project: https://www.crowdsupply.com/networked-artifacts/air-lab

We started building the Air Lab because most air quality measuring devices we found were locked down or hard to tinker with. Air quality is a growing concern, and we’re hoping a more open, playful approach can help make the topic more accessible. It is important to us that there is a low bar for customizing and extending the Air Lab. Until we ship, we plan to create rich documentation and further tools, like the simulator, to make this as easy as possible.

The technical details: the device is powered by the popular ESP32-S3 microcontroller, equipped with a precise CO2, temperature, and relative humidity sensor (SCD41) as well as a VOC/NOx sensor (SGP41) and an atmospheric pressure sensor (LPS22). The support circuitry provides built-in battery charging, a real-time clock, an RGB LED, a buzzer, an accelerometer, and capacitive touch, which makes Air Lab a powerful stand-alone device. The firmware itself is written on top of esp-idf and uses LVGL for rendering the UI.

If you want more high-level info, here are some videos covering the project:
- <a href="https://www.youtube.com/watch?v=oBltdMLjUyg" rel="nofollow">https://www.youtube.com/watch?v=oBltdMLjUyg</a> (Introduction)
- <a href="https://www.youtube.com/watch?v=_tzjVYPm_MU" rel="nofollow">https://www.youtube.com/watch?v=_tzjVYPm_MU</a> (Product Update)<p>Would love your feedback — on the device, hardware choices, potential use cases, or anything else worth improving. If you want to get notified on project updates, subscribe on Crowd Supply.<p>Happy to answer any questions!
Show HN: GPT image editing, but for 3D models
Hey HN!

I’m Zach, one of the co-founders of Adam (https://www.adamcad.com). We’re building AI-powered tools for CAD and 3D modeling [1].

We’ve recently been exploring a new way to bring GPT-style image editing directly into 3D model generation and are excited to showcase this in our web app today. We’re calling it creative mode and are intrigued by the fun use cases this could create by making 3D generation more conversational!

For example, you can enter a prompt such as “an elephant”, then follow it up with “have it ride a skateboard”, and it preserves the context and identity and maintains consistency with the previous model. We believe this lends itself better to an iterative design process when prototyping creative 3D assets or models for printing.

We’re offering everyone 10 free generations to start (ramping up soon!). Here’s a short video explaining how it works: https://www.loom.com/share/cf9ab91375374a4f93d6cc89619a043b

We’d also love you to try our parametric mode (free!), which uses LLMs to create a conversational interface for solid modeling, as touched on in a recent HN thread [2]. We are leveraging the code generation capabilities of these models to generate OpenSCAD code (an open-source, script-based CAD) and are surfacing the variables as sliders the user can adjust to tweak their design. We hope this can give a glimpse into what it could be like to “vibe-CAD”. We will soon be releasing our results on Will Patrick's Text to CAD eval [3] and adding B-rep compatible export!

We’d love to hear what you think and where we should take this next :)

[1] https://x.com/zachdive/status/1882858765613228287

[2] https://news.ycombinator.com/item?id=43774990

[3] https://willpatrick.xyz/technology/2025/04/23/teaching-llms-how-to-solid-model.html
Show HN: A toy version of Wireshark (student project)
Hi everyone,

I recently published a small open-source project. It’s a minimal network packet analyzer written in Go - designed more like a learning toy than a replacement for Wireshark.

It currently supports parsing basic protocols like TLS, DNS, and HTTP, and includes a tiny fuzzing engine to test payload responses. You can inspect raw packet content directly from the terminal. The output is colored for readability, and the code structure is kept simple and clear.

The entire program is very small - just about 400 lines of Go code. I know it’s not anywhere near Wireshark’s level, and I still use Wireshark myself for real-world analysis. But I built it as a personal experiment in network parsing and to understand protocol behavior more directly.

If you're curious or would like to try it out, the project is here:
https://github.com/lixiasky/vanta

I'm happy to hear your thoughts, suggestions, or critiques. It’s just a little network toy, but maybe someone out there finds it useful or fun.

Thanks for reading!
Show HN: I build one absurd web project every month
I’ve been building absurd, mostly useless web projects for fun - and I publish one every month at absurd.website.

These are deliberately non-functional, weird, sometimes funny, sometimes philosophical - and usually totally unnecessary.

Some examples:

- Sexy Math - solve math problems to reveal erotic images.
- Trip to Mars - a real-time simulation that takes 7 months to finish.
- Add Luck to Your e-Store - add a waving cat widget to boost your conversion via superstition.
- Microtasks for Meatbags - the future: AI gives prompts, humans execute.
- Invisible Lingerie - it’s sexy. And invisible.
- Artist Death Tracker - art prices spike when artists die. We track that.
- Open Celebrity - one open-source face, shared by all. Together we make her famous.

I just enjoy exploring what the web can be when it doesn’t try to be “useful”.

Would love to hear what you think - and absurd ideas are always welcome.
Show HN: Kan.bn – An open-source alternative to Trello
Show HN: Patio – Rent tools, learn DIY, reduce waste
Hey HN!

I built Patio to make DIY more accessible and sustainable.

It’s a community-powered platform where you can:

- Rent tools from people nearby
- Learn DIY through curated tutorials and guides
- Find or list surplus materials to save money and reduce waste
- Browse home improvement news in one place

It’s early, but live - would love your feedback on the experience, especially around search, learning, and marketplace usability.

Thanks!
— Julien
I'm starting a social club to solve the male loneliness epidemic
The other day I saw a post here on HN that featured a NYT article called "Where Have All My Deep Male Friendships Gone?" (https://news.ycombinator.com/item?id=44098369) and it definitely hit home. As a guy in my early 30s, it made me realize how I've let many of my most meaningful friendships fade. I have a good group of friends - and my wife - but it doesn't feel like college, when I hung out with a crew of 10+ people on a weekly basis.
So, I decided to do something about it. I’ve launched wave3.social - a platform to help guys build in-person social circles with actual depth. Think parlor.social or timeleft for guys: curated events and meaningful connections for men who don’t want their friendships to atrophy post-college.

It started as a Boston-based idea (where I live), but I built it with flexibility in mind so it could scale to other cities if there’s interest. It’s intentionally not on Meetup or Facebook - I wanted something that feels more intentional, with a better UX and less noise.

Right now, I'm in the “see if this resonates with anyone” stage. If this sounds interesting to you and you're in Boston or another city where this type of thing might be needed, drop a comment or shoot me an email. I'd love to hear any feedback on the site and ideas on how we can fix the male loneliness epidemic in the work-from-home era.
Show HN: Onlook – Open-source, visual-first Cursor for designers
Hey HN, I’m Kiet – one half of the two-person team building Onlook (https://beta.onlook.com/), an open-source [https://github.com/onlook-dev/onlook/] visual editor that lets you edit and create React apps live on an infinite canvas.

We launched Onlook [1][2] as a local-first Electron app almost a year ago. Since then, “prompt-to-build” tools have blown up, but none let you design and iterate visually. We fixed that by taking a visual-first, AI-powered approach where you can prompt, style, and directly manipulate elements in your app like in a design tool.

Two months ago, we decided to move away from Electron and rewrite everything for the browser. We wanted to remove the friction of downloading hundreds of MBs and setting up a development environment just to use the app. I wrote more here [3] about how we did it, but here are some learnings from the whole migration:

1. While most of the React UI code can be reused, mapping from Electron’s SPA experience to a Next.js app with routes is non-trivial on the state management side.

2. We were storing most of the data locally as large JSON objects. Moving that to a remote database required major refactoring into tables and more loading states. We didn’t have to think as hard about querying and load time before.

3. Iframes in the browser enforce many more restrictions than Electron’s webview. Working around this required us to inject code directly into the user’s project in order to do cross-iframe communication.

4. Keeping API keys secure is much easier in a web application than in an Electron app. In Electron, every key we leave on the client can be statically extracted, so we had to proxy any SDK that required an API key through a server call. In the web app, we can just keep the keys on the server.

5. Pushing a release bundle in Electron can take 1+ hours, and some users may never update. If we had a bug in the autoupdater itself, certain users could be “stranded” on an old version forever, and we’d have to email them to update. Though this is still better than mobile apps that go through an app store, it’s still very poor DX.

How does Onlook for web work?

We start by connecting to a remote “sandbox” [4]. The visual editing happens through an iframe. We map each HTML element in the iframe to its location in code. Then, when an edit is made, we simulate the change in the iframe and edit the code at the same time. This way, visual changes always feel instant.

While we’re still ironing out the experience, you can already:
- Select elements and prompt changes
- Update TailwindCSS classes via the styling UI
- Draw in new divs and elements
- Preview on multiple screen sizes
- Edit your code through an in-browser IDE

We want to make it trivial for anyone to create, style, and edit codebases. We’re still porting over functionality from the desktop app - layers, fonts, hosting, git, etc. Once that is done, we plan on adding support for back-end functionality such as auth, database, and API calls.

Special thank you to the 70+ contributors who have helped create the Onlook experience! I think there’s still a lot to be solved for in the design and dev workflow, and I think the tech is almost there.

You can clone the project and run it from our repo (linked to this post) or try it out at https://beta.onlook.com, where we’re letting people try it out for free.

I’d love to hear what you think and where we should take it next :)

[1] https://news.ycombinator.com/item?id=41390449

[2] https://news.ycombinator.com/item?id=40904862

[3] https://docs.onlook.com/docs/developer/electron-to-web-migration

[4] Currently, the sandbox is through CodeSandbox, but we plan to add support for connecting to a locally running server as well
Show HN: Typed-FFmpeg 3.0 – Typed Interface to FFmpeg and Visual Filter Editor
Hi HN,

I built typed-ffmpeg, a Python package that lets you build FFmpeg filter graphs with full type safety, autocomplete, and validation. It’s inspired by ffmpeg-python, but addresses some long-standing issues like lack of IDE support and fragile CLI strings.

What’s New in v3.0:
• Source filter support (e.g. color, testsrc, etc.)
• Input stream selection (e.g. [0:a], [1:v])
• A new interactive playground where you can:
  • Build filter graphs visually
  • Generate both FFmpeg CLI and typed Python code
  • Paste existing FFmpeg commands and reverse-parse them into graphs

Playground link: https://livingbio.github.io/typed-ffmpeg-playground/
(It’s open source and runs fully in-browser.)

The internal core also supports converting CLI → graph → typed Python code. This is useful for building educational tools, FFmpeg IDEs, or visual editors.

I’d love feedback, bug reports, or ideas for next steps. If you’ve ever struggled with FFmpeg’s CLI or tried to teach it, this might help.

Thanks!
— David (maintainer)
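For a sense of what the typed interface looks like, here is a rough sketch in the ffmpeg-python-style fluent API that typed-ffmpeg is inspired by; the exact module, method, and keyword names here are assumptions, so check the project’s docs for the real signatures:

    # Hedged sketch only - method and keyword names are assumptions modeled on
    # the ffmpeg-python style that typed-ffmpeg is inspired by; consult the
    # typed-ffmpeg documentation for the actual API.
    import ffmpeg  # assumed module name for the typed-ffmpeg package

    graph = (
        ffmpeg.input(filename="input.mp4")   # first input file
        .hflip()                             # a simple video filter node
        .output(filename="output.mp4")       # terminal output node
    )

    # Inspect the FFmpeg CLI the graph compiles to (assumed helper).
    print(graph.compile())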
Show HN: Porting Terraria and Celeste to WebAssembly
Show HN: I wrote a modern Command Line Handbook
TLDR: I wrote a handbook for the Linux command line. 120 pages in PDF. Updated for 2025. Pay what you want.

A few years back I wrote an ebook about the Linux command line. Instead of focusing on a specific shell, paraphrasing manual pages, or providing long repetitive explanations, the idea was to create a modern guide that would help readers understand the command line in a practical sense, cover the most common things people use the command line for, and do so without wasting the readers' time.

The book contains material on terminals, shells (compatible with both Bash and Zsh), configuration, command line programs for typical use cases, shell scripting, and many tips and tricks to make working on the command line more convenient. I still consider it "an introduction" and it is not necessarily a book for the HN crowd that lives in the terminal, but I believe it will easily cover 80% of the things most people want or need to do in the terminal.

I made a couple of updates to the book over the years and just finished a significant one for 2025. The book is not perfect. I still see a lot of room for improvement, but I think it is good enough and I truly want to share it with everyone. Hence, pay what you want.

EXAMPLE PAGES: https://drive.google.com/file/d/1PkUcLv83Ib6nKYF88n3OBqeeVffAs3Sp/view?usp=sharing

https://commandline.stribny.name/
Show HN: AutoThink – Boosts local LLM performance with adaptive reasoning
I built AutoThink, a technique that makes local LLMs reason more efficiently by adaptively allocating computational resources based on query complexity.

The core idea: instead of giving every query the same "thinking time," classify queries as HIGH or LOW complexity and allocate thinking tokens accordingly. Complex reasoning gets 70-90% of the tokens, simple queries get 20-40%.

I also implemented steering vectors derived from Pivotal Token Search (originally from Microsoft's Phi-4 paper) that guide the model's reasoning patterns during generation. These vectors encourage behaviors like numerical accuracy, self-correction, and thorough exploration.

Results on DeepSeek-R1-Distill-Qwen-1.5B:

- GPQA-Diamond: 31.06% vs 21.72% baseline (+43% relative improvement)
- MMLU-Pro: 26.38% vs 25.58% baseline
- Uses fewer tokens than baseline approaches

Works with any local reasoning model - DeepSeek, Qwen, custom fine-tuned models. No API dependencies.

The technique builds on two things I developed: an adaptive classification framework that can learn new complexity categories without retraining, and an open-source implementation of Pivotal Token Search.

Technical paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5253327

Code and examples: https://github.com/codelion/optillm/tree/main/optillm/autothink

PTS implementation: https://github.com/codelion/pts

I'm curious about your thoughts on adaptive resource allocation for AI reasoning. Have you tried similar approaches with your local models?
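To make the budget split concrete, here is a small illustrative Python sketch of the allocation rule described above - not the actual AutoThink/optillm code; the token cap, labels, and midpoint choice are stand-ins for whatever the real implementation uses:

    # Illustrative sketch, not the AutoThink implementation: pick a thinking-token
    # budget from a query's complexity label, following the bands in the post
    # (HIGH -> 70-90% of tokens, LOW -> 20-40%).
    def thinking_budget(complexity: str, max_thinking_tokens: int = 4096) -> int:
        """Return how many thinking tokens to allow for this query."""
        bands = {
            "HIGH": (0.70, 0.90),  # complex reasoning gets most of the budget
            "LOW": (0.20, 0.40),   # simple queries get a small slice
        }
        lo, hi = bands[complexity]
        # Take the middle of the band; a real system would tune this.
        return int(max_thinking_tokens * (lo + hi) / 2)

    # A learned classifier (per the post) would produce the label; here it's fixed.
    print(thinking_budget("HIGH"))  # -> 3276
    print(thinking_budget("LOW"))   # -> 1228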