The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: Haystack Code Reviewer – Perform code reviews on a canvas

Hi HN!

We're building Haystack Code Reviewer, a tool that lays out code diffs for a GitHub pull request on an interactive canvas. Instead of scrolling through diffs line by line, you can view all changes in a more connected, visual format, similar to viewing a call graph. We hope this makes it easier and less cognitively taxing to understand how different changes across files work together.

For a quick overview, check out our short demo video: https://www.youtube.com/watch?v=QeOz70x0WPE. If you would like to give it a spin, head over to https://haystackeditor.dev, click the “Review pull request” button in the top toolbar, and load any pull request via URL or pick one from the dropdown.

We built Haystack Code Reviewer because we found pull requests difficult to review in a purely textual format, especially when hopping between multiple files or trying to break down complex changes. Often, pull request authors have to structure their commits specifically so that reviews are easier to tackle, which is a time-consuming and error-prone process. Our goal is to make any pull request easy to understand at a glance, and to reduce the effort both reviewers and authors put into crafting a good code review.

Haystack Code Reviewer works on private repositories! Authentication ensures that no one can open the server for your pull request without having access to that pull request on GitHub. For additional security, we plan to support self-hosting soon; please contact us if you're interested in this.

Alternatively, a completely local option is to download desktop Haystack and navigate to your pull request from there. This is great for trying out the feature without exposing any data to the cloud!

In the near future, we plan to:

1. Introduce step-by-step navigation to guide reviewers through each part of the changeset
2. Allow for self-hosting

We'd love to hear your thoughts, suggestions, and any feedback on our approach or potential features!

Show HN: Check Supply – Send Checks in the Mail

When I lived in SF, my landlord required rent payments via check. For a while I just used my bank's bill pay. If you remember Simple, they eventually killed their bill-pay feature, and later shut down altogether. I didn't want to buy a checkbook, stamps, and envelopes just for this one bill.

That's why I built Check Supply with a friend, to make sending a check as simple as sending cash on Venmo. With our app, you can fill out your check details and have your payment processing within minutes of downloading.

Check writing is becoming a rarity, and many first-time senders find the process daunting. We hope Check Supply is a quick and convenient option for those moments when you're puzzled why someone is asking you to pay by check.

Show HN: Klarity – Open-source tool to analyze uncertainty/entropy in LLM output

We've open-sourced Klarity, a tool for analyzing uncertainty and decision-making in LLM token generation. It provides structured insights into how models choose tokens and where they show uncertainty.

What Klarity does:

- Real-time analysis of model uncertainty during generation
- Dual analysis combining log probabilities and semantic understanding
- Structured JSON output with actionable insights
- Fully self-hostable with customizable analysis models

The tool analyzes each step of text generation and returns structured JSON:

- uncertainty_points: array of {step, entropy, options[], type}
- high_confidence: array of {step, probability, token, context}
- risk_areas: array of {type, steps[], motivation}
- suggestions: array of {issue, improvement}

It currently supports Hugging Face Transformers (more frameworks coming). We tested extensively with Qwen2.5 (0.5B-7B) models, but it should work with most HF LLMs.

Installation is simple: `pip install git+https://github.com/klara-research/klarity.git`

We are building open-source interpretability/explainability tools to visualize and analyze attention maps, saliency maps, etc., and we want to understand your pain points with LLM behaviors. What insights would actually help you debug these black-box systems?

Links:

- Repo: https://github.com/klara-research/klarity
- Our website: https://klaralabs.com
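
To make the entropy idea concrete, here is a minimal sketch of per-step uncertainty measurement using plain Hugging Face Transformers; it does not use Klarity's actual API, and the model name and prompt are just illustrative examples:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # one of the model families the authors tested
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=10,
    return_dict_in_generate=True,
    output_scores=True,  # keep per-step logits so we can measure uncertainty
)

# Shannon entropy of the next-token distribution at each generation step;
# high-entropy steps correspond to the "uncertainty_points" described above.
prompt_len = inputs["input_ids"].shape[1]
for step, scores in enumerate(outputs.scores):
    probs = torch.softmax(scores[0], dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum().item()
    token = tokenizer.decode(outputs.sequences[0, prompt_len + step])
    print(f"step={step:2d} token={token!r} entropy={entropy:.3f}")
```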

Show HN: I convert videos to printed flipbooks for a living

I built this product back in 2018 as a small side project: a tool that turns short videos into physical flipbooks. After launching it, I didn't touch it for years; life and work took over, and it sat idle. But it kept getting a few orders every month, which made it impossible to forget. So in December 2024, I decided to rebrand and revive it.

The initial version relied on various local printing offices. I kept switching from one to another, but the results were never quite right: either the quality wasn't good enough, or the turnaround times were too long. Eventually, my wife and I bought all the necessary machines and moved production in-house.

Now it's a family business. My wife and I handle everything: printing, binding, cutting, addressing, and shipping each flipbook. On the technical side, it's powered by Next.js, with FFmpeg extracting frames and handling overlays, and ImageMagick adding trim marks and creating the final PDFs.

After many years of working in IT, working on something tangible feels refreshing. It's satisfying to create something that brings people joy, and it isn't hard to sell (unlike dev tools, for example, haha). There are still challenges: we're experimenting with different cover papers, improving production, and testing new ideas without making things confusing. But that's part of what keeps us moving forward.
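
For the curious, here is a rough sketch of what such a frame-extraction and PDF-assembly step could look like. This is my illustration rather than the author's code: the filenames and frame rate are placeholder assumptions, and it assumes ffmpeg and ImageMagick 7's `magick` are installed:

```python
import glob
import os
import subprocess

os.makedirs("frames", exist_ok=True)

# Extract frames from the uploaded clip (here, 12 flipbook pages
# per second of video; a real pipeline would tune this per order).
subprocess.run(
    ["ffmpeg", "-i", "clip.mp4", "-vf", "fps=12", "frames/page_%04d.png"],
    check=True,
)

# Assemble the frames into a print-ready PDF. The production pipeline
# described above also adds trim marks and overlays at this stage.
frames = sorted(glob.glob("frames/page_*.png"))
subprocess.run(["magick", *frames, "flipbook.pdf"], check=True)
```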

Show HN: Groundhog AI Spring API

For anyone building weather-related AI apps, I am releasing an exciting iteration on last year's model.

My Groundhog API is trained on 130 years of data and makes use of 82 separate data sources. Similar to DeepSeek, it is completely open source and free to use.

The primary use case is to make inferences about whether spring will come early or not, using a Mixture of Experts (MoE) approach, but surely others can be found if you are creative.

Other use cases:

- All predicting groundhogs
- Where they all live
- Whether they are “real” groundhogs or imposters

Excited to see what people do with it!

Show HN: Modest – musical harmony library for Lua

This is a project I've been building in my spare time over the past few months. It's a library that provides methods for working with musical harmony: intervals, notes, and chords. For example, it can parse almost any chord symbol (Fmaj7, CminMaj9, etc.) and turn it into notes, or it can identify a chord from a given set of notes.

I started this project with the idea of using a formal grammar to parse chord symbols, instead of the hand-written parser that is the common approach among similar libraries. Lua caught my attention because of LPeg, a Parsing Expression Grammar library that is both fast and easy to use. An additional motivation for using Lua was the lack of comparable libraries for it, even though the language is commonly used in audio programming.

However, despite being a Lua library, the project itself is written in Fennel, a "lispy" language that transpiles to Lua. Fennel has features that make writing code for the Lua platform much more pleasant: a concise syntax, macros, and destructuring (a feature Lua sorely lacks!).

In the process, I definitely learned a lot about music theory, although my new knowledge is quite one-sided. From working on this library, I know a thing or two about the types and structure of chords, but I learned almost nothing about their composition and transformation. Perhaps those will be the directions I explore next in the project.
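
Modest itself is written in Fennel/Lua, so as a language-neutral illustration of the underlying idea (chord symbol to root and quality, quality to intervals, intervals to notes), here is a tiny Python sketch; the quality table is deliberately minimal and none of this is Modest's actual API:

```python
import re

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
QUALITIES = {          # semitone offsets from the root
    "maj7": [0, 4, 7, 11],
    "min7": [0, 3, 7, 10],
    "7":    [0, 4, 7, 10],
    "min":  [0, 3, 7],
    "":     [0, 4, 7],  # plain major triad
}

def chord_to_notes(symbol: str) -> list[str]:
    """Parse a chord symbol like 'Fmaj7' and return its note names."""
    m = re.fullmatch(r"([A-G][#b]?)(.*)", symbol)
    if not m or m.group(2) not in QUALITIES:
        raise ValueError(f"unsupported chord symbol: {symbol}")
    root, quality = m.groups()
    # Normalize flats to the sharp spellings used in NOTES (sketch only;
    # a real library would preserve enharmonic spelling).
    flats = {"Db": "C#", "Eb": "D#", "Gb": "F#", "Ab": "G#", "Bb": "A#"}
    root_index = NOTES.index(flats.get(root, root))
    return [NOTES[(root_index + i) % 12] for i in QUALITIES[quality]]

print(chord_to_notes("Fmaj7"))  # ['F', 'A', 'C', 'E']
```

A grammar-based parser (as with LPeg) scales this idea to the full zoo of chord symbols far more gracefully than regexes or lookup tables.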

Show HN: Simple-to-build MCP servers that easily connect with custom LLM calls

Hi!

After learning about MCP, I'm really excited about the future of provider-agnostic, reusable tooling.

Unfortunately, I've found that while it's easy to implement an MCP server for use with tools that support it (such as Claude Desktop), it's not as easy to implement your own support (such as integrating an MCP server into your own LLM application).

We implemented a thin MCP wrapper that integrates with Mirascope calls, so you can hook up an MCP server and client to any supported LLM provider with very little code.

Excited to see what people build with this!
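
The post does not show the wrapper's API, so for reference, here is roughly what the client side looks like with the generic MCP Python SDK (`pip install mcp`) rather than Mirascope's wrapper; the server command and script name are placeholders:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch an MCP server as a subprocess over stdio; swap in
    # whatever server you actually want to connect to.
    params = StdioServerParameters(command="python", args=["my_mcp_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # List the tools the server exposes; a wrapper like the one
            # described above would hand these to the LLM call.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```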

Show HN: Lume – open-source lightweight CLI for macOS and Linux VMs on Apple Silicon

We just open-sourced Lume, a tool we built after hitting walls with existing virtualization options on Apple Silicon. No GUI, no complex stacks, just a single binary that lets you spin up macOS or Linux VMs via CLI or API.

Why we built Lume:

- Run native macOS VMs in one command, using Apple's Virtualization.framework: `lume run macos-sequoia-vanilla:latest`
- Prebuilt images on https://ghcr.io/trycua (macOS, Ubuntu on ARM)
- API server to manage VMs programmatically: `POST /lume/vms`
- A Python SDK at github.com/trycua/pylume

How to install:

brew tap trycua/lume
brew install lume

You can also download the `lume.pkg.tar.gz` archive from the latest release (https://github.com/trycua/lume/releases), extract it, and install the package manually.

Local API server: `lume` exposes a local HTTP API server that listens on `http://localhost:3000/lume`, enabling automated management of VMs:

lume serve

For detailed API documentation, please refer to the API Reference (https://github.com/trycua/lume/blob/main/docs/API-Reference.md).

HN devs, we'd love raw feedback on the API design and whether this solves your Apple Silicon VM pain points. What would make you replace UTM/Multipass/Docker Desktop with this?

Repo: https://github.com/trycua/lume
Python SDK: github.com/trycua/pylume
Discord for direct feedback: https://discord.gg/8p56E2KJ
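
As a sketch of driving that local API from code: the `POST /lume/vms` endpoint and base URL come from the post, but the JSON fields below are guesses, so check the API Reference for the real schema:

```python
import requests

BASE = "http://localhost:3000/lume"  # started with `lume serve`

# Create a VM. Field names here (name, os, cpu, memory) are
# hypothetical placeholders, not the documented request body.
resp = requests.post(
    f"{BASE}/vms",
    json={"name": "demo", "os": "macOS", "cpu": 4, "memory": "8GB"},
)
resp.raise_for_status()
print(resp.json())
```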

Show HN: Perforator – cluster-wide profiling tool for large data centers

Hey HN! We are happy to share Perforator, our internal cluster-wide profiler with great support for native languages and a built-in AutoFDO pipeline to simplify sPGO builds. Perforator allows you to profile most binaries without having to recompile or adjust the build process. We use it at Yandex to profile every pod inside a large cluster at a modest rate (99 Hz), collecting petabytes of profiles every day.

There's a blog post about it at https://medium.com/yandex/yandexs-high-performance-profiler-is-now-open-source-95e291df9d18.

Inspired by Google-Wide Profiling, we started continuous profiling years ago with simple tools like poormansprofiler.org. With the rise of eBPF, we came up with a simple and elegant solution that provides detailed profiles without noticeable overhead. Pretty wild when you can see the guts of your production binaries in a flamegraph without them even noticing.

Some technical details:

- Our main contribution is infrastructure for continuous PGO using AutoFDO. Google and Meta have done tremendous work on building PGO infrastructure, and we made the last missing piece of the puzzle to make this work well at scale.
- Native binaries are profiled through eh_frame analysis; interpreted/JIT-compiled languages are profiled through perf-pid.map or hardcoded structure offsets.
- We render profiles in multiple ways; the most common one is a fast implementation of flame graphs, rendering 1M frames in 100ms.
- We provide Helm charts to easily deploy Perforator on your k8s cluster.
- You can use Perforator in standalone mode as a replacement for perf record.

I'd love to answer your questions about the tool!
