The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: Click counter using iPhone volume buttons

Show HN: Smelt — an open source test runner for chip developers

Hey everyone, James from Silogy here.

We're excited to open-source our test runner, Smelt. Smelt is a simple and extensible test runner optimized for chip development workflows. Smelt enables developers to:

* Programmatically define numerous test variants
* Execute these tests in parallel
* Easily analyze test results

As chip designs get more complex, the state space that needs to be explored in design verification is exploding. In chip development, it's common to run thousands of tests, each with multiple hyperparameters that result in even more variation. Smelt offers a straightforward approach to generating test variants and extracting valuable insights from your test runs. Smelt integrates seamlessly with most popular simulators and other chip design tools.

Key features:

* Procedural test generation: programmatically generate tests with Python
* Automatic rerun on failure: describe the computation required to re-run failing tests
* Analysis APIs: all of the data needed to track and reproduce tests
* Extensible: define your tests with a simple Python interface

Yves (https://github.com/silogy-io/yves) is a suite of directed performance tests that we brought up with Smelt – check it out if you'd like to see Smelt in action.

Repo: https://github.com/silogy-io/smelt

We built Smelt to streamline the testing process for chip developers. We're eager to hear your feedback and see how it performs in your projects!
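To make the "procedurally define numerous test variants" idea concrete, here is a minimal Python sketch of expanding a hyperparameter grid into named test variants. The `generate_variants` function and the parameter names (`depth`, `seed`) are invented for illustration; Smelt's actual interface is documented in its repo.

```python
from itertools import product

def generate_variants(base_name, param_grid):
    """Yield (name, params) pairs for every combination in the grid.

    Hypothetical sketch of procedural test generation: each combination
    of hyperparameters becomes one named test variant.
    """
    keys = sorted(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        suffix = "-".join(f"{k}={v}" for k, v in params.items())
        yield f"{base_name}[{suffix}]", params

# 2 depths x 3 seeds -> 6 test variants, each runnable in parallel.
variants = list(generate_variants(
    "fifo_stress",
    {"depth": [8, 32], "seed": [1, 2, 3]},
))
print(len(variants))   # 6
print(variants[0][0])  # fifo_stress[depth=8-seed=1]
```

This is the general pattern behind grid-style test generation: the grid lives in ordinary Python data, so variants can be filtered, sampled, or extended programmatically before being handed to a runner.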

Show HN: Open-source CLI coding framework using Claude

Show HN: Dropbase AI – A Prompt-Based Python Web App Builder

Hey HN,

Dropbase is an AI-based Python web app builder.

To build this, we had to make significant changes from our original launch: https://news.ycombinator.com/item?id=38534920. Now, any web app can be entirely defined using just two files: `properties.json` for the UI and `main.py` for the backend logic, which makes it significantly easier for GPT to work with.

In the latest version, developers can use natural language prompts to build apps. But instead of generating a black-box app or promising an AI software engineer, we just generate simple Python code that is easily interpreted by our internal web framework. This allows developers to:

(1) See and understand the generated app code: we regenerate the `main.py` file and highlight changes in a diff viewer, allowing developers to understand exactly what was changed.

(2) Edit the app code: developers can correct any errors or occasional hallucinations, or edit code to handle specific use cases. Once they like the code, they can commit changes and immediately preview the app.

Incidentally, if you've tried Anthropic's Artifacts to create "apps", our experience will feel familiar. Dropbase AI is like Claude Artifacts, but for fully functional apps: you can connect to your database, make external API calls, and deploy to servers.

Our goal is to create a universal, prompt-based app builder that's highly customizable. Code should always be accessible and developers should be in control. We believe most apps will be built or prototyped this way, and we're taking the first steps toward that goal.

A fun fact is that model improvements were critical here: we could not achieve the consistent results we needed with any LLM prior to GPT-4o and Claude 3.5 Sonnet. In the future, we'll allow users to modify the code to call their local GPT/LLM deployment via Ollama, rather than relying on OpenAI or Anthropic calls.

If you're building admin panels, database editors, back-office tools, billing/customer dashboards, or internal dev tools that can fetch data and trigger actions across any database, internal/external service, or API, please give Dropbase a shot!

We're excited to get your thoughts and questions!

Demos:

- Here's a demo video: https://youtu.be/RaxHOjhy3hY
- We also introduced Charts (beta) in this version based on suggestions from cjohnson318 in our previous HN post: https://youtu.be/YWtdD7THTxE

Useful links:

- Repo: https://github.com/DropbaseHQ/dropbase. To set it up locally, follow the quickstart guide in our docs
- Docs: https://docs.dropbase.io
- Homepage: https://dropbase.io
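As a rough illustration of the two-file split (UI spec plus Python handlers), here is a toy sketch. The JSON schema, block types, and handler names below are invented for this example; the real `properties.json`/`main.py` contract is defined in Dropbase's docs.

```python
import json

# Invented mini-schema: a UI spec wires events to named Python handlers.
properties = json.loads("""
{
  "blocks": [
    {"type": "table",  "name": "users",    "on_load": "fetch_users"},
    {"type": "button", "label": "Refresh", "on_click": "fetch_users"}
  ]
}
""")

def fetch_users():
    # Stand-in for a real database query in main.py.
    return [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]

HANDLERS = {"fetch_users": fetch_users}

# A minimal "framework" pass: resolve each block's event to its handler.
for block in properties["blocks"]:
    for event in ("on_load", "on_click"):
        if event in block:
            rows = HANDLERS[block[event]]()
            print(block.get("name", block.get("label")), len(rows))
```

The appeal of this shape for LLM-driven generation is that both files are small, declarative, and diffable, so regenerating `main.py` and reviewing the change in a diff viewer stays tractable.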

Show HN: Daminik – An Open source digital asset manager

Hey guys! We just published our little open source project on GitHub and would love to get some feedback. It's a very early release (alpha), but we thought it would be better to publish earlier so that we could get direct feedback.

Over the last 10 years we have built small to large web projects of all kinds. Almost every project involved managing images and files in one way or another. How do we manage our images, and where? What happens if an image or logo changes - does it get updated across all sites? Which host/CDN to choose? Is everything we do GDPR-compliant? With Daminik, we are trying to build a simple solution to solve these problems. One DAM with an integrated CDN. That's it.

You can check it out at: https://daminik.com/?ref=hackernews (hackernews being the invitation code)

Repo: https://github.com/daminikhq/daminik

We would love to get your feedback on our alpha.

Show HN: Mandala – Automatically save, query and version Python computations

`mandala` is a framework I wrote to automate tracking ML experiments for my research. It differs from other experiment tracking tools by making persistence, query, and versioning logic a generic part of the programming language itself, as opposed to an external logging tool you must learn and adapt to. The goal is to be able to write expressive computational code without thinking about persistence (like in an interactive session), and still have the full benefits of versioned, queryable storage afterwards.

Surprisingly, it turns out that this vision can pretty much be achieved with two generic tools:

1. A memoization+versioning decorator, `@op`, which tracks inputs, outputs, code, and runtime dependencies (other functions called, or global variables accessed) every time a function is called. Essentially, this makes function calls replace logging: if you want something saved, you write a function that returns it. Using (a lot of) hashing, `@op` ensures that the same version of the function is never executed twice on the same inputs.

Importantly, the decorator encourages/enforces composition. Before a call, `@op` functions wrap their inputs in special objects, `Ref`s, and return `Ref`s in turn. Furthermore, data structures can be made transparent to `@op`s, so that an `@op` can be called on a list of outputs of other `@op`s, or on an element of the output of another `@op`. This creates an expressive "web" of `@op` calls over time.

2. A data structure, `ComputationFrame`, which can automatically organize any such web of `@op` calls into a high-level view by grouping calls with a similar role into "operations", and their inputs/outputs into "variables". It can detect "imperative" patterns - like feedback loops, branching/merging, and grouping multiple results in a single object - and surface them in the graph.

`ComputationFrame`s are a "synthesis" of computation graphs and relational databases, and can be automatically "exported" as dataframes, where columns are variables and operations in the graph, and rows contain values and calls for (possibly partial) executions of the graph. The upshot is that you can query the relationships between any variables in a project in one line, even in the presence of very heterogeneous patterns in the graph.

I'm very excited about this project - which is still in an alpha version being actively developed - and especially about the `ComputationFrame` data structure. I'd love to hear the feedback of the HN community.

Colab quickstart: https://colab.research.google.com/github/amakelov/mandala/blob/master/docs_source/tutorials/01_hello.ipynb

Blog post introducing `ComputationFrame`s (can be opened in Colab too): https://amakelov.github.io/mandala/blog/01_cf/

Docs: https://amakelov.github.io/mandala/
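The core idea behind an `@op`-style decorator - memoization keyed by a hash of the function's version plus its inputs - can be sketched generically in a few lines. This toy uses the function's bytecode as the version key and is not mandala's actual implementation (which also tracks runtime dependencies and wraps values in `Ref`s):

```python
import functools
import hashlib
import pickle

_store = {}  # stand-in for persistent, versioned storage

def op(func):
    version = func.__code__.co_code  # version key: compiled bytecode
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        key = hashlib.sha256(
            pickle.dumps((version, args, tuple(sorted(kwargs.items()))))
        ).hexdigest()
        if key not in _store:            # compute only on a cache miss
            _store[key] = func(*args, **kwargs)
        return _store[key]
    return wrapper

calls = []  # track actual executions to show memoization working

@op
def square(x):
    calls.append(x)
    return x * x

square(3); square(3); square(4)
print(len(calls))  # 2 -- the repeated call was served from storage
```

If the function's code changes, its bytecode (and therefore every cache key) changes too, so stale results from an old version are never reused - the same property the post describes for `@op`, achieved there with much more thorough dependency hashing.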

Show HN: Dut – a fast Linux disk usage calculator

"dut" is a disk usage calculator that I wrote a couple of months ago in C. It is multi-threaded, making it one of the fastest such programs. It beats normal "du" in all cases, and beats all other similar programs when Linux's caches are warm (so, not on the first run). I wrote "dut" as a challenge to beat similar programs that I used a lot, namely pdu[1] and dust[2].

"dut" displays a tree of the biggest things under your current directory, and it also shows the size of hard links under each directory. The hard-link tallying was inspired by ncdu[3], but I don't like how unintuitive the readout is. Anyone have ideas for a better format?

There are installation instructions in the README. dut is a single source file, so you only need to download it, copy-paste the compiler command, and then copy the binary somewhere on your PATH, like /usr/local/bin.

I went through a few different approaches writing it, and you can see most of them in the git history. At the core of the program is a data structure that holds the directories that still need to be traversed, and binary heaps to hold statted files and directories. I had started off using C++ std::queues with mutexes, but the performance was awful, so I took it as a learning opportunity and wrote all the data structures from scratch. That was the hardest part of the program to get right.

These are the other techniques I used to improve performance:

* Using fstatat(2) with the parent directory's fd instead of lstat(2) with an absolute path (10-15% performance increase)
* Using statx(2) instead of fstatat (perf showed fstatat running statx code in the kernel) (10% performance increase)
* Using getdents(2) to get directory contents instead of opendir/readdir/closedir (also around 10%)
* Limiting inter-thread communication. I originally had fs-traversal results accumulated in a shared binary heap, but giving each thread its own binary heap and then merging them all at the end was faster.

I couldn't find any information online about fstatat and statx being significantly faster than plain old stat, so maybe this info will help someone in the future.

[1]: https://github.com/KSXGitHub/parallel-disk-usage
[2]: https://github.com/bootandy/dust
[3]: https://dev.yorhel.nl/doc/ncdu2, see "Shared Links"
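The fstatat-style trick - statting relative to an already-open directory fd instead of re-resolving an absolute path - has a direct analogue in Python's standard library, which makes for a compact illustration of the traversal described above. This is a single-threaded sketch, not dut itself, and it tallies apparent file sizes rather than allocated blocks:

```python
import os

def tree_size(path):
    """Sum apparent sizes of regular files under `path`.

    os.scandir() batches directory entries (getdents(2) under the hood),
    and os.stat(..., dir_fd=...) works like fstatat(2): the kernel
    resolves only the entry name against an open directory fd, skipping
    a full path walk per file.
    """
    total = 0
    dir_fd = os.open(path, os.O_RDONLY | os.O_DIRECTORY)
    try:
        with os.scandir(path) as it:
            for entry in it:
                if entry.is_dir(follow_symlinks=False):
                    total += tree_size(entry.path)
                else:
                    st = os.stat(entry.name, dir_fd=dir_fd,
                                 follow_symlinks=False)
                    total += st.st_size
    finally:
        os.close(dir_fd)
    return total

print(tree_size("."))
```

Python has no statx(2) wrapper in the stdlib, so the final 10% described in the post stays C-only, but the fstatat and getdents savings carry over.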

Show HN: Personal website inspired by Apple notes

i stan apple notes, so i built a new personal website to look, feel, & work just like it. it's fast, fully interactive, & can be navigated entirely via keyboard shortcuts. it was a lot of fun to build. i wrote more about the implementation in the linked page.<p>check it out!
