The best "Show HN" stories from Hacker News from the past day


Latest posts:

Show HN: Dropbase AI – A Prompt-Based Python Web App Builder

Hey HN,<p>Dropbase is an AI-based Python web app builder.<p>To build this, we had to make significant changes from our original launch: <a href="https://news.ycombinator.com/item?id=38534920">https://news.ycombinator.com/item?id=38534920</a>. Now, any web app can be entirely defined using just two files: `properties.json` for the UI and `main.py` for the backend logic, which makes it significantly easier for GPT to work with.<p>In the latest version, developers can use natural language prompts to build apps. But instead of generating a black-box app or promising an AI software engineer, we just generate simple Python code that is easily interpreted by our internal web framework. This allows developers to:<p>(1) See and understand the generated app code. We regenerate the `main.py` file and highlight changes in a diff viewer, allowing developers to see exactly what was changed.<p>(2) Edit the app code: developers can correct any errors or occasional hallucinations, or edit code to handle specific use cases. Once they like the code, they can commit changes and immediately preview the app.<p>Incidentally, if you’ve tried Anthropic’s Artifacts to create “apps”, our experience will feel familiar. Dropbase AI is like Claude Artifacts, but for fully functional apps: you can connect to your database, make external API calls, and deploy to servers.<p>Our goal is to create a universal, prompt-based app builder that’s highly customizable. Code should always be accessible, and developers should be in control. We believe most apps will be built or prototyped this way, and we're taking the first steps towards that goal.<p>A fun fact: model improvements were critical here. We could not achieve the consistent results we needed with any LLM prior to GPT-4o and Claude 3.5 Sonnet.
In the future, we’ll allow users to modify the code to call their local GPT/LLM deployment via Ollama, rather than relying on OpenAI or Anthropic calls.<p>If you’re building admin panels, database editors, back-office tools, billing/customer dashboards, or internal dev tools that can fetch data and trigger actions across any database, internal/external service, or API, please give Dropbase a shot!<p>We're excited to get your thoughts and questions!<p>Demos:<p>- Here’s a demo video: <a href="https://youtu.be/RaxHOjhy3hY" rel="nofollow">https://youtu.be/RaxHOjhy3hY</a><p>- We also introduced Charts (beta) in this version, based on suggestions from cjohnson318 in our previous HN post: <a href="https://youtu.be/YWtdD7THTxE" rel="nofollow">https://youtu.be/YWtdD7THTxE</a><p>Useful links:<p>- Repo: <a href="https://github.com/DropbaseHQ/dropbase">https://github.com/DropbaseHQ/dropbase</a>. To set up locally, follow the quickstart guide in our docs<p>- Docs: <a href="https://docs.dropbase.io">https://docs.dropbase.io</a><p>- Homepage: <a href="https://dropbase.io">https://dropbase.io</a>
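For readers unfamiliar with the two-file model, here is a minimal, hypothetical sketch of what the `properties.json` plus `main.py` pairing could look like. The schema, keys, and function names below are invented for illustration; Dropbase's actual contract is defined in its docs.

```python
# main.py -- hypothetical backend for a Dropbase-style two-file app.
# A framework (not shown) would read the UI definition and wire the
# button's on_click to the Python handler by name.
import json

# Stand-in for properties.json; the real schema is Dropbase's, not this.
PROPERTIES = json.loads("""
{
  "ui": {
    "table": {"name": "customers", "columns": ["id", "email"]},
    "button": {"label": "Refresh", "on_click": "refresh_customers"}
  }
}
""")

def refresh_customers():
    """Illustrative handler: a real app would query a database here."""
    return [
        {"id": 1, "email": "a@example.com"},
        {"id": 2, "email": "b@example.com"},
    ]

if __name__ == "__main__":
    # Render the table's columns for each row the handler returns.
    cols = PROPERTIES["ui"]["table"]["columns"]
    rows = refresh_customers()
    print([[row[c] for c in cols] for row in rows])
```

The appeal of the model is that both files stay small enough for an LLM to regenerate wholesale and for a human to diff by eye.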

Show HN: Daminik – An Open source digital asset manager

Hey everyone! We just published our little open source project on GitHub and would love to get some feedback. It's a very early release (alpha), but we thought it would be better to release early so that we could get direct feedback.<p>Over the last 10 years we have built small to large web projects of all kinds. Almost every project involved managing images and files in one way or another. How do we manage our images, and where? What happens if an image or logo changes: does it get updated across all sites? Which host/CDN should we choose? Is everything we do GDPR-compliant? With Daminik, we are trying to build a simple solution to solve these problems. One DAM with an integrated CDN. That’s it.<p>You can check it out at: <a href="https://daminik.com/?ref=hackernews" rel="nofollow">https://daminik.com/?ref=hackernews</a> (hackernews being the invitation code)<p>Repo: <a href="https://github.com/daminikhq/daminik">https://github.com/daminikhq/daminik</a><p>We would love to get your feedback on our alpha.

Show HN: Mandala – Automatically save, query and version Python computations

`mandala` is a framework I wrote to automate tracking ML experiments for my research. It differs from other experiment tracking tools by making persistence, query and versioning logic a generic part of the programming language itself, as opposed to an external logging tool you must learn and adapt to. The goal is to be able to write expressive computational code without thinking about persistence (like in an interactive session), and still have the full benefits of versioned, queryable storage afterwards.<p>Surprisingly, it turns out that this vision can pretty much be achieved with two generic tools:<p>1. a memoization+versioning decorator, `@op`, which tracks inputs, outputs, code and runtime dependencies (other functions called, or global variables accessed) every time a function is called. Essentially, this makes function calls replace logging: if you want something saved, you write a function that returns it. Using (a lot of) hashing, `@op` ensures that the same version of the function is never executed twice on the same inputs.<p>Importantly, the decorator encourages/enforces composition. Before a call, `@op` functions wrap their inputs in special objects, `Ref`s, and return `Ref`s in turn. Furthermore, data structures can be made transparent to `@op`s, so that an `@op` can be called on a list of outputs of other `@op`s, or on an element of the output of another `@op`. This creates an expressive "web" of `@op` calls over time.<p>2. a data structure, `ComputationFrame`, which can automatically organize any such web of `@op` calls into a high-level view, by grouping calls with a similar role into "operations", and their inputs/outputs into "variables". 
It can detect "imperative" patterns - like feedback loops, branching/merging, and grouping multiple results in a single object - and surface them in the graph.<p>`ComputationFrame`s are a "synthesis" of computation graphs and relational databases, and can be automatically "exported" as dataframes, where columns are variables and operations in the graph, and rows contain values and calls for (possibly partial) executions of the graph. The upshot is that you can query the relationships between any variables in a project in one line, even in the presence of very heterogeneous patterns in the graph.<p>I'm very excited about this project - which is still in an alpha version being actively developed - and especially about the `ComputationFrame` data structure. I'd love to hear the feedback of the HN community.<p>Colab quickstart: <a href="https://colab.research.google.com/github/amakelov/mandala/blob/master/docs_source/tutorials/01_hello.ipynb" rel="nofollow">https://colab.research.google.com/github/amakelov/mandala/bl...</a><p>Blog post introducing `ComputationFrame`s (can be opened in Colab too): <a href="https://amakelov.github.io/mandala/blog/01_cf/" rel="nofollow">https://amakelov.github.io/mandala/blog/01_cf/</a><p>Docs: <a href="https://amakelov.github.io/mandala/" rel="nofollow">https://amakelov.github.io/mandala/</a>
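The memoization idea behind `@op` can be sketched with a toy decorator. This is not mandala's implementation (which also handles `Ref` wrapping, versioning, and persistent storage); it only shows the core mechanism of hashing code plus inputs so the same work is never done twice:

```python
# Toy sketch of code+input memoization, loosely in the spirit of @op.
import functools
import hashlib
import pickle

_store = {}  # maps (code hash, input hash) -> saved result


def op(f):
    # Hash the compiled bytecode as a cheap stand-in for "version of f".
    code_key = hashlib.sha256(f.__code__.co_code).hexdigest()

    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        input_key = hashlib.sha256(
            pickle.dumps((args, sorted(kwargs.items())))
        ).hexdigest()
        key = (code_key, input_key)
        if key not in _store:        # same code + same inputs => reuse
            _store[key] = f(*args, **kwargs)
        return _store[key]

    return wrapper


calls = []  # records real executions, to show memoization working


@op
def add(x, y):
    calls.append((x, y))
    return x + y


add(1, 2)
add(1, 2)  # served from _store; the function body does not run again
```

The real system layers a lot on top of this, notably tracking runtime dependencies so that editing a helper function invalidates the right cached calls.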

Show HN: Mandala – Automatically save, query and version Python computations

`mandala` is a framework I wrote to automate tracking ML experiments for my research. It differs from other experiment tracking tools by making persistence, query and versioning logic a generic part of the programming language itself, as opposed to an external logging tool you must learn and adapt to. The goal is to be able to write expressive computational code without thinking about persistence (like in an interactive session), and still have the full benefits of a versioned, queriable storage afterwards.<p>Surprisingly, it turns out that this vision can pretty much be achieved with two generic tools:<p>1. a memoization+versioning decorator, `@op`, which tracks inputs, outputs, code and runtime dependencies (other functions called, or global variables accessed) every time a function is called. Essentially, this makes function calls replace logging: if you want something saved, you write a function that returns it. Using (a lot of) hashing, `@op` ensures that the same version of the function is never executed twice on the same inputs.<p>Importantly, the decorator encourages/enforces composition. Before a call, `@op` functions wrap their inputs in special objects, `Ref`s, and return `Ref`s in turn. Furthermore, data structures can be made transparent to `@op`s, so that an `@op` can be called on a list of outputs of other `@op`s, or on an element of the output of another `@op`. This creates an expressive "web" of `@op` calls over time.<p>2. a data structure, `ComputationFrame`, can automatically organize any such web of `@op` calls into a high-level view, by grouping calls with a similar role into "operations", and their inputs/outputs into "variables". 
It can detect "imperative" patterns - like feedback loops, branching/merging, and grouping multiple results in a single object - and surface them in the graph.<p>`ComputationFrame`s are a "synthesis" of computation graphs and relational databases, and can be automatically "exported" as dataframes, where columns are variables and operations in the graph, and rows contain values and calls for (possibly partial) executions of the graph. The upshot is that you can query the relationships between any variables in a project in one line, even in the presence of very heterogeneous patterns in the graph.<p>I'm very excited about this project - which is still in an alpha version being actively developed - and especially about the `ComputationFrame` data structure. I'd love to hear the feedback of the HN community.<p>Colab quickstart: <a href="https://colab.research.google.com/github/amakelov/mandala/blob/master/docs_source/tutorials/01_hello.ipynb" rel="nofollow">https://colab.research.google.com/github/amakelov/mandala/bl...</a><p>Blog post introducing `ComputationFrame`s (can be opened in Colab too): <a href="https://amakelov.github.io/mandala/blog/01_cf/" rel="nofollow">https://amakelov.github.io/mandala/blog/01_cf/</a><p>Docs: <a href="https://amakelov.github.io/mandala/" rel="nofollow">https://amakelov.github.io/mandala/</a>

Show HN: Mandala – Automatically save, query and version Python computations

`mandala` is a framework I wrote to automate tracking ML experiments for my research. It differs from other experiment tracking tools by making persistence, query and versioning logic a generic part of the programming language itself, as opposed to an external logging tool you must learn and adapt to. The goal is to be able to write expressive computational code without thinking about persistence (like in an interactive session), and still have the full benefits of a versioned, queriable storage afterwards.<p>Surprisingly, it turns out that this vision can pretty much be achieved with two generic tools:<p>1. a memoization+versioning decorator, `@op`, which tracks inputs, outputs, code and runtime dependencies (other functions called, or global variables accessed) every time a function is called. Essentially, this makes function calls replace logging: if you want something saved, you write a function that returns it. Using (a lot of) hashing, `@op` ensures that the same version of the function is never executed twice on the same inputs.<p>Importantly, the decorator encourages/enforces composition. Before a call, `@op` functions wrap their inputs in special objects, `Ref`s, and return `Ref`s in turn. Furthermore, data structures can be made transparent to `@op`s, so that an `@op` can be called on a list of outputs of other `@op`s, or on an element of the output of another `@op`. This creates an expressive "web" of `@op` calls over time.<p>2. a data structure, `ComputationFrame`, can automatically organize any such web of `@op` calls into a high-level view, by grouping calls with a similar role into "operations", and their inputs/outputs into "variables". 
It can detect "imperative" patterns - like feedback loops, branching/merging, and grouping multiple results in a single object - and surface them in the graph.<p>`ComputationFrame`s are a "synthesis" of computation graphs and relational databases, and can be automatically "exported" as dataframes, where columns are variables and operations in the graph, and rows contain values and calls for (possibly partial) executions of the graph. The upshot is that you can query the relationships between any variables in a project in one line, even in the presence of very heterogeneous patterns in the graph.<p>I'm very excited about this project - which is still in an alpha version being actively developed - and especially about the `ComputationFrame` data structure. I'd love to hear the feedback of the HN community.<p>Colab quickstart: <a href="https://colab.research.google.com/github/amakelov/mandala/blob/master/docs_source/tutorials/01_hello.ipynb" rel="nofollow">https://colab.research.google.com/github/amakelov/mandala/bl...</a><p>Blog post introducing `ComputationFrame`s (can be opened in Colab too): <a href="https://amakelov.github.io/mandala/blog/01_cf/" rel="nofollow">https://amakelov.github.io/mandala/blog/01_cf/</a><p>Docs: <a href="https://amakelov.github.io/mandala/" rel="nofollow">https://amakelov.github.io/mandala/</a>

Show HN: Dut – a fast Linux disk usage calculator

"dut" is a disk usage calculator that I wrote a couple months ago in C. It is multi-threaded, making it one of the fastest such programs. It beats normal "du" in all cases, and beats all other similar programs when Linux's caches are warm (so, not on the first run). I wrote "dut" as a challenge to beat similar programs that I used a lot, namely pdu[1] and dust[2].<p>"dut" displays a tree of the biggest things under your current directory, and it also shows the size of hard-links under each directory as well. The hard-link tallying was inspired by ncdu[3], but I don't like how unintuitive the readout is. Anyone have ideas for a better format?<p>There's installation instructions in the README. dut is a single source file, so you only need to download it and copy-paste the compiler command, and then copy somewhere on your path like /usr/local/bin.<p>I went through a few different approaches writing it, and you can see most of them in the git history. At the core of the program is a datastructure that holds the directories that still need to be traversed, and binary heaps to hold statted files and directories. I had started off using C++ std::queues with mutexes, but the performance was awful, so I took it as a learning opportunity and wrote all the datastructures from scratch. That was the hardest part of the program to get right.<p>These are the other techniques I used to improve performance:<p>* Using fstatat(2) with the parent directory's fd instead of lstat(2) with an absolute path. (10-15% performance increase)<p>* Using statx(2) instead of fstatat. (perf showed fstatat running statx code in the kernel). (10% performance increase)<p>* Using getdents(2) to get directory contents instead of opendir/readdir/closedir. (also around 10%)<p>* Limiting inter-thread communication. 
I originally had fs-traversal results accumulated in a shared binary heap, but giving each thread a binary-heap and then merging them all at the end was faster.<p>I couldn't find any information online about fstatat and statx being significantly faster than plain old stat, so maybe this info will help someone in the future.<p>[1]: <a href="https://github.com/KSXGitHub/parallel-disk-usage">https://github.com/KSXGitHub/parallel-disk-usage</a><p>[2]: <a href="https://github.com/bootandy/dust">https://github.com/bootandy/dust</a><p>[3]: <a href="https://dev.yorhel.nl/doc/ncdu2" rel="nofollow">https://dev.yorhel.nl/doc/ncdu2</a>, see "Shared Links"
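The overall traversal shape can be sketched in Python, where `os.scandir` uses getdents(2) under the hood and `DirEntry.stat` avoids re-resolving absolute paths, loosely mirroring the fstatat optimization. This is a single-threaded sketch of the idea, not dut's actual C implementation:

```python
# Single-threaded sketch: walk a tree, tally total size, and keep a
# bounded heap of the biggest files (the "tree of biggest things").
import heapq
import os


def du(root, top_n=10):
    total = 0
    heap = []      # min-heap of (size, path); holds only the top_n biggest
    stack = [root]  # directories still to be traversed
    while stack:
        d = stack.pop()
        try:
            with os.scandir(d) as it:   # getdents(2) under the hood
                for entry in it:
                    if entry.is_dir(follow_symlinks=False):
                        stack.append(entry.path)
                    elif entry.is_file(follow_symlinks=False):
                        # DirEntry.stat reuses the directory handle,
                        # similar in spirit to fstatat(2).
                        size = entry.stat(follow_symlinks=False).st_size
                        total += size
                        heapq.heappush(heap, (size, entry.path))
                        if len(heap) > top_n:
                            heapq.heappop(heap)  # drop the smallest
        except PermissionError:
            pass  # skip unreadable directories, as du-like tools do
    return total, sorted(heap, reverse=True)
```

dut's per-thread heaps merged at the end correspond to running several of these loops in parallel and combining their `heap` results once, instead of sharing one heap under a lock.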

Show HN: Personal website inspired by Apple notes

i stan apple notes, so i built a new personal website to look, feel, & work just like it. it's fast, fully interactive, & can be navigated entirely via keyboard shortcuts. it was a lot of fun to build. i wrote more about the implementation in the linked page.<p>check it out!

Show HN: Maelstrom – A Hermetic, Clustered Test Runner for Python and Rust

Hi everyone,<p>Maelstrom is a suite of tools for running tests in hermetic micro-containers locally on your machine or distributed across arbitrarily large clusters. Maelstrom currently has test runners for Rust and Python, with more on the way. You might use Maelstrom to run your tests because:<p><pre><code>* It's easy. Maelstrom functions as a drop-in replacement for cargo test and pytest. In most cases, it just works with your existing tests with minimal configuration.

* It's reliable. Maelstrom runs every test hermetically in its own lightweight container, eliminating confusing errors caused by inter-test or implicit test-environment dependencies.

* It's scalable. Maelstrom can be run as a cluster. You can add more worker machines to linearly increase test throughput.

* It's clean. Maelstrom has built a rootless container implementation (not relying on Docker or RunC) from scratch, in Rust, optimized to be low-overhead and start quickly.

* It's fast. In most cases, Maelstrom is faster than cargo test, even without using clustering. Maelstrom’s test-per-process model is inherently slower than pytest’s shared-process model, but Maelstrom provides test isolation at a low performance cost.
</code></pre> While our focus thus far has been on running tests, Maelstrom's underlying job execution system is general-purpose. We provide a command line utility to run arbitrary commands, as well as a gRPC-based API and Rust bindings for programmatic access and control.<p>Feedback and questions are welcome! Thanks for giving it a whirl.
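The hermetic idea can be illustrated with a much simpler sketch: run each test in a fresh process with a minimal environment and a private scratch directory, so one test cannot leak state to the next. Maelstrom's rootless containers go far beyond this (namespaces, their own filesystem view); this shows only the isolation concept, not the implementation:

```python
# Sketch: execute a snippet in a fresh process with a stripped-down
# environment and a throwaway working directory.
import subprocess
import sys
import tempfile


def run_hermetic(code: str) -> subprocess.CompletedProcess:
    with tempfile.TemporaryDirectory() as scratch:
        return subprocess.run(
            [sys.executable, "-c", code],
            cwd=scratch,                     # private working directory
            env={"PATH": "/usr/bin:/bin"},   # minimal environment
            capture_output=True,
            text=True,
            timeout=30,
        )


result = run_hermetic("print(6 * 7)")
```

Process-per-test is exactly the trade-off mentioned above: each run pays a process (or container) startup cost in exchange for guaranteed isolation.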

Show HN: Tailwind Template Directory

Show HN: Posting v1 – The modern HTTP client that lives in your terminal

Hi HN!<p>I just released version 1.0 of Posting, an open source terminal application I've been working on which you might find useful if you work with, test, or develop HTTP APIs!<p>Posting (<a href="https://github.com/darrenburns/posting">https://github.com/darrenburns/posting</a>) is an HTTP client, not unlike Postman and Insomnia. However, as a TUI application, it works over SSH and enables efficient keyboard-centric workflows. Your requests are stored locally in simple readable YAML files, meaning they're easy to read and version control.<p>Some other features include:<p>- "Jump mode" navigation<p>- Environments/variables with autocompletion<p>- Syntax highlighting powered by tree-sitter<p>- Vim keys support in much of the UI<p>- Various builtin themes<p>- A config system<p>- "Open in $EDITOR"<p>- A fuzzy search command palette for quickly accessing functionality<p>Posting is written in Python using the Textual[1] framework, which I also help maintain.<p>Although 1.0 has been released, it's not yet feature complete. I'd love to hear feedback from the community to make sure I'm on the right track and learn what's important to people.<p>So, if you have any thoughts, feature requests, or opinions, big or small, I'd love to hear them. At this early stage, your ideas can really help shape the roadmap of the project!<p>Thanks, Darren<p>[1] Textual: <a href="https://github.com/Textualize/textual">https://github.com/Textualize/textual</a>
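The "requests as readable files" workflow can be illustrated with a small sketch. The file format below is invented for illustration (Posting defines its own YAML schema); it just shows how a declarative, version-controllable request file maps onto an HTTP request object:

```python
# Sketch: a tiny key-value request file, parsed and turned into a
# stdlib urllib Request. The format here is made up, not Posting's.
import urllib.request

REQUEST_FILE = """\
method: GET
url: https://example.com/api/health
header: Accept: application/json
"""


def parse_request(text):
    spec = {"headers": {}}
    for line in text.splitlines():
        key, _, value = line.partition(": ")
        if key == "header":
            name, _, hval = value.partition(": ")
            spec["headers"][name] = hval
        else:
            spec[key] = value
    return spec


def build(spec):
    return urllib.request.Request(
        spec["url"], method=spec["method"], headers=spec["headers"]
    )


req = build(parse_request(REQUEST_FILE))  # ready for urlopen(req)
```

Because each request lives in a plain text file like this, diffs in code review show exactly which endpoint, method, or header changed.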

