The best Show HN stories from Hacker News from the past day


Latest posts:

Show HN: Glyph3D – A 3D text visualizer for macOS and iOS / iPadOS

Hello internet folks!

I'm happy to share the first TestFlight release of Glyph3D, a 3D text visualizer for macOS and iOS. It's free, supports pretty much any UTF-8 data, and is an interesting way to navigate your repositories or data directories without the limitations of standard text windows.

Download and bookmark for macOS and iOS: https://github.com/tikimcfee/LookAtThat/

- You should be able to download public repos from GitHub and render them.

- Play with opening and closing windows. I've disabled many of the in-flight features to reduce user confusion, and will be writing additional documentation about those features as they are implemented.

- Editing works, but there's nowhere to put the data yet ;) You can pull from the app's caches if you want, but there's no actual repo support, by design. Bringing in the git SDK, or shelling out to the CLI, is more work and risk than I want to include in the app right now.

Any feedback is welcome, and if you'd like to support me or the project in any way, please reach out; I'll be looking forward to that conversation.

Show HN: Automate your studio – mute a mixer channel to turn your PTZ camera

Seamlessly automate your audio-visual setup! This open-source framework uses the Open Sound Control (OSC) protocol to integrate audio mixer consoles, OBS, PTZ cameras, and more. Perfect for live production enthusiasts, streamers, and tech tinkerers.

I originally built it to meet our own needs, then open-sourced it: we needed to move a PTZ camera based on the stage/pulpit mute states on our X32, but it is capable of much more. Let me know what you think!

Cheers!
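The core idea (a mixer channel's mute state driving a PTZ camera) can be sketched in a few lines. This is a hypothetical, stdlib-only illustration, not the framework's actual API: the names CHANNEL_TO_PRESET and handle_mute are made up, and a real setup would send the commands over OSC (e.g. with python-osc) instead of collecting them in a list.

```python
# Map mixer channel -> camera preset to recall when that channel goes live.
# Addresses follow the X32's OSC convention; presets are illustrative names.
CHANNEL_TO_PRESET = {
    "/ch/01/mix/on": "preset/stage",   # stage mic live -> point at stage
    "/ch/02/mix/on": "preset/pulpit",  # pulpit mic live -> point at pulpit
}

def handle_mute(address: str, value: int, send):
    """React to an X32-style mute message: value 1 = unmuted, 0 = muted."""
    preset = CHANNEL_TO_PRESET.get(address)
    if preset is not None and value == 1:
        send(f"/camera/{preset}")  # recall the matching PTZ preset

# Collect outgoing commands instead of sending them, for demonstration.
sent = []
handle_mute("/ch/02/mix/on", 1, sent.append)  # pulpit unmuted -> move camera
handle_mute("/ch/01/mix/on", 0, sent.append)  # stage muted -> no action
```

The `send` callback keeps the mapping logic testable and independent of the transport.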


Show HN: Steel.dev – An open-source browser API for AI agents and apps

Show HN: Flow – A dynamic task engine for building AI agents

I think a graph is the wrong abstraction for building AI agents. Just look at how incredibly hard it is to do routing with LangGraph: conditional edges are a mess.

I built Laminar Flow to solve a common frustration with traditional workflow engines: the rigid need to predefine all node connections. Instead of static DAGs, Flow uses a dynamic task queue system that lets workflows evolve at runtime.

Flow is built on three core principles:

* Concurrent execution: tasks run in parallel automatically

* Dynamic scheduling: tasks can schedule new tasks at runtime

* Smart dependencies: tasks can await results from previous operations

All tasks share a thread-safe context for state management.

This architecture makes it surprisingly simple to implement complex patterns like map-reduce, streaming results, cycles, and self-modifying workflows. Perfect for AI agents that need to make runtime decisions about their next actions.

Flow is lightweight, elegantly written, and the engine has zero dependencies.

Behind the scenes it's a ThreadPoolExecutor, which is more than enough to handle concurrent execution, considering the majority of AI workflows are I/O-bound.

To make it possible to wait for the completion of previous tasks, I added a semaphore for each state value. Once the state is set, one permit is released on the semaphore.

The project also comes with built-in OpenTelemetry instrumentation for debugging and state reconstruction.

Give it a try here: https://github.com/lmnr-ai/flow. Just do pip install lmnr-flow (or uv add lmnr-flow). More examples are in the README.

Looking forward to feedback from the HN community! Especially interested in hearing about your use cases for dynamic workflows.

A couple of things are on the roadmap, so contributions are welcome:

* Async function support

* TypeScript port

* Some consensus on how to handle task IDs when the same task is spawned multiple times
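The mechanism described above (a ThreadPoolExecutor, a shared thread-safe context, and a semaphore per state value that is released when the value is set) can be sketched with the stdlib alone. This is a conceptual illustration, not Flow's actual API; the Context/Engine names are made up.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class Context:
    """Thread-safe shared state; get() blocks until the key has been set."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()
        self._ready = {}  # key -> Semaphore released when the key is set

    def _sem(self, key):
        with self._lock:
            return self._ready.setdefault(key, threading.Semaphore(0))

    def set(self, key, value):
        with self._lock:
            self._data[key] = value
        self._sem(key).release()  # one permit released per set

    def get(self, key):
        sem = self._sem(key)
        sem.acquire()   # wait until the value exists...
        sem.release()   # ...then restore the permit for other waiters
        with self._lock:
            return self._data[key]

class Engine:
    def __init__(self, workers=4):
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.ctx = Context()

    def spawn(self, fn):
        # Tasks receive the engine, so they can schedule new tasks at runtime.
        return self.pool.submit(fn, self)

def producer(engine):
    engine.ctx.set("n", 21)

def consumer(engine):
    # Blocks until "n" is available, then publishes a derived result.
    engine.ctx.set("result", engine.ctx.get("n") * 2)

engine = Engine()
engine.spawn(consumer)   # scheduled first; waits on "n"
engine.spawn(producer)
engine.pool.shutdown(wait=True)
```

Because waiting tasks occupy worker threads, a real engine would need care around pool sizing to avoid starving producers; the post's I/O-bound assumption makes this a reasonable trade-off.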


Show HN: SeekStorm – open-source sub-millisecond search in Rust


Show HN: A terminal tool for Logseq journal entries

Creator here. I built lsq to solve a simple but annoying workflow problem: having to leave the terminal just to make quick notes in Logseq.

Technical details:

- Written in Go using Bubble Tea for the TUI
- Reads Logseq's config.edn for format preferences
- Supports both external editor ($EDITOR) and TUI modes
- Handles both Markdown and Org formats

Core design decisions:

1. Zero-config default installation (uses the standard ~/Logseq path)
2. A single command opens today's journal (just 'lsq')
3. TUI mode for Logseq-specific features (TODO/priority cycling)

The project started as a simple editor launcher but evolved to include a TUI when I realized certain Logseq features couldn't be easily replicated in a standard text editor.

Code and installation instructions are in the repo. Feedback and contributions welcome.
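The "single command opens today's journal" step boils down to deriving a file path from the date-format preference in config.edn. A rough sketch of that idea, assuming Logseq's default layout (a journals/ directory with names like 2024_12_20.md); the function name and the hardcoded defaults are illustrative, and the real tool (written in Go) reads these preferences from config.edn instead:

```python
import os
from datetime import date

def journal_path(base_dir, today=None, fmt="yyyy_MM_dd", ext=".md"):
    """Build today's journal file path from a Logseq-style date pattern."""
    today = today or date.today()
    # Translate the Java-style date pattern from config.edn to strftime.
    name = (fmt.replace("yyyy", "%Y")
               .replace("MM", "%m")
               .replace("dd", "%d"))
    return os.path.join(base_dir, "journals", today.strftime(name) + ext)

p = journal_path("~/Logseq", today=date(2024, 12, 20))
```

With the path in hand, the tool either hands it to $EDITOR or opens it in its own TUI.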

Show HN: Jinbase – Multi-model transactional embedded database

Hi HN! Alex here. I'm excited to show you Jinbase (https://github.com/pyrustic/jinbase), my multi-model transactional embedded database.

Almost a year ago, I introduced Paradict [1], my take on multi-format streaming serialization. Given its readability, the Paradict text format appears de facto as an interesting data format for config files. But using Paradict to manage config files would have cluttered its programming interface and confused users who already have alternative libraries (TOML, INI file, etc.) dedicated to config files. So I used Paradict as a dependency for KvF (key-value file format) [2], a new project of mine that focuses on config files with sections.

With its compact binary format, I thought Paradict would be an efficient dependency for a new project that would rely on I/O functions (such as open, read, write, seek, tell, and close) to implement a minimalistic yet reliable persistence solution. But that was before I learned that "files are hard" [3]. SQLite, with its transactions, BLOB data type, and incremental I/O for BLOBs, seemed like the right giant to stand on for my new project.

Jinbase started small as a key-value store and ended up as a multi-model embedded database that pushes the boundaries of what we usually do with SQLite. The transition to the second data model (the depot) happened when I realized that the key-value store was not well suited for cases where a unique identifier should be generated automatically for each new record, saving the user the burden of providing an identifier that could accidentally collide with and overwrite an existing record.

After that, I implemented a search capability that accepts UID ranges for the depot store, timespans (records are automatically timestamped) for both the depot and key-value stores, and GLOB patterns and number ranges for string and integer keys in the key-value store.

The queue and stack data models emerged as solutions for use cases where records must be consumed in a specific order. A typical record is retrieved and deleted from the database in a single transaction unit.

Since SQLite is used as the storage engine, Jinbase supports the relational model de facto. For convenience, all tables related to Jinbase internals are prefixed with "jinbase_", making Jinbase a useful tool for opening legacy SQLite files to add new data models that will safely coexist with the ad hoc relational model.

All four main data models (key-value, depot, queue, stack) support Paradict-compatible data types, such as dictionaries, strings, binary data, integers, datetimes, etc. Under the hood, when the user initiates a write operation, Jinbase serializes (except for binary data), chunks, and stores the data iteratively. A record can be accessed not only in bulk, but also with two levels of partial-access granularity: byte-level and field-level.

While SQLite's incremental I/O for BLOBs is designed to target an individual BLOB column in a row, Jinbase extends this so that, for each record, incremental reads cover all chunks as if they were a single unified BLOB.

For dictionary records only, Jinbase automatically creates and maintains a lightweight index of pointers to root fields, which makes it possible to extract from an arbitrary record the contents of a single field, automatically deserialized before being returned.

The most obvious use cases for Jinbase are storing user preferences, persisting session data before exit, order-based processing of data streams, exposing data to other processes, upgrading legacy SQLite files with new data models, and bespoke data persistence solutions.

Jinbase is written in Python, is available on PyPI, and you can play with the examples in the README.

Let me know what you think about this project.

[1] https://news.ycombinator.com/item?id=38684724

[2] https://github.com/pyrustic/kvf

[3] https://news.ycombinator.com/item?id=10725859
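The chunk-and-store idea at the heart of this design can be sketched with the stdlib sqlite3 module. This is a conceptual illustration, not Jinbase's API: a record is split into fixed-size chunks stored as BLOB rows inside one transaction and reassembled on read; Jinbase layers serialization, timestamps, and incremental access on top of a scheme like this.

```python
import sqlite3

CHUNK = 4  # tiny chunk size so the example visibly splits the data

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE jinbase_demo_kv
               (key TEXT, seq INTEGER, chunk BLOB,
                PRIMARY KEY (key, seq))""")

def put(key, data: bytes):
    with con:  # one transaction per write, so a record is stored atomically
        con.execute("DELETE FROM jinbase_demo_kv WHERE key = ?", (key,))
        for seq, i in enumerate(range(0, len(data), CHUNK)):
            con.execute("INSERT INTO jinbase_demo_kv VALUES (?, ?, ?)",
                        (key, seq, data[i:i + CHUNK]))

def get(key) -> bytes:
    rows = con.execute("SELECT chunk FROM jinbase_demo_kv "
                       "WHERE key = ? ORDER BY seq", (key,))
    return b"".join(row[0] for row in rows)

put("greeting", b"hello world")
value = get("greeting")
```

Storing chunks as ordered rows is what makes byte-level partial reads possible: a range request only needs to touch the rows whose chunks overlap it.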

Show HN: Vicinity – Fast, Lightweight Nearest Neighbors with Flexible Back Ends

We've just open-sourced Vicinity, a lightweight approximate nearest neighbors (ANN) search package that allows for fast experimentation with, and comparison of, a large number of well-known algorithms.

Main features:

- Lightweight: the base package only uses NumPy
- Unified interface: use any of the supported algorithms and backends with a single interface; HNSW, Annoy, FAISS, and many more algorithms and libraries are supported
- Easy evaluation: measure the performance of your backend with a simple function that reports queries per second vs. recall
- Serialization: save and load your index for persistence

After working with a large number of ANN libraries over the years, we found it increasingly cumbersome to learn the interface, features, quirks, and limitations of every library. After writing custom evaluation code to measure speed and performance for the 100th time, we decided to build this as a way to use a large number of algorithms and libraries through a unified, simple interface that allows for quick comparison and evaluation.

We are curious to hear your feedback! Are there any algorithms you use that are missing? Any extra evaluation metrics that would be useful?
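The recall evaluation described above is simple to state: run the same queries through an exact backend and an approximate one behind a shared interface, then count how many true neighbors the approximate backend recovered. A dependency-free sketch (not Vicinity's API; the "approximate" backend here just searches a random half of the vectors, standing in for a real structure like HNSW or Annoy):

```python
import random

def nearest(index, q, k):
    # Shared "backend" interface: brute-force k-NN by squared L2 distance.
    dist = lambda v: sum((a - b) ** 2 for a, b in zip(v, q))
    return sorted(range(len(index)), key=lambda i: dist(index[i]))[:k]

random.seed(0)
data = [[random.random() for _ in range(8)] for _ in range(200)]
queries = data[:20]
k = 5

# "Approximate" backend: a random half of the dataset, with a mapping
# back to original indices so results are comparable.
subset = sorted(random.sample(range(len(data)), len(data) // 2))
approx_index = [data[i] for i in subset]

hits = total = 0
for q in queries:
    exact = set(nearest(data, q, k))
    approx = {subset[i] for i in nearest(approx_index, q, k)}
    hits += len(exact & approx)
    total += k
recall = hits / total  # fraction of true neighbors the approximate backend found
```

Timing the two loops would give the queries-per-second side of the trade-off; plotting QPS against recall across backends is the comparison the package automates.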


Show HN: OrioleDB Beta7

Show HN: Markwhen: Markdown for Timelines

