The best Hacker News stories from Show HN from the past day


Latest posts:

Show HN: I wrote a modern Command Line Handbook

TLDR: I wrote a handbook for the Linux command line. 120 pages in PDF. Updated for 2025. Pay what you want.

A few years back I wrote an ebook about the Linux command line. Instead of focusing on a specific shell, paraphrasing manual pages, or providing long repetitive explanations, the idea was to create a modern guide that would help readers understand the command line in a practical sense, cover the most common things people use the command line for, and do so without wasting the readers' time.

The book contains material on terminals, shells (compatible with both Bash and Zsh), configuration, command line programs for typical use cases, shell scripting, and many tips and tricks to make working on the command line more convenient. I still consider it "an introduction" and it is not necessarily a book for the HN crowd that lives in the terminal, but I believe it will easily cover 80% of the things most people want or need to do in the terminal.

I made a couple of updates to the book over the years and just finished a significant one for 2025. The book is not perfect. I still see a lot of room for improvement, but I think it is good enough and I truly want to share it with everyone. Hence, pay what you want.

EXAMPLE PAGES: https://drive.google.com/file/d/1PkUcLv83Ib6nKYF88n3OBqeeVffAs3Sp/view?usp=sharing

https://commandline.stribny.name/

Show HN: Loodio 2 – A Simple Rechargeable Bathroom Privacy Device

Hey HN!

I posted here some years ago trying to raise money for a Kickstarter for a product I call Loodio.

Loodio is a motion-activated music player for bathrooms that plays music during the bathroom visit to give users privacy during their sacred moments.

The Kickstarter failed, but I eventually managed to create a product with a lot of effort.

I managed to sell 150 units of the first version, mostly to the United States but also to all different parts of the world, while working on the next version.

The problem with the first version was that it ran on a Raspberry Pi Zero W (which had to be plugged into the wall), it was pretty big, had crappy sound, and took a minute to start since it had to boot a whole Linux system. It ran on a Python script and Unix services. To add music, people had to SSH into the unit, so you can imagine how painful that was for some.

However, customers loved it! But I knew I could do better. The most common request was battery operation.

Here are some reviews of version 1: https://loodio.com/pages/reviews

I'm proud to say that Loodio 2 is finally here and is working like I imagined when I started working on it, almost 5 years ago now.

Loodio 2 introduces battery operation with 1 week of battery life (~5 hours of active operation). It has great sound and an easy way to add your own music with SD card support (4GB included).

It doesn't require any app and can be run without WiFi (though you lose some features like internet radio, time updates, software updates, and weather).

Why does it have a display, you may ask? Because I used to have an electric toothbrush that came with a display. That display showed how long you were brushing to make sure you did your 2 minutes per brush. When I wasn't brushing my teeth, it showed the current time. I stopped using the electric toothbrush (because a dentist told me they are too harsh on your teeth) but kept the display for probably 5 years afterwards, because I noticed I really wanted to know the time while getting ready for school/work in the morning. Another thing I noticed was that I always check the weather outside so I can dress appropriately.

So, Loodio shows you the time and weather (optionally) as well as playing music during your visit. These features, together with the lights, are ones that I think people don't expect to use, but with time they become as important as the music. Customer interviews verify this.

I wasted a lot of money trying to outsource the development for the first 18 months. I then decided to start doing it myself. The version I'm selling is actually the 25th(!) iteration of the product. The problem with hardware is that it takes around a month to iterate on a circuit (if you don't live next to the factory in Shenzhen) because of the cycle "design -> order from China -> test -> repeat". And I had no experience with electronics when starting out.

The enclosure is made from empty PCBs to save the cost of injection molding tooling for later. It looks pretty cool. But mainly, it works great!

I want to give credit to Tadeusz Karpinski and Velimir Stoleski, who ported my crappy Python script to the ESP32 that runs Loodio 2.

You need to try it! I really think you're gonna like it! https://loodio.com

Show HN: Voiden – a free, offline, Git-native API Client

Hey HN! Aldin here, a helping hand to Voiden (https://voiden.md).

Voiden is a free, offline, Git-native API client. Your API definitions, docs, and tests all live together.

It came out of years of frustration: cloud sync lock-in, paywalled basics, bloated UIs, and lag on even simple requests. So the team built the opposite: an offline tool with no login, no telemetry, no lock-in. Just markdown and hotkeys.

It behaves like code: local files, git branches, no cloud nonsense. A terminal is built in, so you can commit, diff, and push changes right from the app.

Docs stay close to your requests, so the API spec and what the API actually does never drift apart. No more scattered Postman, docs, and test files everywhere. A single source of truth.

A minimalist GET request looks something like this:

    GET https://dummyjson.com/posts

Just hit /endpoint, paste the URL, and run it with Cmd/Ctrl + Enter.

Not OSS (yet), but 100% local and free. Optional plugins will be coming down the line, but the core stays free.

We'd love feedback from folks tired of overcomplicated and bloated API tooling.

Show HN: Tesseral – Open-Source Auth

Hi folks! I'm Ulysse, and Tesseral (https://github.com/tesseral-labs/tesseral) is open-source auth for B2B SaaS.

Early in my career, I worked on enterprise auth and security features at Segment. I've been obsessed with the subtle details of enterprise software ever since. For example, I wrote an implementation of SAML in the early days of the COVID pandemic because I thought it was fun.

Over the years, I've felt frustrated that too few people have seemed interested in making auth obvious for developers of business software. Auth really doesn't need to be so confusing.

We made Tesseral to help software engineers get B2B auth exactly right – and focus their energy on building the features that users want.

You can use Tesseral to stand up a login page, authenticate your users, and manage their access to resources. Think of it like Auth0 or Clerk, but open source and built specifically for B2B apps. Among other things, that means that it’s designed for B2B multi-tenancy and includes enterprise-ready features like single sign-on (SAML SSO), multi-factor authentication (MFA), SCIM provisioning, and role-based access control (RBAC).

For those who expose public APIs, you can use Tesseral to manage API keys for your customers. You can even limit the scope of API keys to specific actions by using our RBAC feature.

We've taken care to make Tesseral powerful and secure enough to power real enterprise software but still leave it simple enough for any software developer to use. You don't have to be a security expert to implement Tesseral. (Tesseral therefore imposes a few opinions by default. Let us know if you have a good reason to do something unusual, and we'll work something out.)

If you want to experiment with Tesseral, you can host it yourself or use our hosted service. The hosted service lives at https://console.tesseral.com. You can find documentation here: https://tesseral.com/docs

Here are a few simple demos:

https://www.youtube.com/watch?v=IhYPzz3vB54

https://www.youtube.com/watch?v=t-JJ8TNjqNU

https://www.youtube.com/watch?v=mwthBIRZO8k

We're in the early stages of the project, so we still have some gaps. We have more features, bug fixes, SDKs, and documentation on the way.

What have we missed? What can we do better? We're eager to hear from the community!
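
For a sense of what the "authenticate your users and manage their access to resources" flow looks like from the application side, here is a small framework-agnostic Python sketch. It is not the Tesseral SDK (the actual API is documented at https://tesseral.com/docs); the function names, the demo token, and the permission strings are hypothetical placeholders for the usual verify-token-then-check-RBAC pattern.

    # Hypothetical sketch of the B2B auth flow an SDK like Tesseral wraps:
    # verify the caller's access token, resolve their organization (tenant),
    # then enforce an RBAC permission before running the handler.

    from dataclasses import dataclass

    @dataclass
    class AuthContext:
        user_id: str
        organization_id: str   # B2B multi-tenancy: every user belongs to an org
        permissions: set[str]  # granted via roles (RBAC)

    def verify_access_token(token: str) -> AuthContext:
        """Placeholder: a real SDK would validate the token's signature and
        expiry against the auth server and return the session's claims."""
        if token != "valid-demo-token":
            raise PermissionError("invalid or expired access token")
        return AuthContext("user_123", "org_acme", {"projects.read", "projects.write"})

    def handle_list_projects(token: str) -> list[str]:
        ctx = verify_access_token(token)
        if "projects.read" not in ctx.permissions:
            raise PermissionError("missing permission: projects.read")
        # Scope every query to the caller's organization.
        return [f"project owned by {ctx.organization_id}"]

    print(handle_list_projects("valid-demo-token"))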

Show HN: AutoThink – Boosts local LLM performance with adaptive reasoning

I built AutoThink, a technique that makes local LLMs reason more efficiently by adaptively allocating computational resources based on query complexity.

The core idea: instead of giving every query the same "thinking time," classify queries as HIGH or LOW complexity and allocate thinking tokens accordingly. Complex reasoning gets 70-90% of tokens, simple queries get 20-40%.

I also implemented steering vectors derived from Pivotal Token Search (originally from Microsoft's Phi-4 paper) that guide the model's reasoning patterns during generation. These vectors encourage behaviors like numerical accuracy, self-correction, and thorough exploration.

Results on DeepSeek-R1-Distill-Qwen-1.5B:

- GPQA-Diamond: 31.06% vs 21.72% baseline (+43% relative improvement)
- MMLU-Pro: 26.38% vs 25.58% baseline
- Uses fewer tokens than baseline approaches

Works with any local reasoning model - DeepSeek, Qwen, custom fine-tuned models. No API dependencies.

The technique builds on two things I developed: an adaptive classification framework that can learn new complexity categories without retraining, and an open source implementation of Pivotal Token Search.

Technical paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5253327

Code and examples: https://github.com/codelion/optillm/tree/main/optillm/autothink

PTS implementation: https://github.com/codelion/pts

I'm curious about your thoughts on adaptive resource allocation for AI reasoning. Have you tried similar approaches with your local models?
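
To make the budgeting step concrete, here is a minimal Python sketch of the idea described above. It is not the optillm implementation: the toy classifier, the 4096-token budget, and the exact percentages picked within the 70-90% and 20-40% bands are illustrative assumptions based on this post.

    # Illustrative sketch of AutoThink-style adaptive budgeting (not the optillm code).
    # A classifier labels the query HIGH or LOW complexity, and that label decides
    # how many "thinking" tokens the reasoning model is allowed to spend.

    MAX_THINKING_TOKENS = 4096  # assumed overall thinking budget

    def classify_complexity(query: str) -> str:
        """Toy stand-in for the adaptive classifier: multi-step or long
        questions are treated as HIGH complexity, everything else as LOW."""
        multi_step = any(k in query.lower() for k in ("prove", "derive", "step by step"))
        return "HIGH" if multi_step or len(query.split()) > 40 else "LOW"

    def thinking_budget(query: str) -> int:
        """Allocate 70-90% of the budget to HIGH-complexity queries and
        20-40% to LOW-complexity ones, as described in the post."""
        label = classify_complexity(query)
        low, high = (0.7, 0.9) if label == "HIGH" else (0.2, 0.4)
        midpoint = (low + high) / 2
        return int(MAX_THINKING_TOKENS * midpoint)

    print(thinking_budget("What is 2 + 2?"))                        # small budget
    print(thinking_budget("Prove the inequality step by step ..."))  # large budget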

Show HN: I rewrote my Mac Electron app in Rust

A year ago, my co-founder launched Desktop Docs here on HN. It's a Mac app we built with Electron that uses CLIP embeddings to search photos and videos locally with natural language. We got positive feedback from HN and our first paying customers, but the app was almost 1GB and clunky to use.

TLDR; rebuilding in Rust was the right move.

So we rewrote the app with Rust and Tauri and here are the results:

- App size is 83% smaller: 1GB → 172MB
- DMG installer is 70% smaller: 232MB → 69.5MB
- Indexing files is faster: a 38-minute video now indexes in ~3 minutes instead of 10-14 minutes
- Overall more stability (the old app used to randomly crash)

The original version worked, but it didn't perform well when you tried indexing thousands of images or large videos. We lost a lot of time struggling to optimize Electron’s main-renderer process communication and ended up with a complex worker system to process large batches of media files.

For months we wrestled with indecision about continuing to optimize the Electron app vs. starting a full rebuild in Swift or Rust. The main thing holding us back was that we hadn’t coded in Swift in almost 10 years and we didn’t know Rust very well.

What finally broke us was when users complained the app crashed their video calls just running in the background. I guess that’s what happens when you ship an app with Chromium that takes up 200MB before any application code.

Today the app still uses CLIP for embeddings and Redis for vector storage and search, except Rust now handles the image and video processing pipeline and all the file I/O to let users browse their entire machine, not just indexed files.

For the UI, we decided to rebuild it from scratch instead of porting over the old UI. This turned out well because it resulted in a cleaner, simpler UI after living with the complexity of the old version.

The trickiest part of the migration was learning Rust. LLMs definitely help, but the Rust/Tauri community just isn’t as mature as Electron's. Bundling Redis into the app was a permissioning nightmare, but I think our solution with Rust handles this better than what we had with Electron.

All in, the rebuild took about two months and still needs some more work to reach full parity with the Electron version, but the core functionality of indexing and searching files is way more performant than before, and that made it worth the time. Sometimes you gotta throw away working code to build the right thing.

AMA about the Rust/Tauri migration, Redis bundling nightmares, how CLIP embeddings work for local semantic search, or why Electron isn't always the answer.
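
For readers curious how CLIP-based local semantic search works in principle, here is a minimal Python sketch: embed images and a text query with CLIP, then rank by cosine similarity. It is only an illustration, not Desktop Docs' Rust pipeline; it assumes the Hugging Face transformers, torch, and Pillow packages, uses made-up file names, and keeps vectors in memory where the app uses Redis.

    # Minimal CLIP semantic-search sketch (illustrative; not the app's pipeline).
    # pip install transformers torch pillow

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def embed_images(paths):
        images = [Image.open(p).convert("RGB") for p in paths]
        inputs = processor(images=images, return_tensors="pt")
        with torch.no_grad():
            feats = model.get_image_features(**inputs)
        return feats / feats.norm(dim=-1, keepdim=True)  # normalize for cosine similarity

    def search(query, paths, image_feats, top_k=5):
        inputs = processor(text=[query], return_tensors="pt", padding=True)
        with torch.no_grad():
            q = model.get_text_features(**inputs)
        q = q / q.norm(dim=-1, keepdim=True)
        scores = (image_feats @ q.T).squeeze(1)          # cosine similarity per image
        best = scores.argsort(descending=True)[:top_k]
        return [(paths[i], float(scores[i])) for i in best]

    paths = ["photo1.jpg", "photo2.jpg"]                 # hypothetical local files
    feats = embed_images(paths)
    print(search("a dog on a beach", paths, feats))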

Show HN: Free mammogram analysis tool combining deep learning and vision LLM

I've built Neuralrad Mammo AI, a free research tool that combines deep learning object detection with vision language models to analyze mammograms. The goal is to provide researchers and medical professionals with a secondary analysis tool for investigation purposes.

Important Disclaimers:

- NOT FDA 510(k) cleared - this is purely for research investigation
- Not for clinical diagnosis - results should only be used as a secondary opinion
- Completely free - no registration, no payment, no data retention

What it does:

1. Upload a mammogram image (JPEG/PNG)
2. AI identifies potential masses and calcifications
3. A vision LLM provides radiologist-style analysis
4. Interactive viewer with zoom/pan capabilities

You can try it with any mass/calcification mammo images, e.g. by searching Google for: mammogram images mass

Key Features:

- Detects and classifies masses (benign/malignant)
- Identifies calcifications (benign/malignant)
- Provides confidence scores and size assessments
- Generates detailed analysis using a vision LLM
- No data storage - images are processed and discarded

Use Cases:

- Medical research and education
- Second opinion for researchers
- Algorithm comparison studies
- Teaching tool for radiology training
- Academic research validation

The implementation details include:

1. First-stage object detection using a PyTorch RetinaNet trained on DDSM + an internal dataset
2. Second stage: a fine-tuned Qwen2.5 VL with labeled data + radiology report sets
3. The server is implemented with Flask, the client with SvelteJS

The system is designed specifically for research investigation purposes and to complement (never replace) professional medical judgment. I'm hoping this can be useful for the medical AI research community and welcome feedback on the approach.

Address: http://mammo.neuralrad.com:5300
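
The two-stage design described above (a detector proposes regions, a vision-language model describes them) can be sketched roughly as follows in Python. This is not the Neuralrad code: the torchvision RetinaNet here ships with COCO weights rather than the DDSM fine-tune, the input file name is hypothetical, and the second stage is a placeholder stub standing in for the fine-tuned Qwen2.5-VL call.

    # Rough two-stage sketch (illustrative; not the Neuralrad implementation).
    # Stage 1: an object detector proposes suspicious regions.
    # Stage 2: a vision-language model turns each region into a textual finding.

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    detector = torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT")
    detector.eval()  # the real system would load weights fine-tuned on mammograms

    def detect_regions(image: Image.Image, score_threshold: float = 0.5):
        with torch.no_grad():
            out = detector([to_tensor(image)])[0]
        keep = out["scores"] >= score_threshold
        return out["boxes"][keep].tolist(), out["scores"][keep].tolist()

    def describe_region(image: Image.Image, box) -> str:
        """Placeholder for the second stage: crop the region and send it to a
        fine-tuned vision LLM (Qwen2.5-VL in the post) for a radiologist-style
        description with a benign/malignant assessment."""
        crop = image.crop(tuple(int(v) for v in box))
        return f"finding in region {box}: description would come from the vision LLM"

    image = Image.open("mammogram.png").convert("RGB")  # hypothetical input file
    boxes, scores = detect_regions(image)
    for box, score in zip(boxes, scores):
        print(round(score, 2), describe_region(image, box))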

Show HN: I made a running app that turns your runs to a virtual garden

I started running 5 years ago and it completely changed me. Recently, I decided to train for my first marathon and needed something more than just tracking my runs to motivate me. I noticed all the apps currently available are just trackers and route finders, but nothing has solved the problem of consistency. So I thought of combining the basic idea of the Forest app with running and created a small app for myself which would reward me with plants and trees for my runs.

I am not a mobile developer, but then I came across Capacitor. Since I knew Next.js and have been working with it for the past 2 years, I built the app with it. I started using it and showed it to some of my friends and they really liked it. The gamification gave me a mental boost to plant more trees and plants in my virtual garden, and just seeing it flourish made me happy. Then I decided to publish it on the Play Store and also built it for iOS devices, as many of my friends had iPhones. It took weeks to build and release them after brutal and lengthy reviews from both stores.

Running has taught me to keep moving further, one step at a time, and the journey of building the app was just like that. I wanted to share the app with all the people who are just starting out running and with the experienced folks who want to make their runs mean something.

The app is Run&Grow: https://runandgrow.com

Show HN: Malai – securely share local TCP services (database/SSH) with others

malai is a peer-to-peer network, and a dead simple way to share your local development HTTP server without setting up tunnels, dealing with firewalls, or relying on cloud services.

In malai 0.2.5, we have added TCP support, which means you can expose any TCP service to others using malai without opening the TCP service's port to the Internet. With malai installed on both ends, any TCP service can be securely tunneled over it.

It can be used to secure your SSH service, or to securely share your database server.

GitHub: https://github.com/kulfi-project/kulfi (star us!)

Would love feedback, questions, or ideas — thanks!

PS: We have also added `malai folder`, which lets you share (read-only) the contents of a folder with others.

Show HN: Lazy Tetris

I made a Tetris variant.

It aims to remove all stress and focus the game on what I like best - stacking.

No timer, no score, no gravity. Move to the next piece when you are ready, and clear lines when you are ready.

Separate mobile + desktop controls.

Show HN: My LLM CLI tool can run tools now, from Python code or plugins
