The best Show HN stories from Hacker News from the past week

Latest posts:

Show HN: DeepSeek My User Agent

Show HN: Using YOLO to Detect Office Chairs in 40M Hotel Photos

I used the YOLO object detection library from Ultralytics to scan over 40 million hotel photos and identify images with office chairs. This helped me create a map showing hotels suitable for remote work.

Map: https://www.tripoffice.com/maps

YOLO: https://www.ultralytics.com/yolo

The whole process was done on a home Mac without the use of any LLMs. It's based on traditional object detection technology.
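
As a rough sketch of the filtering step this describes: a stock YOLO model trained on COCO detects a generic "chair" class (there is no dedicated "office chair" class), so each photo can be kept or discarded based on its per-box detections. The detection format and confidence threshold below are hypothetical illustrations, not taken from the project.

```python
# Hypothetical post-processing of one photo's YOLO detections,
# represented here as (class_name, confidence) pairs per bounding box.

def has_office_chair(detections, min_confidence=0.5):
    """Return True if any detected box is a chair above the threshold."""
    return any(
        name == "chair" and conf >= min_confidence
        for name, conf in detections
    )

# Example: two boxes found in one hotel photo.
boxes = [("bed", 0.91), ("chair", 0.78)]
print(has_office_chair(boxes))  # True
```

In practice the pairs would come from a single inference pass per image; at 40M photos, the threshold trades off false positives on dining chairs against missed desks.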

Show HN: Onit – Source-available ChatGPT Desktop with local mode, Claude, Gemini

Hey Hacker News, it's Tim Lenardo and I'm launching v1 of Onit today!

Onit is ChatGPT Desktop, but with local mode and support for other model providers (Anthropic, Google AI, etc.). It's also like Cursor Chat, but everywhere on your computer, not just in your IDE!

Onit is open source! You can download a pre-built version from our website: www.getonit.ai

Or build directly from the source code: https://github.com/synth-inc/onit

We built this because we believe:

- Universal access: AI assistants should be accessible from anywhere on my computer, not just in the browser or in specific apps.
- Provider freedom: Consumers should have the choice between providers (Anthropic, OpenAI, etc.) and not be locked into a single one (ChatGPT Desktop only has OpenAI models).
- Local first: AI is more useful with access to your data. But that doesn't count for much if you have to upload personal files to an untrusted server. Onit will always provide options for local processing. No personal data leaves your computer without approval.
- Customizability: Onit is your assistant. You should be able to configure it to your liking.
- Extensibility: Onit should allow the community to build and share extensions, making it more useful for everyone.

The features for v1 include:

- Local mode: chat with any model running locally on Ollama! No internet connection required.
- Multi-provider support: top models from OpenAI, Anthropic, xAI, and Google AI.
- File upload: add images or files for context (bonus: drag & drop works too!).
- History: revisit prior chats through the history view or with a simple up/down arrow shortcut.
- Customizable shortcut: you pick your hotkey to launch the chat window (Command+Zero by default).

Anticipated questions:

What data are you collecting? Onit v1 does not have a server. Local requests are handled locally, and remote requests are sent to model providers directly from the client. We collect crash reports through Firebase and a single "chat sent" event through PostHog analytics. We don't store your prompts or responses.

How does Onit support local mode? To use local mode, run Ollama. You can get Ollama here: https://ollama.com/ Onit gets a list of your local models through Ollama's API.

Which models do you support? For remote models, Onit v1 supports Anthropic, OpenAI, xAI, and Google AI. Default models include o1, o1-mini, GPT-4o, Claude 3.5 Sonnet, Claude 3.5 Haiku, Gemini 2.0, Grok 2, and Grok 2 Vision. For local mode, Onit supports any model you can run locally on Ollama!

What license is Onit under? We're releasing v1 under a Creative Commons Non-Commercial license. We believe the transparency of open source is critical. We also want to make sure individuals can customize Onit to their needs (please submit PRs!). However, we don't want people to sell the code as their own.

Where is the monetization? We're not monetizing v1. In the future we may add paid premium features. Local chat will, of course, always remain free. If you disagree with a monetized feature, you can always build from source!

Why not Linux or Windows? Gotta start somewhere! If the reception is positive, we'll work hard to add further support.

Who are we? We are Synth, Inc., a small team of developers in San Francisco building at the frontier of AI progress. Other projects include Checkbin (www.checkbin.dev) and Alias (deprecated - www.alias.inc).

We'd love to hear from you! Feel free to reach out at contact@getonit dot ai.

Future roadmap includes:

- Autocontext: automatically pull context from your computer, rather than having to repeatedly upload it.
- Local RAG: let users index and create context from their files without uploading anything.
- Local typeahead: i.e., Cursor Tab, but for everywhere.
- Additional support: Linux/Windows, Mistral/DeepSeek, etc.
- (Maybe) bundle Ollama to avoid a double download.
- And lots more!
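
The model-listing step mentioned above works against Ollama's local HTTP API: `GET http://localhost:11434/api/tags` returns a JSON body with a `models` array. A minimal sketch of how a client might parse that response (the sample body is illustrative, not captured from a real server):

```python
import json

def local_model_names(tags_response: str) -> list[str]:
    """Extract model names from the JSON body of Ollama's /api/tags endpoint."""
    body = json.loads(tags_response)
    return [m["name"] for m in body.get("models", [])]

# Illustrative response body in the /api/tags shape.
sample = '{"models": [{"name": "llama3.2:latest"}, {"name": "mistral:7b"}]}'
print(local_model_names(sample))  # ['llama3.2:latest', 'mistral:7b']
```

A real client would fetch the body over HTTP first; using `.get("models", [])` keeps the parse safe when no models are installed.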

Show HN: Trolling SMS spammers with Ollama

I've been working on a side project to generate responses to spam with various funny LLM personas, such as a millennial gym bro and a 19th century British gentleman. By request, I've made a write-up on my website, which has some humorous screenshots, and made the code available on GitHub for others to try out [0].

A brief outline of the system:

- An Android app listens for incoming SMS events and forwards them over MQTT to a server running Ollama, which generates responses.
- Conversations are whitelisted and manually assigned a persona. The LLM has access to the last N messages of the conversation for additional context.

[0]: https://github.com/evidlo/sms_llm

I'm aware that replying can encourage/allow the sender to send more spam. Hopefully reporting the numbers after the conversation is a reasonable compromise.
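
The context-building step outlined above (persona plus the last N messages) can be sketched as assembling a chat-style message list, in the role/content shape that Ollama's chat API accepts. The persona text, message format, and function names here are hypothetical, not taken from the sms_llm repository.

```python
def build_chat(persona: str, history: list[tuple[str, str]], n: int = 5) -> list[dict]:
    """Assemble a chat message list from a persona system prompt and
    the most recent n (sender, text) pairs of the SMS conversation."""
    messages = [{"role": "system", "content": persona}]
    for sender, text in history[-n:]:
        # Our own prior replies become "assistant" turns; the spammer is "user".
        role = "assistant" if sender == "me" else "user"
        messages.append({"role": role, "content": text})
    return messages

chat = build_chat(
    "You are a 19th century British gentleman replying to SMS spam.",
    [("spammer", "You won a prize! Click here."), ("me", "Good heavens!")],
)
print(len(chat))  # 3: the system prompt plus two conversation turns
```

Capping history at the last N turns keeps the prompt small enough for a local model while preserving the back-and-forth the persona needs.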

Show HN: Lightpanda, an open-source headless browser in Zig

We’re Francis and Pierre, and we're excited to share Lightpanda (https://lightpanda.io), an open-source headless browser we’ve been building for the past two years from scratch in Zig (not dependent on Chromium or Firefox). It’s a faster and lighter alternative for headless operations without any graphical rendering.

Why start over? We’ve worked a lot with headless Chrome at our previous company, scraping millions of web pages per day. While it’s powerful, it’s also heavy on CPU and memory usage. For scraping at scale, building AI agents, or automating websites, the overheads are high. So we asked ourselves: what if we built a browser that only did what’s absolutely necessary for headless automation?

Our browser is made of the following main components:

- an HTTP loader
- an HTML parser and DOM tree (based on the Netsurf libs)
- a JavaScript runtime (V8)
- partial Web APIs support (currently DOM and XHR/Fetch)
- and a CDP (Chrome DevTools Protocol) server to allow plug-and-play connection with existing scripts (Puppeteer, Playwright, etc.).

The main idea is to avoid any graphical rendering and just work with data manipulation, which in our experience covers a wide range of headless use cases (excluding some, like screenshot generation).

In our current test case, Lightpanda is roughly 10x faster than headless Chrome while using 10x less memory.

It's a work in progress; there are hundreds of Web APIs, and for now we support just some of them. It's a beta version, so expect most websites to fail or crash. The plan is to increase coverage over time.

We chose Zig for its seamless integration with C libs and its comptime feature, which allows us to generate bidirectional native-to-JS APIs (see our zig-js-runtime lib: https://github.com/lightpanda-io/zig-js-runtime). And of course for its performance :)

As a company, our business model is based on a managed cloud, browser as a service. Currently, this is primarily powered by Chrome, but as we integrate more Web APIs it will gradually transition to Lightpanda.

We would love to hear your thoughts and feedback. Where should we focus our efforts next to support your use cases?
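
The plug-and-play CDP compatibility comes down to speaking the same wire format as Chrome: clients like Puppeteer and Playwright drive the browser by sending JSON command frames (an `id`, a `method`, and `params`) over a WebSocket. A minimal sketch of that framing; `Page.navigate` is a standard CDP method, while the id counter and URL here are just illustrative:

```python
import itertools
import json

# Monotonically increasing command ids, as CDP clients use to match replies.
_ids = itertools.count(1)

def cdp_command(method: str, **params) -> str:
    """Serialize one Chrome DevTools Protocol command frame as JSON."""
    return json.dumps({"id": next(_ids), "method": method, "params": params})

frame = cdp_command("Page.navigate", url="https://lightpanda.io")
print(frame)
```

A CDP server like Lightpanda's then answers each frame with a response carrying the same `id`, which is what lets unmodified Puppeteer/Playwright scripts connect.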

Show HN: Cs16.css – CSS library based on Counter Strike 1.6 UI

Show HN: I built an active community of trans people online

A year ago I surveyed the internet and noticed there was only one popular space for trans and gender-non-conforming people to meet: Lex.

Lex is not well liked by its users. Its software feels heavy, and it is full of cash grabs and anti-patterns. It was recently acquired and is sure to only become more hostile to its users as it turns toward profit generation.

With this in mind I built t4t, an alternative designed not just for queer people generally, but for trans people specifically.

It is an extremely lightweight service. I built it with my most ideal stack: Flutter, Svelte, Supabase, PostHog.

It has grown in the last year to about 4,000 monthly active users. I think it could grow way beyond that this year.

Show HN: Stratoshark, a sibling application to Wireshark

Hi all, I'm excited to announce Stratoshark, a sibling application to Wireshark that lets you capture and analyze process activity (system calls) and log messages in the same way that Wireshark lets you capture and analyze network packets. If you would like to try it out, you can download installers for Windows and macOS, and source code for all platforms, at https://stratoshark.org.

AMA: I'm the goofball whose name is at the top of the "About" box in both applications, and I'll be happy to answer any questions you might have.

Show HN: I made an open-source laptop from scratch

Hello! I'm Byran. I spent the past ~6 months engineering a laptop from scratch. It's fully open-source on GitHub at: https://github.com/Hello9999901/laptop

Show HN: I made an app that uses NFC as a physical switch to block distractions

Hi HN!

Super proud to showcase Foqos! I wanted a way to physically block apps on my phone. I've always had a bunch of NFC tags, so I combined the two over the holiday break, and Foqos was born. You can create profiles, write them to NFC tags, and track your weekly focus.

It's completely open source and will always be free! There is an affiliate link in the app for NFC tags, and donations are completely optional.

Link here: https://apps.apple.com/ca/app/foqos/id6736793117

Show HN: Interactive systemd – a better way to work with systemd units

I created a TUI for systemd/systemctl called isd (interactive systemd).

It provides fuzzy search for units, auto-refreshing previews, smart sudo handling, and a fully customizable, keyboard-focused interface for power users and newcomers alike.

It is a more powerful (but heavier) version of sysz, which was the inspiration for the project.

This should be a huge timesaver for anybody who frequently interacts with or edits systemd units/services. And if not, please let me know why! :)
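
As a toy illustration of the fuzzy unit search mentioned above (not isd's actual matcher, which comes from its TUI toolkit), a query can be matched as an in-order subsequence of each unit name:

```python
def fuzzy_match(query: str, name: str) -> bool:
    """True if all characters of `query` appear in `name`, in order."""
    it = iter(name)
    # `ch in it` advances the iterator, so matches must occur left to right.
    return all(ch in it for ch in query)

units = ["nginx.service", "network-online.target", "systemd-journald.service"]
print([u for u in units if fuzzy_match("ngx", u)])  # ['nginx.service']
```

A real TUI would rank matches (e.g. by gap length) rather than just filter, but subsequence matching is the core trick that lets "ngx" find nginx.service.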
