The best Show HN stories on Hacker News from the past day

Latest posts:

Show HN: A new native app for 20 year old OS X

A few of us here are probably familiar with the original Xbox modding scene and the iconic xbins FTP server. Recently, I came across an amazing tool called Pandora by Team Resurgent [0], which got me thinking about how incredible something like this would have been 20 years ago. Just to clarify, I had no involvement in creating Pandora; I'm just inspired by their work.

For those who aren't familiar, getting access to xbins involves a rather dated process. You need to connect to a channel on an EFnet IRC server, message a bot for temporary credentials, then plug those credentials into your FTP client to access xbins. Pandora (and my app) simplifies this entire workflow into a single click.

Inspired by Pandora, I decided to build my own take on what this dream tool might have looked like back in the day. I wrote a native Mac app on original hardware, an Intel iMac (20-inch, 2007), running a 20-year-old operating system, Mac OS X 10.4 Tiger.

This was my first foray into native Mac app development, though I've done some iOS development in the past. The result is Uppercut [1], and the source is available on GitHub [2].

For the development process, I used Claude to help with a lot of the coding, especially since I was constrained to Xcode 2.5 and the pre-"Objective-C 2.0" features available at the time. I had to be very specific in prompting Claude to avoid newer features that didn't exist back then. Since the majority of Objective-C code out there comes from the era of iOS development (which relied heavily on Objective-C 2.0 until the arrival of Swift), this was a unique and challenging exercise in retro development.

[0] https://github.com/Team-Resurgent/Pandora
[1] https://uppercut.chadbibler.com
[2] https://github.com/chadkeck/Uppercut
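For context on what that single click replaces, here is a rough Python sketch of the manual xbins dance: join EFnet, ask the bot for temporary credentials, then log in over FTP. The channel name, bot nick, and credential format below are illustrative placeholders, not details from Uppercut's actual Objective-C source.

```python
import re
import socket
from ftplib import FTP

IRC_HOST, IRC_PORT = "irc.efnet.org", 6667
CHANNEL, BOT = "#xbins", "xbins-bot"  # placeholder names, not the real ones

def get_credentials() -> tuple[str, str, str]:
    """Connect to EFnet and ask the bot for temporary FTP credentials."""
    sock = socket.create_connection((IRC_HOST, IRC_PORT))
    f = sock.makefile("rw", encoding="utf-8", newline="\r\n")
    f.write("NICK uppercut_user\nUSER uppercut 0 * :uppercut\n")
    f.flush()
    for line in f:
        if line.startswith("PING"):
            # Answer keepalives so the server doesn't drop us
            f.write("PONG" + line[4:].rstrip() + "\n")
            f.flush()
        elif " 001 " in line:
            # Welcome message received: now it's safe to join and message the bot
            f.write(f"JOIN {CHANNEL}\nPRIVMSG {BOT} :!list\n")
            f.flush()
        else:
            # Assumed reply format: "host user pass" in a private message
            m = re.search(rf":{BOT}!\S+ PRIVMSG \S+ :(\S+) (\S+) (\S+)", line)
            if m:
                return m.group(1), m.group(2), m.group(3)
    raise RuntimeError("connection closed before credentials arrived")

host, user, password = get_credentials()
with FTP(host) as ftp:
    ftp.login(user, password)
    print(ftp.nlst())  # list the top-level xbins directories
```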

Show HN: 3D printing giant things with a Python jigsaw generator

Show HN: DeepSeek My User Agent

Show HN: Habby – A straightforward bullet journal with habit tracking

Heya HN,

I started journaling about a year ago and recently landed on my perfect setup: a simple sentence to remember each day, combined with a flexible habit tracker. Over the holidays I turned this approach into a free Expo app that I want to share.

Basically, you get:

- One sentence per day, enough to remember each day
- Flexible habit tracking (still working on finding the best UX here)
- No backend, no tracking, all on device (just Sentry for errors)
- Clean, minimal interface
- Export your data
- Stats to track your progress

I built this primarily for my own use, but I'd love to hear what the HN community thinks about it. Android is still in closed testing, but will be available soon.

Thanks!

Show HN: I built a DIY plane spotting system at home

Show HN: Using YOLO to Detect Office Chairs in 40M Hotel Photos

I used the YOLO object detection library from Ultralytics to scan over 40 million hotel photos and identify images with office chairs. This helped me create a map showing hotels suitable for remote work.

Map: https://www.tripoffice.com/maps

YOLO: https://www.ultralytics.com/yolo

The whole process was done on a home Mac without the use of any LLMs. It's based on traditional object detection technology.
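If you're curious what a pass like this looks like, the Ultralytics API makes it a short script. A minimal sketch under my own assumptions (the directory layout and confidence threshold are made up, and note that the stock COCO-trained weights only know a generic "chair" class, so picking out office chairs specifically would likely need a fine-tuned model or a second classification step):

```python
# pip install ultralytics
from pathlib import Path
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained COCO model, downloaded on first use
CHAIR_CLASS_ID = 56         # "chair" in the 80-class COCO label set

def photos_with_chairs(photo_dir: str, min_conf: float = 0.5) -> list[Path]:
    """Return paths of images containing at least one detected chair."""
    hits = []
    for path in sorted(Path(photo_dir).glob("*.jpg")):
        for result in model(str(path), verbose=False):
            classes = result.boxes.cls.tolist()
            confs = result.boxes.conf.tolist()
            if any(int(c) == CHAIR_CLASS_ID and conf >= min_conf
                   for c, conf in zip(classes, confs)):
                hits.append(path)
                break
    return hits

if __name__ == "__main__":
    for p in photos_with_chairs("hotel_photos"):
        print(p)
```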

Show HN: Onit – Source-available ChatGPT Desktop with local mode, Claude, Gemini

Hey Hacker News, it's Tim Lenardo and I'm launching v1 of Onit today!

Onit is ChatGPT Desktop, but with local mode and support for other model providers (Anthropic, GoogleAI, etc.). It's also like Cursor Chat, but everywhere on your computer, not just in your IDE!

Onit is source-available! You can download a pre-built version from our website: www.getonit.ai

Or build directly from the source code: https://github.com/synth-inc/onit

We built this because we believe:

- Universal access: AI assistants should be accessible from anywhere on my computer, not just in the browser or in specific apps.
- Provider freedom: Consumers should have the choice between providers (Anthropic, OpenAI, etc.), not be locked into a single one (ChatGPT Desktop only has OpenAI models).
- Local first: AI is more useful with access to your data. But that doesn't count for much if you have to upload personal files to an untrusted server. Onit will always provide options for local processing. No personal data leaves your computer without approval.
- Customizability: Onit is your assistant. You should be able to configure it to your liking.
- Extensibility: Onit should allow the community to build and share extensions, making it more useful for everyone.

The features for V1 include:

- Local mode: chat with any model running locally on Ollama! No internet connection required.
- Multi-provider support: top models for OpenAI, Anthropic, xAI, and GoogleAI.
- File upload: add images or files for context (bonus: drag & drop works too!).
- History: revisit prior chats through the history view or with a simple up/down arrow shortcut.
- Customizable shortcut: you pick your hotkey to launch the chat window (Command+Zero by default).

Anticipated questions:

What data are you collecting? Onit V1 does not have a server. Local requests are handled locally, and remote requests are sent to model providers directly from the client. We collect crash reports through Firebase and a single "chat sent" event through PostHog analytics. We don't store your prompts or responses.

How does Onit support local mode? To use local mode, run Ollama. You can get Ollama here: https://ollama.com/ Onit gets a list of your local models through Ollama's API.

Which models do you support? For remote models, Onit V1 supports Anthropic, OpenAI, xAI, and GoogleAI. Default models include o1, o1-mini, GPT-4o, Claude 3.5 Sonnet, Claude 3.5 Haiku, Gemini 2.0, Grok 2, and Grok 2 Vision. For local mode, Onit supports any models you can run locally on Ollama!

What license is Onit under? We're releasing V1 under a Creative Commons Non-Commercial license. We believe the transparency of open source is critical. We also want to make sure individuals can customize Onit to their needs (please submit PRs!). However, we don't want people to sell the code as their own.

Where is the monetization? We're not monetizing V1. In the future we may add paid premium features. Local chat will, of course, always remain free. If you disagree with a monetized feature, you can always build from source!

Why not Linux or Windows? Gotta start somewhere! If the reception is positive, we'll work hard to add further support.

Who are we? We are Synth, Inc., a small team of developers in San Francisco building at the frontier of AI progress. Other projects include Checkbin (www.checkbin.dev) and Alias (deprecated - www.alias.inc).

We'd love to hear from you! Feel free to reach out at contact@getonit dot ai.

Future roadmap includes:

- Autocontext: automatically pull context from your computer, rather than having to repeatedly upload.
- Local RAG: let users index and create context from their files without uploading anything.
- Local typeahead: i.e., Cursor Tab, but everywhere.
- Additional support: Linux/Windows, Mistral/DeepSeek, etc.
- (Maybe) Bundle Ollama to avoid a double download.
- And lots more!
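For anyone wondering what "gets a list of your local models through Ollama's API" means concretely: Ollama serves a local HTTP API on port 11434, and listing installed models is a single GET to /api/tags. Here is a small Python illustration (Onit itself is a native Mac app, so this is my own sketch of the endpoints rather than Onit's actual code):

```python
# pip install requests
import requests

OLLAMA = "http://localhost:11434"

def list_local_models() -> list[str]:
    """Ask a running Ollama instance which models are installed (GET /api/tags)."""
    resp = requests.get(f"{OLLAMA}/api/tags", timeout=5)
    resp.raise_for_status()
    return [m["name"] for m in resp.json()["models"]]

def generate(model: str, prompt: str) -> str:
    """Send a single non-streaming generation request (POST /api/generate)."""
    resp = requests.post(
        f"{OLLAMA}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    models = list_local_models()
    print("Installed models:", models)
    if models:
        print(generate(models[0], "Say hello in one sentence."))
```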

Show HN: Trolling SMS spammers with Ollama

I've been working on a side project to generate responses to spam with various funny LLM personas, such as a millenial gym bro and a 19th century British gentleman. By request, I've made a write-up on my website which has some humorous screenshots and made the code available on Github for others to try out [0].<p>A brief outline of the system:<p>- Android app listens for incoming SMS events and forwards them over MQTT to a server running Ollama which generates responses - Conversations are whitelisted and manually assigned a persona. The LLM has access to the last N messages of the conversation for additional context.<p>[0]: <a href="https://github.com/evidlo/sms_llm">https://github.com/evidlo/sms_llm</a><p>I'm aware that replying can encourage/allow the sender to send more spam. Hopefully reporting the numbers after the conversation is a reasonable compromise.
