The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Habby – A straightforward bullet journal with habit tracking
Heya HN,

I started journaling about a year ago and recently landed on my perfect setup: a simple sentence to remember each day, combined with a flexible habit tracker. Over the holidays I turned this approach into a free Expo app that I want to share.

Basically, you get:

- One sentence per day, just enough to remember it by
- Flexible habit tracking (still working on finding the best UX here)
- No backend, no tracking, all on device (just Sentry for error reporting)
- Clean, minimal interface
- Export your data
- Stats to track your progress

I built this primarily for my own use, but I'd love to hear what the HN community thinks about it. Android is still in closed testing, but will be available soon.

Thanks!
Show HN: I built a DIY plane spotting system at home
Show HN: Using YOLO to Detect Office Chairs in 40M Hotel Photos
I used the YOLO object detection library from Ultralytics to scan over 40 million hotel photos and identify images with office chairs. This helped me create a map showing hotels suitable for remote work.

Map: https://www.tripoffice.com/maps

YOLO: https://www.ultralytics.com/yolo

The whole process was done on a home Mac without the use of any LLMs. It's based on traditional object detection technology.
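For illustration, here is a minimal sketch of such a scan using Ultralytics' Python API. The pretrained COCO weights only include a generic "chair" class, so detecting office chairs specifically would presumably need a custom-trained model; the paths and confidence threshold below are illustrative, not taken from the actual pipeline.

    # Sketch: flag photos containing chairs with a pretrained YOLO model.
    # Requires `pip install ultralytics`; paths and threshold are illustrative.
    from pathlib import Path

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # small pretrained COCO model; downloads on first use
    chair_id = next(k for k, v in model.names.items() if v == "chair")

    def photos_with_chairs(photo_dir, min_conf=0.5):
        hits = []
        for photo in sorted(Path(photo_dir).glob("*.jpg")):
            result = model(str(photo), verbose=False)[0]
            if any(int(b.cls) == chair_id and float(b.conf) >= min_conf
                   for b in result.boxes):
                hits.append(photo)
        return hits

    print(photos_with_chairs("hotel_photos"))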
Show HN: Onit – Source-available ChatGPT Desktop with local mode, Claude, Gemini
Hey Hacker News, it's Tim Lenardo and I'm launching v1 of Onit today!

Onit is ChatGPT Desktop, but with local mode and support for other model providers (Anthropic, Google AI, etc). It's also like Cursor Chat, but everywhere on your computer, not just in your IDE!

Onit is source-available! You can download a pre-built version from our website:

www.getonit.ai

Or build directly from the source code:

https://github.com/synth-inc/onit

We built this because we believe:
Universal Access: AI assistants should be accessible from anywhere on your computer, not just in the browser or in specific apps.
Provider Freedom: Consumers should have a choice between providers (Anthropic, OpenAI, etc.), not be locked into a single one (ChatGPT Desktop only has OpenAI models).
Local First: AI is more useful with access to your data. But that doesn't count for much if you have to upload personal files to an untrusted server. Onit will always provide options for local processing. No personal data leaves your computer without approval.
Customizability: Onit is your assistant. You should be able to configure it to your liking.
Extensibility: Onit should allow the community to build and share extensions, making it more useful for everyone.

The features for V1 include:
Local mode - chat with any model running locally on Ollama! No internet connection required.
Multi-provider support - top models from OpenAI, Anthropic, xAI, and Google AI.
File upload - add images or files for context (bonus: drag & drop works too!).
History - revisit prior chats through the history view or with a simple up/down arrow shortcut.
Customizable shortcut - you pick your hotkey to launch the chat window (Command+Zero by default).

Anticipated questions:

What data are you collecting?
Onit V1 does not have a server. Local requests are handled locally, and remote requests are sent to model providers directly from the client. We collect crash reports through Firebase and a single "chat sent" event through PostHog analytics. We don't store your prompts or responses.

How does Onit support local mode?
To use local mode, run Ollama. You can get Ollama here: https://ollama.com/
Onit gets a list of your local models through Ollama's API.
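As a minimal sketch of that lookup (Python for illustration; this is not Onit's actual code), Ollama's documented REST API exposes the installed models at /api/tags on its default local port:

    # List locally installed Ollama models via the documented REST API,
    # the way a client could populate a local-mode model picker.
    import requests

    OLLAMA_URL = "http://localhost:11434"  # Ollama's default local address

    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
    resp.raise_for_status()
    for model in resp.json().get("models", []):
        print(model["name"])  # e.g. "llama3.2:latest"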
Which models do you support?
For remote models, Onit V1 supports Anthropic, OpenAI, xAI, and Google AI. Default models include o1, o1-mini, GPT-4o, Claude 3.5 Sonnet, Claude 3.5 Haiku, Gemini 2.0, Grok 2, and Grok 2 Vision.
For local mode, Onit supports any model you can run locally on Ollama!

What license is Onit under?
We're releasing V1 under a Creative Commons Non-Commercial license. We believe the transparency of open source is critical. We also want to make sure individuals can customize Onit to their needs (please submit PRs!). However, we don't want people to sell the code as their own.

Where is the monetization?
We're not monetizing V1. In the future we may add paid premium features. Local chat will, of course, always remain free. If you disagree with a monetized feature, you can always build from source!

Why not Linux or Windows?
Gotta start somewhere! If the reception is positive, we'll work hard to add further support.

Who are we?
We are Synth, Inc., a small team of developers in San Francisco building at the frontier of AI progress. Other projects include Checkbin (www.checkbin.dev) and Alias (deprecated - www.alias.inc).

We'd love to hear from you! Feel free to reach out at contact@getonit dot ai.

Future roadmap includes:
Autocontext - automatically pull context from your computer, rather than having to upload it repeatedly.
Local RAG - let users index and create context from their files without uploading anything.
Local typeahead - like Cursor Tab, but everywhere.
Additional support - Linux/Windows, Mistral/DeepSeek, etc.
(Maybe) Bundle Ollama to avoid a double download.
And lots more!
Show HN: Trolling SMS spammers with Ollama
I've been working on a side project to generate responses to spam with various funny LLM personas, such as a millennial gym bro and a 19th century British gentleman. By request, I've made a write-up on my website, which has some humorous screenshots, and made the code available on GitHub for others to try out [0].

A brief outline of the system:

- An Android app listens for incoming SMS events and forwards them over MQTT to a server running Ollama, which generates responses (see the sketch at the end of this post).
- Conversations are whitelisted and manually assigned a persona. The LLM has access to the last N messages of the conversation for additional context.

[0]: https://github.com/evidlo/sms_llm

I'm aware that replying can encourage/allow the sender to send more spam. Hopefully reporting the numbers after the conversation is a reasonable compromise.
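A minimal sketch of what the server loop might look like, assuming paho-mqtt 2.x and Ollama's documented /api/chat endpoint. The broker address, topic names, payload shape, and persona prompt are illustrative, not taken from the project.

    # Receive forwarded SMS over MQTT, generate a persona reply with a local
    # Ollama model, and publish it back for the phone to send.
    import json

    import paho.mqtt.client as mqtt
    import requests

    OLLAMA_URL = "http://localhost:11434/api/chat"
    PERSONA = "You are a 19th century British gentleman replying to text messages."

    def on_message(client, userdata, msg):
        sms = json.loads(msg.payload)  # assumed shape: {"from": ..., "body": ...}
        resp = requests.post(OLLAMA_URL, json={
            "model": "llama3.2",
            "stream": False,
            "messages": [
                {"role": "system", "content": PERSONA},
                {"role": "user", "content": sms["body"]},
            ],
        }, timeout=120)
        reply = resp.json()["message"]["content"]
        client.publish("sms/outgoing", json.dumps({"to": sms["from"], "body": reply}))

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.on_message = on_message
    client.connect("localhost", 1883)
    client.subscribe("sms/incoming")
    client.loop_forever()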
Show HN: Lightpanda, an open-source headless browser in Zig
We're Francis and Pierre, and we're excited to share Lightpanda (https://lightpanda.io), an open-source headless browser we've been building for the past 2 years from scratch in Zig (not dependent on Chromium or Firefox). It's a faster and lighter alternative for headless operations without any graphical rendering.

Why start over? We've worked a lot with headless Chrome at our previous company, scraping millions of web pages per day. While it's powerful, it's also heavy on CPU and memory usage. For scraping at scale, building AI agents, or automating websites, the overheads are high. So we asked ourselves: what if we built a browser that only did what's absolutely necessary for headless automation?

Our browser is made of the following main components:

- an HTTP loader
- an HTML parser and DOM tree (based on the NetSurf libs)
- a JavaScript runtime (V8)
- partial Web APIs support (currently DOM and XHR/Fetch)
- and a CDP (Chrome DevTools Protocol) server to allow plug & play connection with existing scripts (Puppeteer, Playwright, etc.); see the sketch at the end of this post.

The main idea is to avoid any graphical rendering and just work with data manipulation, which in our experience covers a wide range of headless use cases (excluding some, like screenshot generation).

In our current test case Lightpanda is roughly 10x faster than headless Chrome while using 10x less memory.

It's a work in progress; there are hundreds of Web APIs, and for now we just support some of them. It's a beta version, so expect most websites to fail or crash. The plan is to increase coverage over time.

We chose Zig for its seamless integration with C libs and its comptime feature, which allows us to generate bi-directional native-to-JS APIs (see our zig-js-runtime lib: https://github.com/lightpanda-io/zig-js-runtime). And of course for its performance :)

As a company, our business model is based on a managed cloud, browser as a service. Currently this is primarily powered by Chrome, but as we integrate more Web APIs it will gradually transition to Lightpanda.

We would love to hear your thoughts and feedback. Where should we focus our efforts next to support your use cases?
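Since Lightpanda speaks CDP, an existing Playwright script can attach to it instead of launching Chromium. Here is a minimal sketch using Playwright's Python bindings; the ws:// address is an assumption, so check Lightpanda's docs for the actual host and port.

    # Attach an existing Playwright script to a running Lightpanda instance
    # over CDP instead of launching a bundled Chromium.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp("ws://127.0.0.1:9222")
        page = browser.new_page()
        page.goto("https://example.com")
        print(page.title())
        browser.close()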
Show HN: Cs16.css – CSS library based on Counter Strike 1.6 UI
Show HN: BrowserAI – Run LLMs directly in browser using WebGPU (open source)
Check out this impressive project that enables running LLMs entirely in the browser using WebGPU.

Key features:
- Zero token costs, no cloud infrastructure required
- Complete data privacy through local processing
- Simple 3-line code integration
- Built on MLC and Transformers.js

The benchmarks show smaller models can effectively handle many common tasks.

Currently the project roadmap includes:
- No-code AI pipeline builder
- Browser-based RAG for document chat
- Analytics/logging
- Model fine-tuning interface
Show HN: I organized Bluesky feeds by categories and growth rankings
I’ve curated and organized Bluesky feeds into 50+ categories, now with growth rankings for the past day and week! Check it out and share your thoughts!