The best Show HN stories from Hacker News from the past week
Latest posts:
Show HN: I Remade the Fake Google Gemini Demo, Except Using GPT-4 and It's Real
Show HN: Open source alternative to ChatGPT and ChatPDF-like AI tools
Hey everyone,<p>We have been building SecureAI Tools -- an open-source application layer for ChatGPT and ChatPDF-like AI tools.<p>It works with locally running LLMs as well as with OpenAI-compatible APIs. For local LLMs, it supports Ollama, which works with all gguf/ggml models.<p>Currently, it has two features: Chat-with-LLM and Chat-with-PDFs. It is optimized for self-hosting use cases and comes with basic user management features.<p>Here are some quick demos:<p><pre><code> * Chat with documents using OpenAI's GPT3.5 model: https://www.youtube.com/watch?v=Br2D3G9O47s
* Chat with documents using a locally running Mistral model (M2 MacBook): https://www.youtube.com/watch?v=UvRHL6f_w74
</code></pre>
Hope you all like it :)
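Since the post says the tools speak OpenAI-compatible APIs and support Ollama for local models, a client can talk to either backend with the same request shape. The sketch below builds a standard chat-completions payload; the endpoint URL and model name are assumptions for illustration (Ollama commonly serves an OpenAI-compatible API at this address), not details taken from the post.

```python
import json
import urllib.request

# Assumed local endpoint: Ollama's OpenAI-compatible API is commonly served
# here; adjust host, port, and model name to your own setup.
BASE_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, user_message: str) -> dict:
    """Assemble a standard OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

def chat(model: str, user_message: str) -> str:
    """POST the payload and return the first choice's message content."""
    payload = json.dumps(build_chat_request(model, user_message)).encode()
    req = urllib.request.Request(
        BASE_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: choices[0].message.content.
    return body["choices"][0]["message"]["content"]
```

The same code works against OpenAI's hosted API by swapping the base URL and adding an Authorization header, which is what "OpenAI-compatible" buys you.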
Show HN: Open-source alternatives to tools you pay for
Hey makers,
I spent the whole night compiling this list from:
> winners of Product Hunt
> best dev tools on DevHunt
> recently active on GitHub
> most internet backlinks
> most mentions as "alternative to .."<p>Let me know if I should add anything there.
Show HN: CopilotKit – Build in-app AI chatbots and AI-powered textareas
Show HN: Dropbase – Build internal web apps with just Python
Hey HN, I’m Jimmy, co-founder of Dropbase (<a href="https://www.dropbase.io">https://www.dropbase.io</a>). We are an internal tools builder for Python developers. All you have to do is import any Python scripts/libraries, declare UI components, and layer app permissions so you can share them with others.<p>We’re a middle ground between Airplane and Retool—simpler UI creation than Airplane, more code-centered than Retool. UI building is declarative and you can bind Python scripts/functions to UI components. You can write Python scripts/functions using our App Studio with support from a Python Language Server Protocol (LSP) for linting. Since the self-hosted worker directly references .py or .sql files in the filesystem, you can even write them on VSCode directly or import any other Python script or library.<p>Our app layout is highly opinionated to speed up app building. Instead of an open canvas for UI building, we just give you a main table view and a widget sidebar. This approach significantly reduces app-building time while still covering what most tools need: see some data and take actions based on it. It’s not flexible enough to do absolutely anything, but that’s the point—there’s a tradeoff between flexibility and speed. Dropbase gives you most of what you need, plus speed!<p>A neat feature we are experimenting with to build admin panels fast is “Smart Tables”. We convert any SQL SELECT statement (even across multiple joins and filters) into an inline editable table, like spreadsheets, without any additional code.<p>We have a hybrid hosting model that combines a self-hosted client and a worker server, with a backend API for app/component definitions hosted by us to simplify pushing feature updates. The worker server sits in your machines so your sensitive data doesn’t leave your infra.<p>We’re Python-centric for now, but plan to add support for Rust, Go, and others later.<p>We made a few demo videos building common tools:
- Customer approval tool: <a href="https://youtu.be/A1MIIRNkv3Q" rel="nofollow noreferrer">https://youtu.be/A1MIIRNkv3Q</a>
- Data editing tool (with Smart Table): <a href="https://youtu.be/R1cHO9lMRXo" rel="nofollow noreferrer">https://youtu.be/R1cHO9lMRXo</a><p>To try Dropbase, create an account at <a href="https://app.dropbase.io">https://app.dropbase.io</a> and generate a token, then follow these instructions for local setup: <a href="https://docs.dropbase.io/setup/developer">https://docs.dropbase.io/setup/developer</a>.<p>We are very early so we're really excited to get your feedback, especially on our approach to tools building with Python! My co-founder Ayazhan and some of our teammates will be around to answer questions.
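The "Smart Tables" idea above — turning a SQL SELECT into an inline-editable table — boils down to routing each edited cell back to an UPDATE on the base table, keyed by that row's primary key. The sketch below is a minimal toy of that concept using sqlite3, not Dropbase's implementation; a real version would also have to resolve which base table and key each column of an arbitrary SELECT (joins, filters) maps to, which is assumed known here.

```python
import sqlite3

def apply_cell_edit(conn, table, pk_col, pk_value, column, new_value):
    """Translate one edited cell into an UPDATE on the base table.

    Table and column names are assumed to come from a trusted schema
    mapping (not user input), so they are interpolated directly.
    """
    conn.execute(
        f"UPDATE {table} SET {column} = ? WHERE {pk_col} = ?",
        (new_value, pk_value),
    )
    conn.commit()

# Demo on an in-memory database standing in for the app's data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'pending')")

# The user edits the 'status' cell of the row with id=1 in the table view:
apply_cell_edit(conn, "customers", "id", 1, "status", "approved")
```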
Show HN: How did your computer reach my server?
Show HN: Beeper Mini – iMessage client for Android
Hi HN! I’m proud to share that we have built a real 3rd party iMessage client for Android. We did it by reverse engineering the iMessage protocol and encryption system. It's available to download today (no waitlist): <a href="https://play.google.com/store/apps/details?id=com.beeper.ima">https://play.google.com/store/apps/details?id=com.beeper.ima</a> and there's a technical writeup here: <a href="https://blog.beeper.com/p/how-beeper-mini-works">https://blog.beeper.com/p/how-beeper-mini-works</a>.<p>Unlike every other attempt to build an iMessage app for Android (including our first gen app), Beeper Mini does not use a Mac server relay in the cloud. The app connects directly to Apple servers to send and receive end-to-end encrypted messages. Encryption keys never leave your device. No Apple ID is required. Beeper does not have access to your Apple account.<p>With Beeper Mini, your Android phone number is registered on iMessage. You show up as a ‘blue bubble’ when iPhone friends text you, and can join real iMessage group chats. All chat features like typing status, read receipts, full resolution images/video, emoji reactions, voice notes, editing/unsending, stickers etc are supported.<p>This is all unprecedented, so I imagine you may have a lot of questions. We’ve written a detailed technical blog post about how Beeper Mini works: <a href="https://blog.beeper.com/p/how-beeper-mini-works">https://blog.beeper.com/p/how-beeper-mini-works</a>. A team member has published an open source Python iMessage protocol PoC on Github: <a href="https://github.com/JJTech0130/pypush">https://github.com/JJTech0130/pypush</a>. You can try it yourself on any Mac/Windows/Linux computer and see how iMessage works. My cofounder and I are also here to answer questions in the comments.<p>Our long term vision is to build a universal chat app (<a href="https://blog.beeper.com/p/were-building-the-best-chat-app-on">https://blog.beeper.com/p/were-building-the-best-chat-app-on</a>). 
Over the next few months, we will be adding support for SMS/RCS, WhatsApp, Signal and 12 other chat networks into Beeper Mini. At that point, we’ll drop the `Mini` postfix. We’re also rebuilding our Beeper Desktop and iOS apps to support our new ‘client-side bridge’ architecture that preserves full end-to-end encryption. And we’re renaming our first-gen apps to ‘Beeper Cloud’ to more clearly differentiate them from Beeper Mini.<p>Side note: many people ask ‘what do you think Apple is going to do about this?’ To be honest, I am shocked that everyone is so shocked by the sheer existence of a 3rd party iMessage client. The internet has always had 3rd party clients! It’s almost like people have forgotten that iChat (the app that iMessage grew out of) was itself a multi-protocol chat app! It supported AIM, Jabber and Google Talk. Here’s a blast from the past: <a href="https://i.imgur.com/k6rmOgq.png" rel="nofollow noreferrer">https://i.imgur.com/k6rmOgq.png</a>.
Show HN: Audio plugin for circuit-bent MP3 compression sounds
I made MAIM, an open-source audio plugin that uses real MP3 encoders to distort the sound. I've also added knobs that let you "circuit bend" the encoders, changing parameters that would normally be inaccessible to the user to get strange glitchy sounds.<p>The plugin lets you switch between two MP3 encoders, since under the MP3 standard, the specifics of what to lose in MP3 lossy compression are left up to the encoder. The encoders are LAME, the gold standard for open-source MP3 encoders, and BladeEnc, an old open-source MP3 encoder that has a really bubbly sound and was fun to work with.<p>I'd love any feedback, and I'll be around to answer questions!
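The point that "what to lose is left up to the encoder" is the whole creative surface here: a lossy codec keeps the perceptually strong parts of the spectrum and discards the rest, and bending that choice changes the sound. The toy below illustrates the principle with a plain DFT and a crude keep-the-loudest-bins rule; it is a conceptual stand-in, not MP3 and not MAIM's code.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (fine for a 64-sample toy)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def lossy_roundtrip(x, keep_bins=2):
    """Zero out all but the strongest spectral bins, then resynthesize.

    The choice of *which* bins to keep is the encoder's decision -- the
    knob a circuit-bent encoder lets you turn.
    """
    X = dft(x)
    threshold = sorted((abs(c) for c in X), reverse=True)[keep_bins - 1]
    return idft([c if abs(c) >= threshold else 0 for c in X])

n = 64
# A loud 4-cycle tone plus a quiet 17-cycle detail component.
signal = [math.sin(2 * math.pi * 4 * t / n)
          + 0.05 * math.sin(2 * math.pi * 17 * t / n)
          for t in range(n)]
decoded = lossy_roundtrip(signal)
# The quiet component is what this "encoder" chose to lose.
error = max(abs(a - b) for a, b in zip(signal, decoded))
```

Keeping only the two strongest bins reconstructs the loud tone almost exactly while the quiet detail vanishes, which is the distortion a more aggressive (or bent) encoder setting exaggerates.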
Show HN: Onsites.fyi - Curated Big Tech Interview Experiences
Hi HN!<p>While Glassdoor and other employment discussion boards offer valuable interview experience data, navigating it to prepare for interviews can be difficult.<p>Onsites.fyi curates interview experiences and insights from Big Tech hiring across various positions and levels.<p>Our collection currently includes interview experiences from top-tier companies like Apple, Google, Meta, Microsoft, and Amazon.<p>Reviewing others' real interview experiences (rounds, questions, format) can be an invaluable preparation tool.<p>Try it out and please share any feedback!
Show HN: Simulate 3D plants in the browser
Report Phone Spam – Shut down robocallers and text spammers
Do you receive unsolicited phone calls or SMS/text spam? I made a free public service site explaining how to find the telecom carrier that is responsible for the spammer's (real) phone number and report the abuse to them – so the carrier can terminate their service.<p>It works, and it feels like magic.<p>Background: Earlier this year, I wrote an HN comment[1] explaining how to find the telecom carrier responsible for a robocall or SMS spam campaign. Those steps aren't documented anywhere else, even though they're actually pretty easy.<p>This info deserved to be much more visible, so now it is: <a href="https://reportphonespam.org/" rel="nofollow noreferrer">https://reportphonespam.org/</a><p>As my site says, most reputable telecom carriers don't want unsolicited messages on their network or phone numbers. In order to disconnect their abusive customers, they need to hear about the abuse. That's where you come in. In a few minutes, you can report abuse to the responsible carrier, who will investigate and often shut off the spammer's phone number(s).<p>[1]: <a href="https://news.ycombinator.com/item?id=34570065#34570835">https://news.ycombinator.com/item?id=34570065#34570835</a>
Show HN: 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning
Hi HN! I'm just sharing a project I've been working on during the LLM Efficiency Challenge - you can now finetune Llama with QLoRA 5x faster than Huggingface's original implementation on your own local GPU. Some highlights:<p>1. Manual autograd engine - hand derived backprop steps.<p>2. QLoRA / LoRA 80% faster, 50% less memory.<p>3. All kernels written in OpenAI's Triton language.<p>4. 0% loss in accuracy - no approximation methods - all exact.<p>5. No change of hardware necessary. Supports NVIDIA GPUs since 2018+. CUDA 7.5+.<p>6. Flash Attention support via Xformers.<p>7. Supports 4bit and 16bit LoRA finetuning.<p>8. Train Slim Orca fully locally in 260 hours, down from 1301 hours (5x faster).<p>9. Open source version trains 5x faster, or you can check out Unsloth Pro and Max codepaths for 30x faster training!<p><a href="https://www.reddit.com/r/LocalLLaMA/comments/188197j/80_faster_50_less_memory_0_accuracy_loss_llama/" rel="nofollow noreferrer">https://www.reddit.com/r/LocalLLaMA/comments/188197j/80_fast...</a> has more info about Unsloth!<p>Hopefully you can try it out! Wrote a blog post at <a href="https://unsloth.ai/introducing" rel="nofollow noreferrer">https://unsloth.ai/introducing</a> if you want to learn more about our manual hand derived backprop or Triton kernels and stuff! Thanks once again!
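"Manual autograd engine - hand derived backprop steps" means writing the gradient formulas yourself rather than relying on a framework's autodiff, then checking them numerically. The toy below shows that workflow for a single linear layer with a squared loss; it illustrates the technique only and is in no way Unsloth's Triton code.

```python
# Hand-derived backprop for y = x @ W with squared loss L = sum((y - t)^2),
# verified against a finite-difference estimate -- the standard sanity
# check when deriving gradients by hand.

def matmul(A, B):
    """Plain list-of-lists matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def loss(x, W, target):
    y = matmul(x, W)
    return sum((yij - tij) ** 2
               for yrow, trow in zip(y, target)
               for yij, tij in zip(yrow, trow))

def grad_W(x, W, target):
    """Hand-derived gradient: dL/dW = x^T @ (2 * (x @ W - target))."""
    y = matmul(x, W)
    diff = [[2 * (yij - tij) for yij, tij in zip(yrow, trow)]
            for yrow, trow in zip(y, target)]
    xT = [list(col) for col in zip(*x)]
    return matmul(xT, diff)

x = [[1.0, 2.0]]
W = [[0.5, -1.0], [0.25, 0.75]]
target = [[1.0, 1.0]]

# Compare one entry of the analytic gradient to a finite difference.
i, j = 0, 1
analytic = grad_W(x, W, target)[i][j]
eps = 1e-6
W_plus = [row[:] for row in W]
W_plus[i][j] += eps
numeric = (loss(x, W_plus, target) - loss(x, W, target)) / eps
```

The speedups in the post come from fusing exactly these hand-written gradient steps into GPU kernels instead of letting a framework build and traverse an autograd graph.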
Show HN: Play a pen-and-paper game that is almost unknown in the US and Europe
Show HN: Bi-directional sync between Postgres and SQLite
Hi HN,<p>Today we’re launching PowerSync, a Postgres<>SQLite bi-directional sync engine that enables an offline-first app architecture. It currently supports Flutter, React Native and web (JavaScript) using Wasm SQLite in the browser, with more client SDKs on the way.<p>Conrad and I (Ralf) have been working on our sync engine since 2009, originally as part of a full-stack app platform. That version of the system is still used in production worldwide and we’ve learnt a lot from its use cases and scaling. About a year ago we started spinning off PowerSync as a standalone product that is designed to be stack-agnostic.<p>If you’d like to see a simple demo, check out the pebbles widget on the landing page here: <a href="https://www.powersync.com/" rel="nofollow noreferrer">https://www.powersync.com/</a><p>We wrote about our architecture and design philosophy here: <a href="https://www.powersync.com/blog/introducing-powersync-v1-0-postgres-sqlite-sync-layer" rel="nofollow noreferrer">https://www.powersync.com/blog/introducing-powersync-v1-0-po...</a><p>This covers, among other things, how we designed the system for scalable dynamic partial replication, why we use a server authority architecture based on an event log instead of CRDTs for merging changes, and the approach to consistency.<p>Our docs can be found here: <a href="https://docs.powersync.com/" rel="nofollow noreferrer">https://docs.powersync.com/</a><p>We would love to hear your feedback!
- Ralf, Conrad, Kobie, Phillip and team
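The "server authority architecture based on an event log instead of CRDTs" mentioned above can be sketched in a few lines: clients apply writes locally first, the server assigns every write a single global order, and each client converges by replaying the log from its cursor. This is a toy illustration of the concept, not PowerSync's implementation.

```python
# Server-authority sync via an ordered event log: conflicts are resolved
# by log order (last writer in the server's order wins), rather than by
# CRDT merge functions.

class Server:
    def __init__(self):
        self.log = []          # the single, authoritative event log

    def push(self, event):
        """Accept a client write and give it the next global position."""
        self.log.append(event)
        return len(self.log)   # new log head

class Client:
    def __init__(self, server):
        self.server = server
        self.state = {}        # local replica (e.g. rows in SQLite)
        self.cursor = 0        # how far into the log we've replayed

    def write(self, key, value):
        # Offline-first: apply locally right away, then upload.
        self.state[key] = value
        self.server.push(("set", key, value))

    def sync(self):
        # Replay any log entries we haven't seen yet; the server's
        # order is authoritative, so replicas converge.
        for op, key, value in self.server.log[self.cursor:]:
            self.state[key] = value
        self.cursor = len(self.server.log)

server = Server()
a, b = Client(server), Client(server)
a.write("doc", "from A")
b.write("doc", "from B")   # ordered after A's write, so it wins
a.sync(); b.sync()
```

A real engine also needs partial replication (each client subscribes to a slice of the log) and durable cursors, but the convergence argument is the same: one log, one order, deterministic replay.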
Show HN: Dobb·E – towards home robots with an open-source platform
Hi HN! Proud to share our open-source robot platform, Dobb·E, a home robot system that needs just 5 minutes of human teaching to learn new tasks. We've already taken Dobb·E to 10 different homes in New York, taught it 100+ tasks, and we are just getting started! I would love to hear your thoughts about this.<p>Here are some more details (or see a Twitter thread with attached media: <a href="https://twitter.com/i/status/1729515379892826211" rel="nofollow noreferrer">https://twitter.com/i/status/1729515379892826211</a> or <a href="https://nitter.net/i/status/1729515379892826211" rel="nofollow noreferrer">https://nitter.net/i/status/1729515379892826211</a>):<p>We engineered Dobb·E to maximize efficiency, safety, and user comfort. As a system, it is composed of four parts: a data collection tool, a home dataset, a pretrained vision model, and a policy fine-tuning recipe.<p>We teach our robots with imitation learning, and for data collection, we created the “Stick”, a tool made out of $25 of hardware and an iPhone.<p>Then, using the Stick, we collected a 13 hour dataset in 22 New York homes, called Homes of New York (HoNY). HoNY has 1.5M frames collected over 216 different "environments", which is an order of magnitude larger compared to similar open source datasets.<p>Then we trained a foundational vision model that we can fine-tune fast (15 minutes!) on a new task with only 5 minutes (human time) / 90 seconds (demo time) of data. So from start to finish, it takes about 20 minutes to teach the robot a new task.<p>Over a month, we visited 10 homes, tried 109 tasks, and achieved an 81% success rate on simple household tasks.
We also found a range of challenges, from mirrors to heavy objects, that we must overcome if we are to get a general-purpose home robot.<p>We open-sourced our entire system because our primary goal is to get more robotics and AI researchers, engineers, and enthusiasts to go beyond constrained lab environments and start getting into homes!<p>So here is how you can get started:<p>1. Code and STL files: <a href="https://github.com/notmahi/dobb-e/">https://github.com/notmahi/dobb-e/</a><p>2. Technical documentation: <a href="https://docs.dobb-e.com/" rel="nofollow noreferrer">https://docs.dobb-e.com/</a><p>3. Paper: <a href="https://arxiv.org/abs/2311.16098" rel="nofollow noreferrer">https://arxiv.org/abs/2311.16098</a><p>4. More videos and the dataset: <a href="https://dobb-e.com" rel="nofollow noreferrer">https://dobb-e.com</a><p>5. Robot we used: <a href="https://hello-robot.com" rel="nofollow noreferrer">https://hello-robot.com</a>
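The "5 minutes of human teaching" pipeline rests on imitation learning: demonstrations become (observation, action) pairs, and fine-tuning is just supervised learning on them. The toy below uses a 1-nearest-neighbor "policy" as a stand-in for Dobb·E's fine-tuned vision model, purely to show the data flow from demos to a policy; the observations and actions are made up.

```python
# Behavior cloning in miniature: collect (observation, action) pairs from
# a demonstration, then fit a policy that maps new observations to the
# demonstrated actions. Here the "model" is 1-nearest-neighbor lookup.

def collect_demo():
    """Pretend Stick demonstrations: 2D observation -> expert action."""
    return [
        ((0.0, 0.0), "reach"),
        ((0.5, 0.1), "grasp"),
        ((0.5, 0.9), "lift"),
        ((0.1, 0.9), "place"),
    ]

def make_policy(demos):
    """'Train' a policy: act like the nearest demonstrated observation."""
    def policy(obs):
        def dist(pair):
            (ox, oy), _ = pair
            return (ox - obs[0]) ** 2 + (oy - obs[1]) ** 2
        return min(demos, key=dist)[1]
    return policy

policy = make_policy(collect_demo())
```

In the real system the observation is a camera frame embedded by the pretrained vision model and the action is a robot command, but the supervision signal has the same shape.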
Show HN: A Dalle-3 and GPT4-Vision feedback loop
I used to enjoy Translation Party, and over the weekend I realized that we can build the same feedback loop with DALLE-3 and GPT4-Vision. Start with a text prompt, let DALLE-3 generate an image, then GPT-4 Vision turns that image back into a text prompt, DALLE-3 creates another image, and so on.<p>You need to bring your own OpenAI API key (costs about $0.10/run)<p>Some prompts are very stable, others go wild. If you bias GPT4's prompting by telling it to "make it weird", you can get crazy results.<p>Here are a few of my favorites:<p>- Gnomes: <a href="https://dalle.party/?party=k4eeMQ6I" rel="nofollow noreferrer">https://dalle.party/?party=k4eeMQ6I</a><p>- Start with a sailboat but bias GPT4V to "replace everything with cats": <a href="https://dalle.party/?party=0uKfJjQn" rel="nofollow noreferrer">https://dalle.party/?party=0uKfJjQn</a><p>- A more stable one (but everyone is always an actor): <a href="https://dalle.party/?party=oxpeZKh5" rel="nofollow noreferrer">https://dalle.party/?party=oxpeZKh5</a>
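The text → image → text alternation is simple enough to write down as a loop with the two model calls injected as functions. In the sketch below, `gen_image` and `describe_image` are dummy stand-ins (the real site calls DALLE-3 and GPT-4 Vision through the OpenAI API with your key); the point is the loop structure and the "bias" hook, which are from the post, while the function names are mine.

```python
# Translation-Party-style feedback loop: alternate prompt -> image ->
# caption -> image, optionally biasing each caption step.

def feedback_loop(prompt, gen_image, describe_image, rounds=3, bias=""):
    """Run the alternation and return the full history of steps."""
    history = [("prompt", prompt)]
    for _ in range(rounds):
        image = gen_image(prompt)          # e.g. a DALLE-3 call
        history.append(("image", image))
        prompt = describe_image(image, bias)  # e.g. a GPT-4 Vision call
        history.append(("prompt", prompt))
    return history

# Dummy "models" that just tag their input, to show the alternation.
def gen_image(prompt):
    return f"image[{prompt}]"

def describe_image(image, bias):
    return f"caption[{image}]{bias}"

history = feedback_loop("a gnome", gen_image, describe_image, rounds=2,
                        bias=" make it weird")
```

Because the bias string is appended at every caption step, small nudges like "make it weird" compound across rounds, which is why some runs stay stable and others drift wildly.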