The best Hacker News stories from the past day


Latest posts:

Adfree Cities

Polish Hackers that repaired DRM trains threatened by train company

You Don't Batch Cook When You're Suicidal (2020)

Tesla FSD Timeline

SMERF: Streamable Memory Efficient Radiance Fields

We built SMERF, a new way to explore NeRFs in real time in your web browser. Try it out yourself!

Over the last few months, my collaborators and I have put together a new, real-time method that makes NeRF models accessible from smartphones, laptops, and low-power desktops, and we think we've done a pretty stellar job! SMERF, as we like to call it, distills a large, high-quality NeRF into a real-time, streaming-ready representation that's easily deployed to devices as small as a smartphone via the web browser.

On top of that, our models look great! Compared to other real-time methods, SMERF has higher accuracy than ever before. On large multi-room scenes, SMERF renders are nearly indistinguishable from state-of-the-art offline models like Zip-NeRF and a solid leap ahead of other approaches.

The best part: you can try it out yourself! Check out our project website for demos and more.

If you have any questions or feedback, don't hesitate to reach out by email (smerf@google.com) or Twitter (@duck).

Modern iOS Navigation Patterns

Google Promises Unlimited Storage; Cancels; Tells Journalist Life's Work Deleted

Show HN: Open-source macOS AI copilot using vision and voice

Heeey! I built a macOS copilot that has been useful to me, so I open sourced it in case others would find it useful too.

It's pretty simple:

- Use a keyboard shortcut to take a screenshot of your active macOS window and start recording the microphone.
- Speak your question, then press the keyboard shortcut again to send your question + screenshot off to OpenAI Vision.
- The Vision response is presented in context, overlaid on the active window, and spoken to you as audio.
- The app keeps running in the background, only taking a screenshot/listening when activated by keyboard shortcut.

It's built with NodeJS/Electron, and uses the OpenAI Whisper, Vision, and TTS APIs under the hood (BYO API key).

There's a simple demo and a longer walk-through in the GitHub readme: https://github.com/elfvingralf/macOSpilot-ai-assistant, and I also posted a different demo on Twitter: https://twitter.com/ralfelfving/status/1732044723630805212

The Case for Memory Safe Roadmaps

Telecom Industry Is Mad Because the FCC Might Examine High Broadband Prices

FFmpeg lands CLI multi-threading as its "most complex refactoring" in decades

AI’s big rift is like a religious schism

YouTube doesn't want to take down scam ads

23andMe changed its terms of service to prevent hacked customers from suing

Epic vs. Google: Google Loses

How many lines of C it takes to execute a + b in Python

John Carmack and John Romero reunited to talk DOOM on its 30th Anniversary
