The best Hacker News stories from the past day

Latest posts:

Stanford to continue legacy admissions and withdraw from Cal Grants

Show HN: The current sky at your approximate location, as a CSS gradient

For HTML Day 2025 [1], I made a web service that displays the current sky at your approximate location as a CSS gradient. Colours are simulated on-demand using atmospheric absorption and scattering coefficients. Updates every minute, without the use of client-side JavaScript.

Source code and additional information is available on GitHub: https://github.com/dnlzro/horizon

[1] https://html.energy/html-day/2025/index.html
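The project's actual model is in its repository; as a rough illustration of the idea, here is a minimal sketch (all coefficients and wavelengths are assumed round numbers, not the project's) that approximates sky colours with Rayleigh scattering, whose strength scales as 1/λ⁴, and emits them as a CSS `linear-gradient` string:

```python
import math

# Rough peak wavelengths (nm) assumed for the red, green, and blue channels.
WAVELENGTHS_NM = (680.0, 550.0, 440.0)

def rayleigh_rgb(air_mass: float) -> tuple:
    """Scattered-light colour for a given optical path length (air mass).

    A crude stand-in for a full absorption/scattering model: short (blue)
    wavelengths scatter most, and longer viewing paths toward the horizon
    attenuate more light overall.
    """
    channels = []
    for lam in WAVELENGTHS_NM:
        scatter = (440.0 / lam) ** 4                     # relative Rayleigh strength
        transmitted = math.exp(-0.15 * scatter * air_mass)  # Beer-Lambert extinction
        channels.append(1.0 - transmitted)               # light scattered toward viewer
    peak = max(channels)
    return tuple(round(255 * c / peak) for c in channels)

def sky_gradient(steps: int = 4) -> str:
    """Build a zenith-to-horizon CSS gradient; air mass grows near the horizon."""
    stops = []
    for i in range(steps):
        air_mass = 1.0 + 4.0 * i / (steps - 1)   # ~1 at zenith, ~5 at horizon
        r, g, b = rayleigh_rgb(air_mass)
        stops.append(f"rgb({r} {g} {b}) {round(100 * i / (steps - 1))}%")
    return "linear-gradient(to bottom, " + ", ".join(stops) + ")"

print(sky_gradient())
```

The resulting string can be served directly as the value of a `background` property; re-rendering it server-side once a minute is what lets the page stay JavaScript-free.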

Debian 13 “Trixie”

Jim Lovell, Apollo 13 commander, has died

The surprise deprecation of GPT-4o for ChatGPT consumers

Ask HN: How can ChatGPT serve 700M users when I can't run one GPT-4 locally?

Sam said yesterday that ChatGPT handles ~700M weekly users. Meanwhile, I can't even run a single GPT-4-class model locally without insane VRAM or painfully slow speeds.

Sure, they have huge GPU clusters, but there must be more going on: model optimizations, sharding, custom hardware, clever load balancing, etc.

What engineering tricks make this possible at such massive scale while keeping latency low?

Curious to hear insights from people who've built large-scale ML systems.
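One commonly cited part of the answer is request batching: at small batch sizes, LLM decoding is memory-bandwidth bound, because every generated token requires streaming the full model weights from GPU memory, and batching lets many users share that one weight pass. A back-of-the-envelope sketch (the weight size is an assumed round number for a GPT-4-class model; the bandwidth figure is an H100's HBM3 spec):

```python
# Why a server batching many users can be far more efficient per user than
# one local user: in the bandwidth-bound regime, one pass over the weights
# produces one token for EVERY sequence in the batch.

WEIGHT_BYTES = 350e9      # assumed ~350 GB of weights (illustrative, not OpenAI's figure)
GPU_BANDWIDTH = 3.35e12   # ~3.35 TB/s, H100 HBM3 memory bandwidth

def tokens_per_second(batch_size: int) -> float:
    """Upper bound on aggregate decode throughput when weight streaming
    dominates (ignores compute, KV-cache traffic, and interconnect)."""
    weight_passes_per_s = GPU_BANDWIDTH / WEIGHT_BYTES
    return weight_passes_per_s * batch_size

solo = tokens_per_second(1)      # a single local user
served = tokens_per_second(256)  # a batched production server
print(f"{solo:.1f} tok/s alone vs {served:.0f} tok/s aggregate across 256 users")
```

Under these assumptions a lone user is capped near ~10 tok/s while the same hardware serving 256 batched requests delivers hundreds of times the aggregate throughput, which is before counting the other techniques the question lists (sharding across GPUs, custom kernels, load balancing).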

Food, housing, & health care costs are a source of major stress for many people

I want everything local – Building my offline AI workspace

Ultrathin business card runs a fluid simulation

Windows XP Professional

Flipper Zero dark web firmware bypasses rolling code security

A Flipper Zero implementation of a variant [1] of the RollJam [2] attack.

[1] https://arxiv.org/abs/2210.11923

[2] https://news.ycombinator.com/item?id=10018934

Project Hyperion: Interstellar ship design competition

GPT-5 for Developers
