The best Hacker News stories from the past day

Latest posts:

GPT-OSS vs. Qwen3 and a detailed look at how things evolved since GPT-2

1910: The year the modern world lost its mind

How I code with AI on a budget/free

Try and

Fight Chat Control

OpenFreeMap survived 100k requests per second

Getting good results from Claude Code

Stanford to continue legacy admissions and withdraw from Cal Grants

Show HN: The current sky at your approximate location, as a CSS gradient

For HTML Day 2025 [1], I made a web service that displays the current sky at your approximate location as a CSS gradient. Colours are simulated on-demand using atmospheric absorption and scattering coefficients. Updates every minute, without the use of client-side JavaScript.

Source code and additional information are available on GitHub: https://github.com/dnlzro/horizon

[1] https://html.energy/html-day/2025/index.html
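As a rough illustration of the idea (not the author's actual code, which simulates real atmospheric scattering), a service like this ultimately just emits a CSS `linear-gradient` string with colour stops from zenith to horizon. A minimal sketch, with made-up colours and simple linear interpolation standing in for the physical model:

```python
# Hypothetical sketch: build a CSS linear-gradient string for a sky by
# interpolating between a zenith colour and a horizon colour. The real
# service derives these colours from scattering coefficients instead.
def lerp(a, b, t):
    """Linearly interpolate between two RGB tuples."""
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

def sky_gradient(zenith=(25, 60, 130), horizon=(250, 180, 120), stops=5):
    """Return a CSS linear-gradient with `stops` colour stops, top to bottom."""
    parts = []
    for i in range(stops):
        t = i / (stops - 1)
        r, g, b = lerp(zenith, horizon, t)
        parts.append(f"rgb({r}, {g}, {b}) {round(t * 100)}%")
    return "linear-gradient(to bottom, " + ", ".join(parts) + ")"

print(sky_gradient())
```

Serving this as the `background` of a full-height element, regenerated server-side each minute, is enough to get the "no client-side JavaScript" behaviour described above.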

Debian 13 “Trixie”

Jim Lovell, Apollo 13 commander, has died

The surprise deprecation of GPT-4o for ChatGPT consumers

Ask HN: How can ChatGPT serve 700M users when I can't run one GPT-4 locally?

Sam said yesterday that ChatGPT handles ~700M weekly users. Meanwhile, I can't even run a single GPT-4-class model locally without insane VRAM or painfully slow speeds.

Sure, they have huge GPU clusters, but there must be more going on: model optimizations, sharding, custom hardware, clever load balancing, etc.

What engineering tricks make this possible at such massive scale while keeping latency low?

Curious to hear insights from people who've built large-scale ML systems.
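One of the techniques the question alludes to is request batching: a hosted service can run many prompts through a single batched forward pass, amortising the cost of streaming model weights from memory, whereas a local single-user setup pays that full cost per request. A toy sketch of the batching idea only (nothing here reflects OpenAI's actual stack; `fake_forward` is a stand-in for a real batched model call):

```python
# Hypothetical sketch: drain pending requests in fixed-size batches so each
# (expensive) forward pass serves many users at once.
def fake_forward(batch):
    # Stand-in for one batched model forward pass over several prompts.
    return [f"completion for {p}" for p in batch]

def batched(seq, size):
    """Yield consecutive slices of `seq` of at most `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def serve(prompts, max_batch=8):
    """Answer all prompts using one forward pass per batch, not per prompt."""
    results = []
    for batch in batched(prompts, max_batch):
        results.extend(fake_forward(batch))
    return results
```

With `max_batch=8`, twenty queued prompts cost three forward passes instead of twenty; production systems go further with continuous batching, where new requests join a batch mid-generation.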
