Elon Musk says X's new Grok 4 Mini-powered feed will need 20,000 GPUs and could slow load times. Here's why users may still love the upgrade.
Elon Musk just lit up X with a single tweet: a brand-new algorithm powered by Grok 4 Mini is in the lab and headed for your timeline. The catch? It will chew through roughly 20,000 high-end GPUs and may add a fraction of a second before posts appear. Early testers swear the payoff is worth the wait: smarter replies, eerily accurate topic picks and far fewer random thirst-trap ads.
Behind the scenes, Grok 4 Mini is a distilled version of the model that powers X’s “Trend Genius” summaries. Instead of ranking tweets by raw engagement, the new code scores every post on relevance to you, using a rolling 30-day snapshot of your likes, bookmarks and muted words. The result is a feed that feels curated by a friend who actually reads your mind.
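For the curious, here's a rough sketch of what per-user relevance scoring over a rolling 30-day engagement window could look like. The field names, weights and topic tags below are illustrative guesses to make the idea concrete, not X's production code.

```python
# Illustrative sketch only: the weights, field names and scoring formula are
# assumptions for explanation, not X's actual ranking pipeline.
from dataclasses import dataclass, field

@dataclass
class EngagementSnapshot:
    """Rolling 30-day view of one user's activity."""
    liked_topics: dict[str, int] = field(default_factory=dict)       # topic -> like count
    bookmarked_topics: dict[str, int] = field(default_factory=dict)  # topic -> bookmark count
    muted_words: set[str] = field(default_factory=set)

def relevance_score(post_text: str, post_topics: list[str],
                    snapshot: EngagementSnapshot) -> float:
    """Score one post for one user, instead of ranking by raw engagement."""
    words = set(post_text.lower().split())
    if words & snapshot.muted_words:
        return 0.0  # any muted word knocks the post out entirely
    score = 0.0
    for topic in post_topics:
        score += 1.0 * snapshot.liked_topics.get(topic, 0)
        score += 2.0 * snapshot.bookmarked_topics.get(topic, 0)  # bookmarks weighted higher than likes
    return score

# Tiny demo with made-up data:
me = EngagementSnapshot(liked_topics={"f1": 12},
                        bookmarked_topics={"f1": 3},
                        muted_words={"crypto"})
print(relevance_score("Verstappen takes pole", ["f1"], me))   # 18.0
print(relevance_score("new crypto drop incoming", ["finance"], me))  # 0.0
```

The point of the toy example: the same post gets a different score for every user, which is why the compute bill scales with the audience rather than with the number of posts.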
What the delay looks like in real life: on a 5G connection, the extra GPU crunch adds ~120 ms—roughly the time it takes to blink. On slower networks, the lag can stretch to half a second. Power users who tested the alpha say the trade-off feels like switching from cable TV to Netflix: a brief buffer, then a much better show.
Money talk: renting 20,000 NVIDIA H100s costs north of $20 million per month at current cloud rates. Musk has hinted that a tiered rollout will start with Blue subscribers, then trickle down to free accounts once GPU supply loosens in late 2025.
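For anyone who wants to check that figure, a quick back-of-the-envelope run, assuming a cloud rate of roughly $2 per H100 per hour (an assumed rate, not one Musk has quoted), lands comfortably above the $20 million mark:

```python
# Back-of-the-envelope only: the $2.00/hour rate is an assumed cloud price,
# not an official figure from X or NVIDIA.
gpus = 20_000
hourly_rate_usd = 2.00       # assumed per-H100 cloud rate
hours_per_month = 24 * 30

monthly_cost = gpus * hourly_rate_usd * hours_per_month
print(f"${monthly_cost:,.0f} per month")  # $28,800,000 per month
```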
Slower load times or not, your X timeline is about to get eerily good at reading the room. Keep an eye on your @mentions—soon they might feel like they were hand-picked by the algorithm itself.