In January 2026, Elon Musk stood on the stage at the World Economic Forum in Davos and said, plainly, what many in the AI industry believe but rarely state out loud: by 2030 or 2031, artificial intelligence will surpass the collective intelligence of all of humanity combined. Not one human. Not a team of researchers. All of us, together, at once. He had already said that AI might be smarter than any single human by the end of 2025 or early 2026. We are, in other words, already there. Or nearly.

We at AI Doomsday are not in the habit of agreeing with Elon Musk on everything. But on this, we agree completely. And we think the implications deserve more than a news cycle.

The clock on our homepage is not a metaphor

If you visit the front page of this site, you'll find a countdown timer set to January 1, 2030. We built it deliberately. Not as a gimmick, not as clickbait — as a statement of editorial conviction. We believe the date is real. We believe the transition is coming. And we believe that the vast majority of people alive today have not begun to reckon with what that actually means.
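For the curious, the arithmetic behind that timer is simple. Here is a minimal sketch in TypeScript of a countdown to January 1, 2030 (UTC); the function names and output format are illustrative assumptions, not the code actually running on our homepage.

    // Countdown to January 1, 2030, 00:00 UTC. Illustrative sketch only;
    // names and formatting are assumptions, not our production widget.
    const TARGET = new Date(Date.UTC(2030, 0, 1)); // month is 0-indexed: 0 = January

    function timeRemaining(now: Date = new Date()): string {
      const msLeft = TARGET.getTime() - now.getTime();
      if (msLeft <= 0) return "The countdown has reached zero.";

      const totalSeconds = Math.floor(msLeft / 1000);
      const days = Math.floor(totalSeconds / 86_400);            // 86,400 seconds per day
      const hours = Math.floor((totalSeconds % 86_400) / 3_600); // leftover hours
      const minutes = Math.floor((totalSeconds % 3_600) / 60);
      const seconds = totalSeconds % 60;
      return `${days}d ${hours}h ${minutes}m ${seconds}s`;
    }

    console.log(`Time until January 1, 2030: ${timeRemaining()}`);

Run it with any TypeScript runtime and it prints the days, hours, minutes, and seconds remaining. The number it produces is not large.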

Musk's timeline aligns with our own. It aligns with the forecasts of some of the most serious AI safety researchers on the planet. It aligns with the current trajectory of model capabilities — which have not slowed, and show no signs of slowing. GPT-4 could pass a bar exam. The systems being trained now are substantially more capable. The systems that will be trained in 2028 and 2029 will be more capable still.

"By 2030–2031, AI will reach a level exceeding the collective intelligence of all humanity."

— Elon Musk, World Economic Forum, Davos, January 2026

What "surpassing humanity" actually means

The phrase is easy to dismiss as hyperbole. It sounds like science fiction. It triggers the reflex to reach for reassuring counterarguments: AI can't feel, AI doesn't understand, AI is just pattern matching. These objections are not wrong — but they are also not the point.

An AI that surpasses human collective intelligence does not need to feel or to understand in any philosophical sense. It needs to be better than us at the things that determine outcomes: scientific research, economic planning, strategic decision-making, the design of systems that govern our lives. When that happens — when the machines are better at these things than we are — the question of whether they are "truly" intelligent becomes academic. The consequences will be real regardless.

Consider what Musk also said at Davos: billions of humanoid robots, a personal robotic assistant for every person. This is not the vision of a man who thinks the pace of change will slow. This is a man who believes the transition will be total, fast, and irreversible, and who is building infrastructure to profit from it.

The abundance argument and its shadow

Musk, to his credit, framed part of this as a promise: a future of abundance, where AI and robots produce enough goods and services to lift global living standards. We do not dismiss this possibility. If the transition is managed well — if the gains are distributed rather than captured, if safety standards are upheld, if human agency is preserved in some meaningful form — then the future Musk describes could be genuinely good.

But that is an enormous "if." Historically, the gains from technological revolutions have not been distributed automatically or equitably. They have been captured by those who own the infrastructure, who write the code, who hold the patents. There is no natural law that says AI-driven abundance flows to everyone. Every historical precedent suggests it won't, unless we collectively decide otherwise, and soon.

Our position

We share Musk's timeline. We do not share his confidence that it will be fine. The difference between a good 2030 and a catastrophic one is not the technology — it is the decisions being made right now, by a handful of people, with almost no democratic accountability.

2025: already smarter than any individual human

Musk's more immediate prediction — that AI would be smarter than any single human by the end of 2025 or early 2026 — is one that many AI researchers consider already achieved, or nearly so, depending on how you define the terms. Current frontier models outperform humans on a wide range of cognitive benchmarks: mathematics, coding, scientific reasoning, reading comprehension.

The limitations that remain — reasoning reliability, grounding in physical reality, genuine causal understanding — are real. But they are shrinking with every model generation. The researchers who study this most closely are not reassured by the gaps that remain. They are alarmed by the speed at which those gaps are closing.

What you should do with this information

We are not writing this to induce paralysis. We are writing it because we believe that the first step toward navigating what is coming is understanding it clearly — without the false comfort of "it won't be that extreme" or "someone will figure it out."

The people building these systems believe they are building something that will surpass human intelligence within this decade. Elon Musk believes it. The researchers at Anthropic, at DeepMind, at OpenAI believe it — it is the founding premise of the entire AI safety field, which exists precisely because the people closest to this technology think it is real and imminent.

The countdown on our homepage reaches zero on January 1, 2030. We chose that date carefully. We stand behind it. What happens between now and then is the most important question of our time. We intend to cover it — honestly, rigorously, without looking away.