On 24 March 2026, Jensen Huang sat across from Lex Fridman and said it plainly: "I think it's now. I think we've achieved AGI." No caveat. No timeline. No hedge. Just a declarative sentence from the CEO of the company that sells the infrastructure on which every major AI system runs. The statement spread instantly. It deserves a serious response.
This is ours. We disagree. Not because we want to be contrarian, and not because we underestimate how much has changed in the last three years. We disagree because when you look at what AGI actually requires, and at the specific technical problems that remain unsolved today, the claim does not hold. Not yet. Possibly soon. But not now.
Key Points
- Jensen Huang declared AGI achieved on 24 March 2026 using a "functional" definition: an AI that can autonomously build a billion-dollar business.
- Sam Altman has been ambiguous: he said "we basically have built AGI, or very close to it," then walked the statement back as metaphorical.
- Satya Nadella is unambiguous: "We are nowhere close to AGI."
- Three unresolved problems make genuine AGI structurally impossible today: data degradation, hallucinations, and the absence of persistent memory.
- Huang has a direct financial interest in AGI being declared present. That interest is not a reason to dismiss him, but it is a reason to read him carefully.
Redefining AGI to fit what current systems can already do is not a breakthrough. It is a reclassification. The goalpost moved. The ball did not.
The Claim, and the Definition That Makes It Work
Huang's statement did not come from nowhere. It came with a definition attached. When asked what AGI means to him, he was specific: an artificial intelligence that can build and manage an autonomous business generating a billion dollars. Not cognition at human level. Not general reasoning across arbitrary domains. Not self-directed learning. A revenue milestone.
This is a functional definition, and it has a certain coherence. Huang is not making a philosophical claim about consciousness or understanding. He is saying that today's AI can perform enough economically valuable tasks, autonomously enough, to qualify as generally intelligent in any sense that matters for business. By that standard, he argues, the threshold has already been crossed.
The problem is that this definition is purpose-built to be achievable. It excludes precisely the dimensions of general intelligence that remain out of reach: stable long-term reasoning, self-directed goal formation, coherent operation without supervision across genuinely novel environments. What Huang describes is not AGI as traditionally understood by the researchers who coined and developed the concept. It is a commercially convenient reformulation that makes the declaration possible without the underlying technical reality changing.
That distinction matters. The word "AGI" carries weight: regulatory, contractual, investment, and existential weight. Using it to describe "AI that generates significant revenue" is not neutral. It is a category shift with consequences.
Three Voices, Three Different Positions
Huang is not alone in pushing on this question, but the other major voices in the field are considerably less definitive. Their positions are worth placing alongside his.
Sam Altman's position has oscillated the most. In one recent exchange, he said "we basically have built AGI, or very close to it," a statement that attracted immediate attention. He subsequently clarified that the comment was intended "spiritually," not technically, and added that "many medium-sized breakthroughs are still needed." That qualification is doing significant work. A statement that is true only spiritually is not a technical claim about the state of the field. It is a sentiment. Altman knows the difference.
Satya Nadella is the clearest of the three. He has said, without ambiguity: "We are nowhere close to AGI." He added that "it's not about Sam or me declaring it," meaning that the naming of AGI does not constitute its arrival. This is the correct epistemological position. The technology either has the properties the definition requires, or it does not. A CEO announcement is not an empirical event.
The distribution is telling: one voice says yes, one says almost-but-spiritually, one says nowhere close. That spread does not describe a settled question. It describes an active disagreement among the people with the most direct knowledge of what these systems can and cannot do.
What We Actually Have, and What We Don't
The case against AGI having arrived is not philosophical hand-waving. It rests on three specific, documented technical problems that remain unresolved. Each one represents not a minor limitation but a structural constraint on what current systems can reliably do.
Data Degradation and Model Collapse
The training data problem is more serious than the public debate reflects. An increasing proportion of the text and content that exists online is generated by AI systems: articles, summaries, forum responses, social media copy, product descriptions. Future models trained on this data are, to a growing degree, being trained on AI output rather than human output.
The documented consequence of this recursion is called model collapse: a gradual degradation in the diversity, accuracy, and reliability of outputs as the synthetic training signal compounds across generations. Quality narrows. Biases amplify. The model's representation of the world contracts toward what previous models produced, not toward what the world actually contains.
A genuine AGI must, by definition, be capable of improving autonomously over time. A system on a trajectory of model collapse is doing the opposite. It is degrading as the environment it trains on becomes more synthetic. This is not a bug that will be patched in the next release. It is a structural consequence of how large language models are trained, and it has no clean solution while the proportion of AI-generated content on the web keeps growing.
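The statistical mechanism is easy to see at toy scale. The sketch below is a deliberately simplified illustration, not a claim about any production model: each "generation" is a Gaussian fitted to samples drawn from the previous generation's fit, so every model trains only on its predecessor's output. The function name and parameters are ours, chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def final_spread(generations=200, sample_size=100):
    """One toy model-collapse run: each generation is fit (by MLE) to
    samples produced by the previous generation, never to the original
    'human' distribution."""
    mu, sigma = 0.0, 1.0                           # generation 0: real data
    for _ in range(generations):
        data = rng.normal(mu, sigma, sample_size)  # synthetic training set
        mu, sigma = data.mean(), data.std()        # the next 'model'
    return sigma

runs = [final_spread() for _ in range(20)]
print("spread of the original distribution: 1.000")
print(f"mean spread after 200 generations:   {np.mean(runs):.3f}")
# The spread contracts: tail events stop being sampled, so later
# generations cannot learn that they ever existed.
```

Real training pipelines are vastly more complicated, and fresh human data slows the contraction, but the direction of the effect is the same: recursion on synthetic output narrows the distribution.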
The Hallucination Problem
Hallucinations are not edge cases in the current generation of language models. They are properties of the architecture. These systems do not have a truth-tracking mechanism. They have a coherence-maximising mechanism. The output is optimised to be fluent, plausible, and contextually consistent, not accurate. When accuracy and coherence align, the output is correct. When they diverge, the system produces something that sounds correct but is not.
This is not a flaw that fine-tuning or RLHF can eliminate. It is a consequence of what language models are: predictors of probable token sequences, not reasoners about states of the world. The model does not know when it does not know. It generates. The epistemic distinction between true and false does not exist inside the mechanism. It is imposed from outside, imperfectly, through reinforcement.
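The point can be made concrete with a model small enough to inspect. The sketch below is a toy bigram chain, nothing like a production LLM in scale, but identical in kind in its objective: maximise the likelihood of the next token. The corpus and names are ours. Trained only on true sentences, it readily generates false ones, because every step it takes is individually probable under the training data.

```python
import random
from collections import defaultdict

random.seed(3)

# Every training sentence is true.
corpus = [
    "paris is the capital of france",
    "rome is the capital of italy",
    "berlin is the capital of germany",
]

# Fit a bigram model: which word follows which. Pure likelihood;
# no variable anywhere encodes truth.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start: str, max_len: int = 6) -> str:
    """Sample a sequence in which every transition was seen in training."""
    out = [start]
    while len(out) < max_len and follows[out[-1]]:
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

for _ in range(3):
    print(generate("paris"))
# On most runs at least one line reads "paris is the capital of italy"
# or "... of germany": locally coherent at every step, false as a whole.
# The mechanism has no way to notice.
```

Scaling the same objective up buys enormous fluency and coverage; it does not introduce the missing truth signal.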
A system that cannot reliably distinguish what is true from what is plausible is not generally intelligent. It is generally fluent. The gap between fluency and intelligence is precisely the gap that AGI, by any serious definition, must close. Current systems have not closed it.
Memory and Cognitive Continuity
The third problem is perhaps the least discussed and the most fundamental. A general intelligence, the kind that can "run a company autonomously" as Huang describes, requires something current AI systems do not have: a coherent, persistent model of the world that accumulates and updates over time.
Today's large language models have no persistent memory in any meaningful sense. Each context window is a fresh instantiation. What passes for memory in current AI agents is retrieval from external storage, engineered around the model rather than embedded within it. The model itself does not learn from the interaction. It does not update its parameters based on what happened yesterday. It does not build a continuous representation of the environment it is operating in.
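In code, the architecture this describes looks something like the hypothetical sketch below. The class and method names are illustrative, not any real framework's API; the point is where the state lives: in a wrapper around a frozen model, never inside it.

```python
from dataclasses import dataclass, field

@dataclass
class FrozenModel:
    """Stand-in for a deployed LLM: inference only, weights fixed."""
    def complete(self, prompt: str) -> str:
        # A real model would run a forward pass here. The key property
        # is what this call does NOT do: it never updates parameters.
        return f"reply conditioned only on: {prompt!r}"

@dataclass
class Agent:
    """What 'memory' means in most current agents: external storage
    bolted on around the model and re-injected into the prompt."""
    model: FrozenModel
    notes: list = field(default_factory=list)  # lives outside the model

    def step(self, user_input: str) -> str:
        context = " | ".join(self.notes[-3:])  # naive retrieval
        reply = self.model.complete(f"[{context}] {user_input}")
        self.notes.append(user_input)          # the wrapper remembers;
        return reply                           # the model does not

agent = Agent(FrozenModel())
agent.step("our churn rose 4% last month")
print(agent.step("what did I tell you yesterday?"))
# Everything the agent 'knows' about yesterday is whatever the wrapper
# chose to paste back in. Drop `notes` and there is no yesterday.
```

This is engineering around the limitation, and often good engineering. But retrieval is not learning: the model's representation of the world on day 100 is identical to its representation on day 1.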
An agent that forgets everything between sessions, that has no accumulating understanding of its own performance, its own errors, or the evolving state of the tasks it is supposed to manage, is not autonomous in any deep sense. It is a very capable stateless function. Stateless functions do not run companies. They process requests.
The Core Problem
Impressive capabilities are not the same as general intelligence. The current generation of AI can do many things. It cannot do them reliably, continuously, or autonomously in environments it has not been specifically prepared for.
Why Huang Says This, and Why It Matters
Jensen Huang runs the company that manufactures the hardware AGI requires. Every H100, every GB200, every NVL72 cluster sold to a hyperscaler is sold into the bet that AI systems will grow more capable, require more compute, and justify the infrastructure investment. The declaration that AGI has arrived is, from Nvidia's position, the best possible market signal. It means the infrastructure build is not speculative. It is foundational to something that already exists.
This does not make Huang wrong. Smart people with financial interests in a position can still be right about the position. But it is a reason to apply scrutiny to the definition he uses, the threshold he selects, and the specific capabilities he points to as evidence. When the person declaring the milestone sells the picks and shovels, the declaration deserves to be read as a declaration of interest, not only as a technical assessment.
Contrast this with the position Satya Nadella occupies. Microsoft has invested more than any other company in OpenAI. Its financial interest in AGI arriving is arguably greater than Nvidia's. Yet Nadella says: nowhere close. That divergence is informative. It suggests the disagreement is not simply about incentive structures. It is about a genuine technical judgment, and that judgment remains genuinely open.
It is also worth noting what a survey of 475 AI researchers found when asked whether scaling up current systems is likely to produce AGI: 76 percent said it was "unlikely or very unlikely." These are not commentators. They are the people building the systems. Their collective judgment is not infallible. Experts have been wrong about timelines before, in both directions. But it is not a signal that can be dismissed in favour of a CEO podcast appearance.
What We Believe Instead
We are not arguing that AGI will not arrive. We have written elsewhere about why we take the long-term trajectory seriously. The question here is narrower and more specific: has it arrived today, in March 2026, in the form of systems built on current large language models?
Our answer is no, and the reasons are not vague. Data degradation is a documented and worsening problem. Hallucinations are architectural, not incidental. Persistent memory and genuine cognitive continuity do not exist in any production system. These are not philosophical objections to a category. They are specific descriptions of things the current generation of systems cannot do, derived from how those systems work.
What we have today is something genuinely unprecedented: AI systems of extraordinary breadth and usefulness, capable of performing tasks that would have required expert human time a decade ago. That is not a small thing. It is worth being precise about what it is: a class of highly capable, domain-adaptable systems that require supervision, fail unpredictably, and cannot sustain autonomous operation across novel, unstructured environments.
That is not AGI. It may become AGI. The limitations described here are not permanent in principle. They are engineering problems, and engineering problems get solved. But claiming the milestone before the problems are solved is not a statement of confidence in the future. It is a misrepresentation of the present.
Jensen Huang knows what his chips can do. He is right that they are extraordinary. He is wrong that extraordinary is the same as general. The gap between those two words is still, in March 2026, where all the hard work remains to be done.