Three letters, no definition. AGI has become the most
consequential acronym in technology, and nobody can tell you
what it means. Not the researchers building toward it, not the
companies staking billions on it, not the policymakers trying
to regulate it. The term floats through earnings calls,
congressional hearings, and arXiv papers with the confidence
of something settled. It is not settled. It is not close to
settled.
OpenAI's charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work." Their
partnership agreement with Microsoft reportedly
defines it differently:
AI systems generating at least $100 billion in profits. One
definition is about capability. The other is about revenue.
They use the same three letters. Sam Altman has called AGI "a very
sloppy term," which is a strange thing to say about the stated
mission of your company.
Google DeepMind took the most serious shot at resolving this
in late 2023, when Meredith Ringel Morris, Shane Legg, and
colleagues
published a taxonomy
surveying nine existing AGI definitions and finding all of
them inadequate. Their proposed replacement is a matrix: five
performance levels (Emerging through Superhuman) crossed with
breadth of generality. Under this framework, current large
language models qualify as "Level 1 Emerging AGI." Which tells
you more about the framework than about the models.
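For readers who want the shape of the framework rather than just its verdict, here is a rough sketch in Python. The level and breadth names are taken from the paper; the function and the example placements (AlphaFold in the Superhuman/Narrow cell, chatbots in Emerging/General) are my own illustration of how the matrix reads, not anything the authors publish as code.

```python
# Illustrative sketch of the Morris et al. performance-by-generality matrix.
# Level and breadth names come from the paper; the rest is my own framing.
LEVELS = ["Emerging", "Competent", "Expert", "Virtuoso", "Superhuman"]
BREADTHS = ["Narrow", "General"]

def cell(level: str, breadth: str) -> str:
    """Name the matrix cell for a given performance level and breadth."""
    if level not in LEVELS or breadth not in BREADTHS:
        raise ValueError("unknown level or breadth")
    return f"Level {LEVELS.index(level) + 1}: {level} ({breadth})"

# The paper places AlphaFold in Superhuman/Narrow and chatbots such as
# ChatGPT in Emerging/General -- hence "Level 1 Emerging AGI".
print(cell("Superhuman", "Narrow"))  # Level 5: Superhuman (Narrow)
print(cell("Emerging", "General"))   # Level 1: Emerging (General)
```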
Dario Amodei at Anthropic rejects the term entirely. He has
called AGI "a marketing term"
and prefers "powerful AI," which he defines as AI smarter than
a Nobel Prize winner across most relevant fields, capable of
running autonomously for days. That is a definition with teeth.
It is also nothing like the other two.
So we have the three leading AI labs working toward something
they cannot collectively name. This is not a minor semantic
quibble. Definitions determine timelines, shape investment
decisions, trigger contractual clauses, and inform regulation.
When someone says AGI is two years away and someone else says
it is twenty, they are frequently not disagreeing about
progress. They are
disagreeing about the destination.
The pattern has a name. Larry Tesler identified it around 1970: "Intelligence is whatever machines have not done yet." Every
time AI clears a bar previously considered definitive, the bar
moves. Frontier-model scores on the ARC-AGI benchmark went from roughly 0% in 2023 to about 85% by December 2024. The response was not celebration but harder benchmarks. GPT-4.5 passed a controlled Turing test in 2025 and it
barely made the news. Coding tasks that would have seemed
impossible to most researchers
five years ago are now routine. The finish line retreats at the
speed of approach.
In December 2025, this tension went public in the most
entertaining way possible. Yann LeCun declared on a podcast
that "there is no such thing as general intelligence" and
called predictions of near-term AGI "completely delusional."
Within hours, Demis Hassabis
fired back,
accusing LeCun of confusing general intelligence with
universal intelligence. These are arguably the two most
qualified people alive to have this argument, and they cannot
agree on whether the concept itself makes sense.
Michael Timothy Bennett captured the frustration in an academic
paper titled, bluntly,
"What the F*ck Is Artificial General Intelligence?".
His survey of AGI definitions found them varying on scope,
metrics, feasibility assumptions, and whether human parity is
even the right target. His conclusion: discussions about AGI
risks, timelines, and policy rest on fundamentally incompatible
premises.
I think the $100 billion definition is the most revealing one.
Not because it is good, but because it is honest. It exists
because AGI triggers a contractual clause: if OpenAI achieves
it, Microsoft loses access to certain technology. The
definition has nothing to do with cognition or capability. It
is a legal instrument wearing a lab coat. And yet it governs
the most consequential AI partnership in the world. That a
financial threshold can sit alongside Turing tests and
capability benchmarks
under the same label tells you everything about how degraded
the term has become.
There is a version of this argument that says none of it
matters, that the capabilities are real regardless of what we
call them. I have some sympathy for that position. The models
are genuinely useful. They write code, summarise research,
generate images that would have taken a studio two weeks to
produce. Whether that constitutes "general intelligence" is, in
some practical sense, beside the point for anyone using the
tools today. But the label is not beside the point for the
people
setting expectations,
raising capital, and writing legislation. When a company says
it is building AGI, it is making a claim. When that claim has
no stable referent, it cannot be falsified. And a target that
cannot be falsified is not an engineering goal. It is
marketing.
AGI may be the only engineering target where the people building it, funding it, and regulating it cannot agree on what it is.
We would not accept this in any other domain. Imagine a
pharmaceutical company announcing it had cured cancer, but
defining cancer as whatever diseases its drug happened to
treat. The FDA would have questions. AI has no equivalent
authority, no shared specification, no acceptance criteria. It
has a phrase that means different things in different rooms and
adapts to suit
whoever is speaking.
Maybe that is the point. Maybe a fuzzy target serves everyone
just well enough: researchers get funding, companies get
valuations, politicians get something to regulate, and the
public gets a story about the future. The ambiguity is not a
bug. It is the product.
Sources: