Code & Context

We Already Crossed AGI. You Just Didn't Notice.

Everyone is waiting for the AGI announcement. But the real shift already happened — reasoning became infrastructure, thinking stopped being scarce, and the token became its currency.

Saurabh Prakash

Mar 22, 2026 · 10 min read

I think we are asking the wrong question.

Most people hear "AGI" and imagine a cinematic moment: one clean announcement, one universally accepted benchmark, one day when everyone agrees the machines have crossed the line. Real technology shifts almost never happen like that. They arrive unevenly, then all at once — in hindsight.

So when people ask whether we are living in the AGI age, my answer is: maybe not in the officially certified sense, but very likely in the practical sense.

That distinction matters.


The Definition Is Still Unstable

Before we can answer whether we are living in the AGI age, we need to grapple with a deeper problem: nobody fully agrees on what AGI is.

AGI (Artificial General Intelligence): An AI that generalizes across all intellectual domains the way humans do — though researchers disagree on whether that's a threshold, a spectrum, or even achievable with current architectures.

Anthropic talks about it as a spectrum. OpenAI treats it as a contractual threshold (more on that shortly). DeepMind frames it in terms of capability levels. The definitional fog is not a bug — it is load-bearing. Because whoever gets to define AGI gets to decide when we have arrived[1].

The core problem:

We're measuring AI against a romanticized version of human cognition — one that doesn't actually exist. Humans hallucinate, reason inconsistently, and rely on tools constantly.

Rather than waiting for a consensus, let's do what engineers and scientists do best: go by the evidence.


What the Brightest Minds Are Saying

The most credible minds in computing are expressing something between shock and disorientation. This isn't hype — it's pattern recognition from people who've spent decades calibrating their judgment.

When people at this level start sounding unsettled, it's worth paying attention.


Terence Tao — "Things We Took for Granted May Not Hold Anymore"

Few people are more credentialed to comment on intellectual disruption than Terence Tao — the UCLA mathematician who won the Fields Medal at 31, mathematics' equivalent of the Nobel Prize.

In a recent conversation with Dwarkesh Patel, Tao delivered one of the most quietly unsettling assessments of the current moment:

"We live in a time of change… Things that we've taken for granted for centuries may not hold anymore. The way we do everything, and not just mathematics, will change."

Terence Tao[2]

This is remarkable precisely because of who is saying it. Tao is not prone to hype. As recently as late 2024, he posted that he doubted anything resembling "genuine AGI" was within reach of current tools — preferring the term "artificial general cleverness" for what he observed: the ability to solve broad classes of complex problems via somewhat ad hoc means, stochastic and ungrounded, yet increasingly capable[3].

But the pace of change has moved even Tao. His November 2025 paper, "Mathematical Exploration and Discovery at Scale", co-authored with Google DeepMind's AlphaEvolve team, showed that AI could exhaustively explore mathematical search spaces in days — tasks that would take humans centuries.

The search space of ideas can now be explored computationally. That sentence deserves to sit with you for a moment.


Andrej Karpathy — "A Powerful Alien Tool Without a Manual"

On December 27, 2025, Andrej Karpathy — co-founder of OpenAI, former head of Tesla Autopilot, and the person who coined "vibe coding" — posted something that shook Silicon Valley to its core. He admitted publicly that he had never felt so behind as a programmer.

"I've never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and far between… Clearly some powerful alien tool was handed around except it comes with no manual, and everyone has to figure out how to hold it and operate it, while the resulting magnitude-9 earthquake is rocking the profession."

Andrej Karpathy[4]

The tweet was retweeted over 10,000 times. What made it resonate wasn't panic — it was honesty. If one of the world's foremost AI researchers feels structurally behind, the rate of change is not linear anymore.


Donald Knuth — "I'll Have to Revise My Opinions"

Donald Knuth — the author of The Art of Computer Programming, the man who created TeX, and arguably the most influential computer scientist of the 20th century — is not someone given to hyperbole.

His recent paper, titled "Claude Cycles", contains one of the most understated bombshells in recent AI commentary:

"Shock! Shock! I learned yesterday that an open problem I'd been working on for several weeks had just been solved by Claude Opus 4.6…"

Donald Knuth[5]

For a man who writes books measured in decades, that is a seismic admission.

The key detail is easy to miss. This wasn't a benchmark problem or a demo prompt. It was a real research problem inside Knuth's own workflow — something he had been thinking about for weeks. And the model produced a valid solution.


Sam Altman — "Intelligence Is a Utility"

On March 11, 2026, Sam Altman spoke at BlackRock's US Infrastructure Summit in Washington, D.C., and made a statement that triggered days of internet argument:

"We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter."

Sam Altman[6]

He elaborated that OpenAI's business will fundamentally revolve around selling tokens from AI models of varying capability levels — and that the goal is to eventually make intelligence "too cheap to meter", borrowing a phrase from the 1950s nuclear energy industry.

Strip away the controversy and the business logic lands hard:

Two irreversible truths:

  1. Thinking is now a commodity. Tokens are its currency.
  2. Idea exploration is now computational. Not human-limited.
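Altman's "on a meter" framing is easy to make concrete. Here is a toy billing calculator; the model tiers and per-token prices below are invented for illustration, not actual OpenAI rates:

```python
# Toy sketch of intelligence metered like a utility: you pay per token.
# Model tiers and prices are invented for illustration, not real rates.

PRICE_PER_MILLION = {  # USD per 1M tokens (hypothetical tiers)
    "small":    {"input": 0.15, "output": 0.60},
    "frontier": {"input": 3.00, "output": 15.00},
}

def metered_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request, billed by the token."""
    p = PRICE_PER_MILLION[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A long reasoning task: 50k tokens in, 20k tokens of "thinking" out.
print(metered_cost("frontier", 50_000, 20_000))  # → 0.45
```

The point is structural: once thinking is billed this way, budgeting for reasoning starts to look like budgeting for electricity.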

The Threshold: December 2025

Something changed around December 2025.

It wasn't a single model release or a benchmark. It was the maturation of autonomous AI agents — systems that can independently scope a problem, plan a research path, carry out derivations, and verify their own results. The horizon for autonomously-operating AI expanded substantially, quietly, and almost without ceremony.

  • Previously: AI as a powerful autocomplete. You drive, it assists.
  • After December 2025: AI as an autonomous agent. You describe the destination, it navigates.

The consequence: the capacity to think — to reason, plan, hypothesize, and verify — became something you could purchase in bulk, on demand, at scale.
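The autocomplete-to-agent shift is really a change in control flow. Below is a deliberately toy sketch of that loop; every identifier is invented, and deterministic stubs stand in for what would be model calls and tool use in a real agent:

```python
# Toy sketch of the agent loop: scope, plan, execute, verify, retry.
# Deterministic stubs stand in for model calls; the loop structure,
# not the names, is the point. All identifiers here are invented.

def scope(n):
    """Restate the goal as something mechanically checkable:
    find a nontrivial factor of n."""
    return {"n": n, "check": lambda d: 1 < d < n and n % d == 0}

def plan(task):
    """Propose candidate answers, cheapest first."""
    return list(range(2, task["n"]))  # brute-force candidate factors

def run_agent(n, budget=1000):
    task = scope(n)
    for candidate in plan(task)[:budget]:
        if task["check"](candidate):   # the agent verifies its own result
            return candidate
    return None                        # scoped the problem, found no verified answer

print(run_agent(91))  # → 7 (since 91 = 7 * 13)
```

What changed around December 2025, in this picture, is the horizon: far more budget, far richer planning, and verification reliable enough to leave the loop unattended.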


The Search Space of Ideas Is Now Computationally Explorable

This is perhaps the most profound structural change of the current era — and it goes underappreciated.

Human intellectual progress has always been constrained by the number of minds available, the hours in a day, and the finite stamina of researchers. Ideas exist in a vast combinatorial space. Most of it has never been explored — not because the ideas are impossible, but because there aren't enough humans to look.

AI changes this equation fundamentally.
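A cartoon of what "computationally explorable idea space" means: enumerate every candidate in a tiny formula space and keep whatever survives the data. Systems like AlphaEvolve do something structurally similar at incomparably larger scale; everything below is invented for illustration:

```python
# Toy exhaustive search over a tiny space of arithmetic "ideas":
# find an expression (term op term) that reproduces target data.
# A cartoon of idea-space exploration, invented for illustration.
import itertools

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b, "-": lambda a, b: a - b}
TERMS = ["n", "1", "2", "3"]

def evaluate(lhs, op, rhs, n):
    val = lambda t: n if t == "n" else int(t)
    return OPS[op](val(lhs), val(rhs))

def search(target_fn, ns=range(1, 6)):
    """Enumerate every candidate; return the first that fits all the data."""
    for lhs, op, rhs in itertools.product(TERMS, OPS, TERMS):
        if all(evaluate(lhs, op, rhs, n) == target_fn(n) for n in ns):
            return f"{lhs} {op} {rhs}"
    return None  # the space contains no matching idea

print(search(lambda n: 2 * n))  # → n + n
```

Humans sample such spaces by intuition; machines can sweep them. Scale the space from dozens of candidates to billions and the constraint stops being the number of available minds.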


Example 1: The World's First Autonomous Mathematician

Harmonic AI's Aristotle Agent, launched in March 2026, is described as the world's first autonomous mathematician[7]. It can:

  • Interpret mathematical problems written in plain English
  • Convert them into formal proofs
  • Work continuously for up to 24 hours without human intervention
  • Formally verify its own results using Lean4, eliminating hallucinations

Aristotle Agent ranks #1 in formal mathematics on ProofBench, outperforming its closest competitor by 15%. Its predecessor model achieved gold-medal performance at the 2025 International Mathematical Olympiad — the world's most prestigious mathematics competition — with full formal verification.

We are no longer building calculators. We are building systems that attempt discovery.
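To make "formally verify its own results" concrete: in Lean 4, a proof is a program the compiler checks, so a hallucinated step is a build failure rather than a plausible-sounding sentence. A toy example, unrelated to Aristotle's actual output:

```lean
-- Two toy Lean 4 theorems. If either proof were wrong, the file
-- would simply not compile; this mechanical check is what
-- "formally verified" buys over a natural-language argument.

theorem two_add_three : 2 + 3 = 3 + 2 := by
  rfl

theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

This is why Lean4 verification is the load-bearing detail in Harmonic's claim: the prover cannot bluff its way past the type checker.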


Example 2: Get Physics Done (GPD)

Physical Superintelligence PBC released Get Physics Done (GPD) — the first open-source agentic AI physicist[8]. GPD can:

  • Scope a physics problem autonomously
  • Plan the research approach
  • Carry out symbolic derivations and numerical checks
  • Verify its own results against the constraints that nature actually imposes

That's not "AI assistance." That's proto-scientific agency.
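GPD's derive-then-check loop can be sketched in miniature (a stand-in example, not GPD's actual pipeline): derive a closed-form result, then make the computer confirm it against a brute-force simulation of the same physics.

```python
# Miniature "symbolic derivation + numerical check", illustrative only.
# Derivation (done on paper): a projectile launched at speed v and angle
# theta on flat ground lands at range R = v^2 * sin(2*theta) / g.
import math

def derived_range(v, theta, g=9.81):
    """Closed-form result of the derivation."""
    return v**2 * math.sin(2 * theta) / g

def simulated_range(v, theta, g=9.81, dt=1e-5):
    """Numerical check: integrate the motion step by step, see where it lands."""
    x, y = 0.0, 0.0
    vx, vy = v * math.cos(theta), v * math.sin(theta)
    while y >= 0.0:
        x += vx * dt
        vy -= g * dt
        y += vy * dt
    return x

v, theta = 20.0, math.radians(30)
analytic = derived_range(v, theta)
numeric = simulated_range(v, theta)
assert abs(analytic - numeric) / analytic < 1e-3  # the derivation survives
print(round(analytic, 2))  # → 35.31
```

An agentic physicist runs this loop on its own: propose the derivation, write the check, and report only results that pass.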


The AGI Declaration Problem: Business Convenience Over Truth

Here is where things get politically interesting.

Major AI companies are not free to declare AGI whenever they like: the declaration itself is legally and financially consequential.

The OpenAI–Microsoft AGI Clause

The original 2019 partnership between OpenAI and Microsoft included what became known as the "AGI clause": if OpenAI's board declared AGI had been achieved, Microsoft's access to future technologies would be significantly restricted[9].

OpenAI's internal documents reportedly tied AGI declaration to a $100 billion annual profit threshold — meaning the company had a structural incentive to delay declaring AGI until the business implications were favorable.

Microsoft CEO Satya Nadella called unilateral AGI declarations "nonsensical benchmark hacking."

In October 2025, the two companies renegotiated the deal. The critical update[10]:

"Once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel."

Microsoft confirmed in February 2026[11]:

"AGI definition and processes are unchanged. The contractual definition of AGI and the process for determining if it has been achieved remains the same."

The takeaway:

Big companies are bound by their AGI declaration clauses. They may declare — or not declare — AGI at their own business convenience. Whether we are technically in the AGI age and whether any company will formally announce it are two different questions.


So — Are We in the AGI Age?

The honest answer is: it depends on your definition, and that's not a cop-out.

If AGI means a system that generalizes across all intellectual domains the way humans do — with true understanding, grounded intuition, and the messy, embarrassing trial-and-error process of genuine discovery — then no, we are not there yet. Tao himself noted that AI is trained on published results, not on the hidden, embarrassing failures that are the engine of real human discovery.

If AGI means the capacity to reason autonomously across domains, explore idea spaces computationally, produce formally verified mathematical proofs, scope and execute physics research, and write code at a 10x multiplier over expert humans — then something threshold-like happened in late 2025, and we are living in its aftermath.

The most grounded framing:

We are not in the AGI age declared by any institution. But we are unmistakably in the age where reasoning became a commodity — and that distinction may end up mattering less than we think.

Thinking is now available on demand. The token is its currency. The question is what we build with it.


References

[1]: McKinsey, "What is Artificial General Intelligence (AGI)?" — mckinsey.com

[2]: Terence Tao on the Dwarkesh Podcast, March 2026 — tweet | dwarkesh.com

[3]: Terence Tao's AI research archive — terrytao.wordpress.com

[4]: Andrej Karpathy, "magnitude 9 earthquake" — tweet

[5]: Donald Knuth, "Claude's Cycles" — paper (PDF)

[6]: Sam Altman at BlackRock U.S. Infrastructure Summit, March 2026 — tweet (via @TheChiefNerd)

[7]: Harmonic AI — Aristotle Agent, "world's first autonomous mathematician" — tweet | harmonic.fun/news

[8]: Physical Superintelligence PBC — Get Physics Done — GitHub

[9]: OpenAI–Microsoft AGI clause — OpenAI blog | TechRadar analysis

[10]: OpenAI, "The Next Chapter of the Microsoft–OpenAI Partnership" — openai.com

[11]: Microsoft, "Joint Statement on Continuing Partnership" — blogs.microsoft.com