
In the previous post, three AI research and development leaders were noted as being pessimistic about how far LLM models can take us toward AGI, and/or how long it would take to get there.

Since all AI projections and predictions start from a specific picture of the current AI status quo, including a broad library of research and white papers, it is likely helpful to know where Ilya Sutskever, Andrej Karpathy, and Richard Sutton are “coming from”.

Perhaps appropriately (or perhaps not), GPT 5.2 was asked about this, and it replied as below:

* Ilya Sutskever: Epistemic humility from the inside. Believes AGI is possible, but deeply unsure we understand the path or the risks.
* Andrej Karpathy: Engineer’s realism. LLMs are powerful tools, not minds. Intelligence ≠ agency.
* Richard Sutton: Long-horizon theorist. Current AI is short-sighted; real intelligence requires embodied, reward-driven learning over time.

1. Of the three you named, Ilya Sutskever is the one who clearly overlaps with the “AI could kill us” / x-risk camp in public: he signed the 2023 CAIS statement explicitly framing “risk of extinction from AI” as a top-tier global priority. ([Center for AI Safety][1])

2. Karpathy and Sutton are skeptical that “today’s LLM paradigm, by itself” is the straight path to AGI, but that’s a different axis than “doomerism.” Sutton in particular critiques LLMs as missing goals/ground-truth/continual learning.

3. Their “bias” looks less like “pro vs anti AI,” and more like three different theories of what intelligence is and what scaling can/can’t buy you.

Two different questions are worth separating:

A) “Will current LLM-style systems get us to AGI?” (capability pathway skepticism)
B) “If we do get AGI, is there a serious chance it goes catastrophically wrong?” (x-risk / alignment concern)

It’s very possible to be pessimistic about A but not a “doomer” about B (or the reverse).

 

2. *Ilya Sutskever: closest to ‘doomer’ in the public record, but also strongly pro-AGI*.

On the “doomer” / x-risk side:

* The CAIS statement says: “Mitigating the risk of extinction from AI should be a global priority…” and Ilya Sutskever is listed among the signatories.
That is about as “officially in the x-risk concern coalition” as it gets (at least for that 2023 moment).

On the “LLMs / scaling aren’t the whole story” side:

* In a 2025 interview, he explicitly frames a shift “from the age of scaling to the age of research,” and points to a fundamental generalization gap: “These models somehow just generalize dramatically worse than people.”

* He also discusses the weird mismatch between eval success and real-world brittleness (“repeat itself twice… introduce a second bug… bring back the first bug”), and offers hypotheses about RL training and “reward hacking” at the level of researchers optimizing for evals.

How I’d characterize his “bias”:

* He’s not “anti-AI.” He’s “pro-AGI with seriousness about downside.” Signing the extinction-risk statement is a very strong tell that he puts non-trivial probability on catastrophic failure modes, even while believing strongly in continued progress.

 

3. *Richard Sutton: not a ‘doomer’ vibe; more of a paradigm critic (goals + experience)*.
Sutton’s skepticism is largely about the conceptual foundation of the LLM approach:

* He says the “large language model perspective” is “not a good perspective,” arguing it lacks goals and lacks a notion of “ground truth” feedback needed for continual learning.
* He pushes the idea that LLMs are “mimicking people” rather than building a world model that predicts what will happen, and he emphasizes learning from experience as “basic AI.”

How I’d characterize his “bias”:

* Sutton’s “pessimism” is basically: “LLM imitation isn’t the right core; you need agents that learn from experience with goals.” That’s very different from “AI will turn on humans,” and it doesn’t inherently map to the classic “doomer vs optimist” split. It’s closer to “different school of thought about intelligence.”

 

4. *Andrej Karpathy: skeptical of ‘LLMs = AGI’ but generally not a public ‘x-risk doomer’ figure*.
Karpathy’s public posture (in the mainstream) reads as:

* Strong builder/engineer orientation: “What can we ship, what actually works, how do we scaffold systems into agents/tools?”
* Skeptical that current LLMs, as they exist, are already “the thing.” Even the framing of his long interview—“AGI is still a decade away”—signals a “not-yet, not-by-magic” stance.

Important honesty note:

* I can cite Ilya clearly as an extinction-risk statement signatory.
* For Karpathy, I don’t have similarly strong evidence (in the sources I pulled) that he publicly joined the 2023 “extinction-risk statement / pause letter” activism cohort. So I would not label him a “doomer” based on what I can support here.

 

5. *So, were any of them “doomers” in the old sense (AI turns on humans / extinction framing)?*

* Ilya: Yes, he publicly aligned with that framing via the CAIS extinction-risk statement.
* Sutton: Not in the same way; his critique is primarily “LLMs don’t have the ingredients (goals, feedback, experience) for real intelligence.”
* Karpathy: Not evidenced here as part of that “doomer coalition,” and his public brand is more “pragmatic capability roadmap” than “extinction campaigning.”

 

6. *About your observation that “doomers have less credibility now”*.
Conventional-wisdom take (what many observers seem to have shifted toward):

* The public conversation moved from speculative endgames (“extinction”) toward near-term harms: fraud, disinfo, job disruption, IP issues, and concentration of power, partly because those are already visible and litigable.

 

* A lot of 2023 rhetoric was packaged in dramatic, moralizing tones; some of it aged poorly when the next 12–24 months looked more like “messy deployment + incremental capability jumps” than “overnight runaway.”

42’s take:

* The credibility didn’t vanish as much as it bifurcated:

a) “Doom tomorrow” lost steam.
b) “Non-trivial tail risk over a longer horizon” still motivates serious people—including folks like Ilya—because the cost of being wrong (if risk is real) is enormous.

In other words: the *timeline* got questioned more than the *possibility space*.

 

7. *Where this likely goes over time (useful forecasting)*.

Next 6–18 months:

* More “agentic scaffolding” around LLMs (tools, memory, planning loops, environment feedback) will keep blurring the line between “LLM” and “learning system,” which is exactly the fault line Sutton is pointing at: goals + ground-truth feedback + continual learning. (A toy sketch of such a loop appears after this list.)

* Expect more emphasis on “evaluation that matches reality” (Ilya’s point about eval/real-world disconnect and brittleness).
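To make the “agentic scaffolding” bullet concrete, here is a minimal toy sketch of such a loop in Python. The llm() stub, the single fake tool, and the memory list are invented placeholders (not any vendor’s API); the point is only the shape: the model plans, a tool acts on the environment, and the observation flows back as ground-truth feedback for the next step.

```python
# Toy sketch of "agentic scaffolding" around an LLM: a plan -> act -> observe loop
# with tools, memory, and environment feedback. llm() and TOOLS are hypothetical
# stand-ins, not any particular vendor's API.

def llm(prompt: str) -> str:
    """Stand-in for a real model call; picks a canned next action."""
    if "Memory: []" in prompt and "weather" in prompt:
        return "search"
    return "finish"

TOOLS = {
    "search": lambda query: f"(fake search results for: {query})",
}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    memory: list[str] = []                  # running transcript of observations
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nMemory: {memory}\nChoose a tool or 'finish'."
        action = llm(prompt)                # "planning" step: model picks next action
        if action == "finish":
            break
        observation = TOOLS[action](goal)   # environment feedback from the tool
        memory.append(observation)          # memory gives later steps ground truth
    return memory

if __name__ == "__main__":
    print(run_agent("What is the weather in Lisbon?"))
```

The design point is that the “learning” here lives in the memory and the loop, not in the model weights; that is exactly the gap the experience-driven-learning critique targets.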

Next 3–5 years:

* A likely convergence: LLMs stay as the interface/knowledge prior, but “experience-driven learning” becomes the differentiator (RL-in-the-world, self-play, tool-use in real environments). That’s basically the Sutton worldview gaining institutional form. (A toy sketch of this “prior + reward update” idea appears at the end of this post.)

* Safety discourse probably becomes less “blog wars” and more “engineering + governance”: audits, incident reporting, model access controls, and capability thresholds (even if people argue about the thresholds).
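As a companion to the earlier sketch, here is a toy illustration of the “LLM as prior + experience-driven learning” convergence: action preferences start from a hypothetical LLM-derived prior and are then nudged by environment reward, using the standard incremental value update from the reinforcement-learning literature. The action names, prior values, and reward function are all made up for illustration; this is not any specific lab’s training recipe.

```python
# Toy sketch: preferences initialized from an (assumed) LLM prior, then updated
# from environment reward. All values here are invented for illustration.
import random

actions = ["ask_user", "call_tool", "answer_directly"]
prior = {"ask_user": 0.2, "call_tool": 0.5, "answer_directly": 0.3}  # hypothetical LLM prior
values = dict(prior)              # learned preferences, initialized from the prior

def environment_reward(action: str) -> float:
    """Stand-in environment: here, tool use happens to pay off most often."""
    return 1.0 if action == "call_tool" and random.random() < 0.8 else 0.0

ALPHA = 0.1                       # learning rate for the incremental update
for step in range(200):
    # epsilon-greedy choice: mostly exploit current preferences, sometimes explore
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(values, key=values.get)
    r = environment_reward(a)
    values[a] += ALPHA * (r - values[a])   # incremental value update toward observed reward

print({k: round(v, 2) for k, v in values.items()})
```

In this toy, the prior only sets the starting point; after a few hundred interactions the preferences reflect what actually earned reward, which is the “learning from experience with goals and ground truth” ingredient Sutton argues current LLMs lack.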