Why there is no AGI
The silver-bullet promise of AGI does not survive when it meets human life.
I posted a question on LinkedIn: what is your mundane AGI? The frame was deliberately provocative. AGI is the most grandiose term in the industry; mundane is its opposite.
I asked because the term AGI is being dismantled at the top of the industry that has spent five years pursuing it. On April 27, the Microsoft and OpenAI deal was rewritten to remove the AGI clause that had defined their relationship since 2019.¹ The original gave OpenAI’s board the unilateral right to declare AGI achieved and pull Microsoft’s IP rights when it did. That clause is gone, replaced with a fixed calendar date in 2032. Sam Altman has called it “not a super-useful term.”²
AGI is the promise of a threshold moment. A specific point in time at which artificial intelligence becomes powerful enough to solve what humans cannot. The silver bullet, arriving on a date. That promise is what is being walked back.
So the term is up for grabs. What would AGI look like if we defined it for ourselves, in the near-term, mundane ways it could provide outsize benefit today?
Understandably, the question stumped most people. It was designed to provoke, not to yield clean answers. But Banani Saha Singh played along with the provocation. Her idea of mundane AGI is an AI that runs in her mind while she manages kids, home, and personal life. It catches the half-formed thoughts and ideas she does not have time to sit with. She wants to “tell my mind something” and have it draft quickly. Threads acted on in real time, in parallel. Tagging along with her mind. Disappearing when asked. A silver bullet for the recurring problems of an ordinary day.
It sounded magical. It also did not feel grounded in the reality we live in. Not because the technology is not there. In fact, the technology is nearly there. I have seen paraplegic patients making art with their minds through chips in their brains. What stops Banani’s vision from working are the realities that arrive with the technology. The invasiveness of brain access, and the questions that follow once technology is inside the skull. Who gets access to it. On whose terms. With what reversibility. The trust that brain access at scale would require, beyond what any current system has earned. The ownership of what the system produces on Banani’s behalf, when “her” thoughts have been routed through someone else’s infrastructure. The incentives of whoever builds it, incentives the brain is exquisitely vulnerable to. The social consequences: those who have this will outpace those who do not. The regulations that do not exist yet. And underneath all of these, the ethical question of whether this would be good for us, or whether we would want it at all.
The magical version Banani described will not happen. We can feel that ourselves, because we are close to our own minds, our own thinking, our own daily lives. We can run the case and see where it breaks.
Cancer is the case we cannot feel our way through. Most of us are not experts in biology, trial design, or drug development. When the labs say superintelligence will cure cancer, we do not have the instinct to push back. But we have the wanting. Which is why the cure-cancer promise survives the dismantling that has taken apart the larger AGI term. It survives on our distance from where the work happens.
For that promise to be examined, the examining has to be done by someone with the instinct we lack. Dr. Emilia Javorsky has been doing exactly that.³ What follows draws closely on her case.
The premise of the cure-cancer promise is that intelligence is the bottleneck. Cross the threshold and the cure follows. There are three reasons the premise does not hold.
The first is that biology does not run on first principles the way physics and mathematics do. Pure acceleration of intelligence has produced real results in math and physics — AI now solves mathematics problems that had defeated humans for decades.⁴ Biology has no equivalent first principles. Even simulating one minute of human biology, Javorsky has said, would require more GPUs than exist on the planet.
The second is that even where AI can model usefully, the testing cannot be accelerated by intelligence. Most cancer cures that work in mice fail in humans. Whether a treatment works has to be measured in biomarkers in actual human bodies, over the years biology takes to reveal whether something is working. Clinical trials require human patients. Human patients cannot be scaled. Tumour specimens cannot be scaled. Human lives cannot be sped up. Biology takes the time it takes. Even Dario Amodei, the CEO of Anthropic and one of the most optimistic voices building powerful AI, names this constraint directly.⁵ In his most bullish case for what powerful AI can do, he writes that very capable intelligence is heavily bottlenecked by what he calls “the speed of the outside world.” Cells and animals run at a fixed speed. Developing a cancer cure, he writes, has a minimum timescale that cannot be reduced even as intelligence continues to increase.
The third is that intelligence is not what is lacking now. The doubling rate of medical knowledge has fallen from fifty years to seventy-three days. We have an oversupply of scientists relative to the lab benches and infrastructure that would let them do the work. Knowledge is accelerating faster than the systems that turn it into therapies for actual patients. The binding constraint is no longer intelligence. It is the system the science meets the patient through, and that system is chronically under-resourced.
AI acceleration has its place. It does not look like the generalised cure the labs have been selling. It looks like AI applied to the specific bottlenecks that make up the wider problem. Discovering new biomarkers: measurable signals in the body that show whether a treatment is doing what it claims, or that detect disease earlier than we can now. Bringing down the cost of personalised therapies so they reach the people who need them. AI is genuinely part of both. Then there is the part AI cannot do for us: agreeing as a society on where to put the resources, the money, the attention, the talent, so that the work that actually moves the needle on cure is properly backed. That is a problem only humans can solve.
This is what progress looks like. AI inside the work, not in place of it.
The threshold moment was never coming. Believing it was has been keeping us from the work that is. The longer the cure-cancer AGI narrative survives, the more capital and attention go to acceleration instead of to the things that would actually help. It is time to walk that back too.
Footnotes
1. Microsoft and OpenAI restructure their partnership, April 27, 2026. the-decoder.com
2. Sam Altman, CNBC Squawk Box, August 2025. cnbc.com
3. Dr. Emilia Javorsky, physician and director of the Futures Program at the Future of Life Institute. “How AI Can and Can’t Cure Cancer,” 2026, and conversation with Tristan Harris, Center for Humane Technology, April 2026. curecancer.ai · Center for Humane Technology
4. DeepMind, AlphaProof and AlphaGeometry 2, July 2024. deepmind.google
5. Dario Amodei, “Machines of Loving Grace,” October 2024. darioamodei.com

