Why AGI Won't Happen by 2037: The Hard Limits of Data & Energy
True human-level AI is unlikely by 2037 due to hard limits in energy, data scarcity, and economics. Discover why the AGI hype curve is flattening.
The uncomfortable question nobody likes to ask
It has become a modern ritual to declare that artificial intelligence is racing toward human-level cognition. The louder the promises grow, the more uncomfortable the counter-question becomes: what if the curve is not rising endlessly but flattening right now? In an era soaked in confident predictions, the most contrarian observation is also the simplest. We are twelve years away from 2037, and the evidence suggests that real artificial intelligence, the kind that matches or surpasses general human reasoning, is still nowhere near materializing.
This tension defines today’s debate, and it grows more urgent as the world builds policy, infrastructure and billion-dollar strategies on assumptions that may never be met, even in a period marked by rapid innovation. In other words, our plans are accelerating faster than the underlying reality.
The persistent myth of unstoppable acceleration
Each technological era carries its own emotional gravity, and ours is rooted in the belief that progress will continue at the dizzying pace of the past decade. Large Language Models have dazzled with their fluency, summarization abilities and productivity gains. For many observers, it feels almost inevitable that these systems will soon think as broadly and as flexibly as humans do.
But the assumption of straight-line progress masks the true shape of technological revolutions. Historically, breakthroughs rise quickly, peak, then slope into plateaus determined by physics, cost and complexity. Modern AI is following that familiar pattern. The recent boom has not been the start of a limitless curve but the steep part of an S-curve already approaching its top. Understanding that shift is essential for evaluating the realistic odds that general intelligence will emerge by 2037, rather than assuming it as a given.
The most critical constraints on AI are not mysterious. They are mathematical, physical and economic. They do not bend to ambition. They bound it.
Data is the fuel, and we are running out
The world’s supply of clean, human-written text is approaching its end. Studies from Epoch AI estimate that between 2026 and 2028, we will reach the limit of high-quality public data suitable for training frontier models. Every major model released to date has been trained on most of the public internet. The next generation would require vastly more text to sustain performance improvements, yet that text simply does not exist in the required quantity.
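To see why the window is so close, it helps to run the arithmetic. The sketch below is a toy projection, not Epoch AI’s model: the token stock, current consumption and growth factor are all illustrative assumptions, chosen only to show how quickly exponentially growing demand overtakes a fixed stock.

```python
# Back-of-envelope sketch of when training demand could outgrow the stock
# of public human-written text. All numbers are illustrative assumptions,
# not Epoch AI's estimates.

stock_tokens = 300e12        # assumed usable stock of public text, in tokens
demand_tokens = 15e12        # assumed tokens consumed by a recent frontier run
annual_growth = 2.5          # assumed yearly growth factor in training data

year = 2024
while demand_tokens < stock_tokens:
    year += 1
    demand_tokens *= annual_growth
    print(f"{year}: ~{demand_tokens / 1e12:.0f} trillion tokens needed")

print(f"Under these assumptions, demand overtakes the stock around {year}.")
```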
The proposed workaround, synthetic data, introduces risks that extend far beyond technical inconvenience. Research from Shumailov and colleagues demonstrates how repeated training on model-generated text creates a loss of variance, a statistical narrowing that amplifies errors and erases rare but important patterns. Once the training ecosystem is dominated by machine-written content, models begin learning from distorted versions of their own output. The result, known as model collapse, is not improved intelligence but a slow drift into homogeneity.
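The mechanism is easy to demonstrate in miniature. The toy simulation below is not the experiment from the paper; it simply fits a Gaussian to its own output generation after generation, with clipped sampling standing in for the way generative models under-produce rare events, and watches the spread collapse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data drawn from a wide distribution.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(1, 11):
    # "Train" a model on the current corpus: estimate its mean and spread.
    mu, sigma = data.mean(), data.std()

    # Generate the next corpus from the fitted model. Clipping at 2 sigma
    # stands in for the way generative models under-produce rare events
    # relative to how often they appear in real data.
    samples = rng.normal(loc=mu, scale=sigma, size=10_000)
    data = np.clip(samples, mu - 2 * sigma, mu + 2 * sigma)

    print(f"generation {generation:2d}: corpus std = {data.std():.3f}")

# The spread shrinks every generation: rare-but-real patterns disappear
# first, and the corpus converges toward a narrow, homogeneous core.
```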
This is not merely a research challenge. As AI-generated articles, summaries and posts flood the internet, the signal becomes contaminated. High-quality human text becomes increasingly scarce and harder to isolate. The irony is striking. Just as the world becomes obsessed with building human-level intelligence, the raw material required to support it begins to evaporate in plain sight.
The physical ceiling of energy and compute
Even if the data problem were solved, the physical limitations of computing present another wall. The energy demands of training frontier models are reaching levels once reserved for national infrastructure. The proposed next-generation clusters from leading AI labs are projected to require around five gigawatts of power. That is the equivalent of several nuclear power plants dedicated to a single training run. In practical terms, such facilities cannot be built quickly. They require environmental approvals, grid upgrades and years of engineering work that proceeds on timelines indifferent to hype cycles.
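Rough arithmetic makes that scale concrete. The numbers below, a five gigawatt cluster, roughly one gigawatt per large nuclear reactor and a ninety day training run, are illustrative assumptions rather than reported specifications.

```python
# Back-of-envelope energy arithmetic for a hypothetical frontier cluster.
# All inputs are illustrative assumptions, not reported figures.

cluster_power_gw = 5.0      # assumed steady draw of the training cluster
reactor_output_gw = 1.0     # typical output of one large nuclear reactor
training_days = 90          # assumed length of a single training run

reactors_equivalent = cluster_power_gw / reactor_output_gw
energy_gwh = cluster_power_gw * 24 * training_days   # gigawatt-hours consumed

household_kwh_per_year = 10_000                      # rough annual household use
households_for_a_year = energy_gwh * 1e6 / household_kwh_per_year

print(f"Reactors needed to match the draw: {reactors_equivalent:.0f}")
print(f"Energy for one {training_days}-day run: {energy_gwh:,.0f} GWh")
print(f"Roughly the annual usage of {households_for_a_year:,.0f} households")
```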
Meanwhile, hardware progress has slowed sharply. Moore’s Law has weakened, and the cost per transistor no longer reliably falls with each new node. Memory bandwidth, not processing power, has become the bottleneck. Chips can compute faster than they can be fed with data. Improvements in architecture are not keeping pace with the scale required for robust leaps in model capability.
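A roofline-style calculation shows why bandwidth rather than raw compute sets the ceiling. The hardware figures below are rough assumptions in the ballpark of current accelerators, not the specification of any particular chip.

```python
# Roofline-style sketch: how much arithmetic a chip must perform per byte
# moved before its compute units stop waiting on memory. The hardware
# numbers are rough assumptions, not a specific product's specification.

peak_flops = 1.0e15          # assumed peak throughput: 1 PFLOP/s
memory_bandwidth = 3.0e12    # assumed memory bandwidth: 3 TB/s

# Arithmetic intensity (FLOPs per byte) at which compute and memory balance.
balance_point = peak_flops / memory_bandwidth
print(f"Break-even arithmetic intensity: {balance_point:.0f} FLOPs per byte")

# A memory-bound workload that performs only a couple of operations per byte
# moved sits far below that break-even point and leaves most compute idle.
op_intensity = 2.0  # assumed FLOPs actually performed per byte moved
achievable = min(peak_flops, op_intensity * memory_bandwidth)
print(f"Achievable throughput: {achievable / peak_flops:.1%} of peak")
```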
The industry often frames scaling as a choice. In reality, it is a contest against physics. Physics usually wins.
The economic bubble forming beneath the optimism
There is a widening gap between the capital flowing into AI infrastructure and the revenue needed to sustain it. According to analysis published by Sequoia Capital, the annual revenue required to justify current investment levels sits near six hundred billion dollars. Actual revenues are far below that threshold. Investors are betting on future products that have not yet appeared and may not appear quickly enough to cover the cost of the hardware being deployed ahead of them.
This situation mirrors earlier speculative eras, particularly the early 2000s, when fiber networks were built far faster than usage could justify. The difference today is scale. The numbers involved in AI are larger, the timelines are tighter and the risks are global. If the expected breakthrough to true artificial intelligence lags past 2037, the financial correction could be severe, with cascading consequences for industries that have reshaped their strategies around unrealistic automation timelines.
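The structure of that calculation is easy to reconstruct in spirit. The sketch below uses illustrative assumptions, not Sequoia’s actual inputs or any company’s reported figures, to show how quickly capital spending compounds into a revenue requirement.

```python
# Rough sketch of the revenue-gap arithmetic. All inputs are illustrative
# assumptions, not Sequoia's model or any company's reported figures.

annual_chip_capex = 150e9    # assumed yearly spend on AI accelerators
datacenter_multiplier = 2.0  # assumed markup for energy, buildings, operations
gross_margin = 0.50          # assumed margin the end products must earn

required_revenue = annual_chip_capex * datacenter_multiplier / gross_margin
estimated_actual_revenue = 100e9  # assumed current AI-related revenue run rate

gap = required_revenue - estimated_actual_revenue
print(f"Revenue needed to justify the spend: ${required_revenue / 1e9:,.0f}B per year")
print(f"Implied shortfall at assumed current revenue: ${gap / 1e9:,.0f}B per year")
```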

The cognitive wall that refuses to move
Beyond the data, hardware and economics lies the deepest obstacle of all. Today’s AI systems do not think. They predict. Their intelligence is statistical, not conceptual. They do not build internal causal models of the world. They do not understand physics, intention or meaning. They excel at patterns but falter at reasoning.
This limitation becomes visible in out-of-distribution cases. When confronted with unusual problems that fall outside their training experience, models guess. They may guess confidently, but they still guess. In critical fields such as healthcare or scientific discovery, guessing is unacceptable.
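A tiny regression example captures the failure mode. It is a generic statistical illustration, not a claim about any particular model: a curve fitted on a narrow range predicts well inside that range and confidently returns a number far from the truth outside it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data: the true relationship is y = sin(x), observed only on [0, 3].
x_train = rng.uniform(0.0, 3.0, size=200)
y_train = np.sin(x_train) + rng.normal(0.0, 0.05, size=200)

# Fit a flexible pattern-matcher: a degree-5 polynomial that captures the
# training range well without any notion of what sin(x) "is".
coeffs = np.polyfit(x_train, y_train, deg=5)

# In-distribution: a point inside the training range is predicted accurately.
print("f(2.0) =", round(np.polyval(coeffs, 2.0), 3), " true:", round(np.sin(2.0), 3))

# Out-of-distribution: a point outside the range gets a confident answer
# that is far from the truth, with no signal that the model is guessing.
print("f(6.0) =", round(np.polyval(coeffs, 6.0), 3), " true:", round(np.sin(6.0), 3))
```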
The divide between linguistic fluency and grounded understanding remains profound. Large Language Models operate entirely through text. They have no sensory experience, no embodiment. They cannot feel weight, observe motion or understand a scene except as vectors. Researchers like Yann LeCun argue for the development of robust world models, but even optimists acknowledge that such systems are many years away, and the path toward them is unclear and fragmented.
By 2037, these cognitive gaps will not magically close. They require architectural transformations, not merely larger versions of today’s models. This point rarely surfaces in corporate forecasts, but it quietly shapes the real limits of what artificial intelligence can become in the next twelve years.
Looking forward from 2037 instead of toward it
When we reason backward from 2037, a clearer picture emerges. The probability of real, general artificial intelligence appearing by then is not high. Most leading researchers estimate the odds somewhere between ten and fifteen percent. This is not a dismissal of progress. The next decade will deliver extraordinary specialized systems that surpass human experts in countless domains. Scientific discovery will accelerate. Medicine will become more predictive. Knowledge work will transform in ways that are profound and irreversible.
But none of these developments require general intelligence. They require focused intelligence, the kind that excels within boundaries. The public imagination often merges the two, yet the difference is vast. Narrow AI can reshape industries without approaching human level cognition. General intelligence demands grounding, reasoning, memory, abstraction and reliable behavior in unfamiliar situations. It is not simply a bigger version of what we have today. It is a different category altogether.
This distinction shapes everything from regulation to economic planning to societal expectations. If we expect general intelligence by 2037, policy will be misaligned. If we expect powerful but non-general systems, the conversation shifts toward oversight, transparency and responsible design. The challenge is not that intelligence will not grow. The challenge is that it will grow into forms that require new governance and careful handling rather than blind faith.
The story behind the hype
The race toward 2037 is not a sprint to inevitable superintelligence but a negotiation with constraints that do not care about ambition. Data, energy, hardware, economics and cognition form a braided set of limits that shape the real boundaries of progress. When those limits are acknowledged, the future becomes clearer, not darker.
We will live in a world filled with extraordinary tools but not machines that think like us. We will gain capabilities that reshape industries but not the emergence of a synthetic mind. That reality is not disappointing. It is grounding. It places responsibility back into human hands, where ethics, policy and long term planning matter more than speculative promises.
The hard truth is that by 2037, the world will not see real artificial intelligence in the strict sense. It will see something more complex, more unpredictable and more human dependent. And that may be exactly why the decisions we make today carry so much impact.
Frequently Asked Questions
Will artificial intelligence ever truly think like a human?
Probably not with today’s technology. Current systems predict patterns rather than understand the world the way humans do.
Is AI becoming too dependent on synthetic data?
Yes, and that creates risks. Too much AI-generated training data can reduce accuracy and distort model behavior.
Why does AI still make basic reasoning mistakes?
Because it has no real understanding. It works statistically, not logically, so unfamiliar problems often lead to confident but incorrect guesses.
How much energy will future AI systems really need?
A lot. Training powerful models may require power levels comparable to industrial infrastructure, raising real concerns about sustainability.
References
Shumailov I, et al. The Curse of Recursion: Training on Generated Data Makes Models Forget. Nature. 2024.
Epoch AI. Forecasting the Limits of Data for Scaling Language Models. 2024.
Kaplan J, McCandlish S, Henighan T, et al. Scaling Laws for Neural Language Models. arXiv:2001.08361. 2020.
LeCun Y. A Path Towards Autonomous Machine Intelligence. Meta AI / OpenReview. 2022.