The result of circumstances
The accidents that produced AI, and the futures they foreclosed
There is a way of talking about AI that has become so common we have stopped noticing it. AI "is happening." We are "in the AI era." The technology "arrived." The trillions of dollars now flowing into data centers and foundation models are described as if they are flowing toward AI the way water flows downhill, as though some underlying force makes this the obvious place for them to go. AI is treated as a natural phenomenon, like weather, that we are responding to.
This framing is wrong. The current AI moment did not happen because of any deep necessity. It happened because of a specific set of circumstances, most of them contingent, several of them accidents.
The four conditions that made AI possible
The first is the 2017 Transformer paper. Eight Google Brain researchers published “Attention Is All You Need” in June 2017, originally as an attempt to improve machine translation. Their architectural choice - dispensing with recurrence and convolution entirely, relying on attention mechanisms that could be parallelized across GPUs - was not the obvious one. Recurrent neural networks had dominated sequence modeling for a decade. The Transformer was a research bet that could have produced a marginally better translation system and been forgotten. Instead it became the architecture underlying every major language model since. All eight of the original authors have since left Google.
The second is the existence of internet-scale text. The training data that made large language models work was a one-time accident of the past twenty-five years. A generation of humans wrote into an open public web before there was any commercial reason to lock that text behind paywalls or licensing terms. Reddit, Wikipedia, Stack Overflow, GitHub - the corpus that GPT-3 and its descendants were trained on existed because of cultural and commercial choices made between roughly 2000 and 2020 that nobody at the time understood would later become the substrate of a new technology. That window is now closing, with major content publishers litigating, licensing, or blocking AI training.
The third is GPU maturation, which itself was contingent on the gaming industry. Nvidia's parallel processors were originally built to render 3D graphics for video games. The chips happened to be useful for parallel scientific computing, which Nvidia recognized in 2006 with CUDA. The connection to deep learning came in 2009, when Andrew Ng's group at Stanford demonstrated that GPUs could accelerate deep-learning training by up to seventy times. The chips that power the current AI boom were created for the purpose of better-looking video games.
The fourth is the post-COVID interest rate environment. From 2020 through 2022, interest rates were at historic lows, making it cheap to finance capital-intensive bets on emerging technologies. Near-zero rates pushed investors toward long-duration growth assets, and the data-center buildout that large-scale AI requires is exactly that kind of asset.
These are the four contingencies that show up in any thoughtful account of how AI became possible. Take any one of them away and the technology either does not exist, exists in a much weaker form, or exists but cannot be deployed at scale. Architecture, data, compute, financing. The standard analytical history of AI ends here, with the conclusion that a specific alignment of conditions made the technology possible. The technology is real; the alignment was not destiny; we should be more humble about how it happened.
This account is correct as far as it goes. It is also incomplete. It explains how AI became possible. It does not explain why trillions of dollars flowed into AI specifically, on this timeline, concentrated in a handful of public companies, at a pace the bond market is now pricing as risky. For that, you have to look at two more circumstances.
The two conditions that explain the capital cycle
The fifth circumstance is the failure of self-driving cars to deliver on 2018 timelines. Between 2014 and 2018, autonomous vehicles attracted tens of billions of dollars in venture capital and corporate investment, with expectations of rapid progress toward full autonomy. That progress slowed. As timelines stretched, some of the capital and talent committed to autonomy began looking for adjacent areas where machine learning progress was more immediately visible. Part of that shift contributed to the generative AI boom that followed.
The sixth is the collapse of crypto as a dominant investment narrative. The 2022 crash - FTX, Terra/Luna, and the broader downturn - ended a multi-year period in which speculative tech capital had been concentrated in blockchain-related bets. That capital did not disappear. It needed a new narrative. The release of ChatGPT in late 2022 arrived at a moment when investors and engineers were already repositioning, and AI became a natural place for some of that attention and capital to move.
Notice what just changed. The first four contingencies were technological and economic preconditions - things that had to be true for AI to exist as a working technology. The last two are not preconditions. They are capital reallocation events. The technology had been available since at least 2020 with the release of GPT-3. The capital surge began in 2023, after the crypto collapse and after the autonomy plateau matured into evident disappointment. The first four contingencies made AI possible. The last two help explain why AI became the available story when speculative capital was already looking for one.
The dot-com parallel
This is a familiar shape. In 1995, the internet was real. It was useful. It was going to change the economy. None of that was wrong. What was wrong was the specific scale and speed of capital deployment between 1998 and 2000, which was driven less by the underlying technology’s near-term economic value than by the speculative narrative the technology supported. Bad businesses got funded as long as the narrative held.
The technology vindicated itself eventually. The internet did reshape the economy, on roughly the timeline that sober analysts in 1995 had predicted. But the capital cycle that bet on the faster timeline destroyed roughly five trillion dollars in market value when it unwound. The technology was right; the capital cycle around it was a separate phenomenon, driven by separate forces, and it failed for reasons that had little to do with whether the underlying technology was good.
This is the structural insight worth holding on to: a technology can be real, valuable, and eventually transformative, and the capital cycle around it can still be a separate phenomenon - a place for capital looking for a story to land, with the story attached to a real technology so the eventual unwinding cannot be cleanly attributed to fraud. AI is in this position now. The technology is real. Foundation models work. They will reshape some industries meaningfully over the coming decades. What is in question is not the technology. It is whether the current scale and speed of capital deployment reflects the technology’s near-term economic value or the financial system’s need for somewhere to put speculative capital after the previous places stopped working.
Taking the counter-position seriously
It is worth taking the strongest version of the opposing case seriously. Capital markets are not stupid. AI may be the most rational allocation available given current constraints. It is scalable, measurable, closer to monetization than the alternatives, and produces real outputs that real customers pay for. Many of the people deploying capital into AI are sophisticated allocators who have a personal incentive to be right and bear a personal cost when they are wrong. The argument here is not that they are mistaken on the merits of AI specifically. The argument is that the structure of how capital is allocated systematically prefers technologies with certain properties - measurable outputs, visible progress, narratives that can be sustained over short reporting cycles - and that this preference operates whether or not it produces the best outcomes for the populations the capital is supposed to serve.
Why this technology and not the alternatives
Once the structural preference is named, the question of why this technology and not the alternatives becomes answerable. AI fits the preference unusually well. The alternatives do not. Fusion energy operates on development cycles too long for quarterly milestones. Biotech operates on FDA timelines that resist short-cycle storytelling, with binary outcomes that are bad for narrative momentum. Materials science breakthroughs are slow and difficult to demo. Space infrastructure has payback periods measured in decades, far beyond what quarterly narratives can sustain.
None of these alternatives is obviously inferior to AI in terms of long-term societal impact. Some, by various reasonable measures, could plausibly deliver greater benefits per dollar invested over decades. But they cannot perform the structural role that AI can in the current capital environment. They cannot sustain a cycle of this shape because they do not produce the kind of recurring inputs that public-company investors and venture capital funds are organized to reward. The trillions did not flow into AI solely because it was the technology most in need of capital. They flowed because AI combined real technical progress with characteristics that made it unusually compatible with how capital is allocated today.
The biotech case
The cleanest test of this argument is the one most people lived through. Between 2020 and 2021, biotech venture funding reached unprecedented levels, peaking at $53.9 billion in 2021. The world had just lived through a pandemic that killed seven million people, disrupted the global economy for two years, and revealed in unmistakable terms that biological infrastructure - surveillance systems, vaccine platforms, therapeutic pipelines, manufacturing capacity - was the foundation on which everything else depended. The rational response would have been sustained, increasing investment in biotech and pandemic preparedness. The capital was there. The lesson was fresh. The case was self-evident.
That is not what happened. Biotech VC funding fell sharply in 2022 and 2023, dropping to roughly $24 billion in 2023 - less than half the 2021 peak. By 2024-2025, biotech’s share of US startup investment had fallen well below its post-pandemic peak, while AI absorbed an ever-larger share. In Q1 2026, AI startups captured roughly 80% of all global venture funding.
The capital allocators who pulled back from biotech and increased their exposure to AI between 2022 and 2026 were not unaware of the pandemic. They had just lived through it. The shift is consistent with the pattern described above: capital flowing toward technologies with faster feedback loops and shorter reporting cycles, regardless of where the case for societal value is strongest.
The cost of that shift is not yet visible because biotech development cycles are long. Some of the systems and capabilities that might have been built under sustained funding are simply not in place. If they are needed in the future, their absence will only become clear at that point. The bet may still pay off; AI may produce biomedical breakthroughs that compensate for the underfunding of biotech directly. But that is the bet, and it is being made on behalf of populations who will absorb the cost if it turns out wrong.
Why the framing matters
There are two reasons it matters whether we describe the AI capital cycle as a response to technological inevitability or as a cycle shaped by structural preferences in how capital is allocated. The first is intellectual honesty. The framing of “AI is happening” obscures the fact that the trillions are flowing because of specific decisions made by specific actors under specific incentives, not because of a force beyond anyone’s control.
The second is that the inevitability framing forecloses the question of what else we could be doing. If AI is just “happening,” there is nothing to discuss. If AI is being chosen because the capital allocation system prefers technologies of a certain shape, the question - what would have to change for the alternatives to receive comparable capital - becomes available, and once available, it is harder to dismiss.
AI will likely follow a familiar pattern: the technology will continue to develop and eventually find durable economic uses, while the capital cycle around it may peak and normalize earlier. The technology and the capital cycle are related, but they are not the same thing.
What the cycle costs
The cost of a speculative cycle is not the capital it deploys. The capital is real and most of it produces something - some real infrastructure, some real research, some real products. The cost is the capital it does not deploy elsewhere. Biotech treatments that would have existed and do not. Climate infrastructure that would have been built and is not. Materials science breakthroughs that would have been funded and are not. Fusion research, which has not attracted capital at a scale comparable to the current AI cycle despite decades of theoretical promise. Each of these is an opportunity cost, distributed across the populations that would have benefited from them, invisible because the thing not built leaves no trace.
This is the cost the inevitability framing prevents us from seeing. By describing the AI moment as a response to forces beyond our control, we avoid noticing that the forces are not beyond our control. They are the result of how the financial system is structured, what kinds of stories it can use to allocate capital, and what kinds of bets it can sustain. Those features are not fixed. They are choices.
The current AI moment is the result of circumstances. Some of those circumstances are accidents of research and technology. Two of them are the residue of previous capital cycles ending and looking for somewhere to land. All of them are contingent. The question worth asking is not whether AI is a bubble. AI is real, and the technology will outlast the cycle around it. The question is whether the system that allocated trillions of dollars into a single technology is the system we want allocating the next trillions.
Sources. The Transformer paper history is from Vaswani et al. (2017) “Attention Is All You Need” and subsequent retrospectives. The GPU and CUDA origin story draws on Communications of the ACM’s 2024 retrospective “The Origins of GPU Computing.” Hyperscaler capex figures come from Goldman Sachs and Bank of America tracking (2024-2026). Q1 2026 venture funding figures are from Crunchbase’s April 2026 reports. Biotech venture funding figures draw on PitchBook’s annual analyses (2020-2025). The history of US fusion funding draws on Margraf’s 2021 review for Stanford and the MIT Energy Initiative’s 2024 reporting. The dot-com comparison draws on standard accounts of the 1995-2002 cycle.


