The AI Boom, The Bubble, and What Comes Next

September 22, 2025

Artificial Intelligence (AI) has gone from niche academic curiosity to the hottest buzzword in technology, business, and even politics. Over the past few years, we’ve witnessed an explosion in machine learning, deep learning, natural language processing (NLP), computer vision, and generative AI systems like large language models (LLMs). The hype has been so strong that trillions of dollars are flowing into chips, data centers, and AI startups. But here’s the big question: is this sustainable, or are we inflating an AI bubble destined to pop?

Even Sam Altman, CEO of OpenAI, has acknowledged that investors are "overexcited" and that AI valuations resemble the dot-com era — a crash that erased roughly $5 trillion in market value between March 2000 and October 2002.1 At the same time, AI safety researchers like Dr. Roman Yampolskiy are warning that beyond economics, humanity is woefully unprepared for the risks of superintelligent systems that could reshape our societies, economies, and even survival.

In this deep dive, we’ll unpack what’s happening in AI right now — the technologies driving the boom, the cracks starting to show, the risks of collapse, and the few things likely to endure if the bubble bursts. Along the way, we’ll look at real-world use cases, energy and infrastructure challenges, and the looming ethical dilemmas of AI-driven change.


The Shape of the AI Boom

AI has been around for decades, but recent breakthroughs in deep learning and generative models have changed the game. Let’s break down the key areas fueling the current boom:

Machine Learning and Deep Learning

  • Machine Learning (ML): Algorithms that learn patterns from data and improve performance without being explicitly programmed.
  • Deep Learning: A subset of ML using neural networks with many layers, enabling breakthroughs in image recognition, speech processing, and more.

These techniques underpin everything from self-driving cars to recommendation systems.
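The "learn patterns from data" bullet can be made concrete: the toy sketch below recovers the hidden rule y = 2x + 1 purely from examples via gradient descent, with no rule coded in. It is an illustration only, not production ML (real systems use frameworks like PyTorch or TensorFlow).

```python
# Minimal "learning from data": fit y = w*x + b by gradient descent
# on mean squared error. The pattern is never hard-coded -- it is learned.
data = [(x, 2 * x + 1) for x in range(10)]  # hidden pattern: y = 2x + 1

w, b = 0.0, 0.0  # start with no knowledge of the pattern
lr = 0.01        # learning rate

for _ in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```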

Generative AI and LLMs

  • Generative AI: Models that don’t just classify or predict, but create new content — text, images, video, code.
  • Large Language Models (LLMs): Systems like GPT-4 that can produce human-like text, answer questions, and even write software.

This is where the hype has exploded. Suddenly, AI feels creative, not just analytical.
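Under the hood, that "creativity" is next-token prediction: the model repeatedly picks a likely next word given what came before. A bigram model over a ten-word corpus shows the same mechanism in miniature (vastly simplified; real LLMs use neural networks trained on huge corpora):

```python
# Toy next-token predictor: count which word follows which, then generate
# new text by sampling. The same loop, scaled up enormously, is what an
# LLM does at inference time.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)  # bigram counts, stored as lists

random.seed(0)
word, out = "the", ["the"]
for _ in range(5):
    # sample a plausible continuation; fall back to any word if unseen
    word = random.choice(following.get(word, corpus))
    out.append(word)
print(" ".join(out))
```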

Computer Vision

  • AI systems capable of interpreting visual data: facial recognition, medical imaging, autonomous navigation.
  • Fueled by convolutional neural networks (CNNs) and, more recently, transformer-based architectures.
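The core CNN operation is sliding a small filter over an image. Here is a minimal sketch with a hand-written vertical-edge kernel; the key difference in a real CNN is that the kernel weights are learned from data rather than chosen by hand.

```python
# A 3x3 filter convolved over a tiny grayscale "image": the basic
# operation of a CNN layer. The image has a vertical edge (dark | bright).
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
kernel = [  # hand-written vertical-edge detector
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, k):
    h, w, kh, kw = len(img), len(img[0]), len(k), len(k[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(img[i + di][j + dj] * k[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

print(convolve(image, kernel))  # large responses where the edge sits
```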

Natural Language Processing (NLP)

  • Tools for understanding and generating human language.
  • Powers everything from chatbots and translation services to search engines and voice assistants.

Voice Technology

  • Voice recognition and synthesis systems creating conversational experiences.
  • Integration with LLMs is making digital assistants smarter and more lifelike.

Together, these areas create the sense of a technological revolution. But revolutions attract hype, and hype attracts money.


The Signs of a Bubble

Sam Altman’s warning shouldn’t be taken lightly. Here are the factors suggesting that AI investment may have inflated into bubble territory:

1. Stocks Run on Vibes

Investors are pouring money into any company with “AI” in its pitch deck. Stock prices often reflect vibes and storytelling more than actual performance. We’ve seen this before during the dot-com era.

2. Trillions Burned on Chips

The demand for GPUs and AI-specialized hardware is astronomical. The four largest hyperscalers — Amazon, Google, Microsoft, and Meta — collectively spent roughly $413 billion on data centers and AI infrastructure in 2025 alone, more than double their combined 2023 spend, and McKinsey projects $5.2 trillion in cumulative AI infrastructure investment through 2030.2 If returns on that spending don't materialize, it could mean an enormous amount of misallocated capital.

3. The Energy Wall

Training large AI models consumes staggering amounts of energy. Training GPT-3 alone is estimated to have used 1,287 MWh of electricity, roughly the annual consumption of 120 average U.S. households,3 and frontier-model training runs have only grown since. The IEA projects that data centers, driven heavily by AI, will account for a meaningful share of global electricity demand growth this decade. This raises both environmental and economic sustainability concerns.
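The household comparison is easy to sanity-check. The back-of-envelope sketch below assumes the commonly cited figure of roughly 10.7 MWh per year for an average U.S. household; the exact average varies by source and year.

```python
# Sanity check on the GPT-3 energy figure cited in footnote 3.
gpt3_training_mwh = 1287        # estimated training energy for GPT-3
household_mwh_per_year = 10.7   # assumed average U.S. household usage

households = gpt3_training_mwh / household_mwh_per_year
print(round(households))  # roughly 120 household-years of electricity
```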

4. Brittle Technology

Despite their apparent intelligence, today’s AI systems are brittle. They hallucinate, fail at basic reasoning, and can be tricked by adversarial inputs. Relying too heavily on such systems can lead to catastrophic failures.

5. The Psychology Trap

Humans are prone to overestimating new technologies. The hype cycle leads to inflated expectations, which eventually collapse into disillusionment when reality doesn’t match.

6. Venture Bubble Mechanics

Venture capital is flooding into AI startups, many of which have no sustainable business model. When easy money dries up, many will vanish, leaving behind only a few survivors.


The Safety Warnings

Economic bubbles are one thing. Existential risks are another. Dr. Roman Yampolskiy, a leading AI safety expert, warns that we’re playing with fire.

The Risk of Superintelligence

  • Prediction: Yampolskiy expects artificial general intelligence (AGI) by 2027, with superintelligence following shortly after.4
  • Dangers: A system more intelligent than humans could act in ways we can’t predict or control.
  • Comparison: Yampolskiy argues superintelligent AI is more dangerous than nuclear weapons because, unlike nuclear arsenals, it would not be under human control.4

Job Displacement

  • Claim: Yampolskiy predicts that up to 99% of jobs could be automated by 2030 — a forecast far more aggressive than mainstream AI researchers, who tend to project major (but partial) labor disruption over a longer horizon.5
  • Remaining Jobs: Yampolskiy expects only a handful of roles to survive — those where people specifically prefer a human (e.g., therapy, caregiving), plus oversight and creative leadership.
  • Implication: Mass unemployment and social upheaval could follow.

Opacity and Control

  • We don’t truly understand what’s happening inside large models.
  • “Unplugging” isn’t a realistic solution once systems are deeply integrated into infrastructure.

Existential Threats

  • AI could be misused to help design biological weapons or other catastrophic tools.
  • Superintelligence could trigger geopolitical instability or, in worst-case scenarios outlined by Yampolskiy, threaten human extinction.
  • Yampolskiy is also a proponent of the simulation hypothesis, and has speculated that advanced AI could probe or destabilize the boundaries of any such simulation — a far more contested claim than the safety risks above.

Where the Tech Is Fragile

Let’s get more concrete about the technical brittleness of current AI models.

Hallucinations

LLMs often produce false information with high confidence. That makes them unreliable for critical applications.

Energy Inefficiency

Training a cutting-edge model demands enormous compute (measured in petaflop/s-days) and megawatt-hours of electricity. Scaling this indefinitely is unsustainable.

Lack of True Understanding

Despite their outputs, LLMs don’t “understand” language — they predict patterns. This leads to shallow reasoning and logical errors.

Security Risks

Adversarial examples can fool computer vision systems into misclassifying images — a dangerous vulnerability for autonomous vehicles or medical diagnostics.
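The mechanism is easy to demonstrate on a linear classifier: nudging each input feature slightly in the direction that hurts the score flips the predicted class. The weights and input below are made up for illustration; attacks such as FGSM apply the same sign-of-the-gradient idea to deep networks.

```python
# Toy adversarial example: a small perturbation aligned against the
# classifier's weights flips its decision. Weights and input are
# illustrative assumptions, not from any real model.
w = [1.0, -2.0, 0.5]   # linear classifier weights
x = [0.4, 0.1, 0.2]    # input the classifier labels positive

def score(inp):
    return sum(wi * xi for wi, xi in zip(w, inp))

eps = 0.2  # perturbation budget per feature
# move each feature by eps in the direction that lowers the score
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(score(x))      # positive -> class A
print(score(x_adv))  # negative -> class B, despite a tiny change
```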


What Happens When the Bubble Pops?

Just like the dot-com crash, most AI startups may not survive. But not everything will disappear. Here’s what’s likely to remain:

Survivors

  • Infrastructure: The chips, data centers, and cloud platforms will remain valuable.
  • Core Use Cases: AI in healthcare, logistics, and enterprise productivity may deliver sustainable returns.
  • Open Source Models: Communities around open models are likely to thrive even without VC cash.

Casualties

  • Hype Startups: Companies raising money on vague AI promises without real products.
  • Overhyped Applications: Tools that don’t solve meaningful problems or can’t overcome brittleness.

Lessons from Dot-Com

The dot-com bubble wiped out countless startups, but survivors like Amazon and Google became the backbone of the modern internet. Expect a similar pattern here.


Demo: Using AI Safely in Practice

Given the risks, how can developers responsibly use AI today? Here’s a practical example: using an LLM for text summarization, but with guardrails to catch hallucinations.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from environment

prompt = "Summarize the following article in 5 bullet points, and only use direct quotes from the text. Do not add extra facts."

article_text = """
Sam Altman, CEO of OpenAI, has raised concerns that AI hype may represent a bubble similar to the dot-com crash...
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a summarization assistant."},
        {"role": "user", "content": f"{prompt}\n{article_text}"}
    ],
    temperature=0  # deterministic sampling: reduces, but does not eliminate, hallucinations
)

summary = response.choices[0].message.content
print(summary)

Why this matters:

  • The temperature=0 setting removes sampling randomness, which reduces (but does not eliminate) hallucinations.
  • Instructing the model to use only direct quotes limits fabrication.

This illustrates how developers can apply AI responsibly — not blindly trusting outputs, but designing systems with constraints.
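One further guardrail in the same spirit: because the prompt demands direct quotes, a post-check can verify that every quoted phrase in the summary actually appears verbatim in the source. This is a sketch with made-up sample strings, not a complete hallucination detector.

```python
# Post-check: flag any double-quoted phrase in the summary that is not
# found verbatim in the source text, for human review.
import re

def verify_quotes(summary: str, source: str) -> list:
    """Return quoted phrases from summary that are NOT in source."""
    quotes = re.findall(r'"([^"]+)"', summary)
    return [q for q in quotes if q not in source]

article_text = "Sam Altman has raised concerns that AI hype may be a bubble."
summary = '- "AI hype may be a bubble"\n- "the market will certainly crash"'

print(verify_quotes(summary, article_text))
# ['the market will certainly crash'] -- fabricated, needs review
```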


The Human Side: Jobs and Society

AI’s economic and societal impact may rival or exceed the Industrial Revolution.

The Job Landscape

  • Likely to survive: Roles involving deep human empathy (e.g., therapy, caregiving), creativity (original art, leadership), and oversight of AI systems.
  • Likely to vanish: Routine cognitive and manual jobs.

Psychological Impact

Humans derive identity and purpose from work. Mass unemployment could trigger crises of meaning, not just income.

Possible Futures

  • Collapse: Societal breakdown if unemployment and inequality spiral.
  • Restructuring: New social contracts, maybe universal basic income.
  • Acceleration: Humans collaborating with AI, augmenting rather than replacing roles.

How We Can Prepare

For Developers

  • Build responsibly: add guardrails, test against adversarial inputs.
  • Prioritize transparency: log decisions, explain limitations.

For Companies

  • Avoid hype-driven strategies. Focus on real problems.
  • Invest in energy efficiency and sustainable infrastructure.

For Policymakers

  • Regulate AI safety research.
  • Develop frameworks for job transitions.
  • Monitor concentration of power in AI companies.

For Individuals

  • Upskill in areas AI can’t easily replace.
  • Stay informed about AI’s risks and potential.
  • Advocate for responsible AI development.

Conclusion

AI is extraordinary, but it’s not magic. The current boom has the hallmarks of a bubble, and when it pops, many companies and investors will be burned. But the underlying technologies — from machine learning to generative AI — will continue to reshape the world. The survivors will be those who focus on real value, responsible use, and long-term sustainability.

We’re standing at a crossroads: AI could usher in a new golden age of productivity and creativity, or it could destabilize economies and even threaten humanity’s survival. The difference will come down to whether we take safety, ethics, and sustainability seriously.

If you care about the future of AI, now is the time to pay attention. Don’t just ride the hype wave; prepare for what comes after it crashes.


Footnotes

  1. Sam Altman, in interviews reported by The Verge and CNBC in August 2025, said: "Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes." He compared market conditions to the dot-com era. The Nasdaq fell roughly 78% from its March 2000 peak to October 2002, erasing approximately $5 trillion in market capitalization. See: CNBC, Wikipedia: Dot-com bubble.

  2. McKinsey's 2025 analysis projects $5.2 trillion in AI infrastructure investment through 2030, with chips representing roughly 60% of that spend. See: McKinsey: The cost of compute.

  3. GPT-3 training is estimated at 1,287 MWh, per published research summarized by ADaSci and other sources. See: ADaSci: LLM Energy Consumption.

  4. Roman Yampolskiy has stated in multiple 2025 interviews — including The Diary Of A CEO podcast — that he expects AGI by around 2027, with superintelligence shortly after, and considers it more dangerous than nuclear weapons because it would not be under human control. See: University of Louisville Q&A.

  5. Yampolskiy's 99%-by-2030 forecast is an outlier among researchers. Anthropic CEO Dario Amodei has said roughly half of entry-level office jobs could vanish in five years; researcher Adam Dorr forecasts mass displacement closer to 2045; Geoffrey Hinton expects "mundane intellectual labor" to be automated. See: Entrepreneur.

