
AI Bubble? Understanding Real Value Amidst Market Hype
Last Updated on September 4, 2025 by Editorial Team
Author(s): Manbir T
Originally published on Towards AI.

A $515B paradox hiding in plain sight
A friend texted me a stat that made me spit out my coffee: roughly $560 billion poured into AI in the last couple of years, yet only about $45 billion in incremental revenue shows up on earnings reports, a yawning gap near $515 billion. Markets can ignore gravity for a while. Math usually doesn’t.
Call it what it is: an AI bubble. Not “AI is fake” (it’s very real), but valuations and capital flows detaching from near‑term cash generation. When prices and private marks assume flawless execution, never‑ending efficiency gains, and frictionless adoption, you’re no longer investing. You’re praying at the altar of momentum.
Dan Buckley puts it plainly: “We’re seeing record capital inflows, sky-high valuations, one-sided sentiment, and investing driven by FOMO before common sense.” It’s the classic cocktail. You don’t need a PhD in market history to spot the mix.
Why this matters right now:
- Capital is stampeding into AI investment (models, chips, data centers, and startups) at a record pace.
- Market speculation is one‑sided; dissenters are treated like party poopers rather than risk managers.
- Executives feel pressure to “announce AI” before they can prove ROI.
A gap that big doesn’t quietly disappear. It closes one of two ways: revenue catches up, or valuations reset.
We’ll unpack the numbers, how speculation inflates technological valuation, what might pop the bubble, and practical frameworks to separate durable value from hype. If you’re allocating capital or defending a budget, you’ll want a sturdier map than vibes.
The data in plain sight: AI investment vs. measurable revenue
Start with the headline figures. Public and private sources peg AI-related capex and equity infusions at roughly $560 billion over a recent multi‑year window. Meanwhile, incremental reported revenue explicitly tied to AI products and services sits near £35 billion. Translate that at roughly 1.28–1.32 USD/GBP and you’re looking at ~$45 billion. That leaves a gap in the $515–$525 billion zone, depending on the timing window and which revenue estimate you use.
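If you want to sanity-check that arithmetic yourself, here’s a minimal Python sketch using the headline figures above and the stated FX band. The spend and revenue inputs are the rough estimates quoted in this article, not audited data.

```python
# Back-of-the-envelope check on the headline figures (illustrative estimates, not audited data)
ai_spend_usd_bn = 560        # approximate AI-related capex + equity infusions over the window
ai_revenue_gbp_bn = 35       # incremental AI-attributed revenue, quoted in GBP
fx_band = (1.28, 1.32)       # assumed USD/GBP range

for fx in fx_band:
    revenue_usd_bn = ai_revenue_gbp_bn * fx
    gap_bn = ai_spend_usd_bn - revenue_usd_bn
    print(f"USD/GBP {fx:.2f}: revenue = ${revenue_usd_bn:.0f}B, gap = ${gap_bn:.0f}B")
```

Run it and you land right back on the ~$45 billion revenue and ~$515 billion gap figures; different timing windows and revenue estimates push the gap toward the top of the range.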
Two points often get lost in the shouting:
- The lag is real. Infrastructure-heavy cycles (chips, power, networking) front-load spending and back-load revenue. Think railroads, then cloud. The build comes first.
- The lag isn’t infinite. If the payoff doesn’t show up by reasonable time horizons, capital gets repriced, fast.
Eric Schmidt rings a different bell: “AI is infrastructure for a new industrial era, not just a passing tech fad.” That can be true and still coexist with a repricing. Railroads changed the world, and they bankrupted speculators who bought at the peak.
What the numbers do tell us: the market is pricing in enormous future earnings, compressed into tight timeframes. What they don’t: whether current spend is good or bad. Spend is a means; durable cash flow is the end. The danger is when spend becomes the thesis.
How speculation and FOMO inflate valuations
Markets love a story. AI is the best story in tech since the smartphone. The mechanisms that turn a good story into a frenzy are well-known.
- Narrative-driven capital allocation: Budgets approved because “AI” appears in the deck, not because unit economics work.
- Growth-at-all-costs: Land users now, figure out margins later. The “later” rarely arrives on schedule.
- Herd behavior: Once a handful of mega-caps signal “AI-first,” everyone else follows, whether or not they have a path to monetization.
Add the accelerants: day trading and options flows amplify short-term price moves; a hot headline becomes a mini-bubble in an afternoon; social media turns cherry-picked wins into universal truths while misses get buried; and a few names become proxies for the entire theme.
Consider sentiment asymmetry. Nvidia mints real cash on GPUs. Microsoft turns AI into stickier enterprise relationships and higher ARPU. Meanwhile, second-tier entrants with no moat enjoy “halo pricing” simply because they said “LLM” on the earnings call. That’s how technological valuation disconnects from near-term earnings: investors pay for the dream and ignore the clock.
None of this makes AI bad. It makes pricing fragile. When a trade rests on faith rather than cash flow, tiny disappointments can lead to giant air pockets.
Distinguishing genuine AI investment from hype
You don’t have to guess. There are crisp signals that separate durable AI investment from marketing fluff.
What to treat as real:
- Defensible IP and data advantage: Proprietary datasets with consent and legal clarity, unique labeling pipelines, or model weights improved through hard-to-replicate feedback loops.
- Evidence of customer pull: Shortening sales cycles, pilots converting to multi-year contracts, expanding contract values without unsustainable incentives.
- Sustainable unit economics: COGS per inference falling with scale; gross margins accreting despite growing usage.
- Operational moats: Specialized tooling, MLOps, deployment pipelines, and fine-tuning systems that competitors can’t copy on a long weekend.
What to question:
- Disproportionate spend with flat revenue: If compute costs and headcount balloon while revenue stays flat, that’s not “investing”; that’s hoping.
- Promise-heavy roadmaps: Announcements of “AGI soon” without stepwise milestones or customer references.
- Churn hidden in expansion: Net retention looks fine, but it’s fueled by a few whales while the long tail quietly walks (a quick illustration follows this list).
- Unclear benchmarks: Cherry-picked leaderboards that don’t correlate with customer outcomes.
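Here’s what that whale effect looks like in numbers. The cohort figures below are made up for illustration; the pattern is what to check in real diligence.

```python
# Why headline net revenue retention (NRR) can hide churn (made-up cohort numbers)

# last year's ARR and this year's ARR for the same customer cohort, in $K
cohort = {
    "whale_1": (2_000, 2_900),
    "whale_2": (1_500, 2_100),
    "long_tail": (1_500, 900),   # dozens of small accounts aggregated: many quietly churned
}

nrr_overall = sum(now for _, now in cohort.values()) / sum(prev for prev, _ in cohort.values())
prev_tail, now_tail = cohort["long_tail"]
nrr_tail = now_tail / prev_tail

print(f"Headline NRR: {nrr_overall:.0%}")   # 118%: looks healthy
print(f"Long-tail NRR: {nrr_tail:.0%}")     # 60%: the long tail is walking
```

A 118% headline figure built on two expanding whales and a bleeding long tail is a very different business than one with broad-based expansion.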
AI right now looks a lot like the 1849 gold rush. Selling picks and shovels (chips, cloud credits) is profitable. Prospecting is riskier: some strike it rich, many don’t. The winners are the ones who either own the mine (data + distribution) or own the toll road (infrastructure with scale advantages).
Technological valuation: frameworks to value AI projects and companies
How do you price this without resorting to vibes? Use multiple lenses and force them to disagree with each other.
Short-term lenses:
- Revenue multiples: Reasonable for companies with visible AI revenue and stable gross margins; sanity-check against non-AI comps.
- Gross profit per inference: Track unit economics at the workload level; discounting vaporizes the economics if margins don’t scale.
- Payback periods: For enterprise AI, measure months to breakeven on deployment and retraining.
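To make the short-term lenses concrete, here’s a toy Python sketch of gross profit per inference workload and deployment payback. Every number in it is an illustrative assumption, not a benchmark from any real deployment.

```python
# Toy unit-economics check for an enterprise AI workload (all figures are illustrative assumptions)

price_per_1k_requests = 12.00    # what the customer pays per 1,000 requests
cost_per_1k_requests = 7.50      # inference compute + serving overhead per 1,000 requests
monthly_volume_k = 4_000         # thousands of requests served per month

gross_profit_per_1k = price_per_1k_requests - cost_per_1k_requests
gross_margin = gross_profit_per_1k / price_per_1k_requests
monthly_gross_profit = gross_profit_per_1k * monthly_volume_k

deployment_cost = 250_000        # integration, fine-tuning, and initial retraining spend
payback_months = deployment_cost / monthly_gross_profit

print(f"Gross margin: {gross_margin:.0%}")                      # 38%
print(f"Monthly gross profit: ${monthly_gross_profit:,.0f}")    # $18,000
print(f"Payback: {payback_months:.1f} months")                  # ~13.9 months
```

If that payback stretches past the retraining cadence, the deal never actually breaks even; that’s exactly the kind of disagreement between lenses you want the framework to surface.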
Long-term lenses:
- DCF with scenario trees: Model base, bull, and bear paths for adoption, gross margins, and compute costs. Treat model retraining as recurring capex.
- Option value: Price the future flexibility of platform plays (APIs, fine-tuning ecosystems, and model marketplaces) using conservative probabilities.
- Cost curve dynamics: Incorporate expected declines in compute cost versus rising model size and inference complexity; the net effect is what matters.
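A scenario-tree DCF along these lines can be sketched in a few lines of Python. The growth rates, margins, retraining costs, discount rates, and probabilities below are placeholder assumptions. The point is the structure: model bear, base, and bull paths, treat retraining as recurring capex, and probability-weight the result.

```python
# Sketch of a scenario-tree DCF for an AI product line (all inputs are placeholder assumptions;
# revenue and costs in $M, terminal value omitted for simplicity)

def npv(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def scenario_value(revenue, growth, gross_margin, opex_ratio, retrain_cost, rate, years=5):
    cash_flows = []
    for _ in range(years):
        revenue *= 1 + growth
        # retraining treated as recurring capex, deducted every year
        cash_flows.append(revenue * (gross_margin - opex_ratio) - retrain_cost)
    return npv(cash_flows, rate)

scenarios = {
    # name: (growth, gross margin, opex ratio, annual retraining cost, discount rate, probability)
    "bull": (0.60, 0.70, 0.35, 20, 0.10, 0.25),
    "base": (0.35, 0.60, 0.40, 30, 0.12, 0.50),
    "bear": (0.10, 0.45, 0.45, 40, 0.15, 0.25),
}

expected_value = 0.0
for name, (g, gm, opex, retrain, rate, p) in scenarios.items():
    value = scenario_value(revenue=100, growth=g, gross_margin=gm,
                           opex_ratio=opex, retrain_cost=retrain, rate=rate)
    expected_value += p * value
    print(f"{name}: NPV = {value:,.0f}")

print(f"Probability-weighted value = {expected_value:,.0f}")
```

Force the bear case to be genuinely bearish; if the probability-weighted value only works when the bull path dominates, you’re back to pricing a story.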
AI-specific adjustments:
- Data moat quality: Assign value to proprietary data with legal rights. No rights? Haircut the valuation.
- Retraining cadence and cost: If you must retrain quarterly to stay competitive, it’s a structural tax on free cash flow.
- Compute intensity and supply constraints: Capacity bottlenecks cap growth; lock-in contracts and energy access deserve a premium.
- Inference margins and caching: Engineering choices (quantization, distillation, retrieval) directly move gross margin. Reward teams that design for margin, not just model accuracy (a quick sketch follows this list).
- Market speculation factor: Explicitly haircut multiples when the theme is consensus-hot. Add back once sentiment cools and execution proves out.
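Here’s a rough sense of why those engineering levers belong in the valuation. The caching hit rate and quantization savings below are assumed, illustrative numbers; the pattern is what matters.

```python
# How engineering choices can move inference gross margin (assumed, illustrative numbers)

price_per_1k = 10.00               # revenue per 1,000 requests
base_cost_per_1k = 8.00            # naive serving cost: full-precision model, no caching

cache_hit_rate = 0.30              # assumed share of requests answered from a response cache
quantization_savings = 0.40        # assumed compute cost reduction from quantizing the model

optimized_cost = base_cost_per_1k * (1 - cache_hit_rate) * (1 - quantization_savings)

for label, cost in [("naive serving", base_cost_per_1k), ("cached + quantized", optimized_cost)]:
    margin = (price_per_1k - cost) / price_per_1k
    print(f"{label}: cost ${cost:.2f}/1k requests, gross margin {margin:.0%}")
```

Same model, same price, wildly different margin profile. Two teams with identical top-line growth can deserve very different multiples.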
If your model only works when you assume 90%+ growth for five years and flat costs, it isn’t a valuation.
Case studies: winners, overvalued names, and misleading signals
Let’s keep it clean and practical.
Winners where AI investment translated into value had a clear monetization approach: either owning compute demand with pricing power, software lock-in, and massive operating leverage (e.g., NVIDIA), or monetizing AI services on the enterprise side in ways that deepen customer stickiness.
What also distinguishes winners from the rest is distribution. Can you ship AI to tens of millions of customers tomorrow? That’s a revenue accelerator. Developers build around your stack; partners route business your way. That’s measurable ROI for buyers.
On the other side:
- Companies touting “AI-first” with no uplift in gross margin or net retention, spending heavily on model training only to discover inference costs eat their lunch.
- Startups claiming proprietary models trained on “web-scale data” with fuzzy consent. Legal risk isn’t a rounding error; it hits valuation.
- Public names trading at revenue multiples that assume monopoly economics in markets that plainly aren’t monopolies.
Misleading signals to watch:
- Vanity MAUs from free AI tools that don’t convert.
- Press releases about “strategic partnerships” that are basically co-marketing.
- Benchmarks won by overfitting to static datasets. Buyers care about business outcomes, not Elo ratings.
The dot-com bubble turned bandwidth and eyeballs into oxygen. Some of those bets matured into giants (Amazon, Google). Many flameouts weren’t frauds; they were simply too early or too expensive. AI differs in that the infrastructure is immediately monetizable and useful, but that doesn’t immunize the whole sector from a valuation cleanup.
Strategies for navigating the current landscape
- Diversify within AI layers: Balance “picks-and-shovels” (compute, power, networking) with application bets. Don’t overconcentrate in a single model thesis.
- Upgrade diligence: Read technical docs, not just investor decks. Ask about data rights, inference optimization, and retraining cadence.
- Scenario planning and risk sizing: Build base/bull/bear adoption curves, stress-test valuation under higher rates and slower monetization, and size positions so the bear case is survivable (see the sketch after this list).
- Avoid FOMO entries: Wait for earnings clarity or pullbacks. Momentum can be your friend on the way up and your enemy on the way out.
- Prioritize profit-improving AI: Start with cost centers you can shrink (support deflection, doc creation, code assistance). Bank wins, then chase moonshots.
- Build transparent roadmaps: Publish success metrics, not just launch dates. Tie bonuses to ROI, not demo views.
- Price for inference: Don’t subsidize forever. Align pricing with usage and value delivered.
- Invest in margin engineering: Quantization, retrieval augmentation, and model distillation should be first-class citizens.
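On the risk-sizing point above, a blunt rule of thumb can be sketched in a few lines. The loss budget and bear-case drawdown below are assumptions you’d set yourself; this is a sketch of the discipline, not a recommendation of the specific numbers.

```python
# Blunt risk-sizing rule: size each AI position so a bear-case drawdown stays inside your loss budget
# (the budget and drawdown figures below are assumptions you would set yourself)

portfolio_value = 1_000_000
max_theme_loss = 0.05        # willing to lose at most 5% of the portfolio on the AI theme
bear_case_drawdown = 0.60    # assumed decline in the position if the bear scenario plays out

max_position = portfolio_value * max_theme_loss / bear_case_drawdown
print(f"Max position size: ${max_position:,.0f}")   # $83,333 with these assumptions
```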
From bubble to sustainable industrialization: a hopeful blueprint
The optimistic view isn’t naïve; it’s conditional. Schmidt’s framing of AI as infrastructure can become real if we build the boring parts well: energy, cooling, networking, and deployment reliability. Infrastructure doesn’t need sizzle. It needs uptime.
What it takes is energy realism, tighter product loops, data discipline, and workforce augmentation.
Time horizons:
- 12–24 months: Expect turbulence. Winners keep compounding; pretenders get repriced.
- 3–5 years: AI productivity starts showing up in TFP (total factor productivity), but unevenly. Sectors with repetitive knowledge work lead; industrial control and healthcare move slower due to safety regimes.
- 5–10 years: Infrastructure investments (energy, specialized silicon, networking) enable broader diffusion. The theme shifts from “model of the month” to “how every process got smarter.”
The outcome we should want isn’t an ever-higher multiple. It’s a steady, boring transfer of AI investment into durable earnings and margin.
Conclusion: reckoning with the $515B gap and moving beyond hype
Here’s the uncomfortable summary. Hundreds of billions invested, a fraction showing up as incremental revenue. Market speculation fills the difference. That doesn’t mean AI is a mirage; it means the bill for enthusiasm comes due if revenue doesn’t accelerate.
- For investors: Separate narrative from cash flow. Use layered valuation frameworks, size risk appropriately, and be choosy about moats and unit economics.
- For executives: Ship AI where it boosts profit now. Price for inference, measure ROI, and publish your scoreboard.
- For policymakers: Push for clean disclosures and data rights clarity; don’t jam the brakes, but do force truth into daylight.
Provocative question to sit with: If the froth drained out tomorrow, which AI bets would you still own with conviction for five years? If the answer is “I’m not sure,” the market just did you a favour by asking before the drawdown.