
Big Tech Is Burning $655 Billion to Build AI on a Power Grid From the 1950s. Musk Says Put It in Space.

Last Updated on February 12, 2026 by Editorial Team

Author(s): Zoom In AI

Originally published on Towards AI.


Your electric bill is helping bankroll Bezos’s compute buildout. Elon wants to move the whole thing into orbit. Neither plan is proven yet. That is the terrifying part.

By Zoom In AI | February 9, 2026
I write about how AI actually works (not the marketing version), where safety and reality don’t match, and the economics and infrastructure behind the hype.

Stop me if this sounds like satire.

Four companies, Amazon, Google, Microsoft, and Meta, are signaling roughly $655 billion in capital expenditures this year to build AI data centers. That is infrastructure spending at a scale most governments would struggle to execute.

Their stocks dropped anyway.

Then Elon Musk did what Elon Musk does: he escalated. He merged SpaceX with xAI in a deal widely described as historic in size and started selling a different answer to the same bottleneck. Put the data center in orbit, power it with constant sunlight, cool it without water, skip the grid, skip the permits, skip the protests.

Here’s the point nobody is stating plainly enough:

Both strategies are bets against constraints you can’t PR your way around.
One is betting a 70-year-old grid can absorb an AI-era load spike without political revolt. The other is betting you can run industrial compute in an environment that punishes hardware and doesn’t forgive collisions.

And somewhere in the middle, your monthly bill creeps upward so a chatbot can respond faster.

Let’s zoom in.

I. The $655B question: what are you actually buying?

Start with the scoreboard.

  • Amazon: ~$200B
  • Alphabet/Google: $175–185B
  • Microsoft: ~$150B (based on guidance and reporting)
  • Meta: $115–135B

Take midpoints of those ranges and you land right around $655B; the conservative floor is closer to $640B.
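For transparency on where the headline figure comes from, the ranges above can be summed in a few lines. This is a sketch using the article's own numbers; single figures are treated as point estimates, and none of this is audited filing data:

```python
# Reported FY capex guidance, in $B, per the article's scoreboard.
# Point estimates are stored as (x, x) ranges for uniform handling.
capex_billions = {
    "Amazon": (200, 200),
    "Alphabet/Google": (175, 185),
    "Microsoft": (150, 150),
    "Meta": (115, 135),
}

low = sum(lo for lo, _ in capex_billions.values())              # conservative floor
high = sum(hi for _, hi in capex_billions.values())             # ceiling
mid = sum((lo + hi) / 2 for lo, hi in capex_billions.values())  # midpoint total

print(f"range: ${low}B-${high}B, midpoint: ${mid:.0f}B")
# -> range: $640B-$670B, midpoint: $655B
```

The midpoints sum to exactly $655B, which is why that number anchors the headline.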

The market reaction tells you the mood has shifted. Investors used to reward spending because spending meant “we’re winning.” Now they want a plausible line from capex to revenue inside a time horizon that doesn’t feel like faith-based accounting.

In CNBC coverage, Amazon CEO Andy Jassy argued the spend is not a vanity move and that AWS demand is outpacing supply.

Wall Street looked at the same numbers, then looked at the capex overshoot versus consensus estimates, and hit sell.

One quote captured the vibe: we moved from “capex triggers euphoria” to “show me the revenue.”

And here’s the detail that matters more than the headline number: how much cash gets swallowed by the build.

One breakdown claims Amazon is reinvesting the vast majority of operating cash flow into infrastructure and that free cash flow has compressed sharply relative to spend.

This is the new reality of AI infrastructure: the build is the business plan.

II. The part they don’t want you to read: your electric bill is part of the subsidy

All of this computing needs electricity. Not “a lot.” Industrial, steady, compounding electricity.

The problem is that the U.S. grid is not built for this moment: much of it was designed and built decades ago, and it is already strained.

PJM Interconnection, the largest grid operator in the U.S., is the clearest case study because it serves major data center corridors and tens of millions of people. PJM has warned about reliability margins tightening as demand accelerates.

Here is the mechanism that turns “Big Tech capex” into “your bill goes up”:

When grid operators and utilities procure capacity and reliability resources to meet rising load, those costs don’t stay inside hyperscaler budgets. They get allocated through regulated structures and ultimately land on ratepayers.
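A toy allocation model makes that mechanism concrete. Every number below is a hypothetical illustration, not actual PJM tariff math; the function and its parameters are invented for this sketch:

```python
def monthly_bill_impact(extra_capacity_cost_usd: float,
                        residential_share: float,
                        households: float) -> float:
    """Toy cost-allocation model: a fixed annual capacity/reliability
    procurement cost is socialized across ratepayer classes in
    proportion to load, then spread over twelve monthly bills."""
    residential_cost = extra_capacity_cost_usd * residential_share
    return residential_cost / households / 12

# Hypothetical: $5B/year in extra capacity costs, 35% allocated to
# residential load, spread across 20M households.
impact = monthly_bill_impact(5e9, 0.35, 20e6)  # ~ $7.29 per month
```

Per household the number looks small, which is exactly why it compounds quietly until it becomes a cost-of-living story.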

Bloomberg quantified how data center demand is showing up in procurement costs.

Then it gets political fast.

CNBC notes residential electricity prices were already up in 2025 and forecasts further increases in 2026, while local communities and national politicians begin treating data center growth as a cost-of-living issue.

This is why Musk’s orbit pitch suddenly stops sounding like pure cosplay. He’s pointing at a real bottleneck.

III. Musk’s pitch: “It’s always sunny in space”

Musk’s story is simple: Earth is slow and expensive. Space is fast, scalable, and solar.

The merger is the platform move that makes the pitch coherent. Put rockets, satellites, and an AI company under one roof, then sell the market on a future where compute is not bound to terrestrial grids.

Coverage is filled with the details that make the pitch feel real: FCC filings, orbital “data-center systems,” and the idea of solar-powered compute nodes.

And yes, the part that sounds like a fever dream is also in mainstream reporting: an IPO narrative tied to Musk’s birthday timing.

He sells orbit as a way to dodge everything Earth-based compute is colliding with:

  • grid queues
  • local opposition
  • water usage
  • permitting delays
  • rising rates

That’s the sales deck.


Now comes the engineering.

IV. Reality check: orbit has physics, too

Musk’s space thesis is not impossible. It’s just not free.

1) Timelines

Some analysts see orbital compute as a longer-horizon project, potentially well into the 2030s before economics converge.

2) Latency: training vs inference

Quick framing:

  • Training is bulk compute and can tolerate more latency.
  • Inference is a real-time product behavior and is latency-sensitive.

Even in low Earth orbit, the round-trip path can be a real constraint for the fastest-growing product category: interactive AI.
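The physics floor is easy to sketch. As a hedged back-of-envelope (assuming an illustrative ~550 km LEO altitude; the function and the slant factor are assumptions for this sketch, not figures from any filing), the best-case round-trip propagation delay for a query served in orbit is:

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def min_round_trip_ms(altitude_km: float, slant_factor: float = 1.0) -> float:
    """Best-case propagation delay for a query served by an orbital
    data center: up to the satellite and straight back down.
    slant_factor > 1 crudely models the satellite not being overhead.
    Ignores queuing, processing, and inter-satellite hops."""
    one_way_km = altitude_km * slant_factor
    return 2 * one_way_km / C_KM_PER_S * 1e3  # milliseconds

overhead = min_round_trip_ms(550)          # ~3.7 ms, satellite overhead
low_elevation = min_round_trip_ms(550, 3)  # ~11 ms at a shallow angle
```

A few milliseconds sounds harmless until you remember that real requests add ground-station relays, routing, and queuing on top of this floor, and interactive products are judged on tail latency, not the best case.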

3) Radiation and reliability

Cosmic radiation degrades electronics. A chip that performs well in a lab is not the same as a fleet surviving years in orbit.

4) Debris and collision risk

Space traffic is not hypothetical. A collision cascade is a planetary externality.

5) The quiet motive: cash

CNBC also framed the merger through a simpler lens: xAI’s burn and the need for funding, while markets are still excited about AI.

Orbit may be the vision. It may also be strategic insurance and a funding narrative.

V. Reality check: Earth-based capex is a treadmill

Now look back at the terrestrial plan, and you’ll see its own trap.

The treadmill effect

AI hardware depreciates brutally. A cutting-edge fleet becomes “last gen” fast. That pushes hyperscalers into continuous reinvestment cycles that don’t slow down.
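The treadmill can be put in toy numbers. Assuming a hypothetical fleet cost and refresh cycle (neither figure comes from the article's sources), steady-state replacement spend scales inversely with useful life:

```python
def annual_reinvestment(fleet_cost_b: float, useful_life_years: float) -> float:
    """Straight-line toy model: the yearly spend needed just to
    replace a fleet that must be refreshed every N years."""
    return fleet_cost_b / useful_life_years

# Hypothetical $400B accelerator fleet:
fast_refresh = annual_reinvestment(400, 4)  # $100B/yr just to stand still
slow_refresh = annual_reinvestment(400, 6)  # ~$66.7B/yr
```

Shave two years off the useful life of the hardware and the standing-still cost jumps by half. That is the mechanism behind "the build is the business plan."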

Enterprise AI ROI is still uneven

A sober point that’s easy to miss in the hype: a lot of enterprise AI projects still fail to scale into meaningful returns.

Debt is climbing

This buildout increasingly leans on debt markets as well as operating cash flow.

When the spending becomes a treadmill, the key question isn’t “can they build.” It’s “can they keep building if the market blinks.”

VI. The scorecard: who’s winning right now?

Apple is the quiet winner.
It spends far less capex than peers and still ships AI features by partnering for compute where it makes sense.

Amazon is playing the scale game with the highest sensitivity.
AWS growth is strong, but if demand softens, there’s not much buffer when you’re committing ~$200B a year.

Google is hedging better than most.
Big terrestrial build plus experiments that keep options open, including orbital concepts.

Musk is the wild card.
If orbital compute becomes viable sooner than expected, it compresses the shelf-life of Earth-based capex assumptions. If not, the merger still functions as a funding move wrapped in a moonshot.

Oracle looks like collateral damage.
It got caught in the whiplash of changing narratives around mega-deals and infrastructure expectations.

VII. What happens next

Next 90 days

  • Nvidia earnings: do capex announcements translate into orders at the scale implied?
  • Amazon Q1: if AWS growth dips, the narrative gets fragile fast.
  • SpaceX IPO filing: the S-1 will expose xAI's financial reality and orbital plans in detail; SpaceNews reports IPO prep and investor meetings are already underway.

Next 12 months

  • 2026 elections: data center energy costs could become a local ballot issue in grid-stressed regions.
  • Orbital prototypes: Does any serious validation land in the real world?

2–5 years

  • Does political backlash force limits on construction?
  • Does “capex treadmill” economics break?
  • Does orbit move from story to system?

The Zoom In take

I’ll say what earnings calls avoid.

We are watching one of the largest infrastructure bets in modern history unfold in real time, and the business model underneath it is still being proven.

Amazon is spending ~$200B a year to build capacity into an enterprise market where most companies are still struggling to get real returns. Musk is selling an orbital alternative that turns grid failure into a growth narrative. Google is hedging. Apple is collecting upside without swallowing the full bill.

Meanwhile, ratepayers feel the stress first, and politics follows the bills.

For years CEOs have repeated the same line: the risk of underinvesting is greater than the risk of overinvesting.

That has been true in past booms right up until the moment it wasn’t.

The real question is not Earth vs space.

The real question is what happens when $655B per year meets a grid that can’t scale fast enough, an enterprise market that may not mature on schedule, and a public that’s starting to notice who pays for the transition.

We’re about to find out.

Follow Zoom In AI for more deep dives into the economics, infrastructure, and reality behind AI hype.
The marketing version is free. The truth costs attention.


Published via Towards AI



Note: Article content contains the views of the contributing authors and not Towards AI.