
AI’s Cold War: The Infrastructure Race from Greenland to Orbit

Last Updated on January 20, 2026 by Editorial Team

Author(s): Eray Alguzey

Originally published on Towards AI.


The Hidden Energy Bill of Artificial Intelligence

In a hyperscale data center in rural Virginia, forty cents of every dollar spent goes to a single task: keeping the machines from melting. This isn’t a design flaw. This is the new physics of artificial intelligence.

Over the past five years, AI chips have become roughly four times more powerful, and roughly four times hotter. NVIDIA’s H100 GPU alone dissipates up to 700 watts of heat, and in a data center where thousands of GPUs are arrayed side by side, the picture is stark: as computing power scales, heat load compounds. If operating temperatures aren’t held within the recommended 18–27°C band, billion-dollar hardware stops functioning.

According to the Uptime Institute’s 2024 report, 35–40% of total energy consumption in modern AI data centers goes directly to cooling systems (Uptime Institute, 2024). In facilities with high GPU density and hot climates, this ratio can climb to 45%. Even the most advanced hyperscale operators cannot push it below 30%. The problem isn’t just efficiency; it’s physical necessity: at this level of computational density, traditional air cooling is no longer adequate, and the industry is being forced to transition to far more complex and capital-intensive systems like direct-to-chip liquid cooling and immersion cooling.
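To see why cooling is structural rather than marginal, a back-of-the-envelope calculation helps. The cluster size and overhead shares below are illustrative assumptions, not figures from any specific facility:

```python
# Back-of-the-envelope heat and cooling load for a GPU cluster.
# All figures are illustrative assumptions, not vendor or operator data.

GPU_TDP_W = 700          # NVIDIA H100 SXM thermal design power
NUM_GPUS = 10_000        # hypothetical hyperscale cluster
COOLING_SHARE = 0.38     # mid-point of the 35-40% range cited above
OTHER_SHARE = 0.05       # assumed power conversion, lighting, etc.

it_load_mw = GPU_TDP_W * NUM_GPUS / 1e6        # IT power in megawatts
it_share = 1.0 - COOLING_SHARE - OTHER_SHARE   # IT's slice of facility power
facility_mw = it_load_mw / it_share
cooling_mw = facility_mw * COOLING_SHARE

print(f"IT load:       {it_load_mw:.1f} MW")
print(f"Facility load: {facility_mw:.1f} MW")
print(f"Cooling load:  {cooling_mw:.1f} MW")
```

Under these assumptions, 7 MW of GPUs pulls roughly 12 MW from the grid, with nearly 5 MW spent purely on removing heat.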

In 2023, global AI compute demand stood at approximately 7 gigawatts — roughly equivalent to the electricity consumed by 7 million American homes. According to Goldman Sachs Research projections, this figure will exceed 20 gigawatts by the end of 2025 — a threefold increase in just two years (Goldman Sachs Research, 2024). Over the same period, electrical grids, energy investments, and permitting processes have not expanded at this pace. The result: the biggest bottleneck in the AI economy is no longer code, but thermodynamics.

The question “Where can we build a data center?” has lost its meaning. In its place comes a far more difficult and fundamental question: Where can we cool billions of calculations, for years on end, both physically and economically? Data center locations were once chosen based on access to skilled labor, proximity to internet backbone infrastructure, and real estate costs. Today the map is being redrawn — and this time, it’s not the compass pointing the way, but the thermometer.

This is precisely why the world’s largest technology companies have begun showing unexpected interest in geographies that appear nearly empty on maps — cold, remote, and logistically challenging. At the center of this interest, two locations have begun to emerge: Greenland, and an even more radical option — Earth’s orbit.

Why Location Has Become a Strategic Weapon Again

For data centers today, location is no longer merely a logistical choice; it’s a strategic factor that directly determines cost structure, scalability, and competitive advantage. There are fundamentally three reasons for this.

Cooling cost is no longer marginal — it’s structural

Cooling, which comprises only a limited portion of energy consumption in traditional cloud workloads, has moved to the center of total costs in high-density GPU clusters. According to the Uptime Institute’s 2024 research, the same hardware configuration can consume 20–30% more energy in a hot climate (Uptime Institute, 2024). As megawatts scale, this difference reaches into the millions.

The difference is striking in PUE (Power Usage Effectiveness), the ratio of total facility power to power delivered to IT equipment, and the industry’s fundamental efficiency metric. In hot regions like Phoenix, Arizona, typical PUE values range from 1.6–1.8, while in cold climates like Stockholm the figure can drop to 1.15–1.25 (Uptime Institute, 2024). In other words, a data center in Stockholm can deliver the same computing power with 25–30% less total energy. This is why the industry is no longer chasing cheap electricity; it’s chasing cold electricity.
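The PUE figures above translate directly into facility energy. A minimal sketch, using the mid-points of the cited ranges and a hypothetical annual IT energy:

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
# Illustrative comparison using mid-points of the ranges cited above;
# the annual IT energy figure is a hypothetical assumption.

def facility_energy_mwh(it_energy_mwh: float, pue: float) -> float:
    """Total energy a facility draws to deliver a given IT energy."""
    return it_energy_mwh * pue

IT_ENERGY = 50_000.0     # hypothetical annual IT energy in MWh
PUE_PHOENIX = 1.7        # hot-climate mid-point of 1.6-1.8
PUE_STOCKHOLM = 1.2      # cold-climate mid-point of 1.15-1.25

phoenix = facility_energy_mwh(IT_ENERGY, PUE_PHOENIX)
stockholm = facility_energy_mwh(IT_ENERGY, PUE_STOCKHOLM)
savings_pct = 100 * (phoenix - stockholm) / phoenix

print(f"Phoenix:   {phoenix:,.0f} MWh/year")
print(f"Stockholm: {stockholm:,.0f} MWh/year")
print(f"Savings:   {savings_pct:.1f}%")
```

The roughly 29% gap that falls out of the mid-points is exactly the 25–30% range the survey data describes.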

But the problem isn’t just cooling. Energy continuity has trumped price

AI workloads, unlike classic IT systems, are extremely sensitive to interruption. Training large language models can take not hours, but weeks. This requires not just cheap energy, but 24/7 uninterrupted and predictable energy. Regions with limited grid capacity and high demand volatility are being rapidly eliminated.

For many major technology companies, the question is no longer “Is electricity expensive?” but rather “Can I guarantee this electricity for a decade?” According to Goldman Sachs analysis, more than 60% of the energy capacity needed for AI data centers through 2030 has not yet been added to electrical grids (Goldman Sachs Research, 2024). This means location selection is determined not by today’s energy prices, but by future capacity guarantees.

Efficiency gains aren’t the solution either

GPUs produce more work per watt with each generation, but absolute power remains very high: NVIDIA’s H100 has a thermal design power of up to 700 W (NVIDIA, 2023). The industry is trying to squeeze efficiency from both hardware and software: more efficient chips, lighter models, smarter cooling.

But at the same time, demand is growing much faster. Cheaper and more efficient computation often opens the door to larger models and denser clusters. The result: total energy needs are expanding rather than shrinking; total heat load isn’t decreasing, it’s increasing.

At this point, classical optimization tools fall short. Better industrial chillers, more advanced liquid cooling systems, and smarter software only buy time. What changes the game lies elsewhere: climate, energy continuity, and the question of “where to dump the heat.”

In other words, geography has returned in the age of AI. And some geographies are far more advantageous in this new game. For instance, why has Greenland — one of the world’s coldest and most sparsely populated regions — begun playing an unexpected role in the future of AI infrastructure?

The Geopolitics of Cold and AI Infrastructure

A geography’s strategic value often stems not from its inherent characteristics, but from the era in which it’s encountered. In the age of oil, deserts rose to prominence; in the industrial age, coal basins. In the age of AI, a new strategic advantage is emerging: continuous, natural, and low-cost cold.

From this perspective, Greenland is transforming its long-standing reputation as “remote,” “empty,” and “harsh” into an unexpected advantage for AI infrastructure.

Climate Advantage: Cooling Not as a Cost, but as Geographic Reality

Greenland’s first and most obvious advantage is climate. For much of the year, temperatures hover below or just above freezing. For data centers, this means not just lower cooling costs; it means simpler, less complex, and lower-risk cooling architectures.

The PUE difference is dramatic here as well. While typical PUE in Virginia ranges from 1.5–1.7, in Greenland it can theoretically drop below 1.1 (Uptime Institute, 2024). This means the same computing power can be delivered with 30–40% less energy. In Greenland, cooling ceases to be an engineering problem and becomes a geographic feature.

Scalability: Empty Land, Dense Infrastructure

The second major advantage is scalability. Greenland’s population is approximately 57,000, with 90% concentrated in a few coastal settlements, particularly along the southwest corridor (Statistics Greenland, 2024). The interior is almost entirely uninhabited. Greenland’s land area is 2.16 million km² — the size of Western Europe, but with the population of a small town.

This structure offers a significant planning advantage for data center campuses housing tens of thousands of GPUs and dozens of megawatts. Energy infrastructure can scale along specific corridors rather than being distributed nationwide. Environmental impact assessments and social acceptance processes are far more manageable than in densely populated regions.

Energy Potential: Untapped Hydroelectric Resources

The third and critical factor is energy. According to the Greenland Institute of Natural Resources, Greenland’s technically developable hydroelectric potential stands at approximately 30 gigawatts (GINR, 2023). Current utilized capacity is below 100 megawatts — 0.3% of potential. For comparison: Denmark’s total electricity consumption is approximately 6 gigawatts.
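A quick sanity check on these numbers, using a hypothetical 100 MW campus as the unit of scale:

```python
# Scale of Greenland's cited hydro potential relative to AI campus loads.
# The campus size is an illustrative assumption, not a planned project.

POTENTIAL_GW = 30.0      # technically developable hydro (GINR figure above)
UTILIZED_MW = 100.0      # upper bound of currently utilized capacity
CAMPUS_MW = 100.0        # hypothetical large AI data-center campus

utilized_pct = 100 * UTILIZED_MW / (POTENTIAL_GW * 1000)
campuses = POTENTIAL_GW * 1000 / CAMPUS_MW

print(f"Utilized today: {utilized_pct:.1f}% of potential")
print(f"Headroom: ~{campuses:.0f} campuses of {CAMPUS_MW:.0f} MW each")
```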

This profile directly aligns with what AI data centers require: low-carbon, uninterrupted, and long-term predictable energy structure. And Greenland’s low population offers the opportunity to pair this capacity not with local consumption, but with energy-intensive industries — particularly data centers.

As I discussed in a previous analysis, Small Modular Reactors (SMRs) are also ideal candidates for these types of remote, high-density facilities. Greenland’s isolated energy grids and low population density create the conditions where SMRs excel. A hydroelectric-plus-SMR combination could provide a 24/7 uninterrupted power guarantee.

Rare Earth Elements: A Misleading Narrative

Greenland discussions frequently highlight rare earth elements (REEs). However, this frames the issue misleadingly.

Rare earth elements are geologically quite abundant; the real bottleneck is not the deposits but the complexity and environmental cost of the separation process. Approximately 70% of global production is controlled by China; this is not a geological monopoly but industrial leadership (U.S. Geological Survey, 2024). China built that capacity from the 1990s onward by accepting the environmental costs involved.

Greenland does have genuinely significant rare earth deposits; but these are not yet at commercial production stage, and some projects (such as Kvanefjeld) have been suspended due to environmental concerns. For AI infrastructure purposes, Greenland’s true strategic value lies not in its minerals, but in its climate and energy.

Geopolitics: The Revenge of Geography

Explaining Greenland’s rise through technical factors alone is insufficient. The Arctic region is returning to the center of great power competition with opening sea routes, undersea fiber cables, energy corridors, and defense infrastructure.

This is precisely the process American geopolitical thinker Robert Kaplan foresaw in his “Revenge of Geography” thesis: technology didn’t erase geography; it made geographic advantages more valuable (Kaplan, 2012). International relations scholar Prof. Dr. Deniz Ülke Arıboğan similarly characterizes this process as “the return of geography,” and it is starkly visible in Greenland’s case (Arıboğan, 2024). According to Arıboğan, Greenland is assuming a strategic position in the 21st century similar to the Suez Canal’s role in the 19th century: a control point where new sea routes, energy corridors, and data flows intersect.

U.S. interest in Greenland is meaningful in this context. Donald Trump’s 2019 offer to purchase Greenland was viewed as an “odd” initiative in the political discourse of the time, yet concrete strategic calculations lay behind it. The U.S. has maintained a military presence in Greenland since the Cold War through Thule Air Base. But today, this presence relates not just to early warning systems, but to energy infrastructure, undersea cable networks, and data sovereignty.

AI data centers are becoming not merely commercial investments in this new infrastructure ecosystem, but strategic assets. For major technology companies, Greenland means access to Arctic corridors, NATO’s security umbrella, and a counterbalance to China’s Polar Silk Road project.

Economic Reality Check: Challenges

Of course, the picture isn’t entirely without problems. Harsh climate conditions increase construction costs: while building a 1-megawatt data center in Virginia costs approximately $10–12 million, in Greenland this figure can rise to $18–25 million (Data Center Dynamics, 2024). Logistics are complex, local skilled labor is limited, and fiber infrastructure is still developing.

However, an interesting threshold has been crossed in the AI age: for some major technology companies, the fundamental question is no longer “Where can we build cheaper?” but “Where can we operate safely for a decade?” Annual cooling savings of $2–3 million can recover the higher initial costs in 5–7 years.
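A simple, undiscounted payback calculation on the cited per-megawatt figures shows where this threshold logic comes from; the arithmetic below is illustrative only and ignores financing and operating differences beyond cooling:

```python
# Simple payback on Greenland's construction-cost premium, using the
# per-megawatt ranges cited above. Undiscounted and illustrative only.

capex_virginia = (10e6, 12e6)    # $ per MW, hot-climate baseline
capex_greenland = (18e6, 25e6)   # $ per MW, Arctic construction
annual_savings = (2e6, 3e6)      # $ per year in avoided cooling cost

# Best case: smallest premium paid back by the largest savings.
best_case = (capex_greenland[0] - capex_virginia[1]) / annual_savings[1]
# Worst case: largest premium paid back by the smallest savings.
worst_case = (capex_greenland[1] - capex_virginia[0]) / annual_savings[0]

print(f"Simple payback: {best_case:.0f} to {worst_case:.1f} years")
```

The 2-to-7.5-year span that falls out of the ranges brackets the 5–7 years quoted above.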

In Summary: Cold, Energy, and Sovereignty

Greenland’s rise doesn’t rest on a single factor. When cold climate, scalable land, clean energy potential, and geopolitical position converge, a strong candidate emerges for the future of AI infrastructure.

But for some technology leaders, even this isn’t enough. Because Greenland is still, ultimately, on Earth. And Earth’s physical constraints don’t always align with AI’s appetite.

This is why the discussion is shifting in an increasingly radical direction: What if seeking cold on Earth isn’t enough? Are data centers in Earth orbit possible — and why is this question now being taken seriously?

Why Is AI Infrastructure Looking to Space?

Five years ago, it would have been dismissed as science fiction. Today, data centers in space are the subject of serious engineering and business model discussions. The reason is simple: space answers AI infrastructure’s two biggest problems simultaneously — cooling and energy.

In Earth orbit, particularly in low Earth orbit (LEO), the effective sink temperature of deep space is near absolute zero (about 3 K), though a spacecraft also absorbs solar and Earth-reflected radiation. Without an atmosphere, removing heat doesn’t mean “cooling” in the classical sense, but radiating it into space. The cooling problem that requires complex and expensive infrastructure on Earth becomes, in orbit, chiefly a question of radiator design governed by the laws of radiative physics.
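How much radiator would orbital computing actually need? A Stefan-Boltzmann sketch for a single 700 W GPU, under the generous simplification that the radiator sees only deep space; the emissivity and radiator temperature are assumed values:

```python
# Radiator area needed to reject one GPU's heat in orbit.
# Stefan-Boltzmann sketch, radiating to deep space only and ignoring
# absorbed solar and Earth infrared flux (a generous simplification).

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9         # assumed high-emissivity radiator coating
RADIATOR_TEMP_K = 300.0  # assumed radiator surface temperature
HEAT_W = 700.0           # one H100-class GPU at full load

flux = EMISSIVITY * SIGMA * RADIATOR_TEMP_K ** 4   # W rejected per m^2
area_m2 = HEAT_W / flux

print(f"Rejected flux: {flux:.0f} W/m^2")
print(f"Radiator area: {area_m2:.2f} m^2 per GPU")
```

Even in this idealization, each GPU needs roughly 1.7 m² of radiator; real designs must also shed the heat absorbed from the sun and Earth, which pushes the area higher.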

There’s a similar advantage on the energy side. Without atmospheric loss and weather conditions, space-based solar collection systems can offer higher continuity and availability (NASA, 2023). This theoretically means an uninterrupted, near-carbon-zero energy source.

Of course, the idea of data centers in space isn’t about serving ChatGPT’s real-time queries from orbit today. While latency on terrestrial fiber routes typically runs 1–5 milliseconds, LEO-to-Earth latency sits at 25–35 milliseconds (ESA, 2024). However, not all AI workloads are latency-sensitive: orbit is well suited to large model training, batch processing, and long-term archival, tasks that can tolerate delay.

This idea is progressing not only in academic circles but with concrete steps in the private sector. Loft Orbital has launched its first commercial services for satellite edge computing. D-Orbit is conducting orbital data processing experiments. AWS Aerospace & Satellite and Microsoft Azure Space are developing ground-to-space hybrid cloud architectures (AWS, 2024). Axiom Space has announced its vision for space station-based computing capacity.

The most critical threshold moving this discussion from theory to engineering is the decline in launch costs. SpaceX’s Falcon 9 cost approximately $60 million per launch in 2010. Today, with reusable boosters, the cost per kilogram to orbit has fallen substantially (SpaceX, 2024). Starship’s goal is to bring launch costs below $10 million per flight in the 2030s.
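Cost per kilogram is the metric that matters here. The sketch below uses assumed round-number prices and payload masses for illustration; they are not official quotes:

```python
# Rough cost-per-kilogram to LEO under assumed figures.
# Prices and payload masses are illustrative round numbers, not quotes.

launches = {
    # name: (price_usd, payload_to_leo_kg)
    "Falcon 9 (2010 era)": (60e6, 10_000),
    "Falcon 9 (reused booster)": (67e6, 17_500),
    "Starship (2030s goal)": (10e6, 100_000),
}

for name, (price, payload) in launches.items():
    print(f"{name}: ${price / payload:,.0f}/kg")
```

Under these assumptions, the span runs from thousands of dollars per kilogram down to roughly $100/kg, which is why the question has shifted from whether to how.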

But here too, the fundamental dynamic is the same: Can we operate this infrastructure safely, with predictable costs, and managing political risks over decades?

Greenland answers this question with today’s technology: cold climate, clean energy, geopolitical stability. Space is viewed as “tomorrow’s option” because it can theoretically solve cooling through radiation and collect energy from the sun with high continuity.

The AI revolution is being shaped not just by code, but by energy, cold, and geography. The winners will be not only those who train the best models, but those who most efficiently manage heat, secure energy for the longest term, and build infrastructure in the right location.

Today that place might be Greenland. Tomorrow, Low Earth Orbit. In the future, AI infrastructure’s map will encompass a geography far broader than Earth’s surface.

Because AI’s future won’t be written only in algorithms. It will be built in glaciers, orbits, and power lines.

References

  • Arıboğan, D. Ü. (2024). The Return of Geography: Geopolitics and Technology in the 21st Century. [Publication details to be added]
  • AWS (2024). AWS Aerospace and Satellite Solutions: Ground-to-Space Cloud Architecture. Amazon Web Services.
  • Data Center Dynamics (2024). Global Data Center Construction and Operating Costs: Regional Analysis. https://www.datacenterdynamics.com
  • European Space Agency (ESA) (2024). LEO-to-Earth Communication Latency Study: Network Performance Analysis. ESA Technical Documentation.
  • Goldman Sachs Research (2024). AI’s Energy Appetite: Infrastructure Demand Through 2030. Goldman Sachs Global Investment Research.
  • Greenland Institute of Natural Resources (GINR) (2023). Hydroelectric Potential Assessment: Technical and Economic Feasibility Study. Government of Greenland.
  • Kaplan, R. D. (2012). The Revenge of Geography: What the Map Tells Us About Coming Conflicts and the Battle Against Fate. Random House.
  • NASA (2023). Solar Power Efficiency in Low Earth Orbit: Technical Assessment and Performance Data. NASA Technical Reports Server.
  • NVIDIA (2023). H100 Tensor Core GPU Architecture: Technical Specifications and Power Requirements. NVIDIA Corporation.
  • SpaceX (2024). Launch Cost Evolution and Starship Economics: Reusability Impact on Access to Space. SpaceX Public Filings.
  • Statistics Greenland (2024). Population and Demographics Report: Annual Statistical Overview. Government of Greenland.
  • Uptime Institute (2024). Annual Outage Analysis 2024: The Growing Cost of Cooling in AI Infrastructure. Uptime Institute Research.
  • Uptime Institute (2024). Global Data Center Survey: PUE Trends and Climate Impact on Energy Efficiency. Uptime Institute.
  • U.S. Geological Survey (2024). Mineral Commodity Summaries: Rare Earth Elements. U.S. Department of the Interior.


Published via Towards AI


Note: Article content contains the views of the contributing authors and not Towards AI.