AI Gigawatt Comparator
power consumption of the top 10 AI models · training + inference · projected to 2027
jasper@grid:~/ai-gigawatts $ ./power-draw --top=10 --unit=GW
# data: public disclosures, Epoch AI, IEA, OpenAI, Meta, xAI, DCF, SemiAnalysis. last updated Apr 2026.
Grid Summary
Top-10 combined today
~7.8 GW
dedicated AI training + inference, 2026
Projected by 2027
~24 GW
≈ 3× growth in 18 months
Biggest single campus
5.0 GW
Meta Hyperion, Louisiana (at full build-out)
US homes equivalent
~17M
at 24 GW · ~1.4 kW/home avg
Top 10 Models — Power Draw
# interactive chart: 10 models · current operational GW vs announced/projected GW · scale max 3.0 GW · sorted by GW ↓
Training vs Inference — Power Split
▸ where does the power go? training = green · inference = yellow-striped
# peak training burst vs steady-state inference serving. numbers are rough — most labs don't publish splits.
Grid Context — What 1 GW Actually Means
# interactive scale: 0–50 GW (log-ish) · unit gigawatts
Facilities Roster
campuses 10 largest
filter AI-dedicated only
Online / ramping in 2026
Colossus 2 · xAI · Memphis · ramping 1→3 GW through 2026 · powers Grok
Stargate Abilene · OpenAI/Oracle · TX · 1.2 GW phase-2 mid-2026 · powers GPT
Prometheus · Meta · New Albany OH · 1 GW supercluster · Llama 4/5 training
New Carlisle · Anthropic/AWS · IN · 1 GW online Jan 2026 · Claude training
Announced / under construction
Hyperion · Meta · Richland Parish LA · 2 GW by 2030 → 5 GW scale-up
Stargate (total) · OpenAI · 5+ US sites → 10 GW / $500B commitment by 2029
Fairwater · Microsoft · multi-site · $100B+ total capex · Copilot + partners
Stargate UAE · OpenAI/G42/Oracle · Abu Dhabi · 2026 · first intl Stargate
Notable mentions
Google TPU fleet · distributed · TPU v5p/v6 · powers Gemini · exact GW undisclosed
DeepSeek training · ~2k H800 GPUs for V3 · est ~1 MW peak · orders of magnitude below frontier clusters
Methodology
▸ how we estimated

TRAINING GW is peak power draw of the cluster used to train the flagship model, averaged over the training run. Where the lab doesn't disclose, we back-compute from GPU count × TDP × utilization (typ. 0.4–0.6).
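That back-computation can be sketched in a few lines. The GPU counts and 700 W TDP below are illustrative assumptions for the example, not disclosed figures for any specific model:

```python
def training_gw(gpu_count: int, tdp_watts: float, utilization: float = 0.5) -> float:
    """Estimated average training-run draw in GW: GPU count x TDP x utilization.

    Ignores facility overhead (cooling, networking, PUE), so this
    lower-bounds the grid-side number.
    """
    return gpu_count * tdp_watts * utilization / 1e9

# hypothetical 100k-GPU frontier cluster of 700 W accelerators:
print(training_gw(100_000, 700, 0.5))   # 0.035 GW (35 MW) at the chips
# a ~2k-GPU run, DeepSeek scale (same ~1 MW order of magnitude):
print(training_gw(2_000, 700, 0.5))     # 0.0007 GW (~0.7 MW)
```

Utilization 0.5 sits in the typical 0.4–0.6 band above; nudging it within that range moves the estimate by ±20%, which is small next to the stated error bars.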

INFERENCE GW is steady-state serving power for the model's primary public product. This is harder to pin down — most figures are estimates derived from disclosed query-volume × per-query Wh.
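The query-volume route works the same way. A minimal sketch, with hypothetical volumes and per-query energy (no lab discloses these exactly):

```python
def inference_gw(queries_per_day: float, wh_per_query: float) -> float:
    """Average serving power in GW implied by daily query volume.

    Daily energy (Wh) spread evenly over 24 h gives average watts;
    real traffic has peaks, so provisioned capacity sits higher.
    """
    avg_watts = queries_per_day * wh_per_query / 24
    return avg_watts / 1e9

# hypothetical: 1 billion queries/day at 0.3 Wh per query
print(inference_gw(1e9, 0.3))  # 0.0125 GW, i.e. ~12.5 MW average
```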

PROJECTIONS use publicly-announced datacenter capacity (Stargate, Hyperion, Colossus 2, etc.) scaled to when the next flagship model is expected. Anything after 2027 is noted as speculative.

Sources: Epoch AI, IEA Electricity 2024, OpenAI's Stargate announcements, Meta capex calls, xAI public statements, MIT Technology Review, Data Center Frontier, IEEE Spectrum, SemiAnalysis. Error bars are wide — treat numbers as order-of-magnitude.

jasper@grid:~/ai-gigawatts $ ./power-draw --summary
✓ top 10 AI models consume ~7.8 GW in 2026 (operational)
→ projected 24 GW by end-2027 if announcements land on time
⚠ inference now dominates: ~80–90% of all AI electricity use
⚠ xAI Colossus ran on unpermitted gas turbines (EPA violation, 2025)
# DeepSeek trained V3 on ~2k GPUs: proof that large efficiency gains are possible
✗ most labs do not publish training energy — all numbers are estimates
FAQ — Frequently Asked Questions about AI Gigawatts

What's the difference between gigawatts (GW) and gigawatt-hours (GWh)?

GW is instantaneous power — the rate at which electricity is drawn at any moment. GWh is total energy consumed over time. A 1 GW datacenter running 24/7 for a year consumes roughly 8,760 GWh. This page uses GW because it's the metric that matches what you'd see on the grid operator's dashboard — and what utilities have to actually provision for.
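The conversion is just hours of operation; a quick check of the figures on this page (the ~1.4 kW/home average is the page's own number):

```python
HOURS_PER_YEAR = 24 * 365  # 8760

# energy of a constant 1 GW draw over a year:
print(1.0 * HOURS_PER_YEAR)    # 8760.0 GWh

# the "US homes equivalent" stat: 24 GW at ~1.4 kW average per home
print(24e9 / 1.4e3 / 1e6)      # ~17.1 million homes
```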

Why isn't OpenAI "number one" on the chart?

It depends what you measure. In announced future capacity (10 GW Stargate by 2029), OpenAI is the leader. In currently operational dedicated AI power as of April 2026, xAI's Colossus 2 is ahead thanks to its extremely fast buildout in Memphis. OpenAI's Stargate Abilene hits 1.2 GW mid-2026; Colossus 2 is already ramping through 1+ GW.

Do these numbers include training or just inference?

Both, where the split is known. Inference now accounts for an estimated 80–90% of all AI electricity consumption — training is a huge one-time burst, but billions of daily queries add up faster. The "Training vs Inference" panel above shows the rough split per model.

Why are DeepSeek and Mistral so much lower?

DeepSeek reportedly trained V3 on about 2,000 H800 GPUs over two months — roughly 1 MW peak, about three orders of magnitude below frontier US labs. Mistral publishes life-cycle data showing ~1 g CO₂ per page of text generated. Both use mixture-of-experts architectures that activate only a small fraction of parameters per query, and both serve far fewer daily users than ChatGPT or Gemini.

How reliable are these numbers?

Wide error bars. Most AI labs do not publish training energy or inference power figures. Numbers here are compiled from datacenter announcements, GPU count × TDP estimates, and published third-party analyses (Epoch AI, IEA, MIT Tech Review). Treat every value as ±30–50%. The relative ordering is more robust than absolute magnitudes.

Is this tool free to use?

Yes, completely free. No account, no subscription, no watermark. It is one of the free browser-based tools at jasperbernaers.com.