Behind the AI

Who funds, controls, and profits from the AI tools you use every day? Independent assessments. No corporate sponsors. Plain language.

Cost of Compute

Energy. Water. Carbon. Labor. The costs behind every query, every model, every company — that nobody puts on the label.

This index uses the best available public data. Where companies don't disclose, we estimate using published methodologies and mark confidence levels. Companies can submit corrections via our corrections email or contact page.

DATA CONFIDENCE: OFFICIAL · ESTIMATED · ROUGH · UNDISCLOSED
YOUR annual AI footprint
~11 kg CO₂

10 queries/day × 365 days × ~3 g each ≈ 11 kg, about 0.07% of the average American's annual footprint.
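The arithmetic behind that figure can be checked in a few lines. This is a minimal sketch; the ~16 t average annual US footprint used for the percentage is an assumed figure, not stated above.

```python
# Annual personal AI footprint, from the per-query estimate above.
QUERIES_PER_DAY = 10
G_CO2_PER_QUERY = 3        # rough per-query estimate (~3 g CO2)
US_ANNUAL_KG = 16_000      # assumed: average American's yearly footprint, kg CO2

annual_kg = QUERIES_PER_DAY * 365 * G_CO2_PER_QUERY / 1000  # grams -> kg
share = annual_kg / US_ANNUAL_KG * 100

print(f"{annual_kg:.1f} kg CO2/yr, about {share:.2f}% of the average US footprint")
```

The per-query figure dominates: double it and the annual total doubles with it.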

ONE model training run
502,000 kg CO₂

Training footprints range from 502 t CO₂ (GPT-3, official figure) to an estimated ~175,000+ MWh of energy (Grok 3). GPT-3's run alone is roughly 45,000x your annual AI use.
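The "45,000x" multiplier follows directly from the two figures above. A quick sketch, using GPT-3's official 502 t training footprint and the ~11 kg annual personal figure:

```python
# One training run vs. one person's annual AI use.
training_kg = 502_000      # GPT-3 training run, official figure (502 t CO2)
personal_kg = 11           # ~10 queries/day for a year, from the card above

ratio = training_kg / personal_kg
print(f"one training run ≈ {ratio:,.0f}x your annual AI use")
```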

BP invented the personal carbon footprint to shift blame onto individuals. The AI industry is running the same playbook.

Sources: Hannah Ritchie, Sustainability by Numbers; Patterson et al. 2021

Model              | Confidence  | Company                  | Energy      | CO₂      | Water     | Transparency | Last checked
-------------------|-------------|--------------------------|-------------|----------|-----------|--------------|-------------
Grok 3             | ESTIMATED   | xAI                      | 175,680 MWh |          |           | ☆☆☆☆☆        |
GPT-4              | ESTIMATED   | OpenAI                   | 62,000 MWh  | 11,000 t |           | ★☆☆☆☆        | 25 mo ago
Llama 3.1 405B     | OFFICIAL    | Meta                     | 27,500 MWh  | 11,390 t |           | ★★★★☆        | 19 mo ago
Llama 4            | OFFICIAL    | Meta                     | 5,170 MWh   | 2,000 t  |           | ★★★★☆        |
GPT-3 175B         | OFFICIAL    | OpenAI                   | 1,287 MWh   | 502 t    | 700,000 L | ★★★☆☆        | 60 mo ago
Llama 2 70B        | OFFICIAL    | Meta                     | 688 MWh     | 291 t    |           | ★★★★☆        | 27 mo ago
BLOOM 176B         | OFFICIAL    | HuggingFace / BigScience | 433 MWh     | 24.7 t   |           | ★★★★★        | 34 mo ago
GPT-4o             | UNDISCLOSED | OpenAI                   |             |          |           | ☆☆☆☆☆        |
GPT-5              | UNDISCLOSED | OpenAI                   |             |          |           | ☆☆☆☆☆        |
Claude 3 / 3.5 / 4 | UNDISCLOSED | Anthropic                |             |          |           | ☆☆☆☆☆        |
Gemini 1.0 / 2.0   | UNDISCLOSED | Google                   |             |          |           | ★★☆☆☆        |

Who's Doing It Right

BLOOM

Roughly 20x less carbon than GPT-3 at similar scale, trained on France's nuclear-powered grid. The first full lifecycle analysis of a large language model.

Researcher: Sasha Luccioni, Hugging Face
Google Gemini

33x energy reduction per query in 12 months. The only major company to publish a per-query methodology covering energy, carbon, and water.

Microsoft Phi-4-mini

10-50x more efficient than frontier models. 3.8B parameters outperforming much larger models on key benchmarks.

Mistral AI

First ISO-compliant lifecycle assessment for an AI model. 1.14 gCO2e per 400-token response. Partnered with Carbone 4 and ADEME.
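Mistral's per-response figure makes back-of-envelope session estimates easy. A sketch using that published number; the session length here is a made-up example, not a Mistral figure:

```python
# Estimating a chat session's footprint from Mistral's LCA figure.
G_PER_RESPONSE = 1.14      # g CO2e per 400-token response (Mistral/Carbone 4/ADEME)
responses = 20             # hypothetical session length (assumption)

session_g = responses * G_PER_RESPONSE
print(f"{responses} responses ≈ {session_g:.1f} g CO2e")
```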

Hugging Face AI Energy Score

166+ models rated for energy efficiency. Referenced by EU AI Act Code of Practice.

Azure Clean Deployment Guide

Best Regions
  Sweden Central        ~15 gCO2/kWh
  Norway East           ~20 gCO2/kWh
  France Central        ~55 gCO2/kWh
  Canada Central        ~30 gCO2/kWh

Worst Regions
  Singapore             ~410 gCO2/kWh
  US East (coal-heavy)  ~380 gCO2/kWh

Model Hierarchy (prefer top)
  1. Phi-4-mini (when possible)
  2. GPT-4o-mini
  3. GPT-4.1
  4. GPT-5 (only when necessary)
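The region figures above turn into a simple estimator: energy use times grid intensity gives the workload's carbon. A sketch under the guide's approximate intensities; the 2,000 kWh monthly workload is an assumed example:

```python
# Same workload, different Azure regions: kWh x gCO2/kWh -> kg CO2.
GRID_G_PER_KWH = {          # approximate grid intensities from the guide above
    "Sweden Central": 15,
    "Norway East": 20,
    "Canada Central": 30,
    "France Central": 55,
    "US East (coal-heavy)": 380,
    "Singapore": 410,
}

def workload_kg_co2(kwh: float, region: str) -> float:
    """Estimated emissions for an energy use (kWh) in a given region, in kg CO2."""
    return kwh * GRID_G_PER_KWH[region] / 1000

monthly_kwh = 2_000         # hypothetical inference workload (assumption)
for region in sorted(GRID_G_PER_KWH, key=GRID_G_PER_KWH.get):
    print(f"{region:22s} {workload_kg_co2(monthly_kwh, region):7.1f} kg CO2/mo")
```

The spread is the point: the identical workload emits about 27x more in Singapore than in Sweden Central.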

Reality check: “100% renewable” typically means annual-average accounting of renewable purchases, not real-time clean power.

Disclaimer

Assessments reflect publicly available information and the published methodology of the Behind the AI Research Team. Grades represent analytical assessments derived from the published scoring framework, not statements of fact about internal company operations. If you believe any claim is inaccurate, contact corrections@behindtheai.org with the specific claim and your evidence.