Green AI principles
Green AI is about building and running AI systems in ways that cut their environmental toll. It favors energy-efficient computation, carbon-aware training schedules, and careful resource use across the entire model lifecycle.
The term took shape in a 2019 paper by Schwartz, Dodge, Smith, and Etzioni. They drew a line between Red AI (chasing better accuracy mainly by throwing more compute at the problem) and Green AI (getting good results without increasing — and ideally lowering — computational cost).
Why it matters
Large AI models are hungry for compute. That appetite drives high energy consumption, heavy water use for cooling, and mounting carbon emissions. For governance and compliance teams, overlooking environmental impact opens the door to reputational, regulatory, and financial exposure.
The scale of the problem
The numbers are stark and climbing fast. Global data center electricity consumption hit roughly 415 TWh in 2024 — about 1.5% of worldwide electricity use — and the International Energy Agency projects it will nearly double to 945 TWh by 2030. AI workloads are the fastest-growing slice of that total.
Water use is just as alarming. U.S. data centers consumed an estimated 17 billion gallons of cooling water in 2023, and projections suggest that figure could quadruple by 2028.
AI systems could add tens of millions of tons of CO2 annually. Several major tech companies have seen their carbon footprints grow despite net-zero pledges, largely because AI-driven infrastructure expansion outpaces their efficiency gains.
Training versus inference
One point often lost in public debate: inference (running a deployed model to answer queries or generate outputs) accounts for 80–90% of total AI computing energy, not training. Training happens once; serving billions of queries compounds the cost over time. Inference optimization is therefore the single biggest lever for shrinking AI's environmental footprint.
Key principles of Green AI
Green AI rests on the idea that efficiency and sustainability belong in the design process from day one, not bolted on afterward.
Efficiency-first design
Optimize compute, memory, and data at every stage — from architecture selection through training to deployment. Size the model to the task. A model ten times larger than necessary burns energy on every single inference call.
Transparency and measurement
Report energy consumption, carbon emissions, and water use as standard outputs of AI research and product development. Without measurement, there is nothing to optimize. The AI research community has long treated compute as a free resource; making costs visible is the first step toward changing that.
Hardware-aware development
Pick hardware that maximizes performance per watt, and match workloads to the right accelerator. Modern GPUs, TPUs, and specialized AI chips vary widely in energy efficiency depending on the task.
Location and timing awareness
Schedule heavy training runs during periods of high renewable energy availability. Prefer data center regions with cleaner electricity grids. The carbon intensity of the same computation can differ by an order of magnitude depending on when and where it runs.
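As a toy illustration of timing awareness, the scheduling decision can be sketched as a search over a carbon-intensity forecast. The hourly values and the `best_window` helper below are illustrative, not a real grid-data API:

```python
# Sketch: choose the lowest-carbon window for a batch training run.

def best_window(intensity, hours_needed):
    """Return the start index of the contiguous window of
    `hours_needed` hours with the lowest average carbon intensity."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(intensity) - hours_needed + 1):
        avg = sum(intensity[start:start + hours_needed]) / hours_needed
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start

# Hypothetical 24-hour forecast in gCO2eq/kWh (note the midday solar dip).
forecast = [450, 460, 470, 480, 470, 440, 380, 300,
            220, 160, 120, 100, 95, 110, 150, 210,
            300, 380, 430, 460, 470, 480, 470, 460]

start = best_window(forecast, 4)
print(f"Schedule the 4-hour run starting at hour {start}")
```

Real carbon-aware schedulers pull live intensity forecasts from grid APIs and handle interruption and resumption, but the core decision is this same window search.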
Lifecycle thinking
Account for embodied carbon in hardware manufacturing, not just operational electricity. Producing, shipping, and eventually disposing of AI hardware carries a real environmental cost that standard carbon accounting often ignores.
Proportionality
Many applications do not need a frontier model. A well-tuned smaller model can often deliver equivalent practical performance at a fraction of the environmental cost. Matching model scale to actual task complexity avoids waste.
Efficiency techniques
Researchers have developed several effective methods for cutting AI's computational and environmental costs.
Model distillation trains a smaller "student" model to replicate the behavior of a larger "teacher" model. DistilBERT is the textbook example: it runs about 60% faster with 40% fewer parameters while keeping 97% of BERT's NLP performance.
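The core of the idea fits in a few lines: the student is trained to match the teacher's temperature-softened output distribution. A minimal pure-Python sketch of the distillation loss (real training adds this, weighted, to the usual task loss and backpropagates through the student):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution; a higher
    temperature softens the distribution, exposing more of the
    teacher's 'dark knowledge' about non-target classes."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the softened teacher and student
    distributions -- zero when the student matches the teacher."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

Matching logits give a loss of zero; the further the student's distribution drifts from the teacher's, the larger the penalty.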
Pruning strips out weights, neurons, or entire attention heads that contribute little to output quality. Applied to large models, pruning has been shown to cut energy consumption by roughly 32% with only modest accuracy loss.
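Unstructured magnitude pruning, the simplest variant, keeps only the largest-magnitude weights. A minimal sketch (real frameworks prune whole tensors in place and usually fine-tune afterward to recover accuracy):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute value. Ties at the threshold may prune slightly more."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

At 50% sparsity on `[0.1, -0.5, 0.02, 0.9]`, the two small weights are zeroed and the two large ones survive. Zeroed weights can then be skipped by sparse kernels, which is where the energy savings come from.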
Quantization lowers the numerical precision of model weights and activations — for instance, converting 32-bit floats to 8-bit integers. For most tasks, it can halve inference energy use with minimal quality degradation.
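The float-to-int8 conversion at the heart of the technique is itself small. A minimal sketch of symmetric linear quantization (production toolchains add per-channel scales, calibration, and integer compute kernels):

```python
def quantize_int8(values):
    """Symmetric linear quantization: map floats onto the
    integer range [-127, 127] with a single scale factor."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127 if max_abs else 1.0
    return [round(v / scale) for v in values], scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return [qi * scale for qi in q]

weights = [0.42, -1.0, 0.07, 0.9]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)  # close to, but not exactly, the originals
```

Each recovered weight is within half a quantization step of the original, which is why accuracy loss is usually minimal while storage drops 4x versus 32-bit floats and arithmetic moves to cheap integer units.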
Mixture of experts (MoE) architectures route each input to a small subset of specialized sub-networks instead of activating the full model for every token. Active compute per inference drops sharply compared to a dense model with the same parameter count.
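The routing step is where the savings come from: only the selected experts run. A minimal sketch of top-k gating (real MoE layers learn the gate weights and add load-balancing losses, which this toy omits):

```python
import math

def softmax(x):
    exps = [math.exp(v) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

def route_top_k(gate_logits, k=2):
    """Pick the k experts with the highest gate scores and
    renormalize their weights; all other experts stay inactive."""
    probs = softmax(gate_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    return {i: probs[i] / total for i in top}
```

With four experts and k=2, half the experts never execute for a given token, so active compute per inference scales with k rather than with total parameter count.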
Other approaches include sparse and flash attention mechanisms, early exit and cascading (sending simpler queries to smaller models), and training-time optimizations like gradient checkpointing and mixed-precision arithmetic.
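Cascading in particular is easy to picture: a cheap model answers what it can, and only low-confidence queries escalate. A toy sketch, where the two `lambda` models are stand-ins for a real small/large model pair:

```python
def cascade(query, small_model, large_model, threshold=0.8):
    """Route a query through the cheap model first and escalate
    to the expensive model only when confidence is too low."""
    answer, confidence = small_model(query)
    if confidence >= threshold:
        return answer, "small"  # cheap path: the large model never runs
    answer, _ = large_model(query)
    return answer, "large"

# Stand-in models: each returns (answer, confidence).
small = lambda q: ("quick answer", 0.95 if len(q) < 20 else 0.40)
large = lambda q: ("careful answer", 0.99)
```

If most traffic stays on the cheap path, average energy per query falls sharply; the confidence threshold is the knob trading quality against cost.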
Carbon footprint measurement
A growing set of tools and standards exists for measuring and reporting AI's environmental impact.
CodeCarbon is the most widely adopted open-source Python library for the job. It directly measures GPU, CPU, and RAM electricity consumption during code execution, then applies real-time carbon intensity data for the hardware's geographic location.
Software Carbon Intensity (SCI), ratified as ISO/IEC 21031:2024, defines a standardized way to calculate a carbon intensity score per unit of functional software output. The Green Software Foundation has built on it with an SCI for AI specification covering training, fine-tuning, and inference.
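The SCI formula itself is compact: SCI = ((E × I) + M) / R, where E is energy consumed (kWh), I is grid carbon intensity, M is embodied emissions, and R is the functional unit count (e.g. number of inferences). A direct transcription, with illustrative numbers:

```python
def sci_score(energy_kwh, intensity_g_per_kwh, embodied_g, functional_units):
    """Software Carbon Intensity per ISO/IEC 21031:2024:
    SCI = ((E * I) + M) / R, in gCO2eq per functional unit."""
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# Illustrative: 10 kWh at 400 gCO2/kWh plus 2 kg amortized embodied
# carbon, spread over 1,000 inferences -> 6 g CO2eq per inference.
per_inference = sci_score(10, 400, 2_000, 1_000)
```

Because the score is normalized per functional unit, it stays comparable as traffic grows, which is what makes it usable as a reporting metric rather than just a total.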
ML CO2 Impact Calculator offers web-based estimation using hardware specs, training duration, and cloud region as inputs. Carbontracker provides research-focused profiling, and the major cloud providers (Google, AWS, Azure) each expose emissions data through their own dashboards.
Data center sustainability
The big cloud providers have made headline sustainability commitments, though actual results have not always kept pace.
Google has been carbon-neutral since 2007 and aims for 24/7 carbon-free energy by 2030. Its AI-driven cooling optimization cut cooling electricity by about 30% in some facilities, but total electricity consumption keeps growing as AI infrastructure expands.
Microsoft is targeting carbon-negative status by 2030 and has implemented an internal carbon tax to push teams toward efficiency. AWS hit 100% renewable energy matching in 2023, two years early, and has been the world's largest corporate renewable energy buyer for five consecutive years.
One important caveat: annual renewable energy matching does not mean zero-carbon operation. At any given hour, a data center may still draw fossil-fueled electricity from the grid. Scope 3 emissions from hardware manufacturing and supply chains also remain largely outside these commitments.
Regulatory landscape
EU AI Act
The EU AI Act includes environmental provisions, though critics consider them weaker than earlier parliamentary proposals. Providers of general-purpose AI models must document their models' energy consumption, forthcoming harmonized standards are expected to address reducing energy and resource use, and regulators are tasked with facilitating voluntary codes of conduct on environmental sustainability. Most of the environmental language, however, remains voluntary rather than binding.
Corporate reporting requirements
The EU Corporate Sustainability Reporting Directive (CSRD) requires large EU companies to report detailed environmental data, including energy consumed by AI systems. California's SB 253 requires large companies operating in the state to report Scope 1, 2, and 3 emissions starting in 2026. Together, these rules turn Green AI from an ethical aspiration into a compliance obligation.
ESG integration
AI occupies a dual role in ESG: it is both a tool for ESG analysis (processing sustainability datasets, automating disclosure reports) and a subject of ESG scrutiny (its own energy and water consumption must appear in corporate environmental accounts). The AI-in-ESG market is projected to grow substantially as sustainability reporting and AI capabilities become more tightly linked.
Integration with governance standards
Teams applying ISO/IEC 42001 for AI management systems can fold environmental controls into their governance plans. Sustainability indicators belong in model documentation, audits, and approval workflows.
The Green Software Foundation's SCI specification is emerging as the technical backbone for AI emissions disclosures within broader corporate reporting frameworks such as GRI, TCFD, and the EU's CSRD.
Real-world examples
Salesforce launched the AI Energy Score in early 2025, a benchmarking tool that compares the energy consumption of AI models performing equivalent tasks. It was the first company to publicly disclose the energy consumption of its proprietary models.
Hugging Face has pushed for standardized model cards that include energy consumption data and carbon estimates, giving developers concrete numbers when choosing between models.
SAP released SAP Green Ledger, which ties emissions data directly to ERP financial records so companies can see how business decisions map to carbon outputs.
Best practices for Green AI
- Start with measurement. You cannot optimize what you do not measure. Set up carbon tracking from the beginning of model development using tools like CodeCarbon or the SCI specification.
- Right-size your models. Ask whether a smaller, more efficient model can meet your requirements. Many production workloads do not need frontier-scale models.
- Prioritize inference optimization. Inference dominates operational energy, so focus on quantization, distillation, and efficient serving infrastructure first.
- Pick green infrastructure. Select cloud regions with high renewable energy penetration and, where possible, schedule training during low-carbon-intensity windows.
- Run lifecycle assessments. Track environmental costs end to end: hardware manufacturing, daily operation, and disposal.
- Tie metrics to ESG commitments. Fold Green AI numbers into corporate sustainability reporting and existing ESG frameworks.
- Reuse pre-trained models. Fine-tuning an existing model instead of training from scratch can cut compute requirements by orders of magnitude.
FAQ
What are the main sources of emissions in AI systems?
Operationally, most emissions come from running AI workloads on GPU clusters powered by the electricity grid. Inference typically accounts for 80–90% of that operational energy. Hardware manufacturing adds embodied carbon, cooling systems draw water, and data storage carries its own energy cost.
Can Green AI affect model performance?
It can, but in many cases the difference is negligible. Distillation, pruning, and quantization have shown that smaller models perform well with far fewer resources. Historically, algorithmic efficiency gains have matched or exceeded hardware scaling when it comes to reaching a given performance level.
Are there regulatory requirements for AI environmental impact?
Yes, and the list is growing. The EU AI Act includes energy efficiency provisions for general-purpose AI models. The EU CSRD requires environmental disclosure covering AI energy use. California's SB 253 mandates emissions reporting. As these requirements broaden, Green AI practices are shifting from optional to mandatory.
How do you measure AI carbon footprint?
Measure energy consumption during both training and inference, factoring in compute resources, data center efficiency (PUE), and energy source carbon intensity. CodeCarbon can automate much of this. ISO/IEC 21031:2024 (SCI) provides a standardized formula for carbon intensity per unit of functional output. For a full picture, include hardware manufacturing, cooling, and disposal.
What are the trade-offs between model performance and environmental impact?
Bigger models generally score higher on benchmarks but consume more energy. Techniques like distillation, pruning, quantization, and mixture of experts can cut environmental costs with modest performance trade-offs. The practical question is whether marginal accuracy improvements justify the added environmental burden — for many production use cases, they do not.