Carbon Aware Computing: Next Green Breakthrough or New Greenwashing?

by Ismael Velasco, January 16th, 2024

3,528 reads

Too Long; Didn't Read

  • Running compute jobs at a time and place when the electricity grid energy mix happens to be greener is not enough to reduce computing emissions.

  • To reduce their emissions, compute jobs need to run when demand is low; target stable electricity grids; and verifiably use curtailed electricity or run on genuinely additive renewable electricity.

  • The environmental challenge of computing is not energy efficiency, but energy demand. Because renewable energy can only power less than 13% of global energy demand, if our compute's energy demand grows in a year, even 100% effective carbon-aware computing on the electricity grid will mean a net increase in our emissions.

  • Achieving carbon reductions for particular computing jobs is pointless if it does not reduce the net electricity demand of our overall compute.

  • True carbon-aware computing should ask not just how green our use of the electricity grid is for any given job at any given time, but how far our compute is actually reducing its net emissions, and how responsible our use of the electricity grid is as a whole.

  • For now we're calling this more mature, holistic, and nuanced approach 'grid-aware computing'.


A trending article on HackerNoon published in October 2022 by the Green Software Foundation arguably put 'carbon-aware' computing on the radar of the mainstream dev community for the first time. Carbon-aware computing refers to running your compute jobs when and where the electricity grid is being powered by renewable energy. The article coincided with the first carbon-aware software hackathon in the world, supported by companies like Intel, Microsoft, Globant, UBS, Accenture, Goldman Sachs, and more. Carbon-aware computing has entered the technology trigger phase of the Gartner hype cycle, and all signs point toward a rapid acceleration of adoption at scale.


Full disclosure: I was one of the contributors to the 2022 HackerNoon article and also served as a mentor for the Carbon Hack hackathon, where I met some great colleagues and was exposed to wonderfully innovative solutions. Several of the winning projects joined the Adora Foundation Incubation Lab, and remain inspiring collaborators to this day. But as I've looked under the hood, the evidence suggests to me that most carbon-aware implementations currently yield at best small-to-zero carbon reduction benefits, and at worst may well increase carbon emissions and pose dangers to local and national electricity grids. Carbon-aware computing also risks becoming, or already is, a greenwashing effort, as Big Tech accelerates its adoption and marketing without any mention of its limitations or its risks of unintended consequences.


Examples of Big Tech’s embrace of carbon-aware patterns include:


The goal of this article is to suggest that while it's helpful to explore and even promote carbon-aware computing, it must be done with far greater rigor and transparency.


Responsible carbon-aware computing has the potential to contribute to greening tech emissions, but it is unconscionable to pursue it without factoring in the risks; evidencing mitigation steps and actual impact; and providing warning labels in its marketing and promotion.


Content Outline

This article is divided into eight sections. They build on each other but can be read on their own.

  1. Houston, we have a problem

  2. What software engineers need to know about how the grid works

  3. What’s the problem with carbon-aware software then?

  4. When does carbon-aware software make sense?

  5. Proposals for responsible carbon-aware computing

  6. The elephant in the room: rising computing demand

  7. Where do we take carbon-aware from here? Introducing grid-aware computing

  8. What can you do to help?



1. Houston, we have a problem

Running compute jobs when and where the electricity grid is being powered by renewable energy must surely mean that the emissions associated with running that code are reduced. Running code on renewable "clean" electricity by definition means that it's not consuming "dirty" fossil fuel energy.


If we make all our software carbon-aware, timing it to run when and where the electricity grid is being powered by mostly renewable sources, then surely we can be confident we have effectively and innovatively reduced our environmental impact. Right?


This seems self-evident, and mostly the green computing community appears to agree. We’re full steam ahead and carbon-aware computing is being embraced at scale right now by Big Tech. We’re getting there, then, yes?


Not so fast.


  • Who has actually paused to confirm whether these seemingly obvious claims are true?

  • Does programming our software to responsively seek periods and locations with lower carbon-intensity electricity actually make a tangible difference?

  • Where are the studies that can prove this?

  • If these patterns are implemented at scale, can the tech sector legitimately say it’s contributed to actually reducing global carbon dioxide (CO2) emissions?


After all, the ICT sector needs to be on a pathway to reduce its carbon emissions by 45% by 2030 to be in line with the Paris Agreement goal of limiting global warming to 1.5ºC.

Those of us involved in writing this have paused to ask these questions. We acknowledge we’re not the first to have done so [1] [2].


Based on our exploration, we believe there is evidence to show current carbon-aware approaches may be mostly futile. What’s more, they might actually be increasing emissions, while laying the foundations for the next generation of greenwashing across Big Tech. On the positive side, the evidence also suggests there are ways to implement such approaches with a greater likelihood of reducing emissions and avoiding perverse effects. In this light, we believe our grand collective oversight is to omit any mention of the HUGE caveats to carbon-aware computing.


We explore these concerns and caveats. We start by acknowledging the technical details of how electricity grids work in practice. We move on to consider how current carbon-aware software approaches do not appear to consider these realities. We then consider the bigger questions at play about what the tech sector has to grapple with to make meaningful reductions. The post concludes by proposing an iteration on the current carbon-aware guidelines for a more responsible and effective implementation, which we call “grid-aware computing”.


[1] - https://github.com/Green-Software-Foundation/carbon-aware-sdk/issues/222


[2] - https://adrianco.medium.com/dont-follow-the-sun-scheduling-compute-workloads-to-chase-green-energy-can-be-counter-productive-b0cde6681763


2. What software engineers need to know about how the grid works

What’s so potentially wrong about shifting compute loads in response to carbon intensity? To answer this question, we need to start with a bird’s-eye view of how electricity grids work in practice. Once we have that understanding, we can start to see where the problems lie.


How do electricity grids actually work?

The amount of electricity available on the grid doesn’t fluctuate freely. It’s controlled and planned in advance so that on any given day, there is a consistent amount of electricity available to use (aka supply). There are also controls to ensure a consistent amount of electricity being used (aka demand). A key objective for those managing any grid is to monitor these two sides, supply and demand, and ensure they are in balance.


Any imbalance triggers serious problems, which usually manifest as a change in grid frequency. When the frequency suddenly spikes or dips, it can damage electrical equipment and ultimately cause brownouts and blackouts.


The expected demand for any given day is predicted using data. This allows the grid managers to ensure there is enough electricity available. There typically aren’t big demand differences from one day to the next. There is some daily fluctuation as people get up, go to bed, etc. But it’s usually predictable enough.


Seasonal differences also impact demand. For example, there is more demand in the winter months as days are shorter and colder, meaning people need more light and heat. But again, available data allows such fluctuations to be forecast reliably.


US electricity demand 1/1/2019 - 12/31/2019 - image courtesy of U.S. Energy Information Administration, Hourly Electric Grid Monitor



Balancing Supply & Demand


Electricity supply is generated via three primary means:

  1. Fossil fuels such as oil, coal, and gas
  2. Nuclear
  3. Renewables such as solar, wind, hydro, and geothermal


The proportion of electricity generated by each source is referred to as the fuel mix.


Quick reference: Fuel-mix

The combined sources from which electricity has been produced. The average fuel mix varies from grid to grid.

Source: Ember's Yearly Electricity Data; Ember's European Electricity Review; Energy Institute Statistical Review of World Energy.


This representation is from https://ourworldindata.org/electricity-mix

The fuel mix also varies in each local grid, often on an hourly basis. On any given day, renewables will, in most places, represent a fraction of the daily supply. The rest will be made up by burning fossil fuels.

The fuel mix of UK electricity supply 2:30pm 24 August 2023 to 2:30pm 25 August 2023 - image courtesy of https://electricityinfo.org/fuel-mix-last-24-hours/



There are two scenarios in which it would be necessary for those responsible for balancing the grid to take action to ensure demand and supply stay balanced.


  1. A decrease in demand – there’s less energy required compared to what’s being generated.
  2. An increase in demand – there’s more energy required compared to what’s being generated.


Let’s use a few oversimplified, hypothetical examples to illustrate what options are commonly used to address these scenarios.


Managing dips in demand

The scenario: It’s a winter night in Paris, and at 8 pm, everyone simultaneously turns off their lights.


This is unexpected. There would be too much energy being put onto the grid, but nowhere for that energy to go because there’s no demand for it.


Option 1: Curtailment

To keep demand and supply in balance, a response is to decrease the amount of supply. This is known as curtailment.



Quick reference – Curtailment

Curtailment is a reduction in the output of a generator from what it could otherwise produce given available resources, typically on an involuntary basis. This might happen to balance energy supply and demand, or due to transmission constraints. Wikipedia.


What happens: The most common way of curtailing electricity is to lower the price. This aims to incentivize suppliers to produce less, which would mean they “ramp down” or turn off some sources of supply.


This means energy suppliers have to make both an economic decision and a practical one. The practical part comes from the fact that not all sources of power scale up or down with equal ease. The table below, comparing energy sources, should help you understand why.


Energy source and scalability:

  • Renewables (solar, wind, hydro) – Inflexible: you can't just reduce the amount of wind blowing or stop the sun from shining.

  • Nuclear – Less flexible: significant safety and operational challenges with suddenly changing output.

  • Fossil fuel (coal) – Flexible: output can scale up or down, but becomes more expensive as more output is needed.

  • Fossil fuel (gas) – Extremely flexible: very quick to scale output up or down.


In our example, Paris sits on the French grid, which is largely powered by nuclear. This source of energy is slow to react to sudden changes in demand. So there could be scenarios where there is more supply than demand, and the grid would still be out of balance.


Option 2: Storage

Providers might look to store additional supply in batteries, pumped hydro, or other mechanisms.

What happens: Storing excess supply in batteries or through pumped hydro is another lever that can be pulled to bring balance to the grid. By directing the extra supply to a storage location, operators can buy themselves time to adjust the overall supply to meet the new, lower demand.


When demand is high again, that stored energy can be put back onto the grid in a controlled manner.


But what if storage isn’t enough, or isn’t available on a grid? There’s one final option that is available to keep things in balance.


Option 3: Create artificial demand

The grid uses incentives to artificially spike electricity consumption to grow demand and match the remaining excess in supply. This is called demand management.


What happens: The grid provides an incentive for businesses (and, more recently, some trial consumer schemes) to increase their electricity use beyond what they might normally need, most likely through a special tariff offering cheaper electricity at these times. By doing this, the grid can inflate electricity demand to the point where it can achieve a new balance between supply and demand.


Therefore, unplanned drops in electricity use, like the Paris light example, are highly unlikely to result in equivalent reduced emissions. Beyond a very specific, legally mandated range, the grid compensates for unplanned drops in a way that negates the savings. The net amount of emissions on any given day will be approximately the same in almost every case.

Some interesting research and discussion is going on about how data centers can be part of the solution here. A great example to dive into is Stretched grid? Managing data center energy demand and grid capacity, published October 2023.


Managing peaks in demand

The scenario: It’s an unusually hot summer night in Tokyo, and at 8pm everyone simultaneously turns on their air conditioners.


This is unexpected. There would be too much energy being asked for, i.e. too much demand, and not enough energy supply to meet it.


Options: The techniques for managing these sudden increases are largely the reverse of the above.


  • Increase the price to incentivize providers to put more supply onto the grid. Remember from the table above, renewables and nuclear do not scale easily. So, supply during unplanned demand spikes often comes from fossil-fuel sources, which produce more carbon emissions.

  • Use whatever is available in storage – batteries or pumped hydro.

  • Offer incentives to artificially reduce demand.


Therefore, unplanned spikes in electricity use, like the Tokyo air conditioner example, are highly likely to result in increased emissions. This is a result of the need for energy providers to quickly ramp up supply to match demand and the fact that this is most easily done by using fossil-fuel energy sources – often gas, sometimes coal.


Avoiding unplanned peaks and troughs



We can see from this that unplanned spikes or drop-offs in demand aren’t good for grids. Unplanned drop-offs don’t actually reduce the amount of electricity being generated, so have no net impact. Unplanned surges have to be met and usually are met with ramping up fossil-fuel production.


Additionally, the very act of rapidly ramping up or down supply adds extra emissions. Many power sources are designed for steady-state conditions so sudden changes can lead to inefficient operation. Ramping up can also bring older and less efficient plants online. These are used as "peaking plants" to meet sudden surges in demand. Start-up and shutdown processes can also be extra intensive.


All this means extra emissions on top of producing the additional electricity itself. It might be minor, and mitigated by the transition to battery power, but is nevertheless an additional negative effect of this scenario.


3. What’s the problem with carbon-aware software then?

Let’s turn to exploring how the grid works in conjunction with the current carbon-aware software patterns.


How the grid and carbon-aware software interface

Up until now, carbon-aware software techniques have been focused on the opportunities presented by changing fuel mixes on the supply side. As we’ve seen above, effective grid management is about maintaining an equilibrium. Messing with that balance has consequences, and for the most part, the impact results in increased carbon emissions.



Quick reference – Time-shifting compute

Looking for the time of day when electricity will be greenest, e.g., when there are the fewest fossil fuels in the energy mix, and setting compute jobs to run at that time. This means the time of day the jobs run is dynamic and frequently changes.


⏱ Grid management and “carbon-aware” time-shifting

Let's use a simple example to illustrate this concept. Say that you run a single, scheduled database backup job every day. You decide to change the scheduled time of running that task depending on the grid mix for a given day. The electricity to run this compute will already be factored into your grid’s daily electricity demand planning.


Now, say that over the course of a day, your local grid produces 100 tons of CO2 by generating the electricity it supplies. And, over the course of a day, your local grid supplies electricity with the following mix:


Time of day | Expected demand | Fossil fuel mix | Renewables mix
----------- | --------------- | --------------- | --------------
Morning     | Low             | 80%             | 20%
Afternoon   | High            | 50%             | 50%
Night       | Low             | 80%             | 20%


Remember that the grid has already planned for all its expected demand for that day. Based on this, it produces electricity that generates 100 tons of CO2. Therefore, whenever you choose to run your backup job, 100 tons of CO2 will still be generated by the grid for that day.

Changing the timing of your job to run during the afternoon, when the mix of renewables is highest, doesn't actually change the day's emissions. By running your regular compute job during that renewable window, you have merely displaced the day's emissions, not reduced them.


Quick reference – Emissions displacement

Occurs when emissions are successfully reduced from one source or in one area, but at the same time causes emissions to increase from another source or area.

A good analogy is a train with some carriages being “green” and some being “dirty”. If you’re taking the train anyway, and move to a green carriage, you are not affecting the overall load of the train as a whole. Someone else will travel in the dirty carriage instead. The emissions from that train running are still exactly the same.


Zero Carbon Displacement requires strict analysis of an entire grid ecosystem to make sure that no additional fossil fuel based energy is being forced into use before it can be claimed.


In fact, your time-shifting may result in more than 100 tons of CO2 being generated for the day. That is because, in the afternoon on this day, demand is also high. By deciding to shift your backup job to run at this time, you’ll add additional (unplanned) demand onto the grid. As a result of this, additional supply might need to be quickly ramped up to balance the grid. As we covered earlier, this extra supply is most likely going to come from a fossil fuel energy source.


Time shifting can also lead to grid instability because of these ever-shifting demand fluctuations.

So far, no actual gains have been made. You've not helped reduce carbon emissions. Operating as an individual, your time-shifting probably hasn't had much impact either way. However, things can become harmful if this is done at scale. But there are ways in which time-shifting can be refined to make it actually helpful, which we'll get to.
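The naive time-shifting pattern and its displacement effect can be sketched in a few lines. This is purely illustrative, using the hypothetical fuel-mix figures from the backup-job example above; the names and numbers are assumptions, not a real carbon-intensity API.

```python
# Hypothetical fossil-fuel share of the grid mix per period,
# matching the example table above.
fossil_share = {"morning": 0.80, "afternoon": 0.50, "night": 0.80}

def greenest_slot(shares):
    """The slot a naive carbon-aware scheduler picks: lowest fossil share."""
    return min(shares, key=shares.get)

# The scheduler moves the backup job to the afternoon...
chosen = greenest_slot(fossil_share)

# ...but the grid's generation for the whole day, and the ~100 tonnes
# of CO2 it emits, was planned in advance. Moving the job changes which
# "carriage" it rides in, not the train's total emissions.
planned_daily_emissions_tonnes = 100

print(chosen, planned_daily_emissions_tonnes)
```

Whatever slot the scheduler picks, the day's planned total is untouched: the job's emissions have been displaced within the day, not reduced.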


🌍 Grid management and “carbon-aware” location-shifting


Quick reference – Location-shifting compute

Looking for grids that have a greener fuel mix than your local one, and sending compute jobs to run on servers in that grid instead of your own.


To illustrate the idea, let’s imagine you are a fictional global corporation called Stoogle Tech. Every national branch needs to back up its databases every day. Now imagine, that each branch detects that the local grid in Lisbon is currently running on 80% renewables and 20% fossil fuels, and they all independently decide to send their backup jobs to run there.


Suddenly there is a whole lot of extra demand hitting Lisbon's grid. The demand for the day will now no longer be the expected 100% but, say, 110%.


The problem is that renewables still supply only 80% of the originally planned demand on Lisbon's grid. In order to keep electricity demand and supply balanced, Lisbon will most likely cover that extra 10% with fossil fuels. The international carbon-aware location-shifting initiative has just added extra emissions to Lisbon's grid.


A displacement effect is seen again. These compute jobs have displaced the emissions from every other country to Portugal, for the same net emissions globally. Or have they?


In fact, it’s probably worse than that. Driving up demand in Lisbon and stimulating above-average fossil fuel consumption there has resulted in a net increase of CO2. Plus the electricity demand from each of the local regions may not have actually reduced as a result of the jobs moving. The emissions in those local grids are still approximately the same. The global net consumption of electricity rose, as did CO2 emissions.


The implications become worse as things scale. Now imagine not just Stoogle Tech, but also Bircosoft Tech, Wapple Tech, and Macebook Tech all getting on the location-shifting bandwagon. Let’s say all of their available servers are powered by national grids. Suddenly Lisbon’s electricity demand hits 120%, and local grid demand dips.


Location-shifting compute jobs makes no positive difference in this example, just like time-shifting, but it adds emissions and potentially risks grid instability for others. In this regard, the corporations' well-intentioned efforts are worse than those of companies that just run their jobs whenever they feel like it, or better yet, run them in a predictable fashion.
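The Lisbon arithmetic can be checked on the back of an envelope. The units are arbitrary and the numbers hypothetical; the point is simply that, because renewable supply is fixed, every unit of demand shifted in beyond the plan is served by fossil generation.

```python
planned_demand = 100     # Lisbon's expected demand for the day
renewable_supply = 80    # fixed: wind and sun don't scale on request
shifted_in = 10          # extra demand arriving from location-shifted jobs

# Fossil generation needed before and after the shifted jobs arrive.
fossil_before = planned_demand - renewable_supply                # 20
fossil_after = (planned_demand + shifted_in) - renewable_supply  # 30

extra_fossil = fossil_after - fossil_before
print(extra_fossil)  # every shifted unit ends up on fossil generation
```

The renewables were already fully used by the planned demand, so the "80% green" grid serves the incoming jobs entirely with fossil fuels.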


Can location really break the grid?


Upward and downward spikes in computing-related electricity demand can indeed break grids, especially less resilient ones. It’s happened before in Venezuela, Iran, Georgia, and Kazakhstan, among other places, when bitcoin mining created equivalent surges in computing-specific electricity demand.


Ultimately, the problems vary from grid to grid and depend on how resilient each grid is. It would take a big spike to cause problems in highly diversified grids, like Europe's, or in grids that have invested heavily in storage, like California's. But a fairly modest spike could trigger serious impacts in less resilient grids like South Australia's, which has fewer grid interconnections and less fossil fuel capacity for supply responses, or in India or South Africa, which have less energy diversity.


The key point is that simply reading "x computing job is timed to run when and where the grid is greenest" should not be assumed to mean it has in any way reduced emissions, and it could have perverse effects.


4. When does carbon-aware software make sense?

Let’s be absolutely clear on the answer to the question “is carbon-aware computing just bad?”


No. We don’t intend to bash the core concepts of carbon-aware software.


The core concept that shifting compute jobs to respond to the electricity available is sound.


The criticism is that current approaches never apply any warning labels.


We fail to mention that time- and location-shifting patterns are only helpful in certain circumstances, futile in most, and potentially harmful in others. There is a general assumption that time- and location-shifting are greener ways to run compute, with no verification and no risk mitigation.


We are concerned that the current approach is actually hampering tech's sustainability efforts even when meaning to assist them. First, by framing its messaging so that any company adopting time- and location-shifting can claim to be a little greener: a recipe for greenwashing. Second, by promoting patterns which, if adopted at scale without any risk analysis or mitigation, are likely to be harmful.


Most crucially we don’t see carbon-aware software meaningfully addressing the elephant in the room. The environmental challenge of computing is not primarily one of energy optimization but of energy demand.


For the best part of a century, the amount of electricity consumed by the same computing job has become exponentially smaller. This should in theory mean the technology sector is greener than it has ever been. But these extraordinary gains in efficiency have been dwarfed by the increases in computing electricity demand.





Carbon-aware computing is a novel form of optimization. It seeks to carry out essentially the same compute using less fossil-based electricity, by targeting more renewable energy. But any gains from such optimization will be meaningless if our electricity demand grows faster than our optimization gains.


We think there is a way to reframe carbon-aware computing to address both optimization and demand, and not merely make cosmetic improvements to business as usual. The damage of runaway climate change to populations around the world demands that we do better, and we believe the tech sector is well-resourced enough to tackle this in a meaningful way.


How can we make carbon-aware computing work?

There are two ways in which the logic of the carbon-aware approach can indeed reduce emissions.


First Approach: Time-shifting or location-shifting compute to when demand is naturally low and then using electricity that would otherwise be curtailed. This is very close to the current approach, but it prioritizes electricity demand over electricity mix.


Second Approach: Having computing jobs run on renewable electricity that is additive to the grid. The shortest authoritative summary of this reasoning is from a White House investigation into crypto mining (see page 24). The relevant part says:


"There are two primary ways.... using grid electricity would result in zero direct GHG emissions:

  1. constructing or contracting for new clean electricity sources or
  2. using existing renewable electricity that would otherwise be curtailed by the grid.

When... electricity [comes] from existing renewable sources, it displaces the GHG emissions in the near-term, shifting users of renewable sources to fossil fuel sources. This is because coal and natural gas often supply electricity generation for each additional unit of electricity demanded in the United States. As the amount of renewable sources is held constant, but electricity demand increases, additional fossil power will likely be dispatched. This displacement results in no net change or in increases in total global emissions through a process called leakage.”


Based on the above, we have 3 proposals for a new approach to carbon-aware computing, to maximize its positive impacts and mitigate its risks, two of which we outline in this section.


5. Proposals for responsible carbon-aware computing

Proposal 1: Prioritise demand intensity above carbon intensity, and only target stable grids

Low demand times are most likely to coincide with times of excess renewable energy, which would otherwise be curtailed, which is to say wasted, to maintain grid stability. This is precisely the scenario where time-shifting and location-shifting actually translate into emissions reductions from computing. Our compute runs on renewable electricity no one else will use, and thus will not generate direct emissions.


As we explored in what software engineers need to know about how the grid works, targeting low demand times has intrinsic environmental benefits independently of how much of the grid is running on renewables. It can play a part in helping the grid avoid ramp-ups/downs and contribute to grid stability, both of which have environmental, social, and economic benefits.

If we schedule our computing based on grid demand in a highly predictable, stable fashion we don’t create unpredictable daily spikes and we maximize the chances of running on otherwise curtailed renewable energy and actually reducing our emissions.


How is this different from the current prevailing approach of targeting low carbon-intensity times in the grid?

As an example, an area with strong solar infrastructure might have a greener energy mix during sunnier, hotter periods of the day. That is also when people might be at work, so you would have both a greener mix and medium demand. At this time, solar energy will be fully utilized, and there will be no excess/curtailment. A carbon intensity API might suggest 11 am is a good time to run your compute, but doing so will not reduce emissions at all. It might make no difference, or, if the electricity demand from compute jobs responding to that API is large enough at 11 am, the likelihood of requiring additional fossil fuels is much greater, meaning you are adding emissions.


Furthermore, because renewable energy supply, unlike electricity demand, is so unpredictable, timing a lot of compute to trigger when grid carbon intensity is low will add unpredictability to the grid, risking instability and hugely increasing the chances of perverse environmental, social, and economic effects.


This is to say there is no obvious scenario where targeting low demand times is not positive for the environment, but there are many scenarios where targeting grid carbon intensity will be ineffective or harmful.


A demand-first approach is not incompatible with current carbon-aware approaches and tooling.

Once we have prioritized low-demand times, we can still use existing APIs or data sources to target low carbon-intensity triggers.


In this scenario, our compute jobs would never run at 11 am, even if grid carbon intensity is low, because we would know the chances of curtailment are remote. But they might run at 4 am during a windy storm, and not at 5 am when the winds have calmed, further maximizing the likelihood of running on otherwise curtailed energy and reducing our emissions.


These approaches are not incompatible. What if we first looked for grids that currently have low demand AND then sought those with a period of naturally high renewable electricity production?
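The demand-first selection just described can be sketched as a simple filter-then-rank step. The forecast data, field names, and threshold below are all illustrative assumptions, not a real grid API; the point is the ordering: low demand first, then the greenest of the remaining slots.

```python
forecast = [
    # (hour, expected demand as a fraction of peak, renewable share of mix)
    (4,  0.35, 0.70),   # windy early morning: low demand, high renewables
    (5,  0.36, 0.30),   # winds have calmed
    (11, 0.80, 0.60),   # sunny midday: green mix, but demand is high
    (23, 0.40, 0.25),
]

LOW_DEMAND_CUTOFF = 0.5  # hypothetical threshold

def pick_slot(forecast):
    """Filter to low-demand hours first, then pick the greenest of those."""
    low_demand = [f for f in forecast if f[1] <= LOW_DEMAND_CUTOFF]
    if not low_demand:
        return None  # no safe slot today: run on the usual fixed schedule
    return max(low_demand, key=lambda f: f[2])[0]

print(pick_slot(forecast))
```

Here 11 am is never chosen, despite its 60% renewable mix, because demand is high and curtailment is unlikely; 4 am wins over 5 am because the wind is still blowing. A carbon-intensity-first scheduler would have picked 11 am.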


Warning Labels Remain

The above has merit when happening at a relatively small scale. But what if everyone does this at the same time? Then we still have the problem of creating demand spikes, one of our core worries about the current approaches. Whether just time-shifting or also location-shifting, at scale this low-demand-first approach is dramatically safer than the current one, but it still carries risks that must be assessed and mitigated.


A call for innovation

Tackling the challenges of large-scale, carbon-aware computing carries risks, but also opportunities. The current stage is experimental, fragmented, and dispersed. But there is room to take the approach even further and imagine a long-term goal. Let's make it standard that our compute jobs, and their underlying infrastructure, interface with grids in a systemic way and become part of the solution rather than the problem. These ideas fall into the realm of demand management, which we touched on in what software engineers need to know about how the grid works.


There are many experiments ongoing in this area, some at a significant scale, but we need a more holistic vision at the policy, business, technical, operational, and infrastructural level of what is possible, what is necessary, and what it should look like. By interacting with grid management systems, ideally, in an automated, collaborative, and democratic way, we could harness the synergies between the demand management challenges of scaling up renewable electricity and reducing the emissions from compute.


Democratic here is key, as we all have a stake in, are, and will be impacted by these interactions. This can’t just be the realm of the Big Tech players. We all need a chance to participate through open-source standards, protocols, and public engagement and participation.

We explore these ideas further in addressing the elephant in the room.


Proposal 2: Run compute on truly additive renewable energy

TL;DR:

To be in any way effective, computing must target green energy sources that are in fact additive, and transparently address and mitigate the risks of perverse effects.


There are two common ways to run compute on additive renewable energy.


Quick reference – Additive renewable energy

“Additive” or “additional” renewable electricity means your purchase is financing new renewable electricity that would otherwise not exist. Related is applying the principle of "additionality" to renewable energy generation, particularly in carbon markets.


If your compute consumes 50 terawatt-hours of electricity and you pay for new solar panels that generate 50 terawatt-hours of electricity, you achieve additionality. You can claim, in theory, that your compute is emissions neutral. In practice it’s less clear-cut, but this is the general idea.


Traditional carbon markets often sell ‘carbon credits’ based on already existing renewable electricity. In this scenario there is no additionality. You are merely claiming the existing renewable energy production as yours and giving responsibility for the existing dirty energy production to someone else. This is not reducing emissions at all.


Power Purchase Agreements (PPAs) and Renewable Energy Certificates (RECs)

The primary way that many organizations tackle this is through carbon markets. These in turn sell two main instruments: Renewable Energy Certificates (RECs) and Power Purchase Agreements (PPAs).


This remains a highly problematic approach. Why? Because the vast majority of RECs are non-additive.


They enable you to buy into the existing green energy mix and simply take credit for its contribution. But you have zero effect on what’s called ‘emissionality’, which has similarities to the displacement effect.


Quick reference – Emissionality

New renewable energy projects don’t always pull emissions out of the atmosphere. The reason they help is because they displace fossil fuel power plants that would otherwise keep polluting.


But which projects are effective? That can vary greatly from project to project, as well as with the fuel mix of the grid to which the project will be connected. For example, adding one more solar power purchase agreement (PPA) in California increasingly reduces output from a mix of natural gas plants and existing solar farms. But adding a new wind PPA in Wyoming nearly always reduces output at a coal plant, avoiding more emissions. This practice of comparing and acting on the avoided emissions of different renewable energy projects is called “emissionality.”


You can read more on this by WattTime, who popularised the emissionality term.
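As a rough illustration of comparing projects by avoided emissions, here is a hedged Python sketch. The marginal emissions rates and project names are invented placeholders, not real WattTime data:

```python
def avoided_emissions_tonnes(annual_mwh, marginal_emissions_kg_per_mwh):
    """Avoided emissions = energy displaced x marginal emissions rate of
    the generation that the new project pushes off the grid."""
    return annual_mwh * marginal_emissions_kg_per_mwh / 1000  # kg -> tonnes

projects = {
    # Same annual output, very different grids (figures are illustrative).
    "solar_california": avoided_emissions_tonnes(100_000, 350),  # displaces gas + existing solar
    "wind_wyoming":     avoided_emissions_tonnes(100_000, 950),  # displaces coal
}
best = max(projects, key=projects.get)
print(best, projects[best])  # → wind_wyoming 95000.0
```

The takeaway mirrors the WattTime example above: identical megawatt-hours can avoid very different amounts of emissions depending on what the grid would otherwise burn.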


PPAs are commonly employed throughout the business world, especially in data centres. Corporate purchasers make agreements with energy companies promising to purchase the power and RECs generated by the renewable project for a specific time period, often the next 10-15 years.


While PPAs are often touted as a critical mechanism responsible for a company’s green credentials and central to its ESG strategy, they can be misleading. Even if PPAs are attributed to particular renewable projects, they do not generally directly power data centres. In other words, just because green electrons are being produced, does not mean those electrons are directly powering the compute within a data centre – despite often being touted as so. There is also the risk of double counting.


Therefore the best implementation of carbon markets involves ensuring that the renewable energy you purchase is additive.


Direct renewable electricity: integrating distributed compute and distributed electricity

A second, rarer form of additionality exists, and it is far more effective.


Instead of purchasing some remote renewable infrastructure and “accounting” your claim to being powered by renewables, actually power your compute directly from your renewable sources.


If your compute is being directly powered by your own solar panels or wind turbines etc., there is no sleight of hand or complex statistical projections. Your compute is effectively off-grid to the extent that it is directly powered by your renewable sources.


While preferable in terms of emissions, this approach is challenging to scale and risks perverse effects, as we get into below. Hyperscale computing today is concentrated in massive data centres. Powering such enormous compute directly requires renewable generation facilities taking up massive amounts of land and water, on top of the already huge land occupied by the data centres themselves. While this can genuinely reduce compute emissions from hyperscalers, there are usually wider environmental, social, and economic impacts, aside from the logistics involved.



As an example, one such project is underway in Zaragoza, Spain. A 40,000 square-metre data centre will be supplied by two solar farms. Just one of these two farms, accounting for 90MW, will span 232 gross hectares (2.3m square metres), roughly the size of Central Park in New York. It will take up land rich in biodiversity which, provisions notwithstanding, it seems set to damage, including endangered species of both animals and trees. Similarly, the recent data centre built by Google in Chile is double that size, extracts 169 litres of water per second from the local area, and would require close to 10 million square metres of solar to be powered directly.


Local populations are already feeling the impact of the huge expansion of data centres. There’s a movement forming calling for a moratorium on data center construction. It’s happening globally – in Ireland, The Netherlands, and Singapore. The resistance is not just around electricity consumption. Water use is a huge issue too. Local populations in New Mexico, USA, Uruguay, and Chile continue to be at the forefront of the struggle over resource use.


However, hyperscale is not the only model, and it doesn’t have to be the inevitable future.


Most computing today is highly distributed or distributable. There are experiments with co-locating compute (cryptocurrency specifically) where renewable generation already exists. This ensures compute is directly powered by renewable energy, and plays a role in renewable electricity demand management. This too carries the risk of perverse incentive effects, but with the right guardrails it could be a significant paradigm to expand and explore.


There are a number of examples of compute taking a different form factor than a giant, warehouse-scale computer. The Energy Onion by David Sykes of Octopus Energy presents a way of thinking about energy that centers on variable renewable energy and efficiency.


You can also argue that it is possible to have the convenience of cloud computing without hyperscale data centres. Companies like Oxide, with their cloud computer, aim to take the things we associate with cloud providers (ease of use) and make them available without the massive buildings. Another example is serverless companies experimenting with Deep Green data centres.


Integrating with distributed electricity generation

Mainstream electricity generation, akin to hyperscale compute, tends to be concentrated in massive power plants. Renewable electricity infrastructure, however, makes distributed energy generation possible. Instead of electricity being generated in a few massive central nodes, significant amounts could be generated in a large number of widely distributed, smaller nodes and microgrids. Distributed renewable energy is of particular relevance to the Global South, and with battery storage there is growing momentum around its dramatic expansion. The highest profile initiative in this area is probably the Global Energy Alliance for People and Planet (GEAPP), launched at COP26 with an expected $100 billion to be invested in distributed renewable energy in the Global South.


The idea of matching distributed renewable energy generation with distributed computing has been floated and holds promise. This not only allows the off-grid powering of compute, but also expands the possibilities of dual use. For example, distributed data centre servers can be used simultaneously for compute and for heating, reducing the energy currently spent on indoor heating.


6. The elephant in the room: rising computing demand



As we discussed in When does carbon-aware software make sense?, the key challenge for greening computing is not optimization but electricity demand. We think that carbon-aware computing, if it is to fulfill its potential and promise, needs to directly engage with this reality.


Our refined carbon-aware proposals won’t gain us much if we are not also tackling the big question: how much of the world’s resources is it acceptable for tech to use?


There is a danger that the key takeaway from proposals 1 and 2 is that if we build and run electricity and data centres more innovatively, we can safely continue with business as usual. We can safely build massive AI products, keep growing our data centres and enjoy the benefits of limitless personal compute potential, as long as we are targeting the growing renewable energy resources at low-demand times.


70% of all electricity still comes from fossil fuels, a share projected to fall to 65% by 2025. This is encouraging, but there is no short- or medium-term scenario in which targeting curtailed renewable energy could power our global compute. Nor is there a scenario where additive purchases or direct renewable provision could grow fast enough to catch up and keep up with rising compute demand in time to significantly affect our global warming trajectory.


It cannot be overstated that one of the biggest shifts required to reduce carbon emissions in line with the Paris Agreement is accepting that we cannot continue to grow everything without some constraints. At least not in the short term, while we wildly exceed the world’s carbon budgets and need to drastically reduce our emissions. The need to manage growth would remain even if we switched entirely to renewables: we would run out of the minerals and metals we would need to keep up with current energy demand growth rates.


Proposal 3: Demand shaping computing electricity use so it stays within agreed resource use boundaries


TL;DR: The core question that should be on all responsible technologists’ minds: is my compute’s net electricity demand reducing, or at least slowing its rate of increase? This is a question that can be addressed at the individual, company, national and international level.


The global emissions picture

The tech industry is caught between the commercial imperative for growth, and the business as well as global costs and risks of accelerating global warming. Beyond the polarities of growth vs degrowth, what must surely be accepted is that unlimited growth is unviable for our industry and for our planet. Whatever the boundaries of the debate, we need to accept there should be limits to the net resources consumed by our sector, not just to how energy-efficiently we consume them.


“The current emissions from computing are about 2% of the world total but are projected to rise steeply over the next two decades. By 2040 emissions from computing alone will be more than half the emissions level acceptable to keep global warming below 1.5°C. This growth in computing emissions is unsustainable: it would make it virtually impossible to meet the emissions warming limit. Plus, the emissions from the production of computing devices far exceed the emissions from operating them. So, even if software is more energy efficient, producing more of them will make the emissions problem worse.”

Low carbon and sustainable computing, by Professor Wim Vanderbauwhede


Two models created for this article by Professor Vanderbauwhede show that the real problem our planet faces is not how we optimise our compute via patterns like carbon-aware computing, but how we change the alarming growth trend in computing-driven electricity demand.


The first model shows that because carbon-aware computing does not assume energy demand reduction, only greener compute whatever the demand, it will hardly slow down our race to planetary tipping points.


Business as usual (BAU) would mean an 800% increase in computing-related electricity demand by 2040, and a 310% increase in our sector's emissions by 2040 - most of the planet's carbon budget.


At current computing demand growth rates, implementing carbon-aware computing with the adjustments we propose would still see computing-related emissions rise by 280% by 2040. Every single reduction counts and buys us days, months, years before irreversible milestones, so that 20% difference matters. But it still spells disaster. Like placing a bandaid on a serious wound.


In contrast, demand reduction has an outsized effect. If our sector continued to grow, but managed to limit that growth to 26% between now and 2040, our computing-related emissions that year would be 50% of what they are today, accounting for the rise in renewables. With our proposed carbon-aware improvements, the emissions savings would be 56%. In this scenario, our improved carbon-aware computing might be not a bandaid, but one of the ingredients of a genuine solution to our sector's and our planet's environmental challenge.
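The arithmetic behind this contrast can be sketched in a few lines. The growth and intensity figures below are illustrative assumptions loosely echoing the scenarios above, not Professor Vanderbauwhede's actual model parameters:

```python
def emissions_index(demand_growth_pct, intensity_change_pct):
    """Relative emissions versus today (1.0 = unchanged):
    emissions scale with demand times grid carbon intensity."""
    return (1 + demand_growth_pct / 100) * (1 + intensity_change_pct / 100)

# BAU-style demand explosion: even a large cut in grid carbon
# intensity is swamped by demand growth.
print(round(emissions_index(800, -55), 3))  # → 4.05, roughly 4x today's emissions

# Constrained growth (26%) with the same intensity decline
# brings emissions well below today's level.
print(round(emissions_index(26, -55), 3))   # → 0.567
```

The structural point survives any choice of placeholder numbers: carbon-intensity improvements multiply against demand, so unchecked demand growth dominates the product.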


This goes back to our call for more holistic, long term, systemic thinking on the relationship between computing and the increasingly decarbonising energy grid, where demand response mechanisms will become increasingly necessary, as well as the opportunities of distributed computing and distributed energy systems to at least complement the predominant centralised model of massive data centres and power plants.


Fundamentally, what we are calling for is for some innovative backcasting, that can translate the global carbon budget into a tech energy budget, over a critical timeline, and identify the innovations, integrations and optimisations needed to not just operate but thrive in those scenarios.


Carbon aware computing implies a wider vision of the relationship between emissions and electricity demand, consumption, generation and management. The event-driven architecture of our early implementations is a great foundation on which to build. The refinements we offer in proposals 1 and 2 may mitigate the risks and optimise the benefits. But there is also an opportunity to think bigger, into how we implement, expand and evolve such patterns at scale with key energy and policy partners in a way that not just optimises our emissions, but reduces net consumption in a fair and equitable way across all nations. The concept of improved carbon-aware computing could be hugely helpful.


If this article sparks a conversation on what these patterns could look like for data centres, for AI, for blockchains, and indeed for power plants, for renewable energy providers and infrastructure vendors, for investors and regulators, there is no doubt that breakthroughs would follow.


7. Where do we take carbon-aware from here? Introducing grid-aware computing.

If the assumptions made through this post are correct – please do reach out if you have something to add, contributions would be most welcome – we are entirely justified in promoting the next version of carbon-aware computing.


For argument’s sake let’s call it grid-aware computing for now. This would be the version that addresses the realities of what is impactful and what isn’t given the real-world constraints of managing electricity grids and existing with tight global carbon budgets.


Quick reference – Grid-aware computing

The proposed next iteration of carbon-aware computing, which helps developers shift compute in ways that make actual net reductions to the emissions associated with local and global electricity grids. The key approaches are:

  1. Run compute when demand is low, targeting curtailed green electricity in stable grids.
  2. Run compute on additive electricity.
  3. Demand-shape computing electricity use so it stays within agreed resource use boundaries.
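A minimal sketch of how these three checks might combine in code. The field names, thresholds and budget figures are entirely hypothetical assumptions, not a standard or an existing API:

```python
from dataclasses import dataclass

@dataclass
class GridSignal:
    demand_is_low: bool        # 1. low-demand window...
    grid_is_stable: bool       #    ...on a stable grid...
    curtailment_likely: bool   #    ...with likely curtailed green power
    additive_supply: bool      # 2. verifiably additive renewable supply

@dataclass
class DemandBudget:
    used_kwh: float            # 3. stay within an agreed electricity budget
    cap_kwh: float

def should_run(job_kwh, signal, budget):
    """Run only when the job fits the demand budget AND the grid
    conditions satisfy approach 1 or 2."""
    within_budget = budget.used_kwh + job_kwh <= budget.cap_kwh
    green_window = signal.grid_is_stable and signal.demand_is_low and (
        signal.curtailment_likely or signal.additive_supply
    )
    return within_budget and green_window

signal = GridSignal(demand_is_low=True, grid_is_stable=True,
                    curtailment_likely=True, additive_supply=False)
budget = DemandBudget(used_kwh=900, cap_kwh=1000)
print(should_run(50, signal, budget))   # → True
print(should_run(200, signal, budget))  # → False: would exceed the budget
```

Note the asymmetry: a green grid window never overrides the demand budget. That ordering is the whole point of the third approach.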

Grid-aware computing: avoiding the greenwashing trap

This blog has, above all, identified that the version of “carbon-aware computing” currently presented, promoted, and increasingly marketed by more and more Big Tech companies is not actually a trustworthy contribution to reducing the environmental impact of computing. On the contrary, we argue that it is mostly ineffective and full of unacknowledged risks. This is not a judgement of intent. Whether implemented in good faith or not, the effect is to signal a green step forward which, we think, in most cases is not a step at all, and in some cases is not green.


If we think of our three proposals for grid-aware computing (GAC) in relation to business as usual, including current carbon-aware computing (CAC), this is what we envision:




Endorsing the current carbon-aware paradigm without question, verification or risk analysis opens the door to a technically subtle and dangerous new wave of greenwashing. There is still time to inject caution and nuance into carbon-aware discourse, and more crucially, into its implementations.


This is not to discredit current efforts, but to de-risk them and improve them, before the current concept, without warning labels or risk mitigations, gains enough traction to add brand value and scale up, without guardrails. By then it will be too late and we will learn the consequences through hindsight.


As of now, whenever you read “we have made this app carbon aware” or “we timed this compute job to when the grid is greenest”, unless there is actual evidence of impact, assume that the announcement will make little to no positive difference to emissions. And if the implementation really scales, consider that it is likely to damage both the climate and grid stability and access, with all the economic and social consequences that entails.


We have done our best to outline a constructive, more careful approach, building on what’s already there with an eye to what lies ahead. Our hope is we can capture the current desire to make software more carbon-aware but make it more effective, drastically reducing its risks, and significantly increasing the likelihood of climate benefits.


We’ve named this approach “grid-aware computing” to emphasise that what matters is our overall systemic impact on the grid, rather than the carbon intensity metrics at any given time, or the emissions of any given compute job. So let us by all means embrace, experiment and innovate with our proposals 1 and 2 for improved carbon-aware computing: it is potentially useful and impactful. But in doing so, let’s not make the automatic assumption that we’re prioritising the right work.


The grid-aware approach means that we should never let carbon aware implementations of specific compute tasks distract us from the central, constant question at the heart of our third proposal: is our compute’s net electricity demand reducing?


8. What can you do to help?

Big Tech is listening to us, and this is an inflection point right now.


We have the opportunity and responsibility to shape corporate discourse and action around carbon-aware computing in a responsible direction that will reduce emissions in a noticeable way.


You can do this by:

  • Sharing this article with practitioners of carbon-aware approaches to educate and inform.

  • Contributing to this content by raising issues or suggesting edits in the relevant ClimateAction.tech GitHub repo;

  • Communicating the ideas and issues presented in this post to your work community and relevant stakeholders and networks;

  • Conducting further research and case studies into the dangers, mitigations and improvements of current carbon-aware approaches, and sharing those;

  • Building upon the initial concepts of grid-aware software through research, prototypes, case studies, or feedback; and

  • Joining ClimateAction.tech to network your efforts with peers and colleagues.


The choice is yours. The time is now.


Note: This article builds on an open source article series written by Hannah Smith and Ismael Velasco and hosted at ClimateAction.tech. The reviews and contributions of Michael J. Oghia, Fershad Irani, Wim Vanderbauwhede and additional informal input by Phillip Jenner and Chris Adams are gratefully acknowledged.