Google Cloud Carbon Footprint


Today's talk is about Google Cloud's recent announcement, "Google Carbon Footprint", which shows customers the carbon emissions associated with their use of the Google Cloud Platform. Our approach to the calculation is called "energy-based apportionment of shared IT infrastructure", because the cloud is an infrastructure shared across many users. How do we split the total energy use, and the resulting carbon emissions, across customers?

Sustainability Strategy at Google: how do we reduce the carbon emissions of our operations?

The first pillar is energy efficiency, because the greenest energy is the energy you don't use. The more efficient you are, the less energy you use for a given workload, and the better it is for the planet. Second, decarbonized energy should be used as much as possible. And finally, as a last resort, you can use carbon offsets. There is a clear order in this approach: offsets should only be used as a last option.

About the energy efficiency of Google's data centres, the PUE is 1.1 (the data centre overhead relative to the IT hardware used), bearing in mind the industry average is around 1.6 or 1.7 and our customers' is 1.8 or even 2.0. Regarding decarbonized energy, Google has been matching its total electricity consumption globally with renewable energy since 2017. We have 50 projects around the globe, wind farms and solar farms, for a total capacity today of around six gigawatts. Finally, there is a big question about the quality of carbon offsets. That's why Google is not buying credits on the market, preferring to make direct investments into projects that reduce emissions. The reductions made can be measured and audited by third parties. Our main project is collecting methane (a highly potent greenhouse gas) from former landfills, which means we have machines on site and can precisely measure the carbon emission reduction.

Coming back to our approach, Google has been net operational carbon neutral since 2007. We started with carbon offsets because they were the most relevant option at the time, but we also considered that they weren't enough. We invested significantly in renewable energy from 2010 and, from 2017, Google has matched its global annual electricity use with wind and solar purchases. We have now announced a strategy to go further: Net Zero – 24/7 Carbon-Free Energy. At the moment, we are using 100% renewable energy as a global match, meaning our annual global energy consumption is matched by our annual renewable energy purchases, but that doesn't mean it is the case at every point in time and place throughout the year. This is the reason why we have implemented this Net Zero – 24/7 CFE strategy. If we make the calculation today, we are at 67% of this goal, our aim being 100%. Net Zero will apply across the whole value chain, taking our suppliers' emissions into account as well.

What are my levers of action as a Google Cloud customer? 

Google is providing data, region by region, on carbon intensity (the greenhouse gas emitted per kilowatt-hour of electricity produced in a particular location). It all depends on the grid energy mix: some countries rely more on renewable energy, others on nuclear, coal, fuel oil, or gas, so there's a significant difference.

In the grid carbon intensity column, you can see a figure in grams of CO2 per kilowatt-hour, and the difference between regions can be tremendous, reaching a factor of five. This data, therefore, gives you more information for your choice of location.
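The conversion behind these figures can be sketched in a few lines of Python. The region names below are real GCP regions, but the intensity values are made-up placeholders for illustration, not Google's published numbers:

```python
# Illustrative grid carbon intensities in grams of CO2 per kWh.
# These values are hypothetical, chosen only to show the factor-of-five
# spread mentioned above; they are not Google's published figures.
GRID_CARBON_INTENSITY_G_PER_KWH = {
    "europe-north1": 100,    # hypothetical low-carbon grid
    "asia-southeast1": 500,  # hypothetical fossil-heavy grid
}

def emissions_kg_co2(energy_kwh: float, region: str) -> float:
    """Convert electricity use into kg of CO2 for a given region."""
    intensity = GRID_CARBON_INTENSITY_G_PER_KWH[region]  # gCO2 per kWh
    return energy_kwh * intensity / 1000.0               # grams -> kg

# The same 1,000 kWh workload emits five times more CO2 on the
# fossil-heavy grid than on the low-carbon one.
low = emissions_kg_co2(1000, "europe-north1")
high = emissions_kg_co2(1000, "asia-southeast1")
```

The energy use is identical in both cases; only the location changes the footprint, which is exactly why region choice is a lever.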

We also made a tool which, using this data, helps you choose a region based on your criteria, be it price, carbon footprint, or latency (obviously, if you put your workload on the other side of the planet, you will have network latency). We've also marked with a leaf the regions with the lowest CO2 emissions. Upon testing the tool, we found that 90% of all users were more likely to select a low-carbon region when they saw this indicator. Among new users, it was more than 50%. Clearly, giving more data to users impacts their behaviour.
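The trade-off such a region picker resolves can be sketched as a weighted score over the three criteria. This is a minimal illustration of the idea, not Google's actual tool, and every number in it is made up:

```python
# Hypothetical per-region data: relative price, relative carbon
# intensity, and round-trip latency from the user. All values invented.
regions = {
    "europe-west1": {"price": 1.0, "carbon": 0.2, "latency_ms": 20},
    "us-central1":  {"price": 0.9, "carbon": 0.5, "latency_ms": 120},
    "asia-east1":   {"price": 1.1, "carbon": 0.8, "latency_ms": 250},
}

def score(r: dict, w_price=1.0, w_carbon=1.0, w_latency=0.01) -> float:
    # Lower is better on every axis, so the score is a weighted sum;
    # the weights encode how much the user cares about each criterion.
    return (w_price * r["price"]
            + w_carbon * r["carbon"]
            + w_latency * r["latency_ms"])

best = min(regions, key=lambda name: score(regions[name]))
```

Raising `w_carbon` relative to the other weights is the programmatic equivalent of a user paying attention to the leaf marker.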

Finally, last October, we released the GCP Carbon Footprint tool, which gives you emissions based on your actual use of GCP. It's embedded in the GCP console and broken down by project, region, product, and month. You can also export more granular data into BigQuery. As of today, it covers only electricity-related emissions in a location-based approach, which means we don't yet account for renewable energy purchases, but we are working to incorporate all the scopes according to the GHG Protocol carbon reporting standard.

How does that work? 

We've taken an energy-based approach, which measures the actual electricity used by the data centre.

As you might know, Google's infrastructure all runs on an internal container orchestrator named Borg, and it is based on the Borg experience that we made the open-source Kubernetes project, which we donated to the CNCF. Google's infrastructure has been fully container-based for more than 10 years now. Everything is run with a product approach and, behind the scenes, there is a financial flow. In a large-scale microservice architecture, it is necessary to be able to work out the cost of running each service, even though it is composed of many other services within the infrastructure.

We take the foundational energy use of, let's say, Google's filesystem service, whose purpose is to provide filesystem services to all other Google services. We take its total energy use on the machines it runs on and split it across all its users. We then follow this graph up to the final stage, which is the services used by end customers. So finally, we have the apportioned energy use of each final service. Now take, for example, BigQuery, our serverless data warehouse: we have its total energy use over a month or a day, and we can convert it into carbon using the grid carbon intensity. To apportion that to customers, we use revenue. If a customer represents 10% of the revenue of the service, we allocate 10% of the energy use of that service to them. We do this SKU by SKU.
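The final, revenue-based step of this apportionment can be sketched like so. The shape of the calculation follows the description above, but every figure in the example is illustrative, not real Google data:

```python
# Sketch of revenue-based apportionment: a service's total energy use
# is converted to CO2 via grid carbon intensity, then split across
# customers in proportion to the revenue each one represents.
def apportion_emissions(total_energy_kwh: float,
                        grid_intensity_g_per_kwh: float,
                        revenue_by_customer: dict) -> dict:
    total_co2_kg = total_energy_kwh * grid_intensity_g_per_kwh / 1000.0
    total_revenue = sum(revenue_by_customer.values())
    return {
        customer: total_co2_kg * revenue / total_revenue
        for customer, revenue in revenue_by_customer.items()
    }

# Hypothetical month: 10,000 kWh for the service, a 300 gCO2/kWh grid,
# and two customers. The one with 10% of the revenue gets 10% of the
# service's 3,000 kg of CO2.
shares = apportion_emissions(
    total_energy_kwh=10_000,
    grid_intensity_g_per_kwh=300,
    revenue_by_customer={"acme": 10, "globex": 90},
)
```

In the real system this runs per SKU and per region, after the service-to-service energy graph has already been resolved.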

Key takeaways

  • Digital service providers are being challenged by their customers to report their carbon footprint
  • Providing more information to customers impacts their behaviour
  • Carbon footprints are not easily comparable between providers because there is no standardized methodology

Q&A Section

Q: What is the feedback you received from your clients after the announcement of this GCP carbon footprint tool?

A: The request to disclose came from the customers and it filled a need. 

Q: Have you had requests for improvements of this tool?

A: Yes, the first one is to ensure that we cover all scopes. As I mentioned, today we are only covering scope 2, electricity, and we know it's not enough. Customers are also asking for more granularity and for more levers of action. What can they do to reduce their impact? They are asking us to provide automation for moving workloads, or to be able to make an estimate.

Q: Can you explain the scopes in a little more detail? 

A: Scopes are part of the GHG Protocol carbon reporting standard. It's like financial accounting, but instead of counting euros, you count grams of carbon emissions or their equivalent. It is split into three scopes. Scope one is direct emissions (when you drive your car, it emits carbon). Scope two is electricity, where you are indirectly emitting carbon, more or less depending on the grid energy mix (coal emits the most; nuclear, wind, or solar emit less). And in scope three, on top of considering your company's emissions, you also take your suppliers' emissions into account.

Q: Some of your clients are probably already measuring their carbon footprint, have you exchanged with them about the differences in measuring between your and their methods?  

A: The standards that already exist are not designed for shared infrastructures or microservices. You need a lot of data, because you need to know where the workloads are, at what time, and for which services. Most companies can measure at the machine level but don't have the maturity or the data to make the right apportionment.

Vincent Poncet

Principal Architect & Digital Pollution at Google

APIdays | Events | News | Intelligence
