AI, energy demand, and regulators: A conversation with Le Xie

Harvard professor says energy regulators must get creative as computing demands from artificial intelligence strain the grid.
Oct 23, 2024

The widespread adoption of energy-hungry AI tools began less than two years ago with the release of ChatGPT, initially powered by the GPT-3.5 model. Now a week can hardly pass without artificial intelligence further embedding itself into our lives.

The technology has implications for the economy, national security, and the climate. As big tech platforms build AI into their flagship products, data centers are using ever more electricity – right as the world needs to be cutting energy use to prevent the worst effects of climate change. 

A coal-fired power plant in Nebraska, for example, has stayed open past its planned retirement date to serve data centers owned by Google and Facebook-parent Meta. Indeed, Google recently attributed an emissions spike to the impact of AI on energy consumption. Renewables are spreading, and Google and Microsoft are investing in carbon-free nuclear, but Wall Street investors are betting on new gas plants.

The International Energy Agency estimates that data centers will account for 6 percent of American power demand by 2026, up from 4 percent in 2022. Barclays, a bank, sees data center demand almost tripling by 2030.

To put these dizzying developments in context, the Salata Institute spoke with Le Xie, the Gordon McKay Professor of Electrical Engineering, who joined the faculty of the Harvard John A. Paulson School of Engineering and Applied Sciences this semester. Xie has published extensively on sustainable electrification and the role of the electric grid in addressing climate change.

He emphasized the role of regulators, stressed the need to overcome delays in building new transmission lines, and explained the potential for AI to optimize grid operations.

How can policymakers address the growth of energy consumption amid the urgent need to decarbonize?

The electricity sector is one of the largest sources of carbon emissions. Yet, to clean up a lot of other sectors, like heating and transportation, the most viable solution is to electrify them. Therefore, addressing climate change requires the electrification of many other sectors.

To do this, we need to integrate more and more renewables into the electric grid, and that requires more transmission lines to move energy from regions rich in renewables to those with fewer renewable resources. However, building a new transmission line in the United States takes 10 to 12 years on average. Compare that with leading AI chipmaker Nvidia, which updates its AI chips and their software every few months. It is untenable for an industry with such fast-evolving dynamics to be constrained by a regulatory framework that evolves over decades. So, regulatory reform should be an important starting point.

Texas offers an interesting case study in this regard. The majority of Texas’s grid is managed by the Electric Reliability Council of Texas (ERCOT) and operates independently of the power grids that span the eastern and western United States. This independence means that Texas generally does not fall under federal jurisdiction. Unhindered by the extensive regulatory constraints that grids in other regions typically face, Texas built the Competitive Renewable Energy Zones (CREZ) project in the late 2000s and early 2010s to bring gigawatts of abundant wind energy from West Texas to load centers in the eastern part of the state. Elsewhere in the U.S., we will need a federal-state-local coordinated regulatory framework to accelerate the buildout of many more such projects.

What about technical innovations? Can anything be done to improve carrying capacity on existing power lines elsewhere?

Yes, though it ultimately comes back to regulations, too.

Consider a transmission line that is experiencing significant congestion. The throughput of this line could be enhanced by some engineering innovations – for example, integrating storage at both ends to control and dispatch the flow.

However, this approach runs into a significant regulatory barrier. In most regions, transmission companies cannot hold storage in their asset portfolios solely for arbitrage, though they may be allowed to own some energy storage for reliability purposes. Because of the dual nature of storage, where improving reliability often enables arbitrage as well, one can always ask: Is the expanded storage capacity being used for profit-making or for reliability? This represents another subtle policy challenge.

AI proponents often say the technology will help us overcome demand issues. Can you explain how that might be the case?

Given the constraints of the current electric grid infrastructure, and the stark reality that few major new transmission lines will come online in the next 10 years, our best bet is to use all power grid resources as efficiently as possible. And AI holds significant potential in this area.

For example, since seeing firsthand the devastating effects of Winter Storm Uri in 2021 in Texas, where I lived, we have been doing a lot of research into the potential of flexibility on the demand side.

During such periods of peak demand or natural disasters, consumers must adopt a responsive and flexible approach. For example, during events like Uri, one could use smart grid technologies, with AI moderating heating levels slightly, to ensure everyone has access to heat when they need it most. Is it necessary to do laundry when the neighboring town has no power? The key idea is to offer incentives and automation to those who can postpone their energy usage by a day.

Second, we must intelligently incorporate all kinds of energy demands and offer consumers choices so that the grid can match these with fluctuating renewable generation. The traditional philosophy was to allow supply to follow demand, which was feasible because the supply side was fully controllable. We could burn as much coal or gas as demand required at our power plants. However, if a significant portion of our portfolio is wind and solar, how can we control these resources? The only solution is to let demand move in tandem with supply. The key is to have a just-in-time balance of supply and demand, facilitated by a smart grid and reliable energy storage – batteries.
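As a toy illustration of the "demand follows supply" idea (not from the interview, and with invented numbers), deferrable load can be greedily shifted into the hours with the most surplus renewable generation:

```python
# Toy illustration: shift deferrable demand into hours of surplus
# renewable generation. All numbers are hypothetical.
renewable = [5, 9, 2, 8]   # forecast renewable output per hour (MW)
firm_load = [4, 4, 4, 4]   # demand that cannot be moved (MW)
deferrable = 6             # flexible energy (MWh) that can run in any hour

# Surplus renewable energy in each hour after serving firm load
surplus = [max(g - d, 0) for g, d in zip(renewable, firm_load)]

# Greedily place the deferrable load in the hours with the most surplus
schedule = [0.0] * len(surplus)
for hour in sorted(range(len(surplus)), key=lambda h: -surplus[h]):
    take = min(deferrable, surplus[hour])
    schedule[hour] = take
    deferrable -= take
    if deferrable <= 0:
        break

print(schedule)  # flexible MWh assigned to each hour
```

A real system would layer forecasting, prices, and consumer constraints on top, but the core matching step looks like this greedy assignment.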

Can AI address load issues at the grid level?

We need to address the push and pull between engineering and policy. For example, as we introduce more and more variable renewable energy resources into our power grids, these resources can induce new issues such as sub-synchronous oscillations – essentially swinging power flow on transmission lines. As a result, transmission throughput must be significantly lowered out of stability concerns. Consider this analogy: constructing an eight-lane highway but, because of inebriated drivers, deeming only two lanes safe for use. This represents a substantial underutilization of infrastructure, in which only 25 percent of the power line’s capacity can be utilized.

While there is a need to construct additional lines, we could focus on optimizing the operation of the existing grid. For example, you can use AI to identify the sources of oscillations – the inebriated drivers in our analogy – and make sure their behavior is controlled.
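One simple building block for such oscillation monitoring (a sketch of a standard signal-processing approach, not the specific method discussed in the interview) is to estimate the dominant oscillation frequency in a measured line flow with a discrete Fourier transform. The signal below is synthetic:

```python
# Toy sketch: find the dominant oscillation frequency in a synthetic
# power-flow measurement via the discrete Fourier transform.
# Signal parameters are invented for illustration.
import math

SAMPLE_RATE = 100.0   # samples per second
N = 500               # five seconds of data

# Synthetic line-flow signal: steady 500 MW flow plus a 12 Hz
# sub-synchronous oscillation (below the 60 Hz grid frequency).
signal = [500.0 + 20.0 * math.sin(2 * math.pi * 12.0 * n / SAMPLE_RATE)
          for n in range(N)]

# Remove the steady component so the DFT peak reflects the oscillation
mean = sum(signal) / N
detrended = [s - mean for s in signal]

def dft_magnitude(x, k):
    """Magnitude of the k-th DFT bin, computed directly."""
    re = sum(x[n] * math.cos(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    im = sum(x[n] * math.sin(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    return math.hypot(re, im)

# Scan bins up to the Nyquist frequency and pick the strongest one
mags = [dft_magnitude(detrended, k) for k in range(1, N // 2)]
peak_bin = max(range(len(mags)), key=mags.__getitem__) + 1
peak_hz = peak_bin * SAMPLE_RATE / N
print(f"dominant oscillation near {peak_hz:.1f} Hz")
```

Locating which generators inject that frequency, and damping them, is the harder control problem where machine learning methods are being explored.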

AI advocates say the technology can make us more resilient to climate change. Do you have any examples?

We just published a paper investigating how AI could be used in wildfire risk identification. In Southern California, the utilities possess extensive imagery of past wildfires detailing their spread and affected areas. These utilities are tasked with devising mitigation plans aimed at deactivating key power lines that risk sparking fires while impacting as few customers as possible – a practice known as Public Safety Power Shutoff. Given that they cover numerous regions with a vast customer base, prioritizing shut-off efforts becomes crucial. GPT-type tools can analyze all this data and deliver a comprehensive plan to utility engineers, indicating specific areas to de-energize, along with crew travel plans and areas of targeted focus, before a fire occurs.
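Purely as an illustrative sketch (not the utilities' actual methodology, and with invented line names and numbers), the prioritization step can be thought of as ranking candidate segments by fire risk relative to customers affected:

```python
# Toy Public Safety Power Shutoff prioritization: rank line segments
# by wildfire-risk score per customer affected. All data invented.
segments = [
    {"line": "A", "fire_risk": 0.9, "customers": 1200},
    {"line": "B", "fire_risk": 0.4, "customers": 300},
    {"line": "C", "fire_risk": 0.8, "customers": 15000},
]

def priority(seg):
    # Highest risk per customer impacted gets de-energized first
    return seg["fire_risk"] / seg["customers"]

plan = sorted(segments, key=priority, reverse=True)
print([s["line"] for s in plan])
```

Real plans weigh many more factors (weather forecasts, vegetation, critical facilities), which is where large models analyzing imagery and records come in.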

The tech giants are rolling out all sorts of new AI features. For example, if I google the temperature to bake chocolate chip cookies, I’ll first see an AI-generated answer before the links to sources, which used to appear at the top. But maybe I don’t want that, because I know that AI uses significantly more energy than other computing applications. How concerned should we be about this shift?

Currently, it is generally estimated that where a typical Google search consumes one unit of electricity, an AI-powered search consumes about 10 times more. However, energy consumption per computation is decreasing at a significant rate. Additionally, there are two pieces of exciting new research on this here at Harvard.

Number one, there is considerable potential to make machine learning algorithms more efficient. Currently, we rely on generic architectures known as deep neural networks. These architectures could be specialized and tweaked to make training the models a lot more energy efficient.

Number two, at the data center level, many tasks can be adjusted in terms of location and timing. As an example, when you’re running a Zoom call, you don’t want your call to be interrupted. However, when you are training a large language model [the backbone of AI tools such as GPT], it makes sense to be flexible. It is inconsequential whether AI models are trained at 3 a.m. in California or 4 p.m. in Texas – do it when the energy is cheap and available. And in fact, we are already observing similar strategies in cryptocurrency mining: Miners shift operations to follow the energy price.
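That flexible-scheduling idea can be sketched in a few lines: pick the region and hour with the cheapest power for a deferrable training job. The prices here are hypothetical:

```python
# Toy illustration: schedule a deferrable compute job (e.g. model
# training) in the region/hour with the cheapest electricity.
# All prices are invented.
prices = {                    # $/MWh by (region, hour of day)
    ("california", 3): 25.0,
    ("california", 16): 80.0,
    ("texas", 3): 40.0,
    ("texas", 16): 18.0,      # hypothetical afternoon wind surplus
}

best = min(prices, key=prices.get)
region, hour = best
print(f"run job in {region} at hour {hour}: ${prices[best]}/MWh")
```

A production scheduler would also weigh carbon intensity, data-transfer costs, and job deadlines, but the core decision is this lookup over forecast prices.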