
The Structure and Evolution of the Next-generation Electric Grid

By John Guckenheimer

The electric power grid is a quintessential complex system, one whose complexity continues to grow. This article briefly describes the structure and evolution of the grid, a few of the ongoing changes, and some of the problems discussed at a workshop sponsored by the National Academies Committee on Analytic Foundations of the Next Generation Electric Grid (see sidebar below), held in Irvine, California, February 11–13, 2015.

■ ■ ■ 

The first electric power plant in the U.S. began operation in 1882. During the first half of the 20th century, the industry grew to serve the entire country and became a regulated monopoly. The grid emerged with the construction of transmission lines that could deliver power from one utility to others. Low-cost power from, for example, hydroelectric plants could be transmitted over long distances, and by connecting customers to multiple generators, the lines also provided greater reliability. During the 1950s and 60s, the amount of electricity produced and consumed in the U.S. increased by approximately 3% annually. That growth slowed during the energy crisis of the 1970s.

Until the 1990s, the industry was regulated on a state-by-state basis, with rates set to give utilities reasonable profits. To create greater flexibility through electricity markets and to stimulate technical innovation, which had stagnated, the industry was then restructured. In much of the country, entities called “regional transmission organizations” (RTOs) or “independent system operators” (ISOs) were created to establish wholesale markets and assume responsibility for the reliability of the grid. This brought a new level of complexity to the industry. Power shortages in California during the summer of 2000 led to extreme price fluctuations; the problems in California called attention to the need for careful regulation of wholesale electricity markets to protect the interests of consumers.

Electricity is an unusual commodity because it cannot be stored readily. Viewed as an enormous electric circuit, the grid obeys Kirchhoff’s laws for voltage and current. It has been operated with few constraints on loads: Customers have been free at all times to use as much or as little electricity as they desire. In this mode of operation, the production of electricity must be varied in real time to match the loads. Excess generating capacity must be available at almost all times to cope with emergencies or extreme loads (during a summer heat wave, for example).

The RTOs and ISOs conduct daily auctions to select the producers that will be used, with the final unit commitment adjusted in hourly and five-minute increments. These decisions are based on the solution of large mixed-integer optimization problems that take into account constraints, most notably transmission line capacities. State-of-the-art optimization algorithms have become an important tool for system operators, and further algorithmic advances are likely to translate into substantial additional savings. Currently, unit commitment decisions are based on linearized, static approximations of the grid and do not reflect transient dynamics or the alternating voltage and current properties of the power flows.
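
To give a flavor of these optimization problems, the sketch below builds a toy single-period unit-commitment model with made-up generator data, using the open-source PuLP library. It illustrates only the mixed-integer structure described above; the formulations that RTOs and ISOs actually solve add time coupling, ramp limits, reserve requirements, and network constraints.

```python
# A minimal, illustrative unit-commitment sketch with hypothetical data (PuLP).
# Binary "on" variables make this a mixed-integer program, as described in the text.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

demand = 450.0  # MW to be served in this single period (made-up number)
# name: (min MW when on, max MW, marginal cost $/MWh, fixed cost $ if committed)
gens = {
    "coal":   (100.0, 300.0, 20.0, 500.0),
    "gas":    ( 50.0, 200.0, 35.0, 200.0),
    "peaker": ( 10.0, 100.0, 80.0,  50.0),
}

prob = LpProblem("toy_unit_commitment", LpMinimize)
on = {g: LpVariable(f"on_{g}", cat=LpBinary) for g in gens}   # commit unit g?
p = {g: LpVariable(f"p_{g}", lowBound=0) for g in gens}       # MW dispatched

# Objective: fixed commitment costs plus energy costs.
prob += lpSum(gens[g][3] * on[g] + gens[g][2] * p[g] for g in gens)

# Power balance: committed generation must meet demand exactly.
prob += lpSum(p[g] for g in gens) == demand

# Output limits apply only when a unit is committed.
for g, (pmin, pmax, _, _) in gens.items():
    prob += p[g] >= pmin * on[g]
    prob += p[g] <= pmax * on[g]

prob.solve()
for g in gens:
    print(g, int(on[g].value()), round(p[g].value(), 1), "MW")
```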

Electric power generation today is dominated by large power plants, most of which are slow to start or stop. Under these conditions, how is the grid to respond instantaneously to fluctuations in demand or to events like lightning strikes and equipment failures? The answer is embedded in the physics of alternating current: Frequency changes and phase relations between voltages and currents affect the amounts of power delivered. The speed of large spinning dynamos and the “reactive power” of three-phase AC circuits adapt to the changing loads. The system must operate with enough reserve capacity to keep unforeseen events from destabilizing it. When destabilization does occur, protective devices, such as circuit breakers and relays, shed loads or reconfigure the topology of the network to prevent blackouts and/or equipment failures.
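
As a concrete reminder of the quantities involved, the short sketch below computes real and reactive power from voltage and current phasors. The numbers are arbitrary, and the single-phase calculation leaves out the three-phase and dynamic details discussed above.

```python
# Illustrative only: real and reactive power of a single-phase AC circuit,
# computed from voltage and current phasors with made-up RMS values.
import cmath

V = cmath.rect(120.0, 0.0)           # 120 V RMS at phase angle 0
I = cmath.rect(10.0, -cmath.pi / 6)  # 10 A RMS, lagging the voltage by 30 degrees

S = V * I.conjugate()                # complex power S = P + jQ
print(f"real power P     = {S.real:.1f} W")    # power delivered to the load
print(f"reactive power Q = {S.imag:.1f} var")  # power exchanged with fields, not consumed
```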

Satellite image of North America at night. Image courtesy of NASA.

New technologies are having an impact on this control environment. Among the major changes are the following:

(1) The amount of energy that can be obtained from renewable sources (mainly photovoltaic panels and wind turbines) is increasing rapidly. These resources provide intermittent power in both predictable (day/night) and unpredictable (cloud cover) ways, and they lack the inertia of large generators. Both characteristics make the scheduling and unit commitment problems faced by system operators much more challenging. Rooftop solar panels, moreover, can reverse power flows in the distribution system, feeding energy from customers back to the grid.

(2) “Smart grid” refers to technologies that provide feedback from end users to the systems controlling the grid. One way to control loads is to have devices turned off automatically at times of high demand; a toy version of such a rule is sketched after this list. Water heaters, refrigerators, air conditioners, and washing machines all consume large amounts of power but need not run continuously. Even more power is required to charge electric vehicles, but their batteries have the potential to store substantial amounts of energy, which could be fed back into the grid. Clearly, the bidirectional energy flows of smart grids make the control problems for the grid much more complicated. In the current environment, there is a clear interface between high-voltage transmission and local, lower-voltage distribution systems. Smart grids and smaller, highly distributed generation resources blur this separation.

(3) Devices that monitor and measure the state of the grid have improved enormously. Several years ago, a synchrophasor initiative under the American Recovery and Reinvestment Act funded the development of phasor measurement units (PMUs), which provide time-stamped measurements of voltages, currents, phase angles, and other quantities at rates of 30 Hz and higher. More than a thousand of these devices have been installed nationwide, giving qualitatively improved measurements of grid dynamics. We are only beginning to formulate ways in which this information can be used to improve grid performance; one elementary example is sketched after this list. For security reasons, the PMU data will not become public, but mathematicians can certainly help create algorithms for analyzing and using the data.
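
To make the demand-response idea in item (2) concrete, here is a deliberately simplified load-deferral rule. The load names, power ratings, and price threshold are all hypothetical, and real programs involve contracts, forecasts, and rebound effects that this sketch ignores.

```python
# A purely illustrative demand-response rule with hypothetical loads and prices:
# flexible appliances are deferred whenever the wholesale price crosses a threshold.
FLEXIBLE_LOADS = {"water_heater", "ev_charger", "dishwasher"}
PRICE_THRESHOLD = 100.0  # $/MWh; made-up trigger level

def loads_to_run(price_per_mwh: float, requested: dict) -> dict:
    """Return the subset of requested loads (name -> kW) allowed to run now."""
    if price_per_mwh <= PRICE_THRESHOLD:
        return dict(requested)
    # At high prices, defer flexible loads and keep only the inflexible ones.
    return {name: kw for name, kw in requested.items() if name not in FLEXIBLE_LOADS}

print(loads_to_run(140.0, {"lights": 0.3, "water_heater": 4.5, "ev_charger": 7.2}))
# -> {'lights': 0.3}
```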
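
To illustrate item (3), the sketch below performs one elementary computation on a stream of PMU phase-angle reports, namely estimating the local frequency deviation. The data are synthetic, and this is only a toy stand-in for the state-estimation and event-detection algorithms that real synchrophasor applications require.

```python
# A hypothetical example of one simple use of synchrophasor (PMU) data:
# estimating local grid frequency from successive voltage phase-angle samples.
import numpy as np

FS = 30.0         # PMU reporting rate in samples per second (as in the text)
F_NOMINAL = 60.0  # nominal U.S. grid frequency in Hz

def estimate_frequency(angles_rad: np.ndarray) -> np.ndarray:
    """Estimate instantaneous frequency (Hz) from phase angles reported at FS Hz."""
    unwrapped = np.unwrap(angles_rad)    # remove artificial jumps at +/- pi
    dtheta_dt = np.diff(unwrapped) * FS  # rate of change of phase, rad/s
    return F_NOMINAL + dtheta_dt / (2.0 * np.pi)

# Synthetic test: a constant 0.02 Hz under-frequency condition plus angle noise.
t = np.arange(0.0, 10.0, 1.0 / FS)
angles = 2.0 * np.pi * (-0.02) * t + 0.001 * np.random.randn(t.size)
print(np.round(estimate_frequency(angles)[:5], 3))
```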

■ ■ ■

The electric grid is an indispensable part of the national infrastructure, one on which we rely every day. There is a great deal of uncertainty about how the grid will evolve. The mix of generation fuels is changing faster than was anticipated just a few years ago, with the conversion of coal plants to natural gas and the growth of renewable sources. Regulation at the state and national levels plays an enormous role in the construction of new resources, such as transmission lines, and in the operation of electricity markets.

Low-cost, efficient storage devices could greatly simplify the reliability problems of the grid and/or enable the proliferation of decentralized microgrids. In a contrasting scenario, smart grids in which demand response enables operators to control end-user devices would smooth demand peaks and adapt to intermittent generation of power. Greatly improved real-time monitoring of the network could help in the detection and prevention of impending equipment failures and unstable power flows. 

Building on the success of optimization algorithms in the operation of wholesale electricity markets, we need to create interdisciplinary research communities that can facilitate the use of mathematics in creating the enabling technologies for the next-generation electric grid.

Readers may be interested in two earlier reports on these topics.

John Guckenheimer is the Abram R. Bullis Professor in Mathematics at Cornell University.


Building a Multidisciplinary Community

The ad hoc National Academies Committee on Analytic Foundations of the Next Generation Electric Grid, with 15 members drawn from mathematics and engineering, is co-chaired by John Guckenheimer (Cornell) and Tom Overbye (UIUC). The committee’s charge is to produce a report that addresses the following questions:

(1) What are the critical areas of mathematical and computational research that must be addressed for the next-generation electric transmission and distribution (power grid) system? Do current research efforts in these areas (including non-U.S. efforts) need to be adjusted or augmented? 

(2) How can the U.S. Department of Energy help build the multidisciplinary community––including cutting-edge knowledge of mathematics, statistics, and computation, along with a deep understanding of the emerging electric grid and of the questions that need to be answered if its potential is to be realized––needed to address this research frontier? What mix of backgrounds is needed, and how can the community be developed? How can DOE extend its reach beyond its existing ties?
