Wednesday, July 31, 2013

Why Utilities have Avoided Disruption Thus Far – Financial Metrics

This is part 3 of a series on disruption of electric utilities.

Disruption of Electric Utilities
1.  Background on Utilities
2.  Why Utilities have Avoided Disruption Thus Far – Reliability
3.  Why Utilities have Avoided Disruption Thus Far – Financial Metrics
4.  Community Choice Aggregation is a Red Herring Disruptor
5.  Distributed Solar is the Real Threat - Trends
6.  Distributed Solar is the Real Threat - The Difficult Position of Utilities
7.  A Survival Strategy for Utilities

-------------------------------------

Across industries, low-end disruptions are among the most common forms of disruptive innovation.  A low-end disruption occurs when a new entrant targets the lowest-margin customers of an incumbent company.  Examples include steel minimills, which first produced low-margin rebar rather than high-margin sheet steel, and Toyota, which entered the US market with the low-margin Toyopet and Corona models before moving upmarket to make Lexus vehicles.  Low-end disruptions occur because incumbent companies do not mind losing their lowest-margin customers, and thus do not waste resources fighting new entrants.  Unfortunately for incumbents, a rational pursuit of short-term profit maximization can lead to longer-term disruption as new entrants gain a foothold in the market and move to higher-margin products over time.

Utilities have been resistant to low-end disruption because they are unique among industries in the way they measure profitability.  Since the utility regulatory structure was established, utilities have been able to collect revenue according to a revenue requirement formula.  The revenue requirement is the total amount that the utility is allowed to collect from customers and, with demand forecasts, is used by regulators to set retail rates for consumers:

Utility revenue requirement = opex + (rate base * rate of return)

Opex is equal to the operations and maintenance costs the utility faces, as well as fuel costs if the utility owns generation assets.  A utility’s rate base is equal to its capital expenditures minus accumulated depreciation.  The utility’s rate of return is an agreed-upon figure, negotiated with state public utilities commissions for distribution assets and with the federal government for transmission assets.
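As a quick sketch, the revenue requirement formula can be computed directly.  All of the figures below are hypothetical, chosen only to illustrate the arithmetic, not drawn from any actual rate case:

```python
def revenue_requirement(opex, capex, accumulated_depreciation, rate_of_return):
    """Revenue requirement = opex + (rate base * rate of return),
    where rate base = capital expenditures - accumulated depreciation."""
    rate_base = capex - accumulated_depreciation
    return opex + rate_base * rate_of_return

# A hypothetical utility with $400M opex, $2B of capital expenditures,
# $500M of accumulated depreciation, and a 10% authorized rate of return:
req = revenue_requirement(400e6, 2000e6, 500e6, 0.10)
print(f"Revenue requirement: ${req/1e6:.0f}M")  # prints "Revenue requirement: $550M"
```

Regulators would then divide this total by forecast demand to arrive at retail rates.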

The unique part about the utility profit formula is that there are no low-margin customers.  It is more expensive to build utility assets to serve customers in rural areas compared to those in dense urban areas, but all utility assets contribute to the rate base.  The costs associated with the more expensive customers are simply redistributed and collected in the rates of all customers.  Under this unique profit formula, a utility will respond aggressively when a new entrant targets any utility customer.

Friday, July 26, 2013

Why Utilities have Avoided Disruption Thus Far – Reliability

This is part 2 of a series on disruption of electric utilities.

Disruption of Electric Utilities
2.  Why Utilities have Avoided Disruption Thus Far – Reliability

-------------------------------------

In the United States, the typical utility consumer experiences loss of power for only 2 hours each year.[1] That means utilities are reliable 99.98% of the time.  Not bad.

Source: Wikimedia Commons
I mentioned previously that I believe utilities have not been disrupted by innovative new entrants because they do an excellent job serving customers.  They offer electric power, as much as needed, whenever needed.  And with a reliability of 99.98%, they offer a product that is hard for other entrants to the electricity market to match.  There may be certain customers who face more frequent interruptions, such as those in rural areas, but the typical utility customer receives a great service.

Despite these impressive metrics, utilities continue to prize reliability above all other measures of performance.  In a 2012 survey of hundreds of utility executives, the top-ranked issue facing the industry was reliability.[2] Furthermore, reliability has been the number 1 or 2 issue in every such annual survey since the surveys began in 2006.[3] Utilities are not perfect at delivering reliable service, but their employees are oriented to respond to customer outages and have been working against the metric of reliability for over 100 years.  Any new market entrant has a difficult task in better addressing this customer need.
 




[1] The 2 hour per year figure takes some rough estimation because EIA does not publish this information.  I come up with 2 hours by looking at data from a table on page 19 of LaCommare & Eto.  Including LaCommare & Eto’s own survey data, we get a median SAIDI of 107 minutes and a median MAIFI of 5.5 events.  If we assume an average momentary interruption of 2.5 minutes for the events in the MAIFI index, we get an average outage time per year per customer of 121 minutes, or 2 hours.  Having said that, the underlying utility surveys are self-reported, and therefore may not include events that a utility is unaware of or neglects to count.  In addition, widespread outages from natural disasters are sometimes not included in the data because SAIDI and MAIFI are meant to measure routine events.
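The footnote's arithmetic can be sketched as follows; the SAIDI and MAIFI figures come from LaCommare & Eto, and the 2.5-minute momentary outage length is the assumption noted above:

```python
saidi_minutes = 107              # median sustained-outage minutes per customer per year
maifi_events = 5.5               # median momentary interruptions per customer per year
assumed_momentary_minutes = 2.5  # assumed average length of a momentary interruption

total_minutes = saidi_minutes + maifi_events * assumed_momentary_minutes
availability = 1 - total_minutes / (365 * 24 * 60)

print(f"{total_minutes:.0f} minutes/year, {availability:.2%} available")
# prints "121 minutes/year, 99.98% available"
```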

Sunday, July 21, 2013

Background on Utilities

This is part 1 of a series on disruption of electric utilities.

Disruption of Electric Utilities
1.  Background on Utilities

-------------------------------------

The electric utility industry has long appeared immune from disruption.  Investor-owned electric utilities follow the same business model today that they used in 1907.  In that year, Samuel Insull released a policy paper on the electric power industry.[1] Insull, a protégé of Thomas Edison, conceived the idea of private electric monopolies regulated by the states.  His proposal caught on as the dominant approach to establishing electric utilities.  State regulation created a stable legal and economic framework for utilities as natural monopolies, which enabled the rapid growth of electricity service in America.  Insull went on to become President of Commonwealth Edison, the utility serving Chicago.

A natural question becomes, why have electric utilities been able to resist change?  A common argument is that electric utilities are monopolies protected by the government and therefore highly defensible businesses.  The industry is also highly capital intensive.  However, powerful monopolies in other capital intensive industries, such as Standard Oil, AT&T, and U.S. Steel, have all been disrupted by some combination of competition and anti-trust legislation.  I will argue instead that electric utilities have been more resistant to disruption for two reasons: 
  1. Utilities do an excellent job solving real customer needs.
  2.  As regulated businesses, utilities are evaluated on unusual financial metrics, which coincidentally encourage them to fight new entrants more aggressively than other businesses would.
In my next post I will examine further the reasons why utilities have successfully avoided disruption.




[1] Ed Smeloff and Peter Asmus, Reinventing Electric Utilities (Washington, DC: Island Press, 1997), pp. 10-11.

Tuesday, July 16, 2013

Disruption of Electric Utilities: Introduction

I will be writing a new multi-part series about the disruption of electric utilities.  This topic has received significant attention recently, particularly after the Edison Electric Institute (EEI) published this policy paper on disruption in the electric utility industry.  EEI is the association of United States shareholder-owned electric power companies and lobbies government on behalf of electric utilities.

I’ll start with some background on the industry, discuss some potential disruptive innovations, and finally give my recommendations to the utility industry for how they can successfully innovate on their own and avoid being disrupted.  

These posts are written entirely by me, but they derive from a policy paper written for Professor Clayton Christensen's class, Building and Sustaining Successful Enterprises, at Harvard Business School.  Professor Christensen provided meaningful guidance in the writing of this paper.

Friday, June 7, 2013

San Onofre Nuclear Power Plant Shut Down

San Onofre Nuclear Generating Station
Source: NRC

Southern California Edison announced it will permanently shut down the San Onofre Nuclear Power Plant in California.  The plant has been closed since early 2012 due to a leak, which upon further investigation revealed significant maintenance problems.  San Onofre, located on the coast in Pendleton, California between Los Angeles and San Diego, joins four other nuclear power plants in California that have been shut down over the years.

  • San Onofre; Pendleton, CA (San Diego County); closed 2012
  • Rancho Seco; Herald, CA (Sacramento County); closed 1989
  • Humboldt Bay; Eureka, CA (Humboldt County); closed 1983
  • Santa Susana Sodium Reactor Experiment; Simi Valley, CA (Ventura County); closed 1964
  • Vallecitos Nuclear Station; Pleasanton, CA (Alameda County); closed 1963
The only remaining nuclear power plant in California is PG&E's Diablo Canyon, located in Avila Beach, CA (San Luis Obispo County). 

The downside of the closure of San Onofre is that Southern Californians will consume higher-cost, higher-carbon electricity in the immediate future.  Nuclear does not count as renewable under California's Renewable Portfolio Standard (RPS).  Therefore, with California's 2020 RPS target of 33% renewable power, the utility owners of San Onofre will replace its output with a mix that is at most 33% renewable, meaning roughly 67% will come from fossil fuel generation.

My own position is that despite the benefits nuclear power has provided to California, it was never a good idea in the first place to install expensive power plants without a viable long-term fuel storage or re-processing plan.  In addition, California lost valuable natural coast to the enormous plants, and as Japan has shown, nuclear power in seismically active regions is a risky proposition.  

The closure of San Onofre will undoubtedly lead to more pressure from activists regarding the license renewal of Diablo Canyon.  The licenses for Diablo Canyon's two nuclear reactors are currently set to expire in 2024 and 2025, but PG&E has filed an application to extend the licenses for an additional 20 years.  The Nuclear Regulatory Commission has set up a website specifically for members of the public to learn how they can get involved with the license renewal decision.

Friday, December 28, 2012

Jim Rogers Does Not Care About Climate Change

A recent Businessweek article by Paul M. Barrett makes the case that Jim Rogers, the soon-to-be retiring CEO of Duke Energy, should be the next Secretary of Energy.  While leading America's largest electric utility, Rogers surprised industry observers by supporting cap-and-trade legislation for carbon dioxide emissions.  Because of his background as both a business leader and environmentalist, he would appear to have strong bipartisan credentials for the Department of Energy position under President Obama.

I had the pleasure of hearing Mr. Rogers speak in a small group setting this past November.  I was impressed by his honesty and openness, but was not pleased by the content of his talk.  When asked why he supported the failed Waxman-Markey cap-and-trade legislation, Mr. Rogers smiled and said:

"My time in Washington taught me something.  When there is a big movement behind something, you either get trampled by the parade or you jump out in front of it.

This is going to sound cold-hearted and pragmatic – which I am.  I never had a 'polar bear' moment.  I’m a jump in front of the parade kind of guy."

Mr. Rogers supported the cap-and-trade legislation because he believed it was going to pass regardless of his actions, and he wanted to get involved in order to delay implementation and lessen the burden on electric utilities.  In particular, Mr. Rogers sought to support utilities such as his own, which have invested heavily in coal-fired generation.  Mr. Rogers is a shrewd businessman and a talented public speaker, but his values are not as environmentally friendly as his public persona would suggest.  For this reason, I cannot support Jim Rogers to be the next Secretary of Energy.  His reign would be one of sandbagging and greenwashing rather than meaningful progress on energy and environmental issues.

Friday, June 1, 2012

Capacity Markets – Renewable Generation

This post is part of a multi-part series on capacity markets.



In a previous post, I described how electricity markets in the US provide incentives for independent power generators to build and maintain generating capacity.  Capacity ensures that the grid has sufficient ability to generate the necessary electricity during peak hours.  These markets function well for traditional natural gas power plants, which can be turned on and off fairly easily.  This post looks at the current methodology for valuing the capacity of intermittent renewable resources, such as wind or solar.

In general, wind blows stronger at night, but this chart of the power output every day for a month of a California wind farm shows that there can be quite a lot of variability.
Peak hours for the grid in most of the United States fall on summer afternoons when buildings have their air conditioners turned on (the exceptions are winter-peaking areas in the northern United States where customers have a lot of baseboard electric heating).  Often, the wind is not blowing its strongest on hot afternoons.  And while those hours tend to be sunny, there could be significant cloud cover during an important hour, or the air conditioning load could remain high at dusk when the sun sets.  Wind and solar cannot be counted on to be available in the way a natural gas combustion turbine can.

This one day chart of the power output from a photovoltaic solar installation shows the impact that cloud cover can have on solar power.  Geographic diversity of solar power throughout the state should mute many of these variations for the purpose of system-wide capacity.
On the other hand, a conventional power plant, such as a natural gas plant, is not available all the time either, and it still receives capacity value.  Conventional resources have scheduled maintenance and unplanned outages.  Moreover, even if wind and solar do not always perform at their maximum rated output during peak hours, they surely provide some benefit that could be estimated statistically.

California has attempted to address this issue by creating a net qualifying capacity (NQC) methodology to determine the amount of resource adequacy a power plant of a given technology provides.  Resource adequacy, as I mentioned previously, is the closest thing California has to a forward capacity payment.

The NQC for renewables is determined by an “exceedance methodology” calculated by California state regulators: the public utilities commission (CPUC), the energy commission (CEC), and the ISO (CAISO).  The exceedance approach measures the minimum amount of generation produced by the resource in a certain percentage of peak hours.  The exceedance level used to calculate the NQC of wind and solar resources is 70%.  Put another way, the 70% exceedance level of a resource’s production profile is the maximum generation level that the resource meets or exceeds in at least 70% of peak hours.  The peak hours, for the purpose of the exceedance calculation, are 5 hours a day: 4-9 p.m. November to March and 1-6 p.m. April to October.**  These hours vary regionally, and would not make sense for a grid at a different latitude than California.

To determine the minimum production level of solar and wind resources for 70% of the peak hours, California looks at historical values for load data and power output from solar and wind resources.  Typically, an average of the past 3 years is used.
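A minimal sketch of the exceedance calculation, assuming we have a resource's output for each defined peak hour over the historical period.  The hourly figures below are made up for illustration, and the actual CPUC/CEC/CAISO methodology involves more detail than this simple empirical version:

```python
import math

def exceedance_level(peak_hour_output, pct=0.70):
    """Return the maximum generation level that the resource meets or
    exceeds in at least `pct` of peak hours."""
    ordered = sorted(peak_hour_output, reverse=True)
    k = math.ceil(pct * len(ordered))  # number of hours that must meet the level
    return ordered[k - 1]              # the k-th highest output qualifies

# Ten hypothetical peak-hour outputs (MW) for a 100 MW solar plant:
output = [80, 75, 60, 55, 40, 35, 30, 20, 5, 0]
print(exceedance_level(output))  # prints 30, i.e. an NQC of 30% of nameplate
```

In this made-up profile the plant produces at least 30 MW in 7 of 10 peak hours (70%), but at least 35 MW in only 6 of 10, so 30 MW is the 70% exceedance level.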

NQC values for renewable power resources depend on seasonality, geographic diversity of the resource, and site-specific factors.  Anecdotally, I would expect the NQC value of a solar facility to be approximately 25-35% of its installed capacity (measured in MW), and the NQC value for wind to be approximately 10-20%.


**5 hours a day year round is a relatively conservative metric because the industry standard for determining capacity among distributed resources is the top 250 load hours of the year.  250 hours is an “eyeballed” number for the peak hours in which the grid is most likely to have an outage.  A more rigorous loss of load probability (LOLP) analysis is done for reliability planning, but for economic estimates of resource planning, 250 hours will usually suffice.  5 hours a day is roughly 20% of the hours in the year, whereas 250 hours is less than 3% of the hours in the year.