
Blackout: 50 Million People Should Not Have Lost Power

Written by Jill Feblowitz, Contributor

While every indication is that the massive blackout across North America that left more than 50 million people without power was caused by a cascade of catastrophic events, it clearly showed that the North American power grid is extremely vulnerable. The chances are one in a million that an unforecastable natural occurrence, such as a lightning strike disabling major generation on a hot, high-demand day, would destabilize the grid, but it did. Be it lightning striking at the right moment in the right conditions, or something more nefarious like a terrorist attack, the power supply here and in most parts of the world is vulnerable.

The Bottom Line: The outage would not have been propagated across such a wide area if information technology were used properly.

What It Means

Grid and plant operators had real-time visibility, but not alert notification and decision support. The move toward greater interconnection in recent years did not cause this problem; in fact, greater availability of power from a greater number of sources has prevented more frequent localized outages. From an engineering standpoint, control systems performed as they were designed, protecting the grid and the attached equipment from an even more disastrous event. Some power plants shut down in an orderly fashion, which suggests their operators were able to see the destabilization coming. Areas that are heavily interconnected to the grid, such as New England, were not affected; presumably, those operators were able to isolate their systems from the grid failure.

Investment should not stop at the grid; information technology is critical to maintaining reliability. The physical infrastructure is inadequate for the task of supporting growing pockets of energy demand. In fact, during these tight times, many companies, not just energy companies, have not paid enough attention to maintaining the performance of their existing assets.

While the government has committed funds to update the outdated physical infrastructure in response, information technology also has a role to play in making the grid less vulnerable:

  • Real-time simulations--For the past year, power companies have been envisioning a more comprehensive reliability system. Within seconds, this system would draw on real-time information not only on grid conditions, but also on generation sets. Real-time data historians already make that information, as well as analytical tools, available to energy companies. The second critical piece is the ability to simulate various scenarios and system constraints within minutes to come up with the optimal decision, whether that is isolation or takedown (a rough sketch of this kind of scenario evaluation appears after this list). The Takeaway: Technology exists to find the cause and cure quickly to minimize damage.
  • Root cause analysis and automated safeguards--Supervisory Control and Data Acquisition (SCADA) systems have already amassed data on grid conditions to support a root cause analysis of the source of the blackout. Data historians can make that information available for quick analysis and corrective action. That action could be a change in control system automation. The Takeaway: If the problem is equipment failure, Enterprise Asset Management (EAM) systems that support preventive and predictive maintenance are the answer.
  • Condition-based monitoring--This technique predicts potential equipment failures by analyzing failure indicators on operating equipment such as transformers and generators: ambient temperature, oil level, vibration, and similar measurements (a minimal sketch of this kind of threshold check and root cause timelining also follows the list). As control companies such as ABB and GE Harris install more smart equipment, monitoring becomes even more critical. The Takeaway: The Blackout of 2003 will undoubtedly give rise to new regulations requiring visibility into fail-safe equipment and ways to monitor the monitors; EAM is already in use at some companies to do just that.
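
To make the first idea concrete, here is a rough sketch of scenario-based decision support. The candidate actions, load figures, and scoring model are hypothetical placeholders; a real reliability system would run full power-flow simulations against live grid and generation data rather than the toy cost function used here.

```python
# A rough sketch of scenario-based decision support, assuming hypothetical
# candidate actions and a toy scoring model; a real system would run full
# power-flow simulations against live SCADA and generation data.
from dataclasses import dataclass


@dataclass
class Scenario:
    name: str                 # e.g. isolate a region or take down a plant
    load_shed_mw: float       # demand deliberately dropped by this action
    overload_risk: float      # 0..1 chance the grid still destabilizes


def expected_unserved_mw(s: Scenario, total_load_mw: float) -> float:
    """Toy cost: load shed on purpose, plus the load expected to be lost
    anyway if the action fails to stabilize the grid."""
    return s.load_shed_mw + s.overload_risk * (total_load_mw - s.load_shed_mw)


def choose_action(scenarios: list[Scenario], total_load_mw: float) -> Scenario:
    """Pick the candidate action with the lowest expected unserved load."""
    return min(scenarios, key=lambda s: expected_unserved_mw(s, total_load_mw))


if __name__ == "__main__":
    total_load = 60_000.0  # MW, illustrative only
    candidates = [
        Scenario("do nothing", 0.0, 0.70),
        Scenario("isolate the affected region", 4_000.0, 0.10),
        Scenario("take down the stressed plant", 1_200.0, 0.35),
    ]
    best = choose_action(candidates, total_load)
    print(f"Recommended action: {best.name}")
```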
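
Similarly, the condition-based monitoring and root cause analysis described above amount to checking indicator readings against limits and finding the earliest breach. The tag names, thresholds, and sample readings below are hypothetical; in practice they would come from a SCADA data historian rather than an in-memory list.

```python
# A minimal sketch of condition-based monitoring with a root cause timeline,
# assuming hypothetical thresholds and made-up readings; a real deployment
# would query a SCADA data historian for these samples.
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical alert limits for a transformer (illustrative values only).
THRESHOLDS = {
    "oil_temp_c": 95.0,      # oil temperature ceiling
    "oil_level_pct": 60.0,   # oil level floor
    "vibration_mm_s": 7.0,   # vibration ceiling
}
FLOOR_TAGS = {"oil_level_pct"}  # tags that alarm when they drop below the limit


@dataclass
class Reading:
    timestamp: datetime
    tag: str        # which indicator this sample belongs to
    value: float


def out_of_range(reading: Reading) -> bool:
    """Return True if a single sample breaches its configured limit."""
    limit = THRESHOLDS.get(reading.tag)
    if limit is None:
        return False
    if reading.tag in FLOOR_TAGS:
        return reading.value < limit
    return reading.value > limit


def first_anomaly(readings: list[Reading]) -> Reading | None:
    """Scan samples in time order and return the earliest breach, the
    natural starting point for a root cause timeline."""
    for reading in sorted(readings, key=lambda r: r.timestamp):
        if out_of_range(reading):
            return reading
    return None


if __name__ == "__main__":
    start = datetime(2003, 8, 14, 15, 0)  # illustrative timestamps
    samples = [
        Reading(start + timedelta(minutes=1), "oil_temp_c", 82.0),
        Reading(start + timedelta(minutes=2), "vibration_mm_s", 4.1),
        Reading(start + timedelta(minutes=3), "oil_temp_c", 97.5),    # breach
        Reading(start + timedelta(minutes=4), "oil_level_pct", 55.0), # breach
    ]
    hit = first_anomaly(samples)
    if hit:
        print(f"ALERT {hit.timestamp}: {hit.tag} = {hit.value}")
```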

Conclusion: Greater connectivity and information availability are not the enemy. In fact, a number of systems integrators, application vendors, and service companies have developed very robust cyber-security protocols that are less vulnerable than most SCADA systems. The mandate is clear: investment in the infrastructure for reliability is the top priority.

AMR Research originally published this article on 15 August 2003.
