Wednesday, February 6, 2013

CONTINGENCY ANALYSIS

Contingency analysis indicates to the operator what might happen to
the system in the event of unplanned equipment outage. It essentially offers
answers to questions such as “What will be the state of the system if an outage
on part of the major transmission system takes place?” The answer might be
that power flows and voltages will readjust and remain within acceptable limits,
or that severe overloads and under-voltages will occur with potentially severe
consequences should the outage take place.
A severe overload, persisting long enough, can damage equipment of
the system, but usually relays are activated to isolate the affected equipment
once it fails. The outage of a second component due to relay action is more
serious and often results in yet more readjustment of power flows and bus
voltages. This can in turn cause more overloads and further removal of
equipment. An uncontrollable cascading series of overloads and equipment
removals may then take place, resulting in the shutting down of a significant
portion of the system.
The motivation to use contingency analysis tools in an EMS is that
when forewarned the operator can initiate preventive action before the event to
avoid problems should an outage take place. From an economic point of view, the operator strives to avoid overloads that might directly damage equipment, or
worse, might cause the system to lose a number of components due to relay
action and then cause system-wide outages.
Internal causes of contingencies include insulation breakdown, over-temperature
relay action, and incorrect operation of relay devices. External
contingencies are caused by environmental effects such as lightning, high winds,
and ice, or by non-weather events such as a vehicle or aircraft coming into
contact with equipment, or even direct contact by humans or animals. These
causes are treated as unscheduled, random events, which the operator cannot
anticipate but must be prepared for.
The operators must play an active role in maintaining system security.
The first step is to perform contingency analysis studies frequently enough to
assure that system conditions have not changed significantly from the last
execution. The outcome of contingency analysis is a series of warnings or
alarms to the operators alerting them that loss of component A will result in an
overload of X% on line T1. To achieve an accurate picture of the system's
exposure to outage events several points need to be considered:
A) System Model
Contingency analysis is carried out using a power flow model of the
system. Additional information about system dynamics is needed to assess
stability as well. Voltage levels and the geographic extent to include in the
model are issues to be considered. In practice, all voltage levels that have any
possibility of connecting circuits in parallel with the high voltage system are
included. This leaves out those that are radial to it such as distribution networks.
While the geographical extent is difficult to evaluate, it is common to model the
system to the extent real-time measurement data is available to support the
model.
B) Contingency Definition
Each modeled contingency has to be specified on its own. The simplest
definition is to name a single component. This implies that when the model of
the system is set up, this contingency will be modeled by removing the single
component specified. Another important consideration is the means of
specifying the component outage. The component can be specified by name,
such as a transmission line name, or more accurately, a list of circuit breakers
can be specified as needing to be operated to correctly model the outage of the
component. Contingencies that require more than one component to be taken
out together must be defined as well. There is an advantage here to using a “list
of breakers” in that the list is simply expanded to include all breakers necessary
to remove all relevant equipment.
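The two ways of specifying an outage described above can be sketched as a small data structure. This is a hypothetical illustration only; the class and field names are not taken from any particular EMS:

```python
from dataclasses import dataclass, field

@dataclass
class Contingency:
    """One entry in a contingency list (hypothetical structure)."""
    label: str                                      # operator-facing name
    components: list = field(default_factory=list)  # equipment named directly...
    breakers: list = field(default_factory=list)    # ...or breakers to operate

    def is_multiple(self) -> bool:
        # Contingencies removing several components at once are allowed;
        # the breaker list is simply expanded to cover all of them.
        return len(self.components) > 1 or len(self.breakers) > 2

# A single-component outage specified by name:
c1 = Contingency("Line T1 outage", components=["T1"])
# The same outage specified, more accurately, by the breakers to operate:
c2 = Contingency("Line T1 outage", breakers=["CB101", "CB102"])
```

The breaker-list form is preferred in the text because a multiple-component contingency needs no new mechanism: its list simply grows to include every breaker required.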
C) Double Contingencies
A double contingency is the overlapping occurrence of two independent contingent events. To be specific, one outside event causes an
outage and while this outage is still in effect, a second totally independent event
causes another component to be taken out. The overlap of the two outages often
causes overloads and under-voltages that would not occur if either happened
separately. As a result, many operators require that a contingency analysis
program be able to take two independent contingencies and model them as if
they had happened in an overlapping manner.
D) Contingency List
Generally, contingency analysis programs are executed based on a list of
valid contingencies. The list might consist of all single component outages
including all transmission lines, transformers, substation buses, and all generator
units. For a large interconnected power system just this list alone could result in
thousands of contingency events to be tested. If the operators wish to model
double contingencies, the number grows to millions of possible events.
Methods of selecting a limited set of priority contingencies are then needed.
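The growth from single to double contingencies follows from simple combinatorics; a quick check with an arbitrary (hypothetical) component count:

```python
from math import comb

# Hypothetical large interconnection: lines, transformers,
# substation buses, and generating units combined.
n_components = 3000

singles = n_components          # one case per component outage
doubles = comb(n_components, 2) # every independent pair of outages

print(singles)  # 3000 single-outage cases
print(doubles)  # 4498500 possible double outages
```

Even a modest component count thus pushes the double-contingency list into the millions, which is why priority-selection methods are needed.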
E) Speed
Generally, operators need to have results from a contingency analysis
program on the order of a few minutes, up to fifteen minutes. Anything longer
means that the analysis is running on a system model that does not reflect
current system status and the results may not be meaningful.
F) Modeling Detail
The detail required for a contingency case is usually the same as that
used in a study power flow. That is, each contingency case requires a fully
converged power flow that correctly models each generator's VAR limits and
each tap adjusting transformer's control of voltage.
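One of the modeling details mentioned above, generator VAR limits, is conventionally handled by holding a generator at its reactive limit and re-typing its bus from PV to PQ; a minimal sketch of that standard check (function and return convention are illustrative, not from any specific program):

```python
def enforce_var_limit(q_calc: float, q_min: float, q_max: float):
    """Check a PV bus's computed reactive output against its limits.

    If the computed Q violates a limit, the generator is held at the
    limit and the bus is treated as PQ for subsequent iterations;
    otherwise it remains a voltage-controlled PV bus.
    """
    if q_calc > q_max:
        return q_max, "PQ"   # clamp at upper VAR limit
    if q_calc < q_min:
        return q_min, "PQ"   # clamp at lower VAR limit
    return q_calc, "PV"      # within limits: voltage control holds

# A generator asked for 150 MVAr against a 100 MVAr ceiling is clamped:
q, bus_type = enforce_var_limit(150.0, -50.0, 100.0)
```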

Historical Methods of Contingency Analysis

There is a conflict between the accuracy with which the power system
is modeled and the speed required for modeling all the contingencies specified
by the operator. If the contingencies can be evaluated fast enough, then all cases
specified on the contingency list are run periodically and alarms reported to the
operators. This is possible if the computation for each outage case can be
performed very fast or else the number of contingencies to be run is very small. The number of contingency cases to be solved in common energy management
systems is usually a few hundred to a few thousand cases. This, coupled with the
fact that the results are expected to be as accurate as if run with a full power
flow program, makes the execution of a contingency analysis program within an
acceptable time frame extremely difficult.

Selection of Contingencies to be Studied

A full power flow must be used to solve for the resulting flows and voltages in a power system with serious reactive flow or voltage problems when
an outage occurs. In this case, the operators of large systems looking at a large
number of contingency cases may not be able to get results soon enough. A
significant speed increase could be obtained by simply studying only the
important cases, since most outages do not cause overloads or under-voltages.
1) Fixed List
Many operators can identify the important outage cases, and restricting
the analysis to them gives acceptable performance. The operator chooses the
cases based on experience
and then builds a list for the contingency analysis program to use. It is possible
that one of the cases that were assumed to be safe may present a problem
because some assumptions used in making the list are no longer true.
2) Indirect Methods (Sensitivity-Based Ranking Methods)
An alternative way to produce a reduced contingency list is to perform
a computation to indicate the possible bad cases and perform it as often as the
contingency analysis itself is run. This builds the list of cases dynamically and
the cases that are included in the list may change as conditions on the power
system change. This requires a fast approximate evaluation to discover those
outage cases that might present a real problem and require further detailed
evaluation by a full power flow. Normally, a sensitivity method based on the
concept of a network performance index is employed. The idea is to calculate a
scalar index that reflects the loading on the entire system.
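A common form of such a performance index sums branch loadings normalized by their limits and raised to an even power, so that heavily loaded branches dominate the total. The following sketch uses made-up flows and limits purely for illustration:

```python
def performance_index(flows, limits, n=2):
    """Scalar loading index: sum of (|flow| / limit)**(2n).

    Branches near or above their MW limit contribute terms near or
    above 1, so overloaded post-outage states score high.
    """
    return sum((abs(f) / lim) ** (2 * n) for f, lim in zip(flows, limits))

# Hypothetical MW flows and branch limits, before and after an outage
# that pushes the second branch past its 500 MW limit:
base   = performance_index([300.0, 450.0, 120.0], [500.0, 500.0, 200.0])
outage = performance_index([300.0, 620.0, 120.0], [500.0, 500.0, 200.0])
assert outage > base   # the outage case ranks as more severe
```

Ranking contingencies by such an index lets the detailed full power flow be reserved for the highest-scoring cases.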
3) Comparison of Direct and Indirect Methods
Direct methods are more accurate and selective than the indirect ones at
the expense of increased CPU requirements. The challenge is to improve the
efficiency of the direct methods without sacrificing their strengths. Direct
methods assemble severity indices using monitored quantities (bus voltages,
branch flows, and reactive generation), that have to be calculated first. In
contrast, the indirect methods calculate severity indices explicitly without
evaluating the individual quantities. Therefore, indirect methods are usually less
computationally demanding. Knowing the individual monitored quantities
enables one to calculate severity indices of any desired complexity without
significantly affecting the numerical performance of direct methods. Therefore,
more attention has been paid recently to direct methods for their superior
accuracy (selectivity). This has led to drastic improvements in their efficiency
and reliability.
4) Fast Contingency Screening Methods
To build a reduced list of contingencies one uses a fast (normally
approximate) solution and ranks the contingencies according to its results.
Direct contingency screening methods can be classified by the embedded
modeling assumptions. Two distinct classes of methods can be identified:
a) Linear methods specifically intended to screen contingencies
for possible real power (branch MW overload) problems.
b) Nonlinear methods intended to detect both real and reactive
power problems (including voltage problems).
Bounding methods offer the best combination of numerical efficiency
and adaptability to system topology changes. These methods determine the
parts of the network in which branch MW flow limit violations may occur. A
linear incremental solution is performed only for the selected system areas rather
than for the entire network. The accuracy of the bounding methods is only
limited by the accuracy of the incremental linear power flow.
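A linear incremental step of this kind is often implemented with precomputed line outage distribution factors: the estimated post-outage MW flow on a monitored branch is its pre-outage flow plus a factor times the flow that was on the outaged branch. The sketch below assumes such factors are already available; the values are hypothetical:

```python
def screen_outage(pre_flows, limits, outaged, lodf):
    """Flag branches whose estimated post-outage MW flow exceeds its limit.

    lodf[m] is the (assumed precomputed) distribution factor of the
    outaged branch onto monitored branch m.
    """
    shifted = pre_flows[outaged]          # MW flow displaced by the outage
    violations = []
    for m, (f, lim) in enumerate(zip(pre_flows, limits)):
        if m == outaged:
            continue                      # the outaged branch carries nothing
        post = f + lodf[m] * shifted      # linear incremental estimate
        if abs(post) > lim:
            violations.append((m, post))
    return violations

# Three-branch example: outage of branch 0 overloads branch 1.
pre  = [400.0, 350.0, 100.0]
lims = [500.0, 450.0, 300.0]
lodf = [0.0, 0.4, 0.2]   # hypothetical factors for outage of branch 0
print(screen_outage(pre, lims, outaged=0, lodf=lodf))
```

As the text notes, the accuracy of this screening is bounded by the accuracy of the linear power-flow approximation behind the factors.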
Nonlinear methods are designed to screen the contingencies for reactive
power and voltage problems. They can also screen for branch flow problems
(both MW and MVA/AMP). Recently proposed enhancements include attempts
to localize the outage effects and to speed up the nonlinear solution of the
entire system.
An early localization method is the “concentric relaxation” which
solves a small portion of the system in the vicinity of the contingency while
treating the remainder of the network as an “infinite expanse.” The area to be
solved is concentrically expanded until the incremental voltage changes along
the last solved tier of buses are not significantly affected by the inclusion of an
additional tier of buses. The method suffered from unreliable convergence, a
lack of consistent criteria for selecting the buses to include in the small
network, and the need to solve a number of different systems of increasing size
resulting from the concentric expansion of the small network (relaxation).
Different attempts have been made at improving the efficiency of the
large system solution. They can be classified as speeding up the solution by
means of:
1) Approximations and/or partial (incomplete) solutions.
2) Using network equivalents (reduced network representation).
The first approach involves the “single iteration” concept to take
advantage of the speed and reasonably fast convergence of the Fast Decoupled
Power Flow to limit the number of iterations to one. The approximate, first
iteration solution can be used to check for major limit violations and the
calculation of different contingency severity measures. The single iteration
approach can be combined with other techniques like the use of the reduced
network representations to improve numerical efficiency.
An alternative approach is based upon bounding of outage effects.
Similar to the bounding in linear contingency screening, an attempt is made to
perform a solution only in the stressed areas of the system. A set of bounding
quantities is created to identify buses that can potentially have large reactive mismatches. The actual mismatches are then calculated and the forward
solution is performed only for those with significant mismatches. All bus
voltages are known following the backward substitution step and a number of
different severity indices can be calculated.
The zero mismatch (ZM) method extends the application of localization
ideas from contingency screening to full iterative simulation. Advantage is
taken of the fact that most contingencies significantly affect only small portions
(areas) of the system. Significant mismatches occur only in very few areas of
the system being modeled. There is a definite pattern of very small mismatches
throughout the rest of the system model. This is particularly true for localizable
contingencies, e.g., branch outages, bus section faults. Consequently, it should
be possible to utilize this knowledge and significantly speed up the solution of
such contingencies. The following is a framework for the approach:
1) Bound the outage effects for the first iteration using for example a
version of the complete boundary.
2) Determine the set of buses with significant mismatches resulting
from angle and magnitude increments.
3) Calculate mismatches and solve for new increments.
4) Repeat the last two steps until convergence occurs.
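The four steps above amount to a localized iteration: only buses with significant mismatches are updated each pass, while the rest of the full model is left untouched. The skeleton below illustrates that control flow only; the real mismatch and increment functions would be the power-flow equations, and the toy problem solved here (per-bus square roots via Newton steps) merely stands in for them:

```python
def zero_mismatch_solve(mismatch, update, x, tol=1e-6, max_iter=50):
    """Localized iteration sketch in the spirit of the ZM framework.

    mismatch(x) -> per-bus mismatch list; update(x, active, dm) -> new x,
    changing only the buses in `active`. Iterates until no bus has a
    significant mismatch.
    """
    for _ in range(max_iter):
        dm = mismatch(x)
        active = [i for i, m in enumerate(dm) if abs(m) > tol]
        if not active:          # convergence: zero significant mismatches
            return x
        x = update(x, active, dm)
    return x

# Stand-in problem: drive x_i**2 to target_i with per-bus Newton steps.
targets = [4.0, 9.0, 16.0]
mis = lambda x: [t - xi * xi for t, xi in zip(targets, x)]

def upd(x, active, dm):
    x = list(x)
    for i in active:
        x[i] += dm[i] / (2 * x[i])   # Newton step on the active bus only
    return x

sol = zero_mismatch_solve(mis, upd, [1.0, 1.0, 1.0])
```

For a localizable contingency most buses drop out of the active set after the first pass, which is the source of the method's speed.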
The main difference between the zero mismatch and the concentric
relaxation methods is in the network representation. The zero mismatch method
uses the complete network model while a small cutoff representation is used in
the latter one. The zero mismatch approach is highly reliable and produces
results of acceptable accuracy because of the accuracy of the network
representation and the ability to expand the solution to any desired bus.
