DYNAMIC SECURITY ANALYSIS

The North American Electric Reliability Council (NERC) defines
security as “the prevention of cascading outages when the bulk power supply is
subjected to severe disturbances.” To assure that cascading outages will not
take place, the power system is planned and operated such that the following
conditions are met at all times in the bulk power supply:
• No equipment or transmission circuits are overloaded;
• No buses are outside the permissible voltage limits (usually within
±5 percent of nominal); and
• When any of a specified set of disturbances occurs, acceptable
steady-state conditions will result following the transient (i.e.,
instability will not occur).
Security analysis is carried out to ensure that these conditions are met.
The first two require only steady-state analysis; but the third requires transient
analysis (e.g., using a transient stability application). It has also been recognized
that some of the voltage instability phenomena are dynamic in nature, and
require new analysis tools.
Generally, security analysis is concerned with the system's response to
disturbances. In steady-state analysis the transition to a new operating condition
is assumed to have taken place, and the analysis ascertains that operating
constraints are met in this condition (thermal, voltage, etc.). In dynamic security
analysis the transition itself is of interest, i.e., the analysis checks that the
transition will lead to an acceptable operating condition. Examples of possible
concern include loss of synchronism by some generators, transient voltage at a
key bus (e.g., a sensitive load) falling below a certain level, and operation of an
out-of-step relay resulting in the opening of a heavily loaded tie-line.
The computational capability of some control centers may limit
security analysis to steady state calculations. The post-contingency steady-state
conditions are computed and limit checked for flow or voltage violations. The
dynamics of the system may then be ignored and whether the post-contingency
state was reached without losing synchronism in any part of the system remains
unknown. As a result, instead of considering actual disturbances, the
contingencies are defined in terms of outages of equipment and steady-state
analysis is done for these outages. This assumes that the disturbance did not
cause any instability and that simple protective relaying caused the outage.
Normally, any loss of synchronism will cause additional outages, thus making
the present steady-state analysis of the post-contingency condition inadequate
for unstable cases. It is clear that dynamic analysis is needed.
In practice, we define a list of equipment losses for static analysis.
Such a list usually consists of all single outages and a careful choice of multiple
outages. Ideally, the outages should be chosen according to their probability of
occurrence, but these probabilities are usually not known. In some instances the
available probabilities are so small that comparisons are usually meaningless.
The choice of single outages is reasonable because they are likely to occur more
often than multiple ones. Including some multiple outages is needed because
certain outages are likely to occur together because of proximity (e.g., double
lines on the same tower) or because of protection schemes (e.g., a generator may
be relayed out when a line is on outage). The size of this list is usually several
hundred and can reach a couple of thousand.
For dynamic security analysis, contingencies are considered in terms of
the total disturbance. All faults can be represented as three phase faults, with or
without impedances, and the list of contingencies is a list of locations where this
can take place. This is a different way of looking at contingencies where the
post-contingency outages are determined by the dynamics of the system
including the protection system. Obviously, if all possible locations are
considered, this list can be very large.
In steady-state security analysis, it is not necessary to treat all of the
hundreds of outages cases using power flow calculations, because the operator is
interested in worst possibilities rather than all possibilities. It is practical to use
some approximate but faster calculations to filter out these worst outages, which
can then be analyzed by a power flow. This screening of several hundred
outages to find the few tens of the worst ones has been the major breakthrough
that made steady state security analysis feasible. Generally, this contingency
screening is done for the very large list of single outages while the multiple
outages are generally included in the short list for full power flow analysis.
Currently, the trend is to use several different filters (voltage filter versus line
overload filter) for contingency screening. It is also necessary to develop fast
filtering schemes for dynamic security analysis to find the few tens of worst
disturbances for which detailed dynamic analysis will have to be done. The
filters are substantially different from those used for static security.
From a dispatcher’s point of view, static and dynamic security analyses
are closely related. The worst disturbances and their effects on the system are to
be examined. The effects considered include the resulting outages and the limit
violations in the post-contingency condition. In addition, it would be useful to
know the mechanism that caused the outages, whether they were due to distance
relay settings or loss of synchronism or other reasons. This latter information is
particularly useful for preventive action.
The stability mechanism that causes the outages is referred to as the
“mode of disturbance.” A number of modes exist. A single generating unit may
go out of synchronism on the first swing (cycle). A single unit may lose
synchronism after several cycles, up to a few seconds. Relays may operate to
cause transmission line outages. Finally, periodic oscillations may occur
between large areas of load and/or generation. These oscillations may continue
undamped to a point of loss of synchronism. All of these types of events are
called modes of disturbances.

Motivation for Dynamic Security Analysis

Ascertaining power system security involves considering all possible
(and credible) conditions and scenarios; analysis is then performed on all of
them to determine the security limits for these conditions. The results are given
to the operating personnel in the form of “operating guides,” establishing the
“safe” regimes of operation. The key power system parameter or quantity is
monitored (in real time) and compared with the available (usually precomputed)
limit. If the monitored quantity is outside the limit, the situation is
flagged for some corrective action.
Recent trends in operating power systems close to their security limits
(thermal, voltage and stability) have added greatly to the burden on transmission
facilities and increased the reliance on control. Simultaneously, they have
increased the need for on-line dynamic security analysis.
For on-line dynamic security analysis, what is given is a base case
steady-state solution (the real time conditions as obtained from the state
estimator and external model computation, or a study case as set up by the
operator) and a list of fault locations. The effects of these faults have to be
determined and, specifically, the expected outages have to be identified.
Examining the dynamic behavior of the system can do this. Some form of fast
approximate screening is required such that the few tens of worst disturbances
can be determined quickly.
Traditionally, for off-line studies, a transient stability program is used
to examine the dynamic behavior. This program, at the very least, models the
dynamic behavior of the machines together with their interconnection through
the electrical network. Most production grade programs have elaborate models
for the machines and their controls together with dynamic models of other
components like loads, dc lines, static VAR compensators, etc. These models
are simulated in time using some integration algorithm and the dynamic
behavior of the system can be studied. If instability (loss of synchronism) is
detected, the exact mode of instability (the separation boundary) can be
identified. Many programs have relay models that can also pinpoint the outages
caused by relay operation due to the dynamic behavior.
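
As a minimal, hedged illustration of what such a time-domain simulation does, the sketch below integrates the classical swing equation for a single machine against an infinite bus through a fault-and-clearing sequence. All parameter values (inertia, power limits, clearing time) are assumptions chosen for illustration, not values from the text; production-grade programs use far more detailed models.

```python
import math

# Classical single-machine infinite-bus swing equation, integrated with
# forward Euler. All numeric values are illustrative assumptions.
H = 4.0                              # inertia constant (s)
f0 = 60.0                            # nominal frequency (Hz)
M = 2.0 * H / (2.0 * math.pi * f0)   # inertia coefficient
Pm = 0.9                             # mechanical power input (pu)
# EV/X during the fault-on and post-fault periods (assumed):
Pmax_fault, Pmax_post = 0.4, 1.5
t_clear = 0.10                       # fault clearing time (s)
dt, t_end = 0.001, 3.0               # integration step and window (s)

delta = math.asin(Pm / 1.8)          # pre-fault equilibrium angle (rad)
omega = 0.0                          # rotor speed deviation (rad/s)

t, unstable = 0.0, False
while t < t_end:
    Pmax = Pmax_fault if t < t_clear else Pmax_post
    Pe = Pmax * math.sin(delta)          # electrical power output
    omega += dt * (Pm - Pe) / M          # acceleration
    delta += dt * omega                  # angle advance
    if abs(delta) > math.pi:             # crude loss-of-synchronism test
        unstable = True
        break
    t += dt

print("unstable (loses synchronism)" if unstable else
      "stable for this clearing time")
```

Rerunning the loop with a longer clearing time exhibits the first-swing instability described above, which a relay-equipped production program would then trace into specific outages.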
To perform the analysis in on-line mode, the time required for the
computation is a crucial consideration. That is, the analysis itself by a pure time
domain simulation is known to be feasible but whether this analysis can be
completed within the time frame needed in the control center environment is the
real challenge. The time taken for time domain analysis of power system
dynamics depends on many factors. The most obvious one is the length of
simulation or the time period for which the simulation needs to be done so that
all the significant effects of the disturbance can be captured. Other factors
include the size of the power system, and the size and type of the models used.
Additional factors like the severity of the disturbance and the solution algorithm
used also affect the computation time.
Determining the vulnerability of the present system conditions to
disturbances does not complete the picture because the solution to any existing
problems must also be found. Quite often the post-contingency overloads and
out-of limit voltage conditions are such that they can be corrected after the
occurrence of the fault. Sometimes, and especially for unstable faults, the
post-contingency condition is not at all desirable and preventive remedial action is
needed. This usually means finding new limits for operating conditions or
arming of special protective devices. Although remedial action is considered as
a separate function from security analysis, operators of stability limited systems
need both.

Approaches to DSA

A number of approaches to the on-line dynamic security analysis
problem have been studied. To date, engineers perform a large number of
studies off-line to establish operating guidelines, modified by judgement and
experience. Conventional wisdom has it that computer capability will continue
to make it more economically feasible to do on-line dynamic security
assessment, DSA, providing the appropriate methods are developed.
The most obvious method for on-line DSA is to implement the off-line
time domain techniques on faster, more powerful and cheaper computers.
Equivalencing and localization techniques are ways to speed up the time domain
solutions. Parallel and array processors also show promise in accelerating
portions of the time domain solution.
Direct methods of transient stability, e.g., the transient energy function
method, have emerged with the potential of meeting some of the needs for DSA.
They offer the possibility of conducting stability studies in near real-time,
provide a qualitative judgement on stability, and they are suitable for use in
sensitivity assessments. The TEF methods are limited to first swing analysis.
An advantage, however, is that the TEF methods provide energy margins to
indicate the margin to instability.
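
For the same single-machine model, the transient energy function has a closed form, which makes the idea of an energy margin concrete. This is only a sketch under assumed post-fault parameters; practical multi-machine TEF methods are considerably more involved.

```python
import math

# Transient energy function sketch for one machine vs. an infinite bus.
# V(delta, omega) = kinetic + potential energy relative to the stable
# equilibrium point; margin = V_critical - V_at_clearing.
M, Pm, Pmax = 0.0212, 0.9, 1.5     # illustrative post-fault values (pu)

delta_s = math.asin(Pm / Pmax)     # stable equilibrium point (rad)
delta_u = math.pi - delta_s        # unstable equilibrium point (UEP)

def energy(delta, omega):
    kinetic = 0.5 * M * omega**2
    potential = (-Pm * (delta - delta_s)
                 - Pmax * (math.cos(delta) - math.cos(delta_s)))
    return kinetic + potential

V_cr = energy(delta_u, 0.0)        # critical energy at the UEP
# The clearing-point state would come from a fault-on simulation; the
# values below are simply assumed for illustration.
V_cl = energy(0.9, 3.0)
print(f"energy margin = {V_cr - V_cl:.3f} pu (positive => first-swing stable)")
```

A positive margin indicates first-swing stability, and its size is the kind of quantitative degree-of-stability information the text credits to TEF methods.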
Eigenvalue and related methods, and frequency response methods are
used as part of off-line studies, for example, using frequency response method to
design power system stabilizers, but are not currently thought of as part of an
on-line DSA. Probabilistic methods have the advantage of providing a measure
of the likelihood of a stability problem. Their application in dynamic security
assessment appears to be in the areas of contingency screening and in
quantifying the probability of the next state of the system.
Artificial intelligence techniques including computational neural
networks, fuzzy logic, and expert systems have proven to be appropriate
solutions to other power system operations problems, and there is speculation
that these technologies will play a major role in DSA.

OPTIMAL PREVENTIVE AND CORRECTIVE ACTIONS

When security analysis indicates thermal, voltage, or stability problems,
preventive actions are required. If a feasible solution exists
to a given security control problem, then it is highly likely that other feasible
solutions exist as well. In this instance, one solution must be chosen from
among the feasible candidates. If a feasible solution does not exist (which is
also common), a solution must be chosen from the infeasible candidates.
Security optimization is a broad term to describe the process of selecting a
preferred solution from a set of (feasible or infeasible) candidate solutions. The
term Optimal Power Flow (OPF) is used to describe the computer application
that performs security optimization within an Energy Management System.

Optimization in Security Control

To address a given security problem, an operator will have more than
one control scheme. Not all schemes will be equally preferred and the operator
will thus have to choose the best or “optimal” control scheme. It is desirable to
find the control actions that represent the optimal balance between security,
economy, and other operational considerations. The need is for an optimal
solution that takes all operational aspects into consideration. Security
optimization programs may not have the capability to incorporate all operational
considerations into the solution, but this limitation does not prevent security
optimization programs from being useful to utilities.
The solution of the security optimization program is called an “optimal
solution” if the control actions achieve the balance between security, economy,
and other operational considerations. The core problem of security
optimization is to distinguish the preferred of two candidate solutions. A
method that chooses correctly between any given pair of candidate solutions is
capable of finding the optimal solution out of the set of all possible solutions.
There are two categories of methods for distinguishing between candidate
solutions: one class relies on an objective function, the other class relies on
rules.
1) The Objective Function
The objective function method assumes that it is possible to assign a
single numerical value to each possible solution, and that the solution with the
lowest value is the optimal solution. The objective function is this numerical
assignment. In general, the objective function value is an explicit function of
the controls and state variables, for all the networks in the problem.
Optimization methods that use an objective function typically exploit its
analytical properties, solving for control actions that represent the minimum.
The conventional optimal power flow (OPF) is an example of an optimization
method that uses an objective function.
The advantages of using an objective function method are:
• Analytical expressions can be found to represent MW production
costs and transmission losses, which are, at least from an economic
viewpoint, desirable quantities to minimize.
• The objective function imparts a value to every possible solution.
Thus all candidate solutions can, in principle, be compared on the
basis of their objective function value.
• The objective function method assures that the optimal solution of
the moment can be recognized by virtue of its having the minimum
value.
Typical objective functions used in OPF include MW production costs
or expressions for active (or reactive) power transmission losses. However,
when the OPF is used to generate control strategies that are intended to keep the
power system secure, it is typical for the objective function to be an expression
of the MW production costs, augmented with fictitious control costs that
represent other operational considerations. This is especially the case when
security against contingencies is part of the problem definition. Thus when
security constrained OPF is implemented to support real-time operations, the
objective function tends to be a device whose purpose is to guide the OPF to
find the solution that is optimal from an operational perspective, rather than one
which represents a quantity to be minimized.
Some examples of non-economic operational considerations that a
utility might put into its objective function are:
• a preference for a small number of control actions;
• a preference to keep a control away from its limit;
• the relative preference or reluctance for preventive versus
post-contingent action when treating contingencies; and
• a preference for tolerating small constraint violations rather than
taking control action.
The most significant shortcoming of the objective function method is
that it is difficult (sometimes impossible) to establish an objective function that
consistently reflects true production costs and other non-economic operational
considerations.
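
A small sketch may make the augmented-objective idea concrete: economic cost plus fictitious costs on control movement, so that among otherwise comparable solutions the one with fewer and smaller control actions wins. The cost structure and every coefficient below are assumptions for illustration, not a standard formulation.

```python
# Sketch of an OPF objective augmented with fictitious control costs.
# All coefficients are illustrative assumptions.

def production_cost(pg, a, b, c):
    """Quadratic MW production cost ($/h) for one generator."""
    return a + b * pg + c * pg ** 2

def augmented_objective(pg_list, cost_coeffs, controls, controls_base,
                        move_penalty=50.0):
    """Economic cost plus a fictitious cost for moving each control,
    expressing a preference for few/small control actions."""
    econ = sum(production_cost(pg, *k) for pg, k in zip(pg_list, cost_coeffs))
    move = sum(move_penalty * abs(u - u0)
               for u, u0 in zip(controls, controls_base))
    return econ + move

# Two candidate solutions serving the same load, one of which also moves
# two transformer taps away from their base settings:
coeffs = [(100.0, 20.0, 0.05), (120.0, 25.0, 0.04)]
base_taps = [1.00, 1.00]
cand_a = augmented_objective([300.0, 200.0], coeffs, [1.00, 1.00], base_taps)
cand_b = augmented_objective([290.0, 210.0], coeffs, [1.05, 0.95], base_taps)
print("prefer A" if cand_a < cand_b else "prefer B")
```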
2) Rules
Rules are used in methods relying on expert systems techniques. A
rule-based method is appropriate when it is possible to specify rules for
choosing between candidate solutions easier than by modeling these choices via
an objective function. Optimization methods that use rules typically search for a
rule that matches the problem addressed. The rule indicates the appropriate
decision (e.g., control action) for the situation. The main weakness of a
rule-based approach is that the rule base does not provide a continuum in the solution
space. Therefore, it may be difficult to offer guidance for the OPF from the rule
base when the predefined situations do not exist in the present power system
state.
Rules can play another important role when the OPF is used in the
real-time environment. The real-time OPF problem definition itself can be
ill-defined, and rules may be used to adapt the OPF problem definition to the
current state of the power system.

Optimization Subject to Security Constraints

The conventional OPF formulation seeks to minimize an objective
function subject to security constraints, often presented as “hard constraints,” for
which even small violations are not acceptable. A purely analytical formulation
might not always lead to solutions that are optimal from an operational
perspective. Therefore, the OPF formulation should be regarded as a framework
in which to understand and discuss security optimization problems, rather than
as a fundamental representation of the problem itself.
1) Security Optimization for the Base Case State
Consider the security optimization problem for the base case state
ignoring contingencies. The power system is considered secure if there are no
constraint violations in the base case state. Thus any control action required will
be corrective action. The aim of the OPF is to find the optimal corrective action.
When the objective function is defined to be the MW production costs,
the problem becomes the classical active and reactive power constrained
dispatch. When the objective function is defined to be the active power
transmission losses, the problem becomes one of active power loss
minimization.
2) Security Optimization for Base Case and Contingency States
Now consider the security optimization problem for the base case and
contingency states. The power system is considered secure if there are no
constraint violations in the base case state, and all contingencies are manageable
with post-contingent control action. In general, this means that base case control
action will be a combination of corrective and preventive actions and that
post-contingent control action will be provided in a set of contingency plans. The
aim of the OPF is then to find the set of base case control actions plus
contingency plans that is optimal.
Dealing with contingencies requires solving an OPF involving multiple
networks, consisting of the base case network and each contingency network.
To obtain an optimal solution, these individual network problems must be
formulated as a multiple network problem and solved in an integrated fashion.
The integrated solution is needed because any base case control action will
affect all contingency states, and the more a given contingency can be addressed
with post-contingency control action, the less preventive action is needed for
that contingency.
When an operator is not willing to take preventive action, then all
contingencies must be addressed with post-contingent control action. The
absence of base case control action decouples the multiple network problems
into a single network problem for each contingency. When an operator is not
willing to rely on post-contingency control action, then all contingencies must
be addressed with preventive action. In this instance, the cost of the preventive
action is preferred over the risk of having to take control action in the
post-contingency state. The absence of post-contingency control action means that
the multiple network problem may be represented as the single network problem
for the base case, augmented with post-contingent constraints.
Security optimization for base case and contingency states will involve
base case corrective and preventive action, as well as contingency plans for
post-contingency action. To facilitate finding the optimal solution, the objective
function and rules that reflect operating policy are required. For example, if it is
preferred to address contingencies with post-contingency action rather than
preventive action, then post-contingent controls may be modeled as having a
lower cost in the objective function. Similarly, a preference for preventive
action over contingency plans could be modeled by assigning the
post-contingent controls a higher cost than the base case controls. Some
contingencies are best addressed with post-contingent network switching. This
can be modeled as a rule that for a given contingency, switching is to be
considered before other post-contingency controls.
3) Soft Constraints
Another form of security optimization involves “soft” security
constraints that may be violated but at the cost of incurring a penalty. This is a
more sophisticated method that allows a true security/economy trade-off. Its
disadvantage is that it requires a penalty function modeled consistently with the
objective function. When a feasible solution is not possible, this is perhaps the
best way to guide the algorithm toward finding an “optimal infeasible” solution.
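
A minimal sketch of such a soft constraint, assuming a quadratic penalty whose weight must be chosen consistently with the units of the objective function:

```python
# Soft branch-flow constraint: a violation is allowed, but it adds a
# penalty to the objective. The weight is an illustrative assumption.

def soft_limit_penalty(flow_mw, limit_mw, weight=10.0):
    """Zero inside the limit; grows quadratically with the violation."""
    violation = max(0.0, abs(flow_mw) - limit_mw)
    return weight * violation ** 2

objective = 25_000.0                               # economic cost ($/h), assumed
objective += soft_limit_penalty(flow_mw=510.0, limit_mw=500.0)
print(f"objective with penalty: {objective:.1f}")  # 26000.0
```

With the penalty small for minor violations and steep for large ones, the optimizer can trade a slight overload against the cost of additional control action, which is exactly the security/economy trade-off discussed here.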
4) Security versus Economy
As a general rule, economy must be compromised for security.
However, in some cases security can be traded off for economy. If the
constraint violations are small enough, it may be preferable to tolerate them in
return for not having to make the control moves. Many constraint limits are not
truly rigid and can be relaxed. Thus, in general, the security optimization
problem seeks to determine the proper balance of security and economy. When
security and economy are treated on the same basis, it is necessary to have a
measure of the relative value of a secure, expensive state relative to a less
secure, but also less expensive state.
5) Infeasibility
If a secure state cannot be achieved, there is still a need for the least
insecure operating point. For OPF, this means that when a feasible solution
cannot be found, it is still important that OPF reach a solution, and that this
solution be “optimal” in some sense, even though it is infeasible. This is
especially appropriate for OPF problems that include contingencies in their
definition. The OPF program needs to be capable of obtaining the “optimal
infeasible” solution. There are several approaches to this problem. Perhaps the
best approach is one that allows the user to model the relative importance of
specific violations, with this modeling then reflected in the OPF solution. This
modeling may involve the objective function (i.e., penalty function) or rules, or
both.

The Time Variable

The preceding discussion assumes that all network states are based on
the same (constant) frequency, and all transient effects due to switching and
outages are assumed to have died out. While bus voltages and branch flows are,
in general, sinusoidal functions of time, only the amplitudes and phase
relationships are used to describe network state. Load, generation, and
interchange schedules change slowly with time, but are treated as constant in the
steady state approximation. There are still some aspects of the time variable that
need to be accounted for in the security optimization problem.
1) Time Restrictions on Violations and Controls
The limited amount of time to correct constraint violations is a security
concern. This is because branch flow thermal limits typically have several
levels of rating (normal, emergency, etc.), each with its maximum time of
violation. (The higher the rating, the shorter the maximum time of violation.)
Voltage limits have a similar rating structure and there is very little time to
recover from a violation of an emergency voltage rating.
Constraint violations need to be corrected within a specific amount of
time. This applies to violations in contingency states as well as actual violations
in the base case state. Base case violations, however, have the added
seriousness of the elapsed time of violation: a constraint that has been violated
for a period of time has less time to be corrected than a constraint that has just
gone into violation.
The situation is further complicated by the fact that controls cannot
move instantaneously. For some controls, the time required for movement is
significant. Generator ramp rates can restrict the speed with which active power
is rerouted in the network. Delay times for switching capacitors and reactors
and transformer tap changing mechanisms can preclude the immediate
correction of serious voltage violations. If the violation is severe enough, slow
controls that would otherwise be preferred may be rejected in favor of fast, less
preferred controls. When the violation is in the contingency state, the time
criticality may require the solution to choose preventive action even though a
contingency plan for post-contingent corrective action might have been possible
for a less severe violation.
2) Time in the Objective Function
It is common for the MW production costs to dominate the character of
the objective function for OPF users. The objective function involves the time
variable to the extent that the OPF is minimizing a time rate of change. This is
also the case when the OPF is used to minimize the cost of imported power or
active power transmission losses. Not all controls in the OPF can be “costs” in
terms of dollars per hour. The start-up cost for a combustion turbine, for
example, is expressed in dollars, not dollars per hour. The costing of reactive
controls is even more difficult, since the unwillingness to move these controls is
not easily expressed in either dollars or dollars per hour. OPF technology
requires a single objective function, which means that all control costs must be
expressed in the same units. There are two approaches to this problem:
• Convert dollar per hour costs into dollar costs by specifying a time
interval for which the optimization is to be valid. Thus control
costs in dollars per hour multiplied by the time interval, yield
control costs in dollars. This is now in the same units as controls
whose costs are “naturally” in dollars. This approach thus
“integrates” the time variable out of the objective function
completely. This may be appropriate when the OPF solution is
intended for a well-defined (finite) period of time.
• Regard all fixed control costs (expressed in dollars) as occurring
repeatedly in time and thus having a justified conversion into
dollars per hour. For example, the expected number of times per
year that a combustion turbine is started defines a cost per unit
time for the start-up of the unit. Similarly, the unwillingness to
move reactive controls can be thought of as reluctance over and
above an acceptable amount of movement per year. This approach
may be appropriate when the OPF is used to optimize over a
relatively long period of time.
• Simply adjust the objective function empirically so that the OPF
provides acceptable solutions. This method can be regarded as an
example of either of the first two approaches.
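
A short numerical illustration of the first two approaches, with every figure assumed:

```python
# Reconciling $/h and $ control costs; all numbers are assumptions.

# Approach 1: convert $/h into $ over a stated optimization interval.
interval_h = 2.0                          # OPF solution valid for 2 hours
fuel_cost_per_h = 12_000.0                # $/h
fuel_cost_dollars = fuel_cost_per_h * interval_h          # 24,000 $

# Approach 2: convert a fixed $ cost into $/h via its expected frequency.
startup_cost = 8_000.0                    # $ per combustion-turbine start
starts_per_year = 50
hours_per_year = 8760.0
startup_per_h = startup_cost * starts_per_year / hours_per_year  # ~45.7 $/h

print(fuel_cost_dollars, round(startup_per_h, 1))
```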
Using an Optimal Power Flow Program
OPF programs are used both in on-line and in off-line (study mode)
studies. The two modes are not the same.
1) On-line Optimal Power Flow
The solution speed of an on-line OPF should be high enough so that the
program completes before the power system configuration has changed
appreciably. Thus the on-line OPF should be fast enough to run several times per
hour. The values of the algorithm’s input parameters should be valid over a
wide range of operating states, such that the program continues to function as
the state of the system changes. Moreover, the application needs to address the
correct security optimization problem, and its solutions must conform to current
operating policy.
2) Advisory Mode versus Closed Loop Control
On-line OPF programs are implemented in either advisory or closed
loop mode. In advisory mode, the control actions that constitute the OPF
solution are presented as recommendations to the operator. For closed loop
OPF, the control actions are actually implemented in the power system, typically
via the SCADA subsystem of the Energy Management System. The advisory
mode is appropriate when the control actions need review by the dispatcher
before their implementation. Closed loop control for security optimization is
appropriate for problems that are so well defined that dispatcher review of the
control actions is not necessary. An example of closed loop on-line OPF is the
Constrained Economic Dispatch (CED) function. Here, the constraints are the
active power flows on transmission lines, and the controls are the MW output of
generators on automatic generation control (AGC). When the conventional
Economic Dispatch would otherwise tend to overload the transmission lines in
its effort to minimize production costs, the CED function supplies a correction
to the controls to avoid the overloads. Security optimization programs that
include active and reactive power constraints and controls, in contingency states
as well as in the base case, are implemented in an advisory mode. Thus the
results of the on-line OPF are communicated to the dispatchers via EMS
displays. Considering the typical demands on the dispatchers’ time and
attention in the control center, the user interface for on-line OPF needs to be
designed such that the relevant information is communicated to the dispatchers
“at-a-glance.”
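
The CED function mentioned above can be sketched as a small linear program: minimize production cost subject to load balance and a branch MW limit expressed through generation shift factors. The shift factors, costs, and limits below are all assumed for illustration.

```python
# Constrained economic dispatch sketch: two generators, one binding
# branch limit modeled with assumed generation shift factors.
from scipy.optimize import linprog

c = [22.0, 28.0]                    # incremental costs ($/MWh), assumed
load = 500.0                        # MW to be served

A_ub = [[0.6, 0.2]]                 # branch flow ~ 0.6*P1 + 0.2*P2 (assumed)
b_ub = [250.0]                      # branch MW limit
A_eq = [[1.0, 1.0]]                 # P1 + P2 = load
b_eq = [load]
bounds = [(0.0, 400.0), (0.0, 400.0)]   # unit MW limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)   # ~[375, 125]: cheaper unit backed off to respect the limit
```

Without the branch constraint the cheap unit would run at its 400 MW limit; the correction that backs it down to about 375 MW is the kind of adjustment the CED supplies to the conventional economic dispatch.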
3) Defining the Real-time Security Optimization Problem
As the power system state changes through time, the various aspects of
the security optimization problem definition can change their relative
importance. For example, concern for security against contingencies may be a
function of how secure the base case is. If the base case state has serious
constraint violations, one may prefer to concentrate on corrective action alone,
ignoring the risk of contingencies. In addition, the optimal balance of security
and economy may depend on the current security state of the power system.
During times of emergency, cost may play little or no role in determining the
optimal control action. Thus the security optimization problem definition itself
can be dynamic and sometimes ill-defined.

CONTINGENCY ANALYSIS

Contingency analysis indicates to the operator what might happen to
the system in the event of unplanned equipment outage. It essentially offers
answers to questions such as “What will be the state of the system if an outage
on part of the major transmission system takes place?” The answer might be
that power flows and voltages will readjust and remain within acceptable limits,
or that severe overloads and under-voltages will occur with potentially severe
consequences should the outage take place.
A severe overload, persisting long enough, can damage equipment of
the system, but usually relays are activated to isolate the affected equipment
once it fails. The outage of a second component due to relay action is more
serious and often results in yet more readjustment of power flows and bus
voltages. This can in turn cause more overloads and further removal of
equipment. An uncontrollable cascading series of overloads and equipment
removals may then take place, resulting in the shutting down of a significant
portion of the system.
The motivation to use contingency analysis tools in an EMS is that
when forewarned the operator can initiate preventive action before the event to
avoid problems should an outage take place. From an economic point of view,
the operator strives to avoid overloads that might directly damage equipment, or
worse, might cause the system to lose a number of components due to relay
action and then cause system-wide outages.
Insulation breakdown, over-temperature relay action, or simply
incorrect operation of relay devices are internal causes of contingencies. External
contingencies are caused by environmental effects such as lightning, high winds
and ice conditions or else are related to some non-weather related events such as
vehicle or aircraft coming into contact with equipment, or even human or animal
direct contact. These causes are treated as unscheduled, random events, which
the operator cannot anticipate but for which they must be prepared.
The operators must play an active role in maintaining system security.
The first step is to perform contingency analysis studies frequently enough to
assure that system conditions have not changed significantly from the last
execution. The outcome of contingency analysis is a series of warnings or
alarms to the operators alerting them that loss of component A will result in an
overload of X% on line T1. To achieve an accurate picture of the system's
exposure to outage events several points need to be considered:
A) System Model
Contingency analysis is carried out using a power flow model of the
system. Additional information about system dynamics is needed to assess
stability as well. Voltage levels and the geographic extent to include in the
model are issues to be considered. In practice, all voltage levels that have any
possibility of connecting circuits in parallel with the high voltage system are
included. This leaves out those that are radial to it, such as distribution networks.
While the geographical extent is difficult to evaluate, it is common to model the
system to the extent real-time measurement data is available to support the
model.
B) Contingency Definition
Each modeled contingency has to be specified on its own. The simplest
definition is to name a single component. This implies that when the model of
the system is set up, this contingency will be modeled by removing the single
component specified. Another important consideration is the means of
specifying the component outage. The component can be specified by name,
such as a transmission line name, or more accurately, a list of circuit breakers
can be specified as needing to be operated to correctly model the outage of the
component. Contingencies that require more than one component to be taken
out together must be defined as well. There is an advantage here to using a “list
of breakers” in that the list is simply expanded to include all breakers necessary
to remove all relevant equipment.
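
One plausible data representation of these two styles of contingency definition, with all names and fields chosen purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Contingency:
    """A contingency named either by component or by breaker list."""
    name: str
    components: list = field(default_factory=list)  # style 1: named equipment
    breakers: list = field(default_factory=list)    # style 2: breakers to open

# Single-component outage specified by name:
single = Contingency("Line A-B out", components=["LINE_A_B"])

# Multiple-component outage: the breaker list is simply expanded to
# cover all equipment that must be removed together.
double_circuit = Contingency(
    "Double-circuit tower outage",
    breakers=["CB_101", "CB_102", "CB_201", "CB_202"],
)
```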
C) Double Contingencies
A double contingency is the overlapping occurrence of two
independent contingent events. To be specific, one outside event causes an
outage and while this outage is still in effect, a second totally independent event
causes another component to be taken out. The overlap of the two outages often
causes overloads and under-voltages that would not occur if either happened
separately. As a result, many operators require that a contingency analysis
program be able to take two independent contingencies and model them as if
they had happened in an overlapping manner.
D) Contingency List
Generally, contingency analysis programs are executed based on a list of
valid contingencies. The list might consist of all single component outages
including all transmission lines, transformers, substation buses, and all generator
units. For a large interconnected power system just this list alone could result in
thousands of contingency events to be tested. If the operators wished to model
double contingencies, the number grows to millions of possible events.
Methods of selecting a limited set of priority contingencies are then needed.
E) Speed
Generally, operators need to have results from a contingency analysis
program on the order of a few minutes up to fifteen minutes. Anything longer
means that the analysis is running on a system model that does not reflect
current system status and the results may not be meaningful.
F) Modeling Detail
The detail required for a contingency case is usually the same as that
used in a study power flow. That is, each contingency case requires a fully
converged power flow that correctly models each generator's VAR limits and
each tap adjusting transformer's control of voltage.

Historical Methods of Contingency Analysis

There is a conflict between the accuracy with which the power system
is modeled and the speed required for modeling all the contingencies specified
by the operator. If the contingencies can be evaluated fast enough, then all cases
specified on the contingency list are run periodically and alarms reported to the
operators. This is possible if the computation for each outage case can be
performed very fast or else the number of contingencies to be run is very small.
The number of contingency cases to be solved in common energy management
systems is usually a few hundred to a few thousand cases. This, coupled with the
fact that the results are to be as accurate as if run with a full power flow program,
makes the execution of a contingency analysis program within an acceptable time
frame extremely difficult.

Selection of Contingencies to be Studied

A full power flow must be used to solve for the resulting flows and
voltages in a power system with serious reactive flow or voltage problems when
an outage occurs. In this case, the operators of large systems looking at a large
number of contingency cases may not be able to get results soon enough. A
significant speed increase could be obtained by simply studying only the
important cases, since most outages do not cause overloads or under-voltages.
1) Fixed List
Many operators can identify important outage cases and they can get
acceptable performance. The operator chooses the cases based on experience
and then builds a list for the contingency analysis program to use. It is possible,
however, that a case assumed to be safe may present a problem because the
assumptions used in making the list are no longer true.
2) Indirect Methods (Sensitivity-Based Ranking Methods)
An alternative way to produce a reduced contingency list is to perform
a computation to indicate the possible bad cases and perform it as often as the
contingency analysis itself is run. This builds the list of cases dynamically and
the cases that are included in the list may change as conditions on the power
system change. This requires a fast approximate evaluation to discover those
outage cases that might present a real problem and require further detailed
evaluation by a full power flow. Normally, a sensitivity method based on the
concept of a network performance index is employed. The idea is to calculate a
scalar index that reflects the loading on the entire system.
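
A common choice is a sum of normalized branch loadings; the sketch below uses the familiar quadratic form, with the exponent and all flow data assumed for illustration.

```python
# Network performance index for contingency ranking: a scalar that
# grows as branches approach or exceed their MW limits. Data assumed.

def performance_index(flows_mw, limits_mw, n=1):
    """PI = sum over branches of (|flow| / limit)^(2n)."""
    return sum((abs(f) / lim) ** (2 * n)
               for f, lim in zip(flows_mw, limits_mw))

# Approximate post-outage flow estimates for three candidate outages:
cases = {
    "outage_1": [410.0, 120.0, 95.0],
    "outage_2": [300.0, 310.0, 80.0],
    "outage_3": [250.0, 140.0, 60.0],
}
limits = [400.0, 300.0, 100.0]
ranked = sorted(cases, key=lambda k: performance_index(cases[k], limits),
                reverse=True)
print(ranked)   # worst-first short list for full power flow analysis
```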
3) Comparison of Direct and Indirect Methods
Direct methods are more accurate and selective than the indirect ones at
the expense of increased CPU requirements. The challenge is to improve the
efficiency of the direct methods without sacrificing their strengths. Direct
methods assemble severity indices using monitored quantities (bus voltages,
branch flows, and reactive generation), that have to be calculated first. In
contrast, the indirect methods calculate severity indices explicitly without
evaluating the individual quantities. Therefore, indirect methods are usually less
computationally demanding. Knowing the individual monitored quantities
enables one to calculate severity indices of any desired complexity without
significantly affecting the numerical performance of direct methods. Therefore,
more attention has been paid recently to direct methods for their superior
accuracy (selectivity). This has led to drastic improvements in their efficiency
and reliability.
4) Fast Contingency Screening Methods
To build a reduced list of contingencies one uses a fast solution
(normally an approximate one) and ranks the contingencies according to its
results. Direct contingency screening methods can be classified by the
embedded modeling assumptions. Two distinct classes of methods can be
identified:
a) Linear methods specifically intended to screen contingencies
for possible real power (branch MW overload) problems.
b) Nonlinear methods intended to detect both real and reactive
power problems (including voltage problems).
Bounding methods offer the best combination of numerical efficiency
and adaptability to system topology changes. These methods determine the
parts of the network in which branch MW flow limit violations may occur. A
linear incremental solution is performed only for the selected system areas rather
than for the entire network. The accuracy of the bounding methods is only
limited by the accuracy of the incremental linear power flow.
Nonlinear methods are designed to screen the contingencies for reactive
power and voltage problems. They can also screen for branch flow problems
(both MW and MVA/AMP). Recently proposed enhancements include attempts
to localize the outage effects, and speeding the nonlinear solution of the entire
system.
An early localization method is the “concentric relaxation,” which
solves a small portion of the system in the vicinity of the contingency while
treating the remainder of the network as an “infinite expanse.” The area to be
solved is concentrically expanded until the incremental voltage changes along
the last solved tier of buses are not significantly affected by the inclusion of an
additional tier of buses. The method suffered from unreliable convergence; lack
of consistent criteria for the selection of buses to be included in the small
network; and the need to solve a number of different systems of increasing size
resulting from concentric expansion of the small network (relaxation).
Different attempts have been made at improving the efficiency of the
large system solution. They can be classified as speeding up the solution by
means of:
1) Approximations and/or partial (incomplete) solutions.
2) Using network equivalents (reduced network representation).
The first approach involves the “single iteration” concept to take
advantage of the speed and reasonably fast convergence of the Fast Decoupled
Power Flow to limit the number of iterations to one. The approximate, first
iteration solution can be used to check for major limit violations and the
calculation of different contingency severity measures. The single iteration
approach can be combined with other techniques like the use of the reduced
network representations to improve numerical efficiency.
An alternative approach is based upon bounding of outage effects.
Similar to the bounding in linear contingency screening, an attempt is made to
perform a solution only in the stressed areas of the system. A set of bounding
quantities is created to identify buses that can potentially have large reactivemismatches. The actual mismatches are then calculated and the forward
solution is performed only for those with significant mismatches. All bus
voltages are known following the backward substitution step and a number of
different severity indices can be calculated.
The zero mismatch (ZM) method extends the application of localization
ideas from contingency screening to full iterative simulation. Advantage is
taken of the fact that most contingencies significantly affect only small portions
(areas) of the system. Significant mismatches occur only in very few areas of
the system being modeled. There is a definite pattern of very small mismatches
throughout the rest of the system model. This is particularly true for localizable
contingencies, e.g., branch outages, bus section faults. Consequently, it should
be possible to utilize this knowledge and significantly speed up the solution of
such contingencies. The following is a framework for the approach:
1) Bound the outage effects for the first iteration using, for example, a
version of the complete boundary.
2) Determine the set of buses with significant mismatches resulting
from angle and magnitude increments.
3) Calculate mismatches and solve for new increments.
4) Repeat the last two steps until convergence occurs.
The main difference between the zero mismatch and the concentric
relaxation methods is in the network representation. The zero mismatch method
uses the complete network model while a small cutoff representation is used in
the latter one. The zero mismatch approach is highly reliable and produces
results of acceptable accuracy because of the accuracy of the network
representation and the ability to expand the solution to any desired bus.

POWER SYSTEM SECURITY

By power system security, we understand a qualified absence of risk of
disruption of continued system operation. Security may be defined from a
control point of view as the probability of the system's operating point remaining
in a viable state space, given the probabilities of changes in the system
(contingencies) and its environment (weather, customer demands, etc.).
Security can be defined in terms of how it is monitored or measured, as the
ability of a system to withstand without serious consequences any one of a preselected
list of “credible” disturbances (“contingencies”). Conversely,
insecurity at any point in time can be defined as the level of risk of disruption of
a system's continued operation.
Power systems are interconnected for improved economy and
availability of supplies across extensive areas. Small individual systems would
be individually more at risk, but widespread disruptions would not be possible.
On the other hand, interconnections make widespread disruptions possible.
Operation of interconnected power systems demands nearly precise
synchronism in the rotational speed of many thousands of large interconnected
generating units, even as they are controlled to continuously follow significant
changes in customer demand. There is considerable rotational energy involved,
and the result of any cascading loss of synchronism among major system
elements or subsystems can be disastrous. Regardless of changes in system load
or sudden disconnection of equipment from the system, synchronized operation
requires proper functioning of machine governors, and that operating conditions
of all equipment remain within physical capabilities.
The risk of cascading outages still exists, despite improvements made
since the 1965 northeast blackout in the United States. Many factors increase
the risks involved in interconnected system operation:
• Wide swings in the costs of fuels result in significant changes in
the geographic patterns of generation relative to load. This leads to
transmission of electric energy over longer distances in patterns
other than those for which the transmission networks had been
originally designed.
• Rising costs due to inflation and increasing environmental
concerns constrain any relief through further transmission
construction. Thus, transmission, as well as generation, must be
operated closer to design limits, with smaller safety (security)
margins.
• Relaxation of energy regulation permits sales of electric energy
by independent power producers, together with increasing pressure
for essentially uncontrolled access to the bulk power transmission
network.

Development of the Concept of Security

Prior to the 1965 Northeast blackout, system security was part of
reliability assured at the system planning stage by providing a strong system that
could ride out any “credible” disturbances without serious disruption. It is no
longer economically feasible to design systems to this standard. At that time,
power system operators made sure that sufficient spinning reserve was on line to
cover unexpected load increases or potential loss of generation and to examine
the impact of removing a line or other apparatus for maintenance. Whenever
possible, the operator attempted to maintain a desirable voltage profile by
balancing VARs in the system.
Security monitoring is perceived as that of monitoring, through
contingency analysis, the conditional transition of the system into an emergency
state.

Two Perspectives of Security Assessment

There is a need to clarify the roles of security assessment in the
planning and real-time operation environments. The possible ambiguity is the
result of the shift of focus from that of system robustness designed at the
planning stage as part of reliability, to that of risk avoidance that is a matter
operators must deal with in real time. The planner is removed from the
time-varying real world environment within which the system will ultimately
function. The term “security” within a planning context refers to those aspects
of reliability analysis that deal with the ability of the system, as it is expected to
be constituted at some future time, to withstand unexpected losses of certain
system components. Reliability has frequently been considered to consist of
adequacy and security. Adequacy is the ability to supply energy to satisfy load
demand. Security is the ability to withstand sudden disturbances. This
perspective overlooks the fact that the most reliable system will ultimately
experience periods of severe insecurity from the operator’s perspective. System
operations is concerned with security as it is constituted at the moment, with a
miscellaneous variety of elements out for maintenance, repair, etc., and exposed
to environmental conditions that may be very different from the normal
conditions considered in system planning. In operations, systems nearly always
have less than their full complement of equipment in service. As a result, an
operator must often improvise to improve security in ways that are outside the
horizon of planners.

Security Assessment Defined

Security assessment involves using available data to estimate the
relative security level of the system currently or at some near-term future state.
Approaches to security assessment are classified as either direct or indirect.
• The direct approach: This approach evaluates the likelihood of
the system operating point entering the emergency state. It
calculates the probability that the power system state will move
from normal state to emergency state, conditioned on its current
state, projected load variations, and ambient conditions. It is
common practice to assess security by analyzing a fixed set of
contingencies. The system is declared as insecure if any member
of the set would result in transition to the emergency state. This is
a limiting form of direct assessment, since it implies a probability
of one of the system's being in the emergency state conditioned on
the occurrence of any of the defined contingencies.
• The indirect approach: Here a number of reserve margins are
tracked relative to predetermined levels deemed adequate to
maintain system robustness vis-a-vis pre-selected potential
disturbances. An indirect method of security assessment defines a
set of system “security” variables that should be maintained within
predefined limits to provide adequate reserve margins.
Appropriate variables might include MW reserves, equipment
emergency ratings (line, transformer, etc.), or VAR reserves within
defined regions. The reserve margins to be maintained for each of
the security variables could be determined by offline studies for an
appropriate number of conditions with due consideration to the
degree to which random events can change the security level of a
system in real time. Security assessment then would consist of
tracking all such reserve margins relative to system conditions.
For a number of years, security concerns dealt with potential
post-contingency line overloads, and because line MW loading can be studied
effectively by means of a linear system network model, it was possible to study
the effects of contingencies using linear participation or distribution factors.
Once derived for a given system configuration, they could be applied without
further power flow analysis to determine post-contingency line loading even, by
superposition, for multiple contingencies. Such a computationally simple
method of analysis made on-line contingency assessment practicable for
“thermal security,” where reactive flows were not of concern.
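
A minimal sketch of that superposition, assuming line outage distribution factors (LODFs) have already been derived for the current network configuration; all flows and factors below are illustrative.

```python
# Post-contingency MW flow on a monitored line via line outage
# distribution factors, superposed for a double outage as the text
# describes (a first-order approximation). All values are assumed.

f_mon = 180.0     # pre-contingency flow on the monitored line (MW)
f_out1 = 250.0    # pre-contingency flow on outaged line 1 (MW)
f_out2 = 90.0     # pre-contingency flow on outaged line 2 (MW)
lodf1 = 0.45      # share of line 1's flow shifted onto the monitored line
lodf2 = 0.20      # same for line 2

single = f_mon + lodf1 * f_out1                    # one outage
double = f_mon + lodf1 * f_out1 + lodf2 * f_out2   # superposed double outage
print(single, double)                              # 292.5 310.5
```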
More recently, post-contingency voltage behavior has become a
prominent element in security assessment. Assessment of “voltage security” is a
complex process because the behavior of a system undergoing voltage collapse
cannot be completely explained on the basis of static analysis alone.

Implications of Security

The trend towards reducing the costs associated with robust systems
has led to heightened requirements of active security control. This necessitates
an increase in the responsibilities of the system operator. Accordingly, it
requires operator training and the development and provision of tools that will
enable the operator to function effectively in the new environment.

Security Analysis

On-line security analysis and control involve the following three
ingredients:
• Monitoring
• Assessment
• Control
The following framework relates the three modules:
Step 1. Security Monitoring: Identify whether the system is in the
normal state or not using real-time system measurements. If the system
is in an emergency state, go to step 4. If load has been lost, go to step
5.
Step 2. Security Assessment: If the system is in the normal state,
determine whether the system is secure or insecure with respect to a set
of next contingencies.
Step 3. Security Enhancement: If insecure, i.e., there is at least one
contingency which can cause an emergency, determine what action to
take to make the system secure through preventive actions.
Step 4. Emergency Control (remedial action): Perform proper
corrective action to bring the system back to the normal state following
a contingency which causes the system to enter an emergency state.
Step 5. Restorative Control: Restore service to system loads.
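
Read as control logic, the five steps form a simple dispatch loop. The skeleton below is only a sketch; every analysis routine is a stub with an invented name standing in for the real engines.

```python
# Skeleton of the monitoring/assessment/control framework above.
# All function names and return values are illustrative stubs.

def security_cycle(measurements, contingency_set):
    state = classify_state(measurements)            # Step 1: monitoring
    if state == "emergency":
        return emergency_control()                  # Step 4: remedial action
    if state == "restorative":
        return restore_load()                       # Step 5: restoration
    insecure = [c for c in contingency_set          # Step 2: assessment
                if causes_emergency(c, measurements)]
    if insecure:
        return preventive_actions(insecure)         # Step 3: enhancement
    return "system normal and secure"

# Stubs standing in for the real analysis engines:
def classify_state(m): return "normal"
def causes_emergency(c, m): return False
def emergency_control(): return "corrective action"
def restore_load(): return "restore service to loads"
def preventive_actions(cs): return f"preventive action for {len(cs)} cases"

print(security_cycle({}, ["line_1_out", "gen_2_out"]))
```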
Security analysis and control have been implemented in modern energy
management systems.

POWER FLOW CONTROL

The following means are used to control system power flows:
1. Prime mover and excitation control of generators.
2. Switching of shunt capacitor banks, shunt reactors, and static var
systems.
3. Control of tap-changing and regulating transformers.
4. FACTS based technology.
A simple model of a generator operating under balanced steady-state
conditions is given by the Thévenin equivalent of a round rotor synchronous
machine connected to an infinite bus as discussed in Chapter 3. V is the
generator terminal voltage, E is the excitation voltage, δ is the power angle, and
X is the positive-sequence synchronous reactance. We have shown that:

P = (EV/X) sin δ
Q = (V/X)(E cos δ − V)

The active power equation shows that the active power P increases
when the power angle δ increases. From an operational point of view, when the
operator increases the output of the prime mover to the generator while holding
the excitation voltage constant, the rotor speed increases. As the rotor speed
increases, the power angle δ also increases, causing an increase in generator
active power output P. There is also a decrease in reactive power output Q,
given by the reactive power equation. However, when δ is less than 15°, the
increase in P is much larger than the decrease in Q. From the power-flow point
of view, an increase in prime-mover power corresponds to an increase in P at
the constant-voltage bus to which the generator is connected. A power-flow
program will compute the increase in δ along with the small change in Q.
The reactive power equation demonstrates that reactive power output Q
increases when the excitation voltage E increases. From the operational point of
view, when the generator exciter output increases while holding the prime-mover
power constant, the rotor current increases. As the rotor current
increases, the excitation voltage E also increases, causing an increase in
generator reactive power output Q. There is also a small decrease in δ required
to hold P constant in the active power equation. From the power-flow point of
view, an increase in generator excitation corresponds to an increase in voltage
magnitude at the infinite bus (constant voltage) to which the generator is
connected. The power-flow program will compute the increase in reactive
power Q supplied by the generator along with the small change in δ.
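Both operating effects can be verified numerically from these two equations;
the per-unit values below are illustrative assumptions, not data from the
text.

import math

X = 1.2                        # synchronous reactance (pu, assumed)

def pq(E, V, delta_deg):
    # P = (E*V/X) sin(delta), Q = (V/X)(E cos(delta) - V)
    d = math.radians(delta_deg)
    return (E * V / X) * math.sin(d), (V / X) * (E * math.cos(d) - V)

# Raising prime-mover input (delta grows, E fixed): P rises much faster
# than Q falls while delta stays below about 15 degrees.
for d in (5, 10, 15):
    p, q = pq(1.05, 1.0, d)
    print(f"delta={d:2d} deg: P={p:+.3f} pu  Q={q:+.3f} pu")

# Raising excitation (E grows, delta fixed): Q rises.
for E in (1.00, 1.05, 1.10):
    p, q = pq(E, 1.0, 10)
    print(f"E={E:.2f} pu: Q={q:+.3f} pu")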
The effect of adding a shunt capacitor bank to a power-system bus can
be explained by considering the Thévenin equivalent of the system at that bus.
This is simply a voltage source VTh in series with the impedance Zsys. The bus
voltage V before connecting the capacitor is equal to VTh. After the bank is
connected, the capacitor current IC leads the bus voltage V by 90°. Constructing
a phasor diagram of the network with the capacitor connected to the bus reveals
that V is larger than VTh. From the power-flow standpoint, the addition of a
shunt capacitor bank to a load bus corresponds to the addition of a reactive
generating source (negative reactive load), since a capacitor produces positive
reactive power (absorbs negative reactive power). The power-flow program
computes the increase in bus voltage magnitude along with a small change in δ.
Similarly, the addition of a shunt reactor corresponds to the addition of a
positive reactive load, wherein the power flow program computes the decrease
in voltage magnitude.
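The phasor argument can be checked numerically; the Thévenin and capacitor
values below are assumed purely for illustration.

# Adding a shunt capacitor at a bus raises its voltage above the
# Thevenin voltage. Illustrative per-unit values.
Vth = complex(1.0, 0.0)        # Thevenin source voltage
Zsys = complex(0.0, 0.1)       # system (Thevenin) impedance, mostly reactive
Zc = complex(0.0, -2.0)        # capacitor bank impedance, 1/(jwC)

Ic = Vth / (Zsys + Zc)         # capacitor current (leads the bus voltage)
V = Vth - Zsys * Ic            # bus voltage after connecting the bank

print(f"|V| with capacitor = {abs(V):.4f} pu (> |Vth| = {abs(Vth):.4f} pu)")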
Tap-changing and voltage-magnitude-regulating transformers are used
to control bus voltages as well as reactive power flows on lines to which they
are connected. In a similar manner, phase-angle-regulating transformers are
used to control bus angles as well as real power flows on lines to which they are
connected. Both tap changing and regulating transformers are modeled by a
transformer with an off-nominal turns ratio. From the power flow point of view,
a change in tap setting or voltage regulation corresponds to a change in tap ratio.
The power-flow program computes the changes in Ybus, bus voltage magnitudes
and angles, and branch flows.
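One way to see why a tap change is, to the power-flow program, simply a
change in tap ratio: the transformer's entries in Ybus depend directly on
the ratio t. The sketch below uses the standard two-port admittance stamp
with the tap on the "from" side; the series admittance value is an
assumption.

# 2x2 admittance stamp of a transformer with off-nominal tap ratio t.
def tap_transformer_stamp(y_series, t):
    # returns [[Y_ff, Y_ft], [Y_tf, Y_tt]]
    return [[y_series / (t * t), -y_series / t],
            [-y_series / t,       y_series]]

y = complex(0.0, -10.0)        # series admittance 1/(j0.1) pu, assumed
for t in (0.95, 1.00, 1.05):
    stamp = tap_transformer_stamp(y, t)
    print(f"t={t}: Y_ff={stamp[0][0]:.2f}")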
FACTS is an acronym for flexible AC transmission systems. These systems use
power-electronically controlled devices to control power flows in a transmission
network so as to increase power transfer capability and enhance controllability.
The concept of flexibility of electric power transmission involves the ability to
accommodate changes in the electric transmission system or operating
conditions while maintaining sufficient steady state and transient margins.
A FACTS controller is a power electronic-based system and other static
equipment that provide control of one or more ac transmission system
parameters. FACTS controllers can be classified according to the mode of their
connection to the transmission system as:
1. Series-Connected Controllers.
2. Shunt-Connected Controllers.
3. Combined Shunt and Series-Connected Controllers.
The family of series-connected controllers includes the following
devices:
1. The Static Synchronous Series Compensator (S3C) is a static,
synchronous generator operated without an external electric energy
source as a series compensator whose output voltage is in
quadrature with, and controllable independently of, the line current
for the purpose of increasing or decreasing the overall reactive
voltage drop across the line and thereby controlling the transmitted
electric power. The S3C may include transiently rated energy
storage or energy absorbing devices to enhance the dynamic
behavior of the power system by additional temporary real power
compensation, to momentarily increase or decrease the overall real
(resistive) voltage drop across the line.
2. Thyristor Controlled Series Compensation is offered by an
impedance compensator, which is applied in series on an ac
transmission system to provide smooth control of series reactance.
3. Thyristor Switched Series Compensation is offered by an
impedance compensator, which is applied in series on an ac
transmission system to provide step-wise control of series
reactance.
4. The Thyristor Controlled Series Capacitor (TCSC) is a capacitive
reactance compensator which consists of a series capacitor bank
shunted by a thyristor-controlled reactor in order to provide a
smoothly variable series capacitive reactance; its effect on power
transfer is sketched after this list.
5. The Thyristor Switched Series Capacitor (TSSC) is a capacitive
reactance compensator which consists of a series capacitor bank
shunted by a thyristor-switched reactor in order to provide
stepwise control of series capacitive reactance.
6. The Thyristor Controlled Series Reactor (TCSR) is an inductive
reactance compensator which consists of a series reactor shunted
by a thyristor-controlled reactor in order to provide a smoothly
variable series inductive reactance.
7. The Thyristor Switched Series Reactor (TSSR) is an inductive
reactance compensator which consists of a series reactor shunted
by a thyristor-switched reactor in order to provide stepwise
control of series inductive reactance.
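As promised above, a minimal sketch of why the capacitive members of this
family raise transmitted power: the compensator subtracts its reactance Xc
from the line reactance in P = (EV/(X − Xc)) sin δ. All per-unit values
below are assumptions for illustration only.

import math

E, V, X, delta = 1.0, 1.0, 0.5, math.radians(30)

for comp in (0.0, 0.2, 0.4):          # compensation as a fraction of X
    Xc = comp * X
    P = E * V / (X - Xc) * math.sin(delta)
    print(f"{comp:.0%} series compensation: P = {P:.3f} pu")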
Shunt-connected Controllers include the following categories:
1. A Static Var Compensator (SVC) is a shunt connected static var
generator or absorber whose output is adjusted to exchange
capacitive or inductive current so as to maintain or control specific
parameters of the electric power system (typically bus voltage).
SVCs have been in use since the early 1960s. The SVC application
for transmission voltage control began in the late 1970s.
2. A Static Synchronous Generator (SSG) is a static, self-commutated
switching power converter supplied from an appropriate electric
energy source and operated to produce a set of adjustable multiphase
output voltages, which may be coupled to an ac power
system for the purpose of exchanging independently controllable
real and reactive power.
3. A Static Synchronous Compensator (SSC or STATCOM) is a
static synchronous generator operated as a shunt connected static
var compensator whose capacitive or inductive output current can
be controlled independent of the ac system voltage.
4. The Thyristor Controlled Braking Resistor (TCBR) is a shunt-connected,
thyristor-switched resistor, which is controlled to aid
stabilization of a power system or to minimize power acceleration
of a generating unit during a disturbance.
5. The Thyristor Controlled Reactor (TCR) is a shunt-connected,
thyristor-controlled inductor whose effective reactance is varied in a
continuous manner by partial conduction control of the thyristor
valve.
6. The Thyristor Switched Capacitor (TSC) is a shunt-connected,
thyristor-switched capacitor whose effective reactance is varied in
a stepwise manner by full or zero-conduction operation of the
thyristor valve.
The term Combined Shunt and Series-Connected Controllers is used to
describe controllers such as:
1. The Unified Power Flow Controller (UPFC) can be used to control
active and reactive line flows. It is a combination of a static
synchronous compensator (STATCOM) and a static synchronous
series compensator (S3C) which are coupled via a common dc link.
This allows bi-directional flow of real power between the series
output terminals of the S3C and the shunt output terminals of the
STATCOM, which are controlled to provide concurrent real and
reactive series line compensation without an external electric
energy source.
series voltage injection, is capable of controlling, concurrently or
selectively, the transmission line voltage, impedance, and angle or,
alternatively, the real and reactive power flow in the line. The
UPFC may also provide independently controllable shunt reactive
compensation.
2. The Thyristor Controlled Phase Shifting Transformer (TCPST) is a
phase shifting transformer, adjusted by thyristor switches to
provide a rapidly variable phase angle.
3. The Interphase Power Controller (IPC) is a series-connected
controller of active and reactive power consisting, in each phase,
of inductive and capacitive branches subjected to separately
phase-shifted voltages. The active and reactive power can be set
independently by adjusting the phase shifts and/or the branch
impedances, using mechanical or electronic switches. In the
particular case where the inductive and capacitive impedances
form a conjugate pair, each terminal of the IPC is a passive current
source dependent on the voltage at the other terminal.
The significant impact that FACTS devices will make on transmission
systems arises because of their ability to effect high-speed control. Present
control actions in a power system, such as changing transformer taps, switching
current, or governing turbine steam pressure, are achieved through the use of
mechanical devices, which impose a limit on the speed at which control action
can be taken. FACTS devices are capable of control actions at far higher
speeds. The three parameters that control transmission line power flow are line
impedance and the magnitude and phase of line end voltages. Conventional
control of these parameters is not fast enough for dealing with dynamic system
conditions. FACTS technology will enhance the control capability of the
system.
A potential motivation for the accelerated use of FACTS is the
deregulation/competitive environment in the contemporary utility business. FACTS
devices have the potential ability to control the path of the flow of electric power, and
the ability to effectively join electric power networks that are not well
interconnected. This suggests that FACTS will find new applications as electric
utilities merge and as the sale of bulk power between distant exchange partners
becomes more widespread.

EMS FUNCTIONS (energy management system)

System dispatchers at the EMS are required to make short-term (next-day) and
long-term (prolonged) decisions on operational and outage scheduling
on a daily basis. Moreover, they have to be always alert and prepared to deal
with contingencies that may arise occasionally. Many software and hardware
functions are required as operational support tools for the operator. Broadly
speaking, we can classify these functions in the following manner:
• Base functions
• Generation functions
• Network functions
Each of these functions is discussed briefly in this section.
Base Functions
The required base functions of the EMS include:
• The ability to acquire real-time data from monitoring equipment
throughout the power system.
• The ability to process the raw data and distribute the processed data
within the central control system.
Data acquisition (DA) acquires data from remote terminal units (RTUs)
installed throughout the system using special hardware connected to the real
time data servers installed at the control center. Alarms that occur at the
substations are processed and distributed by the DA function. In addition,
operations of protective relays, main circuit breakers, some line isolators,
transformer tap changers, and other miscellaneous substation devices are
captured with sequence-of-events time resolution.

Data Acquisition

The data acquisition function collects, manages, and processes
information from the RTUs by periodically scanning the RTUs and presenting
the raw analog data and digital status points to a data processing function. This
function converts analog values into engineering units and checks the digital
status points for change since the previous scan so that an alarm can be raised if
status has changed. Computations can be carried out and operating limits can be
applied against any analog value such that an alarm message is created if a limit
is violated.
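A compact sketch of the conversion and limit-check step just described,
assuming a 12-bit RTU count and illustrative scaling and limits:

# Convert a raw analog count to engineering units, then limit-check it.
def process_analog(raw_count, lo_eng, hi_eng, limit_hi, full_scale=4095):
    value = lo_eng + (hi_eng - lo_eng) * raw_count / full_scale
    alarms = []
    if value > limit_hi:
        alarms.append(f"LIMIT VIOLATION: {value:.1f} exceeds {limit_hi}")
    return value, alarms

# Hypothetical example: 12-bit count mapped to 0..600 MW, alarm above 550 MW.
mw, alarms = process_analog(3900, 0.0, 600.0, 550.0)
print(f"{mw:.1f} MW", alarms)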

Supervisory Control

Supervisory control allows the operator to remotely control all circuit
breakers on the system together with some line isolators. Control of devices can
be performed as single actions or a line circuit can be switched in or out of
service.

Alarm Processor

The alarm processor software is responsible for notifying the operator of
changes in the power system or the computer control system. Many
classification and detection techniques are used to direct the alarms to the
appropriate operator with the appropriate priorities assigned to each alarm.

Logical Alarming

This provides the facility to predetermine a typical set of alarm
operations that would result from a single cause. For example, a faulted
transmission line would be automatically taken out of service by the operation of
protective and tripping relays in the substation at each end of the line and the
automatic opening of circuit breakers. The coverage would identify the
protection relays involved, the trip relays involved and the circuit breakers that
open. If these were defined to the system in advance, the alarm processor would
combine these logically to issue a priority 1 alarm that the particular power
circuit had tripped correctly on protection. The individual alarms would then be
given a lower priority for display. If no logical combination is viable for the
particular circumstance, then all the alarms are individually presented to the
dispatcher with high priority. It is also possible to use the output of a logical
alarm as the indicator for a sequence-switching procedure. Thus, the EMS
would read the particular protection relays which had operated and restore a line
to service following a transient fault.
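A small sketch of the pattern-matching idea, with a hypothetical alarm
pattern: if every member alarm of a predefined pattern arrives, one
priority-1 summary is issued and the individual alarms are demoted.

pattern = {"prot relay X1", "trip relay T1", "breaker CB1 open",
           "trip relay T2", "breaker CB2 open"}

def classify(received):
    if pattern <= received:                 # every expected alarm was seen
        summary = [("PRIORITY 1", "circuit tripped correctly on protection")]
        detail = [("low", a) for a in sorted(received)]
        return summary + detail
    return [("high", a) for a in sorted(received)]  # no logical match

for prio, msg in classify(set(pattern)):
    print(prio, "-", msg)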

Sequence of Events Function

The sequence of events function is extremely useful for post-mortem
analysis of protection and circuit breaker operations. Every protection relay, trip
relay, and circuit breaker is designated as a sequence of events digital point.
This data is collected and time-stamped accurately so that a specified resolution
between points is possible within any substation and across the system.
Sequence of events data is buffered on each RTU until collected by data
acquisition automatically or on demand.

Historical Database

Another function is the ability to take any data obtained by the
system and store it in a historical database. It can then be viewed in
tabular or graphical trend displays. The data is immediately stored within
the on-line system and transferred to a standard relational database system
periodically. Generally, this function allows all features of such a
database to be used to perform queries and provide reports.

Automatic Data Collection

This function defines the process taken when there is a
major system disturbance. Any value or status monitored by the system can be
defined as a trigger, which will then cause a disturbance archive containing
pre-disturbance and post-disturbance snapshots to be created.

Load Shedding Function

This facility makes it possible to identify particular load blocks and
instruct the system to automatically open the correct circuit breakers
involved. It is also possible to predetermine a list of load blocks
available for load shedding. The amount of load involved within each block
is monitored so that when a particular amount of load is required to be shed
in a system emergency, the operator can enter this value and instruct the
system to shed the appropriate blocks, as in the selection sketch below.
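A minimal sketch of that selection step, with invented block sizes and a
simple largest-first rule (real schemes also weigh priority and location):

blocks = {"block_1": 35.0, "block_2": 50.0, "block_3": 20.0, "block_4": 65.0}

def select_blocks(target_mw):
    chosen, total = [], 0.0
    # pick largest monitored blocks first until the target is covered
    for name, mw in sorted(blocks.items(), key=lambda b: -b[1]):
        if total >= target_mw:
            break
        chosen.append(name)
        total += mw
    return chosen, total

print(select_blocks(80.0))   # -> (['block_4', 'block_2'], 115.0)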

Safety Management

Safety management provided by an EMS is specific to each utility. A
system may be specified to provide the equivalent of a diagram-labeling and
paper-based safety system on the operator’s screen. The software allows the engineer, having
opened isolators and closed ground switches on the transmission system, to
designate this as safety secured. In addition, free-placed ground symbols can be
applied to the screen-based diagram. A database is linked to the diagram system
and records the request for plant outage and safety document details. The
computer system automatically marks each isolator and ground switch that is
presently quoted on a safety document, and records all safety documents using
each isolator or ground switch. These details are immediately available at any
operating position when the substation diagram is displayed.

Generation Functions

The main functions that are related to operational scheduling of the
generating subsystem involve the following:
• Load forecasting
• Unit commitment
• Economic dispatch and automatic generation control (AGC)
• Interchange transaction scheduling
Each of these functions is discussed briefly here.

Load Forecasting

The total load demand, which is met by centrally dispatched generating
units, can be decomposed into base load and controlled load. In some systems,
there is significant demand from storage heaters supplied under an economy
tariff. The times at which these supplies are made available can be altered using
radio tele-switching. This offers the utility the ability to shape the total demand
curve by altering times of supply to these customers. This is done with the
objective of making the overall generation cost as economical and
environmentally compatible as possible. The other part of the demand consists
of the uncontrolled use of electricity, which is referred to as the natural demand.
It is necessary to be able to predict both of these separately. The base demand is
predicted using historic load and weather data and a weather forecast.
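A toy sketch of the prediction idea: fit historic demand against temperature
by least squares, then apply a forecast temperature. The data points are
invented for illustration; production forecasters use many more variables
(day type, hour of day, trend).

hist = [(2.0, 610.0), (5.0, 585.0), (10.0, 540.0), (15.0, 505.0)]  # (degC, MW)

n = len(hist)
mx = sum(t for t, _ in hist) / n
my = sum(d for _, d in hist) / n
slope = (sum((t - mx) * (d - my) for t, d in hist)
         / sum((t - mx) ** 2 for t, _ in hist))
intercept = my - slope * mx

forecast_temp = 7.0            # from the weather forecast (assumed)
print(f"predicted base demand: {intercept + slope * forecast_temp:.0f} MW")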

Unit Commitment

The unit commitment function determines schedules for generation
operation, load management blocks, and interchange transactions that can be
dispatched. It is an optimization problem whose goal is to determine unit
startup and shutdown times and, when a unit is on-line, the most economic output for
each unit during each time step. The function also determines transfer levels on
interconnections and the schedule of load management blocks. The software
takes into account startup and shutdown costs, minimum up and down times and
constraints imposed by spinning reserve requirements.
The unit commitment software produces schedules in advance for the
next time period (up to as many as seven days, at 15-minute intervals). The
algorithm takes the predicted base demand from the load forecasting function
and the predicted sizes of the load management blocks. It then places the load
management blocks onto the base demand curve, essentially to smooth it
optimally. The operator is able to use the software to evaluate proposed
interchange transactions by comparing operating costs with and without the
proposed energy exchange. The software also enables the operator to compute
different plant schedules where there are options on plant availability.
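A toy commitment search under the stated kinds of constraints is sketched
below; the unit data, the reserve rule, and the merit-order dispatch are all
illustrative simplifications (real UC also prices startups and enforces
minimum up/down times over many time steps).

from itertools import product

units = [  # (no-load cost $/h, marginal cost $/MWh, Pmin MW, Pmax MW)
    (300.0, 18.0, 100, 400),
    (200.0, 24.0,  50, 250),
    (100.0, 32.0,  20, 120),
]
demand, reserve = 450.0, 60.0

best = None
for on in product([0, 1], repeat=len(units)):
    committed = [u for u, s in zip(units, on) if s]
    cap = sum(u[3] for u in committed)
    if cap < demand + reserve or sum(u[2] for u in committed) > demand:
        continue                         # infeasible on/off pattern
    # run each committed unit at Pmin, then add energy in merit order
    rest = demand - sum(u[2] for u in committed)
    cost = sum(u[0] + u[1] * u[2] for u in committed)
    for u in sorted(committed, key=lambda u: u[1]):
        add = min(u[3] - u[2], rest)
        cost += u[1] * add
        rest -= add
    if best is None or cost < best[0]:
        best = (cost, on)

print(f"cheapest commitment {best[1]} costs ${best[0]:.0f}/h")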

Economic Dispatch and AGC

The economic dispatch (ED) function allocates generation outputs of
the committed generating units to minimize fuel cost, while meeting system
constraints such as spinning reserve. The ED function computes
recommended economic base points for all manually controlled units as well as
for units that may be controlled directly by the EMS.
The Automatic Generation Control (AGC) part of the software
performs dispatching functions including the regulation of power output of
generators and monitoring generation costs and system reserves. It is capable of
issuing control commands to change generation set points in response to
changes in system frequency brought about by load fluctuations.
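One classical way the ED allocation can be computed is equal-incremental-cost
(lambda) dispatch, sketched here for quadratic-cost units with losses
ignored; the cost coefficients and limits are assumptions.

def lambda_dispatch(units, demand, tol=1e-6):
    # units: list of (a, b, pmin, pmax) with cost C(P) = a*P^2 + b*P
    lo, hi = 0.0, 200.0                 # $/MWh bracket for lambda (assumed)
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        # each unit sets dC/dP = 2aP + b = lambda, clipped to its limits
        p = [min(u[3], max(u[2], (lam - u[1]) / (2 * u[0]))) for u in units]
        if sum(p) < demand:
            lo = lam
        else:
            hi = lam
    return p

gens = [(0.004, 16.0, 50, 400), (0.006, 20.0, 50, 300)]
print([round(x, 1) for x in lambda_dispatch(gens, 500.0)])  # -> [400.0, 100.0]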

Interchange Transaction Scheduling Function

This function allows the operator to define power transfer schedules on
tie-lines with neighboring utilities. In many instances, the function evaluates the
economics and loading implications of such transfers.

Current Operating Plan (COP)

Part of the generation and fuel dispatch functions on the EMS at a
typical utility is a set of information called the Current Operating Plan (COP),
which contains the latest load forecast, unit commitment schedule, and hourly
average generation for all generating units with their forecast operating status.
The COP is typically updated every 4 to 8 hours, or as needed following major
changes in load forecast and/or generating unit availability.

Network Analysis Functions

Network applications can be subdivided into real-time applications and
study functions. The real time functions are controlled by real time sequence
control that allows a particular function or functions to be executed
periodically, on a defined event, or manually on demand. The network study functions
essentially duplicate the real time function and are used to study any number of
“what if” situations. The functions that can be executed are:
• Topology Processing (Model Update) Function.
• State Estimation Function.
• Network Parameter Adaptation Function
• Dispatcher Power Flow (DPF)
• Network Sensitivity Function.
• Security Analysis Function.
• Security Dispatch Function
• Voltage Control Function
• Optimal Power Flow Function

Topology Processing (Model Update) Function

The topology processing (model-updating) module is responsible for
establishing the current configuration of the network, by processing the
telemetered switch (breakers and isolators) status to determine existing
connections and thus establish a node-branch representation of the system.

State Estimation Function

The state estimator function takes all the power system measurements
telemetered via SCADA, and provides an accurate power flow solution for the
network. It then uses redundant measurements to determine whether bad or
missing measurements are present in its calculation. The output from the state
estimator is given on the one-line diagram and is used as input to other
applications such as Optimal Power Flow.
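The weighted-least-squares principle behind such an estimator, reduced to a
single-state toy: blend redundant measurements according to their
accuracies, then flag bad data by its residual. All values are invented.

def wls(meas):
    # meas: list of (z, h, sigma); minimize sum(((z - h*x)/sigma)^2)
    num = sum(z * h / s ** 2 for z, h, s in meas)
    den = sum(h * h / s ** 2 for z, h, s in meas)
    x = num / den
    resid = [(z - h * x) / s for z, h, s in meas]   # normalized residuals
    return x, resid

# three redundant measurements of the same flow; the third is bad data
x, r = wls([(101.0, 1.0, 1.0), (99.2, 1.0, 1.0), (140.0, 1.0, 5.0)])
print(f"estimate {x:.1f} MW, normalized residuals {[round(v, 1) for v in r]}")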

Network Parameter Adaptation Function

This module is employed to generate forecasts of busbar voltages and loads. The forecasts are updated periodically in real time. This allows the state
estimator to schedule voltages and loads at busbars where no measurements are
available.

Dispatcher Power Flow (DPF)

A DPF is employed to examine the steady state conditions of an
electrical power system network. The solution provides information on network
bus voltages (kV), and transmission line and transformer flows (MVA). The
control center dispatchers use this information to detect system violations
(over/under-voltages, branch overloads) following load, generation, and
topology changes in the system.

Network Sensitivity Function

In this function, the output of the state estimator is used to determine
the sensitivity of network losses to changes in generation patterns or tie-line
exchanges. The sensitivity parameters are then converted to penalty factors for
economic dispatch purposes.
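A short sketch of that conversion, using the standard penalty-factor
relation Lf_i = 1 / (1 − dPloss/dPg_i); the sensitivity values are
illustrative assumptions.

# loss sensitivities dPloss/dPg from the state-estimator base case (assumed)
loss_sens = {"gen_A": 0.03, "gen_B": -0.01, "gen_C": 0.06}

penalty = {g: 1.0 / (1.0 - s) for g, s in loss_sens.items()}
for g, pf in penalty.items():
    print(f"{g}: penalty factor {pf:.3f}")
# economic dispatch then equalizes pf_i * (incremental cost_i) across units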

Security Analysis Function

The SA is one of the main applications of the real time network
analysis set. It is designed to assist system dispatchers in determining the power
system security under specified single contingency and multiple contingency
criteria. It helps the operator study system behavior under contingency
conditions. The security analysis function performs a power flow solution for
each contingency and advises of possible overloads or voltage limit violations.
The function automatically reviews a list of potential problems, ranks them as to
their effect, and advises on possible reallocation of generation. The objective of
on-line security analysis is to operate the network closer to its full capability and allow the proper
assessment of risks during maintenance or unexpected outages.
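A minimal sketch of the review-and-rank step: collect post-contingency flows
(the numbers here are invented placeholders standing in for power flow
results), score each case with a severity index, and sort.

cases = {
    "outage line 1-2": {"line 2-3": (430.0, 400.0), "line 1-3": (210.0, 300.0)},
    "outage gen G2":   {"line 2-3": (380.0, 400.0), "line 1-3": (330.0, 300.0)},
}

def severity(flows):
    # sum of squared per-unit overloads, a common performance-index form
    return sum(max(0.0, f / lim - 1.0) ** 2 for f, lim in flows.values())

for name, flows in sorted(cases.items(), key=lambda c: -severity(c[1])):
    overloads = [b for b, (f, lim) in flows.items() if f > lim]
    print(f"{name}: PI={severity(flows):.4f}, overloads={overloads}")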

Security Dispatch Function

The security dispatch function gives the operator a tool with the
capability of reducing or eliminating overloads by rearranging the generation
pattern. The tool operates in real-time on the network in its current state, rather
than for each contingency. The function uses optimal power flow and constrained
economic dispatch to offer a viable security dispatch of the generating resources
of the system.

Voltage Control Function

The voltage control (VC) study is used to eliminate or reduce voltage
violations, MVA overloads and/or minimize transmission line losses using
transformer set point controls, generator MVAR, capacitor/reactor switching,
load shedding, and transaction MW.

Optimal Power Flow Function

The purpose of the Optimal Power Flow (OPF) is to calculate
recommended set points for power system controls that are a trade-off between
security and economy. The primary task is to find a set of system states within a
region defined by the operating constraints such as voltage limits and branch
flow limits. The secondary task is to optimize a cost function within this region.
Typically, this cost function is defined to include economic dispatch of active
power while recognizing network-operating constraints. An important limitation
of OPF is that it does not optimize switching configurations.
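The two-level decision structure can be mimicked with a brute-force toy:
enumerate control settings, keep only points inside the constraint region,
then choose the cheapest. Every function and number below is a hypothetical
stand-in, not a real OPF.

import itertools

def evaluate(pg1, tap):
    # hypothetical stand-in for a power flow solution:
    # returns (bus voltage pu, line flow MW, hourly cost $) for the controls
    v = 0.95 + 0.05 * tap + 0.0001 * pg1
    flow = 500.0 - 0.8 * pg1
    cost = 20.0 * pg1 + 28.0 * (400.0 - pg1)   # two-unit dispatch, fixed demand
    return v, flow, cost

feasible = []
for pg1, tap in itertools.product(range(100, 401, 25), (0.95, 1.00, 1.05)):
    v, flow, cost = evaluate(pg1, tap)
    if 0.95 <= v <= 1.05 and flow <= 350.0:    # operating constraints
        feasible.append((cost, pg1, tap))

cost, pg1, tap = min(feasible)                 # cheapest secure point
print(f"Pg1={pg1} MW, tap={tap}, cost=${cost:.0f}/h")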
Optimal power flow can be integrated with other EMS functions in
either a preventive or corrective mode. In the preventive mode, the OPF is used
to provide suggested improvements for selected contingency cases. These cases
may be the worst cases found by contingency analysis or planned outages.
In the corrective mode, an OPF is run after significant changes in the
topology of the system. This is the situation when the state estimation output
indicates serious violations requiring the OPF to reschedule the active and
reactive controls.
It is important to recognize that optimization is only possible if the
network is controllable, i.e., the control center must have control of equipment
such as generating units or tap-changer set points. This may present a challenge
to an EMS that does not have direct control of all generators. To obtain the full
benefit of optimization of the reactive power flows and the voltage profile, it is
important to be able to control all voltage regulating devices as well as
generators.
The EMS network analysis functions (e.g., Dispatcher Power Flow and
Security Analysis) are the typical tools for making many decisions such as
outage scheduling. These tools can precisely predict whether the outage of a
specific apparatus (i.e., transformer, generator, or transmission line) would cause
any system violations in terms of abnormal voltages or branch overloads.
In a typical utility system, outage requests are screened based on the
system violation indications from DPF and SA studies. The final approval for
crew scheduling is granted after the results from DPF and SA are reviewed.

Operator Training Simulator

An energy management system includes a training simulator that
allows system operators to be trained under normal operating conditions and
simulated power system emergencies. System restoration may also be
exercised. Because major power system events are
relatively rare and usually involve only one shift team out of six, real
experience with emergencies builds rather slowly. An operator-training
simulator helps maintain a high level of operational preparedness among the
system operators.
The interface to the operator appears identical to the normal control
interface. The simulator relies on two models: one of the power system and the
other represents the control center. Other software is identical to that used in
real time. A scenario builder is available such that various contingencies can be
simulated through a training session. The instructor controls the scenarios and
plays the role of an operator within the system.

ENERGY CONTROL CENTER

The following criteria govern the operation of an electric power
system:
• Safety
• Quality
• Reliability
• Economy
The first criterion is the most important consideration and aims to
ensure the safety of personnel, environment, and property in every aspect of
system operations. Quality is defined in terms of variables, such as frequency
and voltage, that must conform to certain standards to accommodate the
requirements for proper operation of all loads connected to the system.
Reliability of supply does not have to mean a constant supply of power, but it
means that any break in the supply of power is one that is agreed to and tolerated
by both supplier and consumer of electric power. The economy criterion is
motivated by keeping generation cost and losses to a minimum while mitigating the
adverse impact of power system operation on the environment.
Within an operating power system, the following tasks are performed in
order to meet the preceding criteria:
• Maintain the balance between load and generation.
• Maintain the reactive power balance in order to control the voltage
profile.
• Maintain an optimum generation schedule to control the cost and
environmental impact of the power generation.
• Ensure the security of the network against credible contingencies.
This requires protecting the network against reasonable failure of
equipment or outages.
The fact that the state of the power network is ever changing, because
loads and network configuration change, makes operating the system difficult.
Moreover, the response of much of the power network apparatus is not instantaneous.
For example, the startup of a thermal generating unit takes a few hours. This
essentially makes normal feed-forward control impossible.
Decisions will have to be made on the basis of predicted future states of the
system.
Several trends have increased the need for computer-based operator
support in interconnected power systems. Economy energy transactions, reliance
on external sources of capacity, and competition for transmission resources have
all resulted in higher loading of the transmission system. Transmission lines
bring large quantities of bulk power. But increasingly, these same circuits are
being used for other purposes as well: to permit sharing surplus generating
capacity between adjacent utility systems, to ship large blocks of power from
low-energy-cost areas to high-energy cost areas, and to provide emergency
reserves in the event of weather-related outages. Although such transfers have
helped to keep electricity rates lower, they have also added greatly to the burden
on transmission facilities and increased the reliance on control.
Heavier loading of tie-lines which were originally built to improve
reliability, and were not intended for normal use at heavy loading levels, has
increased interdependence among neighboring utilities. With greater emphasis
on economy, there has been an increased use of large economic generating units.
This has also affected reliability.
As a result of these trends, systems are now operated much closer to
security limits (thermal, voltage and stability). On some systems, transmission
links are being operated at or near limits 24 hours a day. The implications are:
• The trends have adversely affected system dynamic performance.
A power network stressed by heavy loading has a substantially
different response to disturbances from that of a non-stressed
system.
• The potential size and effect of contingencies have increased
dramatically. When a power system is operated closer to the limit,
a relatively small disturbance may cause a system upset. The
situation is further complicated by the fact that the largest size
contingency is increasing. Thus, to support operating functions
many more scenarios must be anticipated and analyzed. In
addition, bigger areas of the interconnected system may be affected
by a disturbance.
• Where adequate bulk power system facilities are not available,
special controls are employed to maintain system integrity.
Overall, systems are more complex to analyze to ensure reliability
and security.
• Some scenarios encountered cannot be anticipated ahead of time.
Since they cannot be analyzed off-line, operating guidelines for
these conditions may not be available, and the system operator
may have to “improvise” to deal with them (and often does). As a
result, there is an ever increasing need for mechanisms to support
dispatchers in the decision making process. Indeed, there is a risk
of human operators being unable to manage certain functions
unless their awareness and understanding of the network state is
enhanced.
To automate the operation of an electric power system, electric utilities
rely on a highly sophisticated integrated system for monitoring and control.
Such a system has a multi-tier structure with many levels of elements. The
bottom tier (level 0) is the high-reliability switchgear, which includes facilities
for remote monitoring and control. This level also includes automatic
equipment such as protective relays and automatic transformer tap-changers.
Tier 1 consists of telecontrol cabinets mounted locally to the switchgear, and
provides facilities for actuator control, interlocking, and voltage and current
measurement. At tier 2 are the data concentrators/master remote terminal units,
which typically include a man/machine interface giving the operator access to
data produced by the lower-tier equipment. The top tier (level 3) is the
supervisory control and data acquisition (SCADA) system. The SCADA system
accepts telemetered values and displays them in a meaningful way to operators,
usually via a one-line mimic diagram. The other main component of a SCADA
system is an alarm management subsystem that automatically monitors all the
inputs and informs the operators of abnormal conditions.
Two control centers are normally implemented in an electric utility,
one for the operation of the generation-transmission system, and the other for
the operation of the distribution system. We refer to the former as the energy
management system (EMS), while the latter is referred to as the distribution
management system (DMS). The two systems are intended to help the
dispatchers in better monitoring and control of the power system. The simplest
of such systems perform data acquisition and supervisory control, but many also
have sophisticated power application functions available to assist the operator.
Since the early sixties, electric utilities have been monitoring and controlling
their power networks via SCADA, EMS, and DMS. These systems provide the
“smarts” needed for optimization, security, and accounting, and indeed are
really formidable entities. Today’s EMS software captures and archives live
data and records information especially during emergencies and system
disturbances.
An energy control center represents a large investment by the power
system ownership. Major benefits flowing from the introduction of this system
include more reliable system operation and improved efficiency of usage of
generation resources. In addition, power system operators are offered more in-depth
information quickly. It has been suggested that at Houston Lighting &
Power Co., system dispatchers’ use of network application functions (such as
Power Flow, Optimal Power Flow, and Security Analysis) has resulted in
considerable economic and intangible benefits. A specific example of $70,000
in savings, achieved by avoiding field crew overtime costs and by leaving
equipment out of service overnight, is reported for 1993. This is part of a
total of $340,000 in savings that, in addition to increased system safety,
security, and reliability, was achieved through regular and extensive use of
just some of the network analysis functions.