Control-Lyapunov function
In control theory, a control-Lyapunov function is a generalization of the notion of Lyapunov function used in stability analysis. The ordinary Lyapunov function is used to test whether a dynamical system is stable (more restrictively, asymptotically stable), that is, whether the system starting in a state $x \neq 0$ in some domain $D$ will remain in $D$, or, for asymptotic stability, will eventually return to $x = 0$. The control-Lyapunov function is used to test whether a system is feedback stabilizable, that is, whether for any state $x$ there exists a control $u(x, t)$ such that the system can be brought to the zero state by applying the control $u$.

More formally, suppose we are given a dynamical system
$$\dot{x}(t) = f(x(t), u(t)),$$
where the state $x(t) \in \mathbb{R}^n$ and the control $u(t) \in \mathbb{R}^m$ are vectors.

Definition. A control-Lyapunov function is a function $V : \mathbb{R}^n \to \mathbb{R}$ that is continuous, positive-definite (that is, $V(x)$ is positive except at $x = 0$, where it is zero), proper (that is, $V(x) \to \infty$ as $\|x\| \to \infty$), and such that
$$\forall x \neq 0, \ \exists u \qquad \dot{V}(x, u) := \nabla V(x) \cdot f(x, u) < 0.$$
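For a concrete illustration of this condition (a hypothetical example, not part of the original article), take the scalar system $\dot{x} = x + u$ with the candidate $V(x) = \tfrac{1}{2}x^2$. For any $x \neq 0$ the choice $u = -2x$ gives
$$\dot{V}(x, u) = x\,\dot{x} = x(x + u) = -x^2 < 0,$$
so $V$ is a control-Lyapunov function for this system, and $u(x) = -2x$ is a stabilizing feedback.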

The last condition is the key condition; in words it says that for each state $x$ we can find a control $u$ that will reduce the "energy" $V$. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy to zero, that is, to bring the system to a stop. This is made rigorous by the following result:

Artstein's theorem. The dynamical system has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback $u(x)$.

It may not be easy to find a control-Lyapunov function for a given system, but if we can find one, thanks to some ingenuity and luck, then the feedback stabilization problem simplifies considerably; in fact, it reduces to solving the static non-linear programming problem
$$u^*(x) = \underset{u}{\arg\min} \, \nabla V(x) \cdot f(x, u)$$
for each state $x$.
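A minimal numerical sketch of this pointwise minimization (an illustration under assumptions, not a definitive implementation): the system $\dot{x} = x^3 + u$, the candidate $V(x) = \tfrac{1}{2}x^2$, and the bounded control set used below are all hypothetical choices made here; bounding the control set keeps the minimum finite, since $\nabla V(x) \cdot f(x, u)$ is typically unbounded below in $u$ when $u$ is unconstrained.

# Sketch: solve u*(x) = argmin_u grad V(x) . f(x, u) over a bounded control set,
# for a hypothetical scalar system xdot = x^3 + u with candidate V(x) = x^2 / 2.
import numpy as np
from scipy.optimize import minimize_scalar

def f(x, u):
    # Hypothetical nonlinear dynamics: xdot = x^3 + u
    return x**3 + u

def grad_V(x):
    # Candidate V(x) = x^2 / 2, so grad V(x) = x
    return x

def u_star(x, u_bounds=(-10.0, 10.0)):
    # Pointwise static minimization of the decrease rate dV/dt at the state x
    res = minimize_scalar(lambda u: grad_V(x) * f(x, u), bounds=u_bounds, method="bounded")
    return res.x

for x in np.linspace(-2.0, 2.0, 9):
    u = u_star(x)
    print(f"x = {x:+.2f}   u*(x) = {u:+.3f}   dV/dt = {grad_V(x) * f(x, u):+.4f}")

For every sampled $x \neq 0$ the minimized value of $\dot{V}$ is negative, which is exactly the decrease condition in the definition above; at $x = 0$ it is zero.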

The theory and application of control-Lyapunov functions were developed by Z. Artstein and E. D. Sontag in the 1980s and 1990s.

Example

Here is a characteristic example of applying a Lyapunov candidate function to a control problem.

Consider the non-linear system, a mass-spring-damper system with spring hardening and position-dependent mass, described by
$$m(1+q^2)\ddot{q} + b\dot{q} + K_0 q + K_1 q^3 = u.$$
Now, given the desired state, $q_d$, and actual state, $q$, with error, $e = q_d - q$, define a function $r$ as
$$r = \dot{e} + \alpha e.$$
A control-Lyapunov candidate is then
$$V = \frac{1}{2}r^2,$$
which is positive for all $r \neq 0$ and zero only at $r = 0$.

Now taking the time derivative of $V$,
$$\dot{V} = r\dot{r} = (\dot{e} + \alpha e)(\ddot{e} + \alpha\dot{e}).$$

The goal is to get the time derivative to be
$$\dot{V} = -\kappa V,$$
which is globally exponentially stable if $V$ is globally positive definite (which it is).

Hence we want the rightmost bracket of $\dot{V}$,
$$\ddot{e} + \alpha\dot{e} = \ddot{q}_d - \ddot{q} + \alpha\dot{e},$$
to fulfill the requirement
$$\ddot{q}_d - \ddot{q} + \alpha\dot{e} = -\frac{\kappa}{2}(\dot{e} + \alpha e),$$
which, upon substitution of the dynamics, $\ddot{q}$, gives
$$\ddot{q}_d - \frac{u - K_0 q - K_1 q^3 - b\dot{q}}{m(1+q^2)} + \alpha\dot{e} = -\frac{\kappa}{2}(\dot{e} + \alpha e).$$
Solving for $u$ yields the control law
$$u = m(1+q^2)\left(\ddot{q}_d + \alpha\dot{e} + \frac{\kappa}{2}(\dot{e} + \alpha e)\right) + K_0 q + K_1 q^3 + b\dot{q},$$
with $\kappa$ and $\alpha$, both greater than zero, as tunable parameters.

This control law guarantees global exponential stability, since substituting it into the time derivative yields, as expected,
$$\dot{V} = -\kappa V,$$
which is a linear first-order differential equation with solution
$$V = V(0)e^{-\kappa t}.$$

Hence the error and error rate, remembering that $V = \frac{1}{2}(\dot{e} + \alpha e)^2$, decay exponentially to zero.

If you wish to tune a particular response from this, it is necessary to substitute back into the solution we derived for $V$ and solve for $e$. This is left as an exercise for the reader, but the first few steps of the solution are:
$$r\dot{r} = -\frac{\kappa}{2}r^2$$
$$\dot{r} = -\frac{\kappa}{2}r$$
$$r = r(0)e^{-\frac{\kappa}{2}t}$$
$$\dot{e} + \alpha e = \left(\dot{e}(0) + \alpha e(0)\right)e^{-\frac{\kappa}{2}t},$$
which can then be solved using any linear differential equation methods.
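To complement the derivation, here is a minimal closed-loop simulation sketch (not part of the original article): it integrates the mass-spring-damper dynamics under the control law above for arbitrarily chosen parameter values and a constant desired position, and compares $r = \dot{e} + \alpha e$ against the predicted exponential decay $r(0)e^{-\kappa t/2}$. All numerical values are illustrative assumptions.

# Sketch: closed-loop simulation of the mass-spring-damper example under the
# derived control law. Plant parameters, gains, and the constant reference
# are illustrative assumptions, not values from the article.
import numpy as np
from scipy.integrate import solve_ivp

m, b, K0, K1 = 1.0, 0.5, 2.0, 1.0        # plant parameters (assumed)
alpha, kappa = 2.0, 4.0                   # controller gains (assumed), both > 0
qd, qd_dot, qd_ddot = 1.0, 0.0, 0.0       # constant desired position (assumed)

def control(q, q_dot):
    # u = m(1+q^2)(qd_ddot + alpha*e_dot + (kappa/2)(e_dot + alpha*e)) + K0*q + K1*q^3 + b*q_dot
    e, e_dot = qd - q, qd_dot - q_dot
    return (m * (1 + q**2) * (qd_ddot + alpha * e_dot + 0.5 * kappa * (e_dot + alpha * e))
            + K0 * q + K1 * q**3 + b * q_dot)

def dynamics(t, state):
    # m(1+q^2) q_ddot + b q_dot + K0 q + K1 q^3 = u, solved for q_ddot
    q, q_dot = state
    u = control(q, q_dot)
    q_ddot = (u - b * q_dot - K0 * q - K1 * q**3) / (m * (1 + q**2))
    return [q_dot, q_ddot]

sol = solve_ivp(dynamics, (0.0, 5.0), [0.0, 0.0], dense_output=True, rtol=1e-9, atol=1e-12)
t = np.linspace(0.0, 5.0, 11)
q, q_dot = sol.sol(t)
r = (qd_dot - q_dot) + alpha * (qd - q)   # r = e_dot + alpha*e
r_pred = r[0] * np.exp(-0.5 * kappa * t)  # predicted r(0) e^(-kappa t / 2)
for ti, ri, pi in zip(t, r, r_pred):
    print(f"t = {ti:4.1f}   r = {ri:+.5f}   predicted = {pi:+.5f}")

With these assumed values the simulated $r$ tracks $r(0)e^{-\kappa t/2}$, and the position $q$ converges to the constant reference $q_d$, consistent with the exponential decay derived above.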
The source of this article is Wikipedia, the free encyclopedia. The text of this article is licensed under the GFDL.
 