Stochastic control
Stochastic control is a subfield of control theory that deals with the control of systems subject to uncertainty. The designer assumes, in a Bayesian probability-driven fashion, that random noise with a known probability distribution affects the evolution of the state and the observations available to the controller. Stochastic control aims to design a controller that performs the desired control task with minimum average cost despite the presence of this noise.
An extremely well studied formulation in stochastic control is the linear-quadratic-Gaussian (LQG) problem. Here the model is linear, the objective function is the expected value of a quadratic form, and the additive disturbances are Gaussian. A basic result for discrete-time centralized systems is the certainty equivalence property: the optimal control solution is the same as would be obtained in the absence of the additive disturbances. Certainty equivalence applies to any system that is merely linear with quadratic cost (LQ); the Gaussian assumption additionally ensures that the optimal control laws based on this property are linear functions of the controller's observations.
This property fails to hold for decentralized control, as Witsenhausen demonstrated in the celebrated Witsenhausen's counterexample.
Any deviation from the above assumptions (a nonlinear state equation, a non-quadratic objective function, or noise in the multiplicative parameters of the model) causes the certainty equivalence property to fail. In the discrete-time case with uncertainty about the parameter values in the transition matrix and/or the control response matrix of the state equation, but still with a linear state equation and quadratic objective function, a matrix Riccati equation can still be obtained for iterating backward to each period's solution. The discrete-time case of a non-quadratic loss function but only additive disturbances can also be handled, albeit with more complications.
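The multiplicative-parameter case can be sketched in the scalar setting. Assume a model x_{t+1} = a x_t + b u_t + w_t where the coefficients a and b are random with known means and variances (an assumed illustrative specification, not a general derivation). The parameter variances enter the Riccati-style recursion below, so the resulting gains differ from the certainty-equivalent ones, which is how certainty equivalence fails here.

```python
# Hedged sketch (assumed scalar model) of the Riccati recursion when the
# state-equation parameters are themselves random:
#   x_{t+1} = a x_t + b u_t + w_t,  a ~ (a_bar, var_a),  b ~ (b_bar, var_b),
# with a and b independent. Minimizes E[sum of q x^2 + r u^2].

def riccati_param_uncertainty(a_bar, b_bar, var_a, var_b, q, r, horizon):
    """Backward recursion for the scalar cost-to-go p_t, returning the
    gains k_t with optimal control u_t = -k_t x_t. Unlike the pure
    additive-noise case, var_a and var_b enter the recursion."""
    Ea2 = a_bar**2 + var_a          # E[a^2]
    Eb2 = b_bar**2 + var_b          # E[b^2]
    p, gains = q, []
    for _ in range(horizon):
        k = p * a_bar * b_bar / (r + p * Eb2)
        p = q + p * Ea2 - (p * a_bar * b_bar)**2 / (r + p * Eb2)
        gains.append(k)
    return gains[::-1]

# With zero parameter variance this reduces to the ordinary LQ recursion;
# with positive variance in b, the controller responds less aggressively.
certain = riccati_param_uncertainty(1.0, 0.5, 0.0, 0.0, 1.0, 1.0, 30)
uncertain = riccati_param_uncertainty(1.0, 0.5, 0.0, 0.2, 1.0, 1.0, 30)
```

In this sketch, uncertainty about the control response coefficient b shrinks the feedback gain relative to the certainty-equivalent value, a cautious-policy effect familiar from the multiplier-uncertainty literature.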