SIAM News Blog

Discrete-Time Markov Jump Linear Systems

By O.L.V. Costa and M.D. Fragoso

When associated with unexpected events that cause losses, abrupt changes are extremely undesirable. Such changes can be due, for instance, to environmental disturbances, component failures or repairs, changes in subsystem interconnections, or changes in the operating point of a nonlinear plant. These situations arise, for example, in economic systems, aircraft control systems, solar thermal plants with central receivers, robotic manipulator systems, communication networks, and large flexible structures for space stations.

It is important to have efficient tools to deal with the effects of abrupt changes, and to that end we must be able to model the changes adequately. From a control-oriented perspective, attempts to carve out an appropriate mathematical framework for the study of dynamical systems subject to abrupt changes in structure (switching structure) date back at least to the 1960s.

In this scenario, a critical design issue for modern control systems is that they should be capable of maintaining acceptable behavior and meeting certain performance requirements even in the presence of abrupt changes in the system dynamics. Within this context lies a particularly interesting class of models: discrete-time Markov jump linear systems (MJLSs). Since its inception, this class of models has been closely connected with systems that are vulnerable to abrupt changes in their structure, and the literature on the subject is fairly extensive (see, for example, [2, 3, 13] and references therein).

To introduce the main ideas, we consider the simplest homogeneous MJLS, defined as:

\[x(k + 1) = A_{\theta(k)} x(k), \qquad x(0) = x_0, \quad \theta(0) = \theta_0, \qquad \qquad (1)\]

where \(A_i \in \mathbb{R}^{n \times n}\) and \(\{\theta(k)\}\) is a Markov chain taking values in \(\{1, \ldots, N\}\) with transition probability matrix \(P = [p_{ij}]\). Here, \(\{\theta(k)\}\) accounts for the random mechanism that models the abrupt changes (it is sometimes called the “operation mode”). Although an MJLS seems, prima facie, to be a simple extension of a linear equation, it carries many subtleties that distinguish it from the simple linear case, and it has a very rich structure.
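
As a concrete illustration, the following is a minimal simulation sketch of \((1)\) in Python with NumPy (not part of the original article; the matrices, transition probabilities, and function name below are arbitrary illustrative choices). At each step the current mode selects the matrix that propagates the state, and the next mode is drawn from the corresponding row of \(P\).

```python
import numpy as np

def simulate_mjls(A_modes, P, x0, theta0, steps, rng=None):
    """Simulate one sample path of x(k+1) = A_{theta(k)} x(k).

    A_modes: list of n x n arrays, one per operation mode.
    P: N x N transition probability matrix (rows sum to 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    x, theta = np.asarray(x0, dtype=float), theta0
    path = [x.copy()]
    for _ in range(steps):
        x = A_modes[theta] @ x                        # propagate with the current mode
        theta = rng.choice(len(A_modes), p=P[theta])  # draw the next mode from row theta of P
        path.append(x.copy())
    return np.array(path)

# Arbitrary illustrative data: two 2x2 modes and a two-state chain.
A_modes = [np.array([[0.9, 0.3], [0.0, 0.8]]),
           np.array([[1.1, 0.0], [0.2, 0.7]])]
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
print(simulate_mjls(A_modes, P, x0=[1.0, 1.0], theta0=0, steps=50)[-1])
```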

A first analytical difficulty is that \(\{x(k)\}\) is not a Markov process, although the joint process \(\{(x(k), \theta(k))\}\) is. Because stability is a bedrock of control theory, a key issue was to work out an adequate stability theory for MJLSs. In earlier work, stability was sometimes considered separately for each mode of the system, but it soon became clear that this approach could not adequately deal with the many nuances of MJLSs. The issue was settled only after the introduction of the concept of mean-square stability for this class of systems.

To illustrate how MJLSs can surprise us and run counter to our intuition, we present three examples that unveil some of these subtleties in the context of stability. Of the several different concepts of stochastic stability, we simplify the presentation here by considering only the following: the homogeneous MJLS is mean-square stable (MSS) if, for any initial condition \((x_0, \theta_0)\), \(E(\| x(k) \|^2) \to 0\) as \(k \to \infty\). It is shown in [2] that mean-square stability is equivalent to the spectral radius of an augmented matrix \(\mathcal{A}\) being less than one, or to the existence of a unique solution to a set of coupled Lyapunov equations, which can be written in four equivalent forms. The augmented matrix is defined as \(\mathcal{A} = \mathcal{C}\mathcal{N}\), where \(\mathcal{C} = P' \otimes I\) and \(\mathcal{N} = \mathrm{diag}[A_i \otimes A_i]\) (with \(\otimes\) denoting the Kronecker product). Our three examples illustrate only the equivalence between mean-square stability and the spectral radius of \(\mathcal{A}\).

Example 1

Consider the following system with two operation modes, defined by the scalar matrices \(A_1 = 4/3\) and \(A_2 = 1/3\) (mode 1 is unstable, mode 2 is stable). The transitions between these modes are given by the transition probability matrix

\[P=\begin{bmatrix}
0.5 & 0.5 \\
0.5 & 0.5
\end{bmatrix}. \]

It is easy to verify that for this transition probability matrix we have 

\[ \mathcal{A}=\frac{1}{2} \begin{bmatrix}
\frac{16}{9} & \frac{1}{9} \\
\frac{16}{9} & \frac{1}{9}
\end{bmatrix} \]

and \(r_\sigma (\mathcal{A}) = 17/18 < 1\), so the system is MSS. Suppose now that we have a different transition probability matrix, say

\[\bar{P} = \begin{bmatrix}
0.9 & 0.1 \\
0.9 & 0.1
\end{bmatrix};\]

the system will most likely stay longer in mode 1, which is unstable. Then 

\[\mathcal{A}=\begin{bmatrix}
\frac{144}{90} & \frac{1}{10} \\
\frac{16}{90} & \frac{1}{90}
\end{bmatrix},\]

\(r_\sigma (\mathcal{A}) = 1.61 > 1\), and the system is no longer MSS. This evinces a connection between mean-square stability and the probability of visits to the unstable modes, which is reflected in the expression for \(\mathcal{A}\).
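
These numbers are easy to reproduce. The sketch below (Python/NumPy; not part of the original article, and the helper name is ours) assembles \(\mathcal{A} = (P' \otimes I)\,\mathrm{diag}[A_i \otimes A_i]\) and computes its spectral radius for both transition matrices of this example.

```python
import numpy as np

def mss_spectral_radius(A_modes, P):
    """Spectral radius of the augmented matrix (P' kron I) diag[A_i kron A_i].

    The MJLS is mean-square stable iff this value is less than 1.
    """
    n, N = A_modes[0].shape[0], len(A_modes)
    Ndiag = np.zeros((N * n * n, N * n * n))
    for i, Ai in enumerate(A_modes):                 # block diagonal of Kronecker squares
        Ndiag[i*n*n:(i+1)*n*n, i*n*n:(i+1)*n*n] = np.kron(Ai, Ai)
    C = np.kron(P.T, np.eye(n * n))                  # P' kron I
    return max(abs(np.linalg.eigvals(C @ Ndiag)))

# Example 1: scalar modes A1 = 4/3 (unstable) and A2 = 1/3 (stable).
A_modes = [np.array([[4/3]]), np.array([[1/3]])]
P    = np.array([[0.5, 0.5], [0.5, 0.5]])
Pbar = np.array([[0.9, 0.1], [0.9, 0.1]])
print(mss_spectral_radius(A_modes, P))     # 17/18, about 0.944 < 1: MSS
print(mss_spectral_radius(A_modes, Pbar))  # about 1.61 > 1: not MSS
```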

Our next two examples, borrowed from [9], illustrate how the switching between operation modes can play tricks with our intuition. As these striking examples show, an MJLS composed only of unstable modes can be MSS, and, conversely, an MJLS composed only of stable modes can be unstable in the mean-square sense.

Example 2

Here we consider a non-MSS system with stable modes. The two operation modes are defined by matrices 

\[A_1= \begin{bmatrix}
0 & 2 \\
0 & 0.5
\end{bmatrix} \text {and} \enspace A_2= \begin{bmatrix}
0.5 & 0 \\
2 & 0
\end{bmatrix}\]

and the transition probability matrix 

\[P=\begin{bmatrix}
0.5 & 0.5 \\
0.5 & 0.5
\end{bmatrix}. \]

Both modes are stable. Curiously, \(r_\sigma (\mathcal{A}) = 2.125 > 1\), which means that the system is not MSS. A brief analysis of the trajectories for each mode helps to clarify the matter.

We begin by considering only trajectories for mode 1. For initial conditions given by 

\[x(0)= \begin{bmatrix}
x_{10} \\
x_{20}
\end{bmatrix}\]

the trajectories are given by

\[x(k)= \begin{bmatrix}
x_1(k) \\
x_2(k)
\end{bmatrix} = \begin{bmatrix}
2(0.5)^{k-1}x_{20} \\
0.5(0.5)^{k-1}x_{20}
\end{bmatrix}, \quad k=1,2,\ldots\]

With the exception of the point \(x(0)\), the whole trajectory thus lies along the line \(x_1(k) = 4x_2(k)\), whatever the initial condition. This means that if, at a given time, the state is not on this line, mode 1 dynamics will transfer it to the line in one time step, and it will remain there thereafter. For mode 2, it is easy to show that the trajectories are given by

\[x(k)= \begin{bmatrix}
x_1(k) \\
x_2(k)
\end{bmatrix} = \begin{bmatrix}
0.5(0.5)^{k-1}x_{10} \\
2(0.5)^{k-1}x_{10}
\end{bmatrix}, \quad k=1,2,\ldots\]

Much as in the case for mode 1, if the state is not on the line \(x_1(k) = x_2(k)/4\), mode 2 dynamics will transfer it to the line in one time step. The equations for the trajectories also show that the transitions make the state switch between these two lines. Notice that transitions from mode 1 to mode 2 cause the state to move away from the origin in the direction of component \(x_2\), while transitions from mode 2 to mode 1 do the same with respect to component \(x_1\). Figure 1 (left) shows the trajectory of the system with mode 1 dynamics only, for a given initial condition; Figure 1 (right) does the same for mode 2. Figure 2 shows the trajectory for a possible sequence of switches between the two modes, an indication of the instability of the system.
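
The mean-square instability can also be seen by direct simulation. The following Monte Carlo sketch (Python/NumPy; not from the original article, and the number of sample paths, horizon, and initial condition are arbitrary choices) estimates \(E(\| x(k) \|^2)\) for this example and shows it growing with \(k\), even though each individual mode is stable.

```python
import numpy as np

rng = np.random.default_rng(0)
A = [np.array([[0.0, 2.0], [0.0, 0.5]]),   # mode 1
     np.array([[0.5, 0.0], [2.0, 0.0]])]   # mode 2
P = np.array([[0.5, 0.5], [0.5, 0.5]])

paths, horizon = 20000, 25
second_moment = np.zeros(horizon + 1)
for _ in range(paths):
    x, theta = np.array([1.0, 1.0]), 0
    second_moment[0] += x @ x
    for k in range(1, horizon + 1):
        x = A[theta] @ x                   # x(k+1) = A_{theta(k)} x(k)
        theta = rng.choice(2, p=P[theta])  # draw the next mode
        second_moment[k] += x @ x
second_moment /= paths

# The estimate of E(||x(k)||^2) grows roughly like 2.125^k.
print(second_moment[::5])
```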

Example 3

For our final example, we consider an MSS system with unstable modes in which

\[A_1= \begin{bmatrix}
2 & -1 \\
0 & 0
\end{bmatrix} \text{and} \enspace A_2= \begin{bmatrix}
0 & 1 \\
0 & 2
\end{bmatrix} \]

and the transition probability matrix 

\[P= \begin{bmatrix}
0.1 & 0.9 \\
0.9 & 0.1
\end{bmatrix}.\]

Although both modes are unstable, \(r_\sigma (\mathcal{A}) = 0.4 < 1\), and the system is therefore MSS.
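
As with Example 1, the quoted spectral radii for Examples 2 and 3 can be checked in a few lines; the self-contained sketch below (Python/NumPy, not part of the original article) rebuilds \(\mathcal{A}\) for both examples.

```python
import numpy as np

def mss_spectral_radius(A_modes, P):
    """Spectral radius of (P' kron I) diag[A_i kron A_i]; MSS iff it is < 1."""
    n, N = A_modes[0].shape[0], len(A_modes)
    Ndiag = np.zeros((N * n * n, N * n * n))
    for i, Ai in enumerate(A_modes):
        Ndiag[i*n*n:(i+1)*n*n, i*n*n:(i+1)*n*n] = np.kron(Ai, Ai)
    return max(abs(np.linalg.eigvals(np.kron(P.T, np.eye(n * n)) @ Ndiag)))

# Example 2: both modes stable, yet r_sigma = 2.125 > 1 (not MSS).
print(mss_spectral_radius(
    [np.array([[0.0, 2.0], [0.0, 0.5]]), np.array([[0.5, 0.0], [2.0, 0.0]])],
    np.array([[0.5, 0.5], [0.5, 0.5]])))

# Example 3: both modes unstable, yet r_sigma = 0.4 < 1 (MSS).
print(mss_spectral_radius(
    [np.array([[2.0, -1.0], [0.0, 0.0]]), np.array([[0.0, 1.0], [0.0, 2.0]])],
    np.array([[0.1, 0.9], [0.9, 0.1]])))
```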

■ ■ ■

The general conclusion we extract from these examples is that the stability of each operation mode is neither necessary nor sufficient for the mean-square stability of the system. Mean-square stability depends on a balance between the transition probabilities of the Markov chain and the operation modes. These and many other examples in the context of stability illustrate peculiar properties of these systems, which can be included in the class of complex systems (roughly defined as systems composed of interconnected parts that, as a whole, exhibit one or more properties not obvious from the properties of the individual parts).

Other features that set MJLSs outside classical linear theory include the following: 

(i) The filtering problem is associated with more than one scenario. In the harder case of partial observations of \((x(k), \theta(k))\), the filter is infinite-dimensional; a separation principle for this setting is an open problem. 

(ii) Because a set of coupled Riccati equations appears in some filtering and control problems, a fresh look at such concepts as stabilizability and detectability was necessary, giving rise to a mean-square theory for these concepts.

(iii) With the various possible settings of the state space of the Markov chain (e.g., finite, countably infinite, Borel space), the analytical complexity of the problem can change. In a nutshell, we can say that an MJLS differs from the linear case in many fundamental ways.

Other interesting instances and a compilation of ideas about MJLSs can be found in [1, 2, 3, 5, 13]. Due, in part, to an adequate set of concepts and mathematical techniques developed over the last decades, MJLSs have a well-established theory that provides systematic tools for the analysis of many dynamical systems subject to abrupt changes, yielding a great variety of applications.

Since the specialized literature on applications of the theory of MJLSs is very large and rapidly expanding, we provide here only some representative references, including [16], on applications in robotics; [6] and [18], on problems of image enhancement (e.g., tracking and estimation); [4] and [19], on mathematical finance; [8, 14, 15] and [20], on communication networks (packet loss, fading channels, chaotic communication); [10], on wireless control; [7], on flight systems (including electromagnetic disturbances and reliability; see also [17] for control of wing deployment in aircraft); and [11, 12], on electric power systems and load modeling. Additional references are given in [2] and [3].

Last but not least, we round out this note by mentioning that some MJLS control problems belong to a select group of solvable stochastic control problems and are therefore of great interest in any course on stochastic control. In addition, despite the notable abundance of relevant reference material on the subject, MJLSs remain a topic of intense research.

References
[1] E.K. Boukas, Stochastic Switching Systems: Analysis and Design, Birkhäuser, Boston, 2006.

[2] O.L.V. Costa, M.D. Fragoso, and R.P. Marques, Discrete-Time Markov Jump Linear Systems, Springer, New York, 2005.

[3] O.L.V. Costa, M.D. Fragoso, and M.G. Todorov, Continuous-Time Markov Jump Linear Systems, Springer, New York, 2013.

[4] J.B.R. do Val and T. Başar, Receding horizon control of jump linear systems and a macroeconomic policy problem, J. Econ. Dyn. Control, 23 (1999), 1099-1131.

[5] V. Dragan, T. Morozan, and A.M. Stoica, Mathematical Methods in Robust Control of Linear Stochastic Systems (Mathematical Concepts and Methods in Science and Engineering), Springer, New York, 2010.

[6] J.S. Evans and R.J. Evans, Image-enhanced multiple model tracking, Automatica J. IFAC, 35 (1999), 1769-1786.

[7] W.S. Gray, O.R. González, and M. Doğan, Stability analysis of digital linear flight controllers subject to electromagnetic disturbances, IEEE Trans. Aerospace and Electronic Systems, 36 (2000), 1204-1218.

[8] S. Hu and W.-Y. Yan, Stability robustness of networked control systems with respect to packet loss, Automatica J. IFAC, 43 (2007), 1243-1248.

[9] Y. Ji and H.J. Chizeck, Jump linear quadratic Gaussian control: Steady state solution and testable conditions, Contr. Theor. Adv. Tech., 6 (1990), 289-319.

[10] P.A. Kawka and A.G. Alleyne, Robust wireless servo control using a discrete-time uncertain Markovian jump linear model, IEEE Trans. Control Syst. Tech., 17 (2009), 733-742.

[11] K.A. Loparo and G.L. Blankenship, A probabilistic mechanism for small disturbance instabilities in electric power systems, IEEE Trans. Circuits Syst., 32 (1985), 177-184.

[12] R. Malhamé, A jump-driven Markovian electric load model, Adv. Appl. Prob., 22 (1990), 564-586.

[13] M. Mariton, Jump Linear Systems in Automatic Control, Marcel Dekker, New York, 1990.

[14] S. Roy and A. Saberi, Static decentralized control of a single-integrator network with Markovian sensing topology, Automatica J. IFAC, 41 (2005), 1867-1877.

[15] T. Sathyan and T. Kirubarajan, Markov-jump-system-based secure chaotic communication, IEEE Trans. Circuits Syst., 53 (2006), 1597-1609.

[16] A.A.G. Siqueira and M.H. Terra, A fault-tolerant manipulator robot based on H2, H∞, and mixed H2/H∞ Markovian controls, IEEE-ASME Trans. Mechatronics, 14 (2009), 257-263.

[17] A. Stoica and I. Yaesh, Jump-Markovian based control of wing deployment for an uncrewed air vehicle, J. Guid. Control Dynam., 25 (2002), 407-411.

[18] D.D. Sworder, P.F. Singer, R.G. Doria, and R.G. Hutchins, Image-enhanced estimation methods, Proc. IEEE, 81 (1993), 797-812.

[19] F. Zampolli, Optimal monetary policy in a regime-switching economy: The response to abrupt shifts in exchange rate dynamics, J. Econ. Dyn. Control, 30 (2006), 1527-1567.

[20] Q. Zhang and S.A. Kassam, Finite-state Markov model for Rayleigh fading channels, IEEE Trans. Commun., 47 (1999), 1688-1692.

O.L.V. Costa is a professor in the Department of Telecommunications and Control Engineering of the Polytechnic School at the Universidade de São Paulo. M.D. Fragoso is a professor in the Department of Systems and Control at the National Laboratory for Scientific Computing, Petrópolis, Rio de Janeiro, Brazil.
