SIAM News Blog

Switching Diffusion Models and Their Many Applications

By George Yin and Chao Zhu

The current emphasis on modeling and analysis of many real-world applications has led to a resurgence of interest in switching diffusion. Such models, in contrast to existing differential equation-based dynamical systems models, are characterized by the coexistence of continuous dynamics and discrete events, as well as their interactions.

Figure 1. Sample path for switching system (X(t), α(t)).
As an illustration, we consider the switching dynamical system shown in Figure 1. In it, three continuous dynamical systems sit on three parallel planes. An additional random switching process takes three possible values. Corresponding to the discrete state \(i \in \{1, 2, 3\}\), the continuous state evolves on plane \(i\). The pair (continuous process, discrete event) is denoted \((X(t), \alpha(t))\). Suppose that initially, the process is at \((X(0), \alpha(0)) = (x, 1)\). The discrete-event process stays in discrete state 1 for a random amount of time; during this time, the continuous component evolves according to the dynamics associated with discrete state 1, until a jump in the discrete component occurs. At random moment \(\tau_1\), a jump to discrete state 3 occurs. The continuous component then evolves according to the dynamics associated with discrete state 3. The process wanders around the third plane until another random jump at time \(\tau_2\). At \(\tau_2\), the system switches to the second parallel plane, and so on.

A switching diffusion is schematically like the illustration shown in Figure 1. However, the dynamical system on each parallel plane is a stochastic differential equation with different drift and diffusion coefficients. Thus, a switching diffusion can be regarded as a coupled system of diffusion processes. Mathematically, a switching diffusion can be described by

\[\begin{equation}\tag{1}
\begin{split}
&\mathrm{d} X(t) = b(X(t), \alpha(t))\,\mathrm{d} t + \sigma(X(t), \alpha(t))\, \mathrm{d}W(t), \quad (X(0), \alpha(0)) = (x, \alpha), \\
&\mathbb{P}\{\alpha(t + \Delta) = j \mid \alpha(t) = i, X(s), \alpha(s), s \leq t\} = q_{ij}(X(t))\Delta + o(\Delta), \quad j \neq i.
\end{split}
\end{equation}\]

Here, \(X(t)\), residing in \(\mathbb{R}^r\), is the component representing the continuous state; \(\alpha (t)\) is the discrete-event process taking values in a finite set \(\mathcal{M} = \{1, \ldots, m\}\) and having a generator \(Q(x) = (q_{ij} (x))\) that satisfies \(q_{ij} (x) \geq 0\) for \(j \neq i\) and \(\sum_{j} q_{ij} (x) = 0\) for each \(i \in \mathcal{M}\); \(b(\cdot,\cdot) : \mathbb{R}^{r} \times \mathcal{M} \mapsto \mathbb{R}^r\) and \(\sigma(\cdot,\cdot) : \mathbb{R}^r \times \mathcal{M} \mapsto \mathbb{R}^{r \times r}\) are suitable functions; and \(W(\cdot)\) is a standard \(r\)-dimensional Brownian motion. When \(Q(x) \equiv Q\) is a constant matrix, \(\alpha (t)\) becomes a Markov chain independent of the Brownian motion.
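A simple way to build intuition for \((1)\) is to simulate it. The following is a minimal Euler–Maruyama sketch: at each step, the continuous component takes one diffusion step in the current regime, and the discrete component jumps from \(i\) to \(j\) with probability \(q_{ij}(x)\Delta + o(\Delta)\). The particular drift, diffusion, and generator functions in the example at the bottom are illustrative assumptions, not taken from the article.

```python
import numpy as np

def simulate_switching_diffusion(b, sigma, Q, x0, alpha0, T=1.0, dt=1e-3, seed=None):
    """Euler-Maruyama sketch of the switching diffusion (1).

    b(x, i), sigma(x, i): drift vector and diffusion matrix in regime i;
    Q(x): generator matrix (nonnegative off-diagonal, rows sum to zero).
    """
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = np.asarray(x0, dtype=float)
    alpha = alpha0
    xs, alphas = [x.copy()], [alpha]
    for _ in range(n_steps):
        # Continuous component: one Euler-Maruyama step in regime alpha.
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)
        x = x + b(x, alpha) * dt + sigma(x, alpha) @ dw
        # Discrete component: jump i -> j with probability q_ij(x)*dt + o(dt);
        # the row sums to zero, so the stay probability is 1 + q_ii(x)*dt.
        probs = Q(x)[alpha] * dt
        probs[alpha] += 1.0
        alpha = rng.choice(len(probs), p=probs)
        xs.append(x.copy())
        alphas.append(alpha)
    return np.array(xs), np.array(alphas)

# Illustrative example: a 1D process with two regimes and a constant generator.
b = lambda x, i: (-1.0 if i == 0 else -3.0) * x
sigma = lambda x, i: 0.2 * np.eye(1)
Q = lambda x: np.array([[-1.0, 1.0], [1.0, -1.0]])
xs, alphas = simulate_switching_diffusion(b, sigma, Q, [1.0], 0, T=0.5, seed=42)
```

Note that the time step must satisfy \(1 + q_{ii}(x)\Delta \geq 0\) for the one-step transition probabilities to be valid.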

In the more general setting, \(\alpha (t)\) depends on the continuous state \(x\) and so is not itself a Markov chain; only the two-component process \((X(t), \alpha (t))\) is a Markov process. Switching diffusions have drawn attention and become popular because of their ability to depict random environments via the switching process. Such models have been used in the stabilization of partially observed systems with hidden switching processes [2], hedging of options [4], mean-variance portfolio selection [20], discrete optimization and wireless communication [15], flexible manufacturing and production planning [10], optimal harvesting problems in random environments [11], ecological models [21], and real options and irreversible investment decisions in duopoly games with a variable economic climate, formulated as a stopping-time game under Stackelberg leader–follower competition [1], among others.

Figure 2. A school of goldband fusiliers (Pterocaesio chrysozona) swims in a coordinated manner in a photograph taken in Papua New Guinea by Brocken Inaglory. Image courtesy of Wikipedia.
Further applications include consensus control of multi-agent systems. In recent efforts, a number of processors (called mobile agents) participate in a task, with the goal of achieving a common objective, such as position, speed, or load distribution. In [14], in a proposed discrete-time model of autonomous agents, the agents can be viewed as points or particles, all moving in the plane at the same speed but in different directions. Each agent updates its direction using a local rule based on the average of its own and its neighbors’ directions. This is a version of a model introduced in [12] for simulating flocking and schooling behaviors; see also [3, 13]. Figure 2 shows the collective behavior of a school of fish. If, in lieu of a fixed configuration, the topology is allowed to vary randomly according to a continuous-time Markov chain, the result is a switching diffusion limit of a suitably scaled sequence [16]. The random switching is used to model inherent uncertainties, the time-varying nature of the system, and random environments.
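The local averaging rule described above can be sketched in a few lines. The following is a hedged, Vicsek-type update in which each agent adopts the average heading of all agents within a radius \(r\), perturbed by angular noise; the neighborhood radius, speed, and noise level are illustrative assumptions, not parameters from [14].

```python
import numpy as np

def update_headings(pos, theta, r=1.0, speed=0.03, eta=0.1, rng=None):
    """One Vicsek-type step: average neighbors' headings, add noise, move.

    pos: (n, 2) positions; theta: (n,) heading angles in radians.
    """
    rng = np.random.default_rng(rng)
    n = len(theta)
    new_theta = np.empty(n)
    for k in range(n):
        # Neighbors: all agents within distance r (including agent k itself).
        nbrs = np.linalg.norm(pos - pos[k], axis=1) < r
        # Average direction via the mean of the unit heading vectors,
        # which handles the circular nature of angles correctly.
        new_theta[k] = np.arctan2(np.sin(theta[nbrs]).mean(),
                                  np.cos(theta[nbrs]).mean())
    new_theta += eta * rng.uniform(-np.pi, np.pi, size=n)  # angular noise
    # All agents move at the same speed in their (updated) directions.
    new_pos = pos + speed * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    return new_pos, new_theta

# Illustrative usage: 50 agents with random initial positions and headings.
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 5.0, size=(50, 2))
theta = rng.uniform(-np.pi, np.pi, size=50)
pos, theta = update_headings(pos, theta, rng=rng)
```

Iterating this update with small noise typically drives the headings toward a common direction, the flocking behavior seen in Figure 2.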

Although switching models seem similar to the usual stochastic differential equations, the behaviors of the underlying systems are quite different. Consider, for instance, a switching ordinary differential equation without any Brownian perturbation. Suppose that we have two linear systems, both stable in the usual sense. When we combine them using a switching device, is the resulting switched system stable? The answer, in general, is no. To understand this, we consider the randomly switched linear system

\[\begin{equation}\tag{2}
\mathrm{d} X(t) / \mathrm{d}t = A(\alpha(t)) X(t),
\end{equation}\]

where 

\[\begin{equation}A(1) =
 \left( \begin{array}{cc}
-10 & 2 \\
20 & -10 
\end{array} \right),
\end{equation}\]

\(A(2)\) is the transpose of \(A(1)\), and \(\alpha (t)\) is a continuous-time Markov chain with generator 

\[\begin{equation}
Q = \left( \begin{array}{cc}
-100 & 100 \\
100 & -100\end{array} \right).
\end{equation}\]

Figure 3. Trajectory of the Euclidean norm \(|X(t)|\) as a function of \(t\) for system (2).
It is easy to check that \((\mathrm{d}/\mathrm{d}t)X(t) = A(1)X(t)\) and \((\mathrm{d}/\mathrm{d}t)X(t) = A(2)X(t)\) are both stable. System (2), however, is unstable (see Figure 3). The system just described was presented in [18]; its behavior can be explained by the “averaging” effect. It is easy to see that \((A(1) + A(2))/2\) is an unstable matrix, one of whose eigenvalues has a positive real part. More detailed justification, using a perturbed Lyapunov function argument, can be found in [17, Section 5.6, pp. 229–233], in which it is also shown that two unstable systems combined with a switching process can produce a stable system; see also [5]. Similar behavior can be observed with the addition of a Brownian motion.
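The averaging effect is easy to verify numerically: each regime matrix has eigenvalues with negative real parts, while the symmetric average \((A(1) + A(2))/2\) does not. A quick check with numpy:

```python
import numpy as np

# The two regime matrices from system (2): A(2) is the transpose of A(1).
A1 = np.array([[-10.0, 2.0],
               [20.0, -10.0]])
A2 = A1.T
avg = (A1 + A2) / 2

def max_real_part(A):
    """Largest real part among the eigenvalues of A."""
    return np.linalg.eigvals(A).real.max()

print(max_real_part(A1))   # about -3.68 < 0: dX/dt = A(1)X is stable
print(max_real_part(A2))   # the transpose has the same spectrum
print(max_real_part(avg))  # 1.0 > 0: the averaged system is unstable
```

The eigenvalues of \(A(1)\) are \(-10 \pm 2\sqrt{10}\), both negative, while the average \((A(1)+A(2))/2\) has eigenvalues \(1\) and \(-21\); the positive eigenvalue is what drives the growth in Figure 3 under fast switching.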

The study of stochastic stability can be traced back to [6] for systems with Markov chains. That line of study was substantially extended in [7] and [8] for stochastic differential equations driven by Brownian motion. As mentioned earlier, switching diffusion models have drawn increasing attention because of the wide range of applications; see [9, 19] and references therein, especially in control and optimization. Apart from the existence and uniqueness of solutions, properties of crucial importance include the following:

(1) Well-posedness: under what conditions do the solutions of the switched stochastic differential equations depend continuously and smoothly on the initial data?
(2) For initial data \((X(0), \alpha (0)) = (x, i)\), with \(i \in \mathcal{M}\) and \(x\) in the exterior of an open set \(D\) with compact closure, will the system return to the open set? This property is known as "recurrence."
(3) The first return time is a random variable \(\tau\); if \(\mathbb{E}\tau < \infty\), then \((X(t), \alpha (t))\) is said to be "positive recurrent." An important problem concerns necessary and sufficient conditions for positive recurrence, which can be shown to imply ergodicity.
(4) With the associated invariant measure in hand, we can study the many long-term average optimization and control problems that require its use.
(5) Can we design suitable feedback controls that make the resulting systems stable, or that ensure positive recurrence?
(6) Can we design efficient algorithms to solve switching stochastic differential equations, and good algorithms for approximating optimal control problems whose dynamics are represented by switching diffusions?
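Positive recurrence lends itself to a quick Monte Carlo illustration. The sketch below estimates the mean first return time \(\mathbb{E}\tau\) to the ball \(D = \{|x| < 1\}\) for a toy one-dimensional switching diffusion, \(\mathrm{d}X = -a(\alpha) X\, \mathrm{d}t + s\, \mathrm{d}W\), started outside \(D\). All parameter values (mean-reversion rates, diffusion coefficient, switching intensity) are illustrative assumptions, not from the article.

```python
import numpy as np

def first_return_time(x0=2.0, alpha0=0, dt=1e-3, seed=None):
    """Simulate until the first return to D = {|x| < 1}; return the time."""
    rng = np.random.default_rng(seed)
    a = (1.0, 3.0)   # regime-dependent mean-reversion rates (assumed)
    s = 0.5          # diffusion coefficient (assumed)
    q = 2.0          # switching intensity between the two regimes (assumed)
    x, alpha, t = x0, alpha0, 0.0
    while abs(x) >= 1.0:           # stop on first return to D
        x += -a[alpha] * x * dt + s * np.sqrt(dt) * rng.normal()
        if rng.random() < q * dt:  # switch regimes with prob. q*dt + o(dt)
            alpha = 1 - alpha
        t += dt
    return t

# Average the return time over independent sample paths.
taus = [first_return_time(seed=k) for k in range(200)]
print(np.mean(taus))  # a finite sample mean, consistent with positive recurrence
```

Because both regimes are mean-reverting here, \(\mathbb{E}\tau\) is finite; replacing one rate with a repelling drift would let a careful experimenter probe how switching between stable and unstable regimes affects recurrence.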

Much of the recent work in this direction is driven by pressing needs in modeling, analysis, and numerical computation. In addition to the applications of switching diffusion models identified earlier, numerous applications arise in multi-agent systems, management of power systems under random environments, cyber-physical systems with switching topology, system identification with a hidden switching process, and social network modeling and analysis. Given the diversity in application domains, detailed system descriptions vary substantially and diverse methodologies are needed to treat such systems. A common feature of the underlying problems, however, is the interaction of continuous dynamics and discrete events. It is conceivable that the many diverse applications will motivate new research in switching diffusion and related complex models with jumps of the discrete component to new levels and open new avenues for further applications.

References
[1] A. Bensoussan, S. Hoe, Z. Yan, and G. Yin, Real options with competition and regime switching, to appear in Math. Finance.
[2] B. Bercu, F. Dufour, and G. Yin, Almost sure stabilization for feedback controls of regime-switching linear systems with a hidden Markov chain, IEEE Trans. Automat. Control, 54 (2009), 2114–2125. 
[3] I.D. Couzin, J. Krause, N.R. Franks, and S.A. Levin, Effective leadership and decision-making in animal groups on the move, Nature, 433 (2005), 513–516.
[4] G.B. Di Masi, Y.M. Kabanov, and W.J. Runggaldier, Mean variance hedging of options on stocks with Markov volatility, Theory Probab. Appl., 39 (1994), 172–182.
[5] M.D. Fragoso and O.L.V. Costa, A unified approach for stochastic and mean square stability of continuous-time linear systems with Markovian jumping parameters and additive disturbances, SIAM J. Control Optim., 44 (2005), 1165–1191.
[6] I.I. Kac and N.N. Krasovskii, On the stability of systems with random parameters, J. Appl. Math. Mech., 24 (1960), 1225–1246.
[7] R.Z. Khasminskii, Stochastic Stability of Differential Equations, 2nd ed., Springer, New York, 2012.
[8] H.J. Kushner, Stochastic Stability and Control, Academic Press, New York, 1967.
[9] X. Mao and C. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, London, 2006.
[10] S.P. Sethi and Q. Zhang, Hierarchical Decision Making in Stochastic Manufacturing Systems, Birkhäuser, Boston, 1994.
[11] Q.S. Song, R. Stockbridge, and C. Zhu, On optimal harvesting problems in random environments, SIAM J. Control Optim., 49 (2011), 859–889.
[12] C.W. Reynolds, Flocks, herds, and schools: A distributed behavioral model, Computer Graphics, 21 (1987), 25–34.
[13] J. Toner and Y. Tu, Flocks, herds, and schools: A quantitative theory of flocking, Phys. Rev. E, 58 (1998), 4828–4858.
[14] T. Vicsek, A. Czirók, E. Ben-Jacob, I. Cohen, and O. Shochet, Novel type of phase transition in a system of self-driven particles, Phys. Rev. Lett., 75 (1995), 1226–1229.
[15] G. Yin, V. Krishnamurthy, and C. Ion, Regime switching stochastic approximation algorithms with application to adaptive discrete stochastic optimization, SIAM J. Optim., 14 (2004), 1187–1215.
[16] G. Yin, L.Y. Wang, and Y. Sun, Stochastic recursive algorithms for networked systems with delay and random switching: Multiscale formulations and asymptotic properties, SIAM J. Multiscale Model. Simul., 9 (2011), 1087–1112.
[17] G. Yin and Q. Zhang, Continuous-Time Markov Chains and Applications: A Two-Time-Scale Approach, 2nd ed., Springer, New York, 2013.
[18] G. Yin, G. Zhao, and F. Wu, Regularization and stabilization of randomly switching dynamic systems, SIAM J. Appl. Math., 72 (2012), 1361–1382.
[19] G. Yin and C. Zhu, Hybrid Switching Diffusions, Springer, New York, 2010.
[20] X.Y. Zhou and G. Yin, Markowitz mean-variance portfolio selection with regime switching: A continuous-time model, SIAM J. Control Optim., 42 (2003), 1466–1482.
[21] C. Zhu and G. Yin, On competitive Lotka–Volterra model in random environments, J. Math. Anal. Appl., 357 (2009), 154–170.

George Yin is a professor in the Department of Mathematics at Wayne State University. Chao Zhu is an associate professor in the Department of Mathematical Sciences at the University of Wisconsin-Milwaukee. The SIAM Activity Group on Control and Systems Theory provided this article.
