# Mean Field Games 15 Years Later: Where Do We Stand?

Since its introduction nearly 15 years ago, the theory of mean field games has rapidly become an exciting source of progress in the study of large dynamic stochastic systems. In 2006, Jean-Michel Lasry and Pierre-Louis Lions proposed a methodology to produce approximate Nash equilibria for stochastic differential games with symmetric interactions and many players. These players feel the impact of other players’ states and actions through their empirical distributions only. Researchers extensively studied this type of interaction under the name “mean field interaction” — hence the terminology “mean field game” (MFG) that Lasry and Lions introduced [4]. Peter Caines, Minyi Huang, and Roland Malhamé simultaneously developed a similar approach, calling it the Nash certainty equivalence (NCE) principle [3]. Since its inception, this paradigm has evolved from its seminal principles into a fully-fledged field that attracts theoretically inclined investigators as well as applied mathematicians, engineers, and social scientists.

### Early Applications

Early contributors to the field introduced and analyzed particular applications, primarily to illustrate the explanatory potential of the MFG paradigm. To serve their pedagogical purpose, these examples only captured stylized facts from real-life situations, posing questions such as: *Why does the Mexican wave have universal features? When does a large meeting begin? Where do I put my towel on the beach?* Realistic engineering applications in wireless communications were concurrently recast as MFG models that showcased the relevance of the search for equilibria in such a framework. Nevertheless, one of MFGs' main attractions is their ability to facilitate the modeling and investigation of large stochastic systems for which standard equilibrium analyses are intractable. Spectacular successes have already occurred (or are expected) in the study of populations in ecology and evolutionary biology (e.g., schooling fish, flocking birds, crowd motion, herding, and swarming), and in financial applications like trading in the presence of price impact (e.g., on high-frequency markets) or the quest for a better understanding of systemic risk.

However, it is striking how many influential macroeconomic models foreshadowed the MFG paradigm's introduction. Looking back at some of the fundamental works of S. Rao Aiyagari, Per Krusell, and Anthony Smith on macroeconomic growth in the late 1990s, it becomes apparent that these authors were introducing MFG models without identifying them as such. Instead, they proposed to numerically compute approximate solutions by iterating the forward and backward time-stepping of the Hamilton-Jacobi-Bellman (HJB) and Kolmogorov-Fokker-Planck (KFP) equations. In particular, these contributions convincingly testify to the importance of a common noise on top of the idiosyncratic sources of random shocks that are attached to each individual player.
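This forward-backward iteration can be made concrete on a toy example. The following is a minimal sketch (the model and all parameters are hypothetical, not taken from the works cited): a scalar linear-quadratic MFG in which the individual problem, for a given mean flow, reduces to Riccati-type ODEs, so that the backward (HJB-like) and forward (KFP-like) sweeps become explicit and the Nash fixed point can be reached by a damped Picard iteration.

```python
import numpy as np

# Toy linear-quadratic MFG, solved by iterating backward and forward sweeps
# (all model choices and parameters illustrative).  Each player controls
# dX_t = a_t dt + sigma dW_t and pays the running cost
# 0.5*a_t^2 + 0.5*(X_t - rho*mbar_t)^2, where mbar_t is the population mean.
# Given a guessed mean flow, the individual problem is solved in closed form
# via Riccati-type ODEs; the induced mean flow is computed forward; the guess
# is updated until the Nash consistency condition mbar = xbar holds.

T, rho, x0 = 1.0, 0.5, 1.0      # horizon, interaction strength, initial mean
n = 2000                        # number of time steps
dt = T / n
t = np.linspace(0.0, T, n + 1)

# Quadratic coefficient of the value function u(t,x) = 0.5*eta(t)*x^2 + r(t)*x + ...
# It solves eta'(t) = eta(t)^2 - 1 with eta(T) = 0, independently of the mean flow.
eta = np.zeros(n + 1)
for k in range(n, 0, -1):                    # backward Euler-type sweep
    eta[k - 1] = eta[k] - dt * (eta[k] ** 2 - 1.0)

mbar = np.full(n + 1, x0)                    # initial guess: constant mean flow
for _ in range(200):                         # damped Picard iteration
    # Backward sweep for the linear term: r'(t) = eta*r + rho*mbar, r(T) = 0.
    r = np.zeros(n + 1)
    for k in range(n, 0, -1):
        r[k - 1] = r[k] - dt * (eta[k] * r[k] + rho * mbar[k])
    # Forward sweep for the mean induced by the feedback a = -(eta*x + r).
    xbar = np.zeros(n + 1)
    xbar[0] = x0
    for k in range(n):
        xbar[k + 1] = xbar[k] - dt * (eta[k] * xbar[k] + r[k])
    if np.max(np.abs(xbar - mbar)) < 1e-10:  # Nash fixed point reached
        break
    mbar = 0.5 * mbar + 0.5 * xbar           # damped update of the mean flow

# Closed-form check: the equilibrium mean is x0*cosh(kappa*(T-t))/cosh(kappa*T)
# with kappa = sqrt(1 - rho).
kappa = np.sqrt(1.0 - rho)
exact = x0 * np.cosh(kappa * (T - t)) / np.cosh(kappa * T)
print(np.max(np.abs(mbar - exact)))          # small discretization error
```

The damping in the update of the mean flow is a common stabilization device; an undamped iteration may fail to converge for stronger interactions or longer horizons.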

### The Analytic Approach

In the spirit of the NCE's original introduction, one can formulate MFGs as a family of standard stochastic control problems that are parameterized by flows of probability measures, followed by a fixed point problem on those flows (see Figure 1). This is the typical *search for a fixed point of the best response function* that characterizes Nash equilibria. The cornerstone of the analytic approach is the identification of the control problems' value functions as solutions of HJB equations, and of the optimal trajectories' distributions as solutions of KFP equations. One can thus formulate MFGs as a forward-backward system of coupled partial differential equations (PDEs), the analysis of which faces subtle difficulties because the equations' time evolutions run in opposite directions. While researchers may carry out short time analysis via standard contraction fixed point arguments, existence over arbitrary time intervals is much harder and was first established by Lasry and Lions [4].
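Concretely, in a simple setting (notation chosen here for illustration: Hamiltonian \(H\), viscosity \(\nu\), coupling \(f\), terminal cost \(g\)), the system couples a backward HJB equation for the value function \(u\) with a forward KFP equation for the population density \(m\):

```latex
% Backward HJB equation for the value function u (terminal condition at time T):
-\partial_t u - \nu \Delta u + H(x, \nabla u) = f(x, m_t),
\qquad u(T, \cdot) = g(\cdot, m_T),
% Forward KFP equation for the density m of the optimal trajectories
% (optimal drift -\partial_p H, initial condition m_0):
\partial_t m - \nu \Delta m - \operatorname{div}\!\big(m \, \partial_p H(x, \nabla u)\big) = 0,
\qquad m(0, \cdot) = m_0 .
```

The Nash condition is encoded in the coupling itself: the flow \((m_t)_t\) entering the HJB equation must coincide with the law of the optimally controlled trajectories.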

**Figure 1.** Mean field game (MFG) diagram.

**1a.** Optimization problem for each given input.

**1b.** Input is a flow of probability measures that describes the statistical state of the population's players.

**1c.** Output is the flow of the optimal trajectories' marginal laws. The Nash condition equalizes the input and output flows.

### The Probabilistic Approach

Probabilists employ several approaches when analyzing MFGs. One approach relies on the theory of backward stochastic differential equations (BSDEs), which handle optimal control problems either through a representation of the value process or through the stochastic Pontryagin maximum principle. Combined with the Nash fixed point condition, this leads to the introduction of a new class of forward-backward stochastic differential equations (FBSDEs), called McKean-Vlasov (MKV) FBSDEs. MKV refers to the fact that the coefficients of the stochastic differential equations (SDEs) depend upon their own solutions' distributions. Analysis of these MKV forward-backward equations was essentially nonexistent before MFGs highlighted their role; their investigation is now a very active field of research.
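Schematically (notation chosen here for illustration), the Pontryagin route leads to a forward-backward system in which the law \(\mathcal{L}(X_t)\) of the forward component enters the coefficients:

```latex
% Forward state equation, driven by the minimizer \hat\alpha_t of the Hamiltonian:
dX_t = b\big(X_t, \mathcal{L}(X_t), \hat\alpha_t\big)\, dt + \sigma\, dW_t,
% Backward adjoint equation, with terminal condition tied to the forward endpoint:
dY_t = -\partial_x \mathcal{H}\big(X_t, \mathcal{L}(X_t), Y_t, \hat\alpha_t\big)\, dt
       + Z_t\, dW_t,
\qquad Y_T = \partial_x g\big(X_T, \mathcal{L}(X_T)\big).
```

The dependence of the coefficients on \(\mathcal{L}(X_t)\) is the McKean-Vlasov feature; the forward-backward coupling comes from the optimal control \(\hat\alpha_t\) depending on the adjoint process \(Y_t\).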

Practitioners often use linear quadratic models as test beds in classical control and game theory. Their extensions to MFGs form a class of models that one can solve explicitly in the probabilistic approach by solving (possibly matrix) Riccati equations.
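As a toy scalar illustration (parameters hypothetical): in a model with kinetic cost \(\frac{1}{2}\alpha^2\) and a running cost whose quadratic part in the state is \(\frac{q}{2}x^2\), the quadratic coefficient \(\eta(t)\) of the value function solves the Riccati equation \(\eta' = \eta^2 - q\) with \(\eta(T) = 0\), which can be integrated backward in time.

```python
import numpy as np

# Backward integration of the scalar Riccati ODE arising in a toy
# linear-quadratic MFG (parameters hypothetical):
#     eta'(t) = eta(t)^2 - q,   eta(T) = 0,
# where q encodes the quadratic part of the running cost.
def solve_riccati(q, T, n=20000):
    dt = T / n
    eta = np.zeros(n + 1)
    for k in range(n, 0, -1):            # explicit backward Euler sweep
        eta[k - 1] = eta[k] - dt * (eta[k] ** 2 - q)
    return eta

q, T = 0.5, 2.0
eta = solve_riccati(q, T)
# For q > 0 the closed-form solution is eta(t) = sqrt(q)*tanh(sqrt(q)*(T - t)).
tgrid = np.linspace(0.0, T, len(eta))
exact = np.sqrt(q) * np.tanh(np.sqrt(q) * (T - tgrid))
print(np.max(np.abs(eta - exact)))       # small discretization error
```

In the matrix case the same backward sweep applies, with the scalar square replaced by the appropriate matrix products.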

### Potential Games

MFGs share many similarities with a natural problem that has attracted much attention recently: optimal control of MKV SDEs, also known as mean field control (MFC) (see Figure 2). MFC corresponds to a population of individuals who contribute to an overall cost and take actions according to a control policy chosen by a central planner who minimizes that cost. MFC problems are hence intrinsically optimization problems, while the search for Nash equilibria in MFGs is more of a fixed point problem. Nevertheless, they are linked by their respective Pontryagin principles. Indeed, an MFC problem's Pontryagin system can be read as the forward-backward system of an associated MFG. MFGs that appear this way are called *potential games*, and their variational structure is very useful for both theoretical and numerical purposes.
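In compact (schematic) notation, with \(J(\alpha; \mu)\) denoting the cost of the control \(\alpha\) when the population flow is \(\mu = (\mu_t)_t\), the difference lies in where the law of the state enters the optimization:

```latex
% MFG: optimize against a frozen flow, then impose the Nash consistency condition:
\text{(MFG)} \qquad \hat\alpha \in \operatorname*{argmin}_{\alpha} J(\alpha; \mu)
\quad \text{subject to} \quad \mu_t = \mathcal{L}\big(X_t^{\hat\alpha}\big)
\ \text{for all } t;
% MFC: the flow reacts to the control inside the optimization itself:
\text{(MFC)} \qquad \hat\alpha \in \operatorname*{argmin}_{\alpha}
J\big(\alpha;\, (\mathcal{L}(X_t^{\alpha}))_t\big).
```

This is exactly the non-commutativity of Figure 2: in MFC the law is a function of the control being optimized, whereas in MFG it is frozen during the optimization and only matched afterward.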

**Figure 2.** Mean field games (MFGs) versus mean field control (MFC): a non-commutative diagram. In MFC, the mean field limit is taken before optimization is performed. In MFGs, equilibria are reached before the mean field limit is taken.

### The Master Equation

Whether the forward-backward system used to handle an MFG consists of PDEs or SDEs, one can regard it as the system of characteristics of a PDE, called the master equation. It is set on the product of the physical state space and the space of probability measures. The equation's solution must be understood as the cost in equilibrium of a generic player, beginning from a given state under a given initial probability distribution for the population. This PDE's well-posedness is a difficult question that requires uniqueness of the equilibrium. The standard condition for uniqueness is a monotonicity condition introduced by Lasry and Lions; monotonicity intuitively encourages players to move away from each other. In fact, it induces a form of strong stability that plays a key role in proving that the characteristics are smooth with respect to the initial condition, whether the latter is a probability measure (as in the PDE approach) or a random variable (as in the BSDE approach).
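In the simplest first-order case (no diffusion, no common noise), and with the sign convention in which the optimal drift is \(-\partial_p H\), the master equation for \(U = U(t, x, m)\) takes the schematic form:

```latex
% Master equation: the integral term transports the measure argument along
% the optimal flow; \partial_\mu denotes the derivative with respect to the
% measure (Lions derivative).
-\partial_t U + H\big(x, \partial_x U(t, x, m)\big)
+ \int \partial_\mu U(t, x, m)(y) \cdot
       \partial_p H\big(y, \partial_x U(t, y, m)\big)\, dm(y)
= f(x, m),
\qquad U(T, x, m) = g(x, m).
```

Along the equilibrium flow \((m_t)_t\), the function \(u(t, x) = U(t, x, m_t)\) recovers the value function of the forward-backward system, which is precisely the sense in which that system provides the characteristics.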

### The Convergence Problem

A mainstay of MFG theory is that one can inject any MFG solution into the \(N\)-player version of the game in the form of a distributed strategy (i.e., one that depends only on each player's own state, and hence has lower complexity). This provides an approximate equilibrium, the accuracy of which increases with \(N\).
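In symbols: if \(J^i\) denotes player \(i\)'s cost in the \(N\)-player game and every player uses the MFG feedback \(\hat\alpha\), then no unilateral deviation is profitable beyond a small margin (statement schematic; the precise assumptions and rate depend on the setting, with \(\epsilon_N\) often of order \(N^{-1/2}\)):

```latex
% \epsilon_N-Nash property of the distributed MFG strategies:
J^i\big(\hat\alpha, \dots, \beta, \dots, \hat\alpha\big)
\;\ge\; J^i\big(\hat\alpha, \dots, \hat\alpha\big) - \epsilon_N
\quad \text{for every admissible deviation } \beta,
\qquad \epsilon_N \longrightarrow 0 \ \text{as } N \to \infty .
```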

The converse, which aims to show that equilibria of the \(N\)-player game converge towards an MFG solution, is known as the convergence problem (see Figure 3). It is much more difficult and remained open until researchers developed an approach based on the master equation [1, 2]. The proof takes advantage of the regularity of the master equation's solution to build an approximate solution to the Nash PDE system of the \(N\)-player game, which is the analogue of the HJB equation for games. This approach permits a sharp bound for the error, which leads to a central limit theorem and a large deviation principle for the empirical distributions of the finite player equilibria. In practice, these results produce estimates of finite size effects in the \(N\)-player game.

**Figure 3.** The two ways of connecting finite player games and mean field games (MFGs).

### MFGs with Common Noise

Important economic and engineering applications require the presence of an extra source of noise that is common to all players; equilibria become random in these cases. Natural extensions of the aforementioned results merely guarantee the existence of weak solutions that may not be adapted to the common noise. Fortunately, this lack of adaptivity cannot occur under the Lasry-Lions monotonicity condition. Indeed, the resulting MFG system can be uniquely solved by a continuation method, despite the fact that the HJB and KFP equations are stochastic. Outside the monotone regime, researchers seek to understand whether the common noise can contribute to uniqueness. This is a subject of ongoing research, which raises the question of a possible vanishing viscosity method for selecting solutions to non-uniquely solvable MFGs (without common noise).

### Further Developments

Many extensions of MFGs' original form exist. For instance, analysts have focused on the long-time behavior of finite horizon MFGs; they have established convergence towards a stationary MFG under monotonicity conditions, but recent examples indicate that oscillatory behavior may occur in the non-monotone case. Furthermore, both analysts and probabilists have investigated games involving interactions through the laws of the controls, as well as games that feature a major player who interacts with a continuum of minor players. Researchers have also adapted many of the preceding results to finite state games, which are naturally amenable to numerical computations.

Finally, we mention attempts at numerical analysis of MFGs despite their obvious complexity. Early on, finite difference schemes were shown to converge in various situations, and analysts have used optimization methods to solve the corresponding MFC problem. More recently, they have applied ideas from machine learning to parameterize solutions of the HJB/KFP system and of the master equation via neural networks.

*The figures in this article were provided by the authors.*

**References**

[1] Cardaliaguet, P., Delarue, F., Lasry, J.M., & Lions, P.L. (2019). *The master equation and the convergence problem in mean field games*. *Annals of Mathematics Studies*. Princeton, NJ: Princeton University Press.

[2] Carmona, R., & Delarue, F. (2018). *Probabilistic theory of mean field games with applications: I & II*. Cham, Switzerland: Springer International Publishing.

[3] Huang, M., Malhamé, R.P., & Caines, P.E. (2006). Large population stochastic dynamic games: Closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. *Comm. Inform. Syst., 6*(3), 221-252.

[4] Lasry, J.M., & Lions, P.L. (2007). Mean field games. *Japanese J. Math., 2*, 229-260.