Mean-field game theory


Mean-field game theory is the study of strategic decision making in very large populations of small interacting agents. This class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal, in the engineering literature by Peter E. Caines and his co-workers, and independently and around the same time by the mathematicians Jean-Michel Lasry and Pierre-Louis Lions.
Use of the term "mean field" is inspired by mean-field theory in physics, which considers the behaviour of systems of large numbers of particles where individual particles have negligible impact upon the system.
In continuous time a mean-field game is typically composed of a Hamilton–Jacobi–Bellman equation that describes the optimal control problem of an individual agent and a Fokker–Planck equation that describes the dynamics of the aggregate distribution of agents. Under fairly general assumptions it can be proved that a class of mean-field games is the limit, as the number of players N tends to infinity, of N-player Nash equilibria.
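In the simplest setting this coupled system takes the following schematic form, where the Hamiltonian H, the coupling terms F and G, the diffusion coefficient \nu and the initial density m_0 are generic placeholders (many variants exist):

    -\partial_t u - \nu \Delta u + H(x, \nabla u) = F(x, m), \qquad (t, x) \in (0, T) \times \mathbb{R}^d,
    \partial_t m - \nu \Delta m - \operatorname{div}\big(m \, D_p H(x, \nabla u)\big) = 0, \qquad (t, x) \in (0, T) \times \mathbb{R}^d,
    u(T, x) = G(x, m(T)), \qquad m(0) = m_0.

Here u is the value function of a representative agent, solved backward in time (Hamilton–Jacobi–Bellman), and m is the density of agents, transported forward in time (Fokker–Planck); the two equations are coupled through m in the first equation and through the optimal feedback D_p H(x, \nabla u) in the second.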
A related concept to that of mean-field games is "mean-field-type control". In this case a social planner controls the distribution of states and chooses a control strategy. The solution to a mean-field-type control problem can typically be expressed as a dual adjoint Hamilton–Jacobi–Bellman equation coupled with a Kolmogorov equation. Mean-field-type game theory is the multi-agent generalization of single-agent mean-field-type control.
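Schematically, a mean-field-type (McKean–Vlasov) control problem asks for a single control strategy u minimizing

    J(u) = \mathbb{E}\left[ \int_0^T L\big(X_t, \mathcal{L}(X_t), u_t\big)\, dt + g\big(X_T, \mathcal{L}(X_T)\big) \right]

subject to dynamics of the form

    dX_t = b\big(X_t, \mathcal{L}(X_t), u_t\big)\, dt + \sigma\, dW_t,

where \mathcal{L}(X_t) denotes the law of the state X_t, and the drift b, running cost L and terminal cost g are generic placeholders. In contrast with a mean-field game, where each agent takes the population distribution as given, here the chosen control itself shapes the distribution being optimized over.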

Linear-quadratic Gaussian game problem

From Caines, a relatively simple model of large-scale games is the linear-quadratic Gaussian model. The individual agent's dynamics are modeled as a stochastic differential equation

    dX_i = (a X_i + b u_i)\, dt + \sigma\, dW_i, \qquad i = 1, \dots, N,

where X_i is the state of the i-th agent and u_i is the control. The individual agent's cost is

    J_i(u_i, \nu) = \mathbb{E}\left[ \int_0^\infty e^{-\rho t} \Big( (X_i - \nu)^2 + r u_i^2 \Big)\, dt \right], \qquad \nu = \Phi\left( \frac{1}{N} \sum_{k \neq i} X_k + \eta \right).
The coupling between agents occurs in the cost function.
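To illustrate how this coupling leads to a fixed-point (consistency) problem, the following is a minimal numerical sketch in Python. It uses a discrete-time, finite-horizon analogue of the scalar model above rather than the continuous-time, infinite-horizon formulation; the parameter values, the identity coupling in place of Φ, and the solution procedure (best response to a guessed mean path, followed by a consistency update) are illustrative assumptions, not the construction used in the literature.

import numpy as np

# Illustrative discrete-time analogue of the scalar LQG mean-field game:
#   x[t+1] = a*x[t] + b*u[t] + sigma*w[t]
# with per-agent cost  sum_t q*(x[t] - z[t])**2 + r*u[t]**2,
# where z[t] is the population-average state that every agent tracks.
a, b, sigma = 0.9, 0.5, 0.2   # dynamics coefficients (assumed values)
q, r = 1.0, 1.0               # cost weights (assumed values)
T = 50                        # horizon length (assumed)
m0 = 1.0                      # initial population mean (assumed)

def best_response(z):
    """Solve the LQ tracking problem against a fixed mean path z by backward
    recursion; the value function at time t is P[t]*x**2 + 2*s[t]*x + const,
    and the optimal feedback is u[t] = -K[t]*x + k[t]."""
    P, s = np.zeros(T + 1), np.zeros(T + 1)
    K, k = np.zeros(T), np.zeros(T)
    P[T], s[T] = q, -q * z[T]
    for t in range(T - 1, -1, -1):
        D = r + b**2 * P[t + 1]
        K[t] = a * b * P[t + 1] / D
        k[t] = -b * s[t + 1] / D
        P[t] = q + a**2 * r * P[t + 1] / D
        s[t] = -q * z[t] + a * r * s[t + 1] / D
    return K, k

def mean_path(K, k):
    """Propagate the population mean forward under u[t] = -K[t]*x + k[t]
    (the noise averages out, so only the deterministic part matters)."""
    m = np.zeros(T + 1)
    m[0] = m0
    for t in range(T):
        m[t + 1] = (a - b * K[t]) * m[t] + b * k[t]
    return m

# Fixed-point (consistency) iteration: the mean path assumed by the agents
# must coincide with the mean path their best responses actually generate.
z = np.full(T + 1, m0)
for it in range(1000):
    K, k = best_response(z)
    m = mean_path(K, k)
    residual = np.max(np.abs(m - z))
    z = m
    if residual < 1e-8:
        break

print(f"consistency residual {residual:.2e} after {it + 1} iterations")
print("equilibrium mean path (first 5 steps):", np.round(z[:5], 4))

For these illustrative parameter values the consistency map is a contraction and the plain iteration converges; in general such iterations may require damping or other care.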