Jean-François Mertens


Jean-François Mertens was a Belgian game theorist and mathematical economist.
Mertens contributed to economic theory in areas including order-book models of market games, cooperative games, noncooperative games, repeated games, epistemic models of strategic behavior, and refinements of Nash equilibrium. In cooperative game theory he contributed to the solution concepts called the core and the Shapley value.
Regarding repeated games and stochastic games, Mertens's 1982 and 1986 survey articles, and his 1994 survey co-authored with Sylvain Sorin and Shmuel Zamir, are compendiums of results on this topic, including his own contributions. Mertens also made contributions to probability theory and published articles on elementary topology.

Epistemic models

Mertens and Zamir implemented John Harsanyi's proposal to model games with incomplete information by supposing that each player is characterized by a privately known type that describes his feasible strategies and payoffs as well as a probability distribution over other players' types. They constructed a universal space of types in which, subject to specified consistency conditions, each type corresponds to the infinite hierarchy of his probabilistic beliefs about others' probabilistic beliefs. They also showed that any subspace can be approximated arbitrarily closely by a finite subspace, which is the usual tactic in applications.
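The hierarchy construction can be illustrated with a minimal sketch (the two-type common prior below is an assumed example, not taken from Mertens and Zamir): each type's first-order belief about the other player is obtained by Bayesian conditioning on the prior, and a second-order belief is a distribution over the other player's first-order beliefs.

```python
# Hypothetical 2x2 Harsanyi type space with a common prior.
# prior[i][j] = P(player 1 has type i, player 2 has type j)
prior = [[0.4, 0.1],
         [0.2, 0.3]]

def belief_about_2(t1):
    """Player 1's first-order belief over player 2's types, given own type t1."""
    row = prior[t1]
    total = sum(row)
    return [p / total for p in row]

def belief_about_1(t2):
    """Player 2's first-order belief over player 1's types, given own type t2."""
    col = [prior[i][t2] for i in range(2)]
    total = sum(col)
    return [p / total for p in col]

# Second-order belief of player 1's type 0: a distribution over
# player 2's first-order beliefs (one belief per type of player 2).
second_order = [(prob, belief_about_1(t2))
                for t2, prob in enumerate(belief_about_2(0))]
```

Iterating this conditioning generates the infinite hierarchy; the consistency conditions in the universal type space are exactly what makes such hierarchies derivable from some prior.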

Repeated games with incomplete information

Repeated games with incomplete information were pioneered by Aumann and Maschler. Two of Jean-François Mertens's contributions to the field are extensions of repeated two-person zero-sum games with incomplete information on both sides, with respect both to the type of information available to players and to the signalling structure.
In those settings Jean-François Mertens provided an extension of the characterization of the minmax and maxmin values for the infinite game in the dependent case with state-independent signals. Additionally, with Shmuel Zamir, he showed the existence of a limiting value. Such a value can be thought of either as the limit of the values v_n of the n-stage games, as n goes to infinity, or as the limit of the values v_λ of the λ-discounted games, as agents become more patient and λ goes to zero.
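In the simplest instance of this theory, Aumann and Maschler's case of incomplete information on one side with standard signalling, the limit value at prior p is cav u(p), the concavification of the value u(p) of the non-revealing game. A minimal numerical sketch, with assumed 2x2 payoff matrices:

```python
# Sketch (Aumann-Maschler setting, incomplete information on one side):
# limit value = cav u(p), the concavification of the non-revealing value.
# The two state payoff matrices below are assumed for illustration.

def val2x2(M):
    """Value of a 2x2 zero-sum matrix game (row player maximizes)."""
    (a, b), (c, d) = M
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin >= minimax - 1e-12:        # pure saddle point
        return maximin
    return (a * d - b * c) / (a + d - b - c)

A = [[[1, 0], [0, 0]],    # payoffs if the state is 0
     [[0, 0], [0, 1]]]    # payoffs if the state is 1

def u(p):
    """Non-revealing value: value of the expected one-shot game at prior p."""
    M = [[p * A[0][i][j] + (1 - p) * A[1][i][j] for j in range(2)]
         for i in range(2)]
    return val2x2(M)

# Concavification on a grid: repeatedly lift each point to the chord
# through its neighbours, approximating the upper concave envelope.
n = 100
cav = [u(i / n) for i in range(n + 1)]
for _ in range(5000):
    for i in range(1, n):
        cav[i] = max(cav[i], (cav[i - 1] + cav[i + 1]) / 2)
```

With these particular matrices u(p) = p(1-p) is already concave, so cav u = u and the informed player optimally plays non-revealing; in general cav u lies above u, and the gap is exploited by partially revealing ("splitting") the information.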
A building block of Mertens and Zamir's approach is the construction of an operator, now simply referred to as the MZ operator in the field in their honor. In continuous time, the MZ operator becomes an infinitesimal operator at the core of the theory of such games. Mertens and Zamir showed that the limit value, the unique solution of a pair of functional equations, may be a transcendental function, unlike the maxmin and the minmax.
Mertens also found the exact rate of convergence for games with incomplete information on one side and a general signalling structure.
A detailed analysis of the speed of convergence of the n-stage game value to its limit has profound links to the central limit theorem and the normal law, as well as to the maximal variation of bounded martingales. Attacking the difficult case of games with state-dependent signals and without recursive structure, Mertens and Zamir introduced new tools based on an auxiliary game, reducing the set of strategies to a core that is "statistically sufficient."
Collectively, Jean-François Mertens's contributions with Zamir provide the foundation for a general theory of two-person zero-sum repeated games that encompasses both stochastic and incomplete-information aspects, and in which concepts of wide relevance are deployed, such as reputation and bounds on rational levels for the payoffs, along with tools like the splitting lemma, signalling, and approachability. While in many ways Mertens's work here goes back to von Neumann's original roots of game theory, with its zero-sum two-person setup, its vitality and innovations with wider application have been pervasive.

Stochastic games

Stochastic games were introduced by Lloyd Shapley in 1953. His paper studied the discounted two-person zero-sum stochastic game with finitely many states and actions and demonstrated the existence of a value and of stationary optimal strategies. The study of the undiscounted case evolved over the following three decades, with solutions of special cases by Blackwell and Ferguson in 1968 and Kohlberg in 1974. The existence of an undiscounted value in a very strong sense, both a uniform value and a limiting-average value, was proved in 1981 by Jean-François Mertens and Abraham Neyman. The study of the non-zero-sum case with general state and action spaces attracted much attention, and Mertens and Parthasarathy proved a general existence result under the condition that the transitions, as a function of the state and actions, are norm-continuous in the actions.
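Shapley's existence argument rests on the dynamic programming operator that now bears his name: the discounted value is the unique fixed point of a contraction with modulus equal to the discount factor. A minimal sketch under assumed game data (two states, two actions, deterministic transitions):

```python
# Value iteration with the Shapley operator for an assumed discounted
# zero-sum stochastic game: not Shapley's notation, just an illustration.

def val2x2(M):
    """Value of a 2x2 zero-sum matrix game (row player maximizes)."""
    (a, b), (c, d) = M
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin >= minimax - 1e-12:        # pure saddle point
        return maximin
    return (a * d - b * c) / (a + d - b - c)

beta = 0.5                                # discount factor
# r[s][i][j]: stage payoff in state s under actions (i, j)
r = [[[1, 0], [0, 1]],                    # state 0: matching-pennies-like
     [[0, 0], [0, 0]]]                    # state 1: payoff 0
# P[s][i][j][t]: probability of moving to state t (both states absorbing here)
P = [[[[1, 0], [1, 0]], [[1, 0], [1, 0]]],
     [[[0, 1], [0, 1]], [[0, 1], [0, 1]]]]

v = [0.0, 0.0]
for _ in range(200):                      # contraction: error shrinks like beta**k
    v = [val2x2([[r[s][i][j] + beta * sum(P[s][i][j][t] * v[t] for t in range(2))
                  for j in range(2)]
                 for i in range(2)])
         for s in range(2)]
# v[0] converges to 0.5 / (1 - beta) = 1.0; v[1] stays 0.
```

The undiscounted results of Mertens and Neyman concern precisely the regime this recursion cannot reach directly, the limit as beta tends to 1.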

Market games: limit price mechanism

Mertens had the idea to use linear competitive economies as an order book to model limit orders and to generalize double auctions to a multivariate setup. Players' acceptable relative prices are conveyed by their linear preferences; money can be one of the goods, and agents may have positive marginal utility for money, as is the case for most orders in practice. More than one order can come from the same actual agent. In equilibrium, a good sold must trade at a relative price, compared with the good bought, no less than the one implied by the utility function. The goods brought to the market are conveyed by initial endowments. A limit order is represented as follows: the order-agent brings one good to the market and has nonzero marginal utilities in that good and one other; an at-market sell order has zero utility for the good sold and positive utility for money or the numeraire. Mertens clears orders, creating a matching engine, by using the competitive equilibrium of the auxiliary linear economy, despite the fact that the usual interiority conditions are violated there. Mertens's mechanism generalizes Shapley–Shubik trading posts and has the potential of a real-life implementation with limit orders across markets, rather than with just one specialist in one market.
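As a drastically simplified illustration of the order-book idea, a one-good call auction can be cleared at a price maximizing traded volume. This is not Mertens's mechanism, which clears an auxiliary linear economy across all goods simultaneously; it only shows how limit orders express acceptable relative prices. The order data are assumed:

```python
# Toy single-market call auction. Each order: (limit price in money, quantity).
bids = [(10.0, 5), (9.0, 3), (8.0, 4)]    # buy orders
asks = [(7.0, 4), (8.5, 4), (9.5, 6)]     # sell orders

def volume(p):
    """Quantity tradable if the market clears at price p."""
    demand = sum(q for limit, q in bids if limit >= p)
    supply = sum(q for limit, q in asks if limit <= p)
    return min(demand, supply)

# Choose a clearing price maximizing traded volume among quoted limits.
price = max(sorted({limit for limit, _ in bids + asks}), key=volume)
```

In Mertens's multivariate generalization the same role is played by the competitive equilibrium prices of the linear economy, so orders across many goods clear consistently at once.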

Shapley value

The diagonal formula in the theory of non-atomic cooperative games elegantly attributes to each infinitesimal player, as his Shapley value, his marginal contribution to the worth of a perfect sample of the population of players, averaged over all possible sample sizes. Such a marginal contribution is most easily expressed as a derivative, leading to the diagonal formula formulated by Aumann and Shapley. This is the historical reason why differentiability conditions were originally required to define the Shapley value of non-atomic cooperative games. By first exchanging the order of taking the "average over all possible sample sizes" and taking the derivative, Jean-François Mertens used the smoothing effect of the averaging process to extend the applicability of the diagonal formula. This trick alone works well for majority games. Exploiting the commutation idea even further, Mertens looked at invariant transformations and took averages over those before taking the derivative. Doing so, he extended the diagonal formula to a much larger space of games, defining a Shapley value at the same time.
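For games with finitely many players, the analogue of this averaging is the classical Shapley value: each player's marginal contribution averaged over all orders of arrival. A small sketch for a three-player majority game (the worth function is an assumed example):

```python
from itertools import permutations

def worth(coalition):
    """Assumed worth function: simple majority among 3 players."""
    return 1 if len(coalition) >= 2 else 0

players = [0, 1, 2]
shapley = {p: 0.0 for p in players}
orders = list(permutations(players))
for order in orders:
    seen = set()
    for p in order:
        # marginal contribution of p when arriving after `seen`
        shapley[p] += (worth(seen | {p}) - worth(seen)) / len(orders)
        seen.add(p)
```

By symmetry each player receives 1/3. In the non-atomic setting the random coalition preceding a player is replaced by a "perfect sample" of the whole population, which is what the diagonal formula averages over.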

Refinements and Mertens-stable equilibria

Solution concepts that are refinements of Nash equilibrium have been motivated primarily by arguments for backward induction and forward induction. Backward induction posits that a player's optimal action now anticipates the optimality of his and others' future actions. The refinement called subgame perfect equilibrium implements a weak version of backward induction, and increasingly stronger versions are sequential equilibrium, perfect equilibrium, quasi-perfect equilibrium, and proper equilibrium, where the latter three are obtained as limits of perturbed strategies. Forward induction posits that a player's optimal action now presumes the optimality of others' past actions whenever that is consistent with his observations. Forward induction is satisfied by a sequential equilibrium for which a player's belief at an information set assigns probability only to others' optimal strategies that enable that information set to be reached. In particular, since completely mixed Nash equilibria are sequential, such equilibria, when they exist, satisfy both forward and backward induction. In his work Mertens managed for the first time to select Nash equilibria that satisfy both forward and backward induction. The method is to let this feature be inherited from perturbed games that are forced to have completely mixed strategies; the goal is achieved only with Mertens-stable equilibria, not with the simpler Kohlberg–Mertens equilibria.
Elon Kohlberg and Mertens emphasized that a solution concept should be consistent with an admissible decision rule. Moreover, it should satisfy the invariance principle that it should not depend on which among the many equivalent representations of the strategic situation as an extensive-form game is used. In particular, it should depend only on the reduced normal form of the game obtained after elimination of pure strategies that are redundant because their payoffs for all players can be replicated by a mixture of other pure strategies. Mertens emphasized also the importance of the small worlds principle that a solution concept should depend only on the ordinal properties
of players' preferences, and should not depend on whether the game includes extraneous players whose actions have no effect on the original players' feasible strategies and payoffs.
Kohlberg and Mertens tentatively defined a set-valued solution concept, called stability, for games with finite numbers of pure strategies, that satisfies admissibility, invariance, and forward induction; but a counterexample showed that it need not satisfy backward induction, viz. the set might not include a sequential equilibrium. Subsequently, Mertens defined a refinement, also called stability and now often called a set of Mertens-stable equilibria, that has several desirable properties, among them admissibility, backward induction, forward induction, and invariance.
For two-player games with perfect recall and generic payoffs, stability is equivalent to just three of these properties: a stable set uses only undominated strategies, includes a quasi-perfect equilibrium, and is immune to embedding in a larger game.
A stable set is defined mathematically by essentiality of the projection map from a closed connected neighborhood in the graph of the Nash equilibria over the space of perturbed games obtained by perturbing players' strategies toward completely mixed strategies. This definition entails more than the property that every nearby game has a nearby equilibrium. Essentiality requires further that no deformation of the projection map sends it into the boundary, which ensures that perturbations of the fixed-point problem defining Nash equilibria have nearby solutions. This is apparently necessary to obtain all the desirable properties mentioned above.

Social choice theory and relative utilitarianism

A social welfare function (SWF) maps profiles of individual preferences to social preferences over a fixed set of alternatives. In a seminal paper, Arrow proved his famous impossibility theorem: there does not exist an SWF satisfying even a very minimal system of axioms, namely unrestricted domain, independence of irrelevant alternatives, the Pareto criterion, and non-dictatorship. A large literature documents various ways to relax Arrow's axioms in order to obtain possibility results.
Relative Utilitarianism (RU) is an SWF that normalizes individual utilities between 0 and 1 and then adds them, and it is a "possibility" result derived from a system of axioms very close to Arrow's original ones, modified for the space of preferences over lotteries. Unlike classical utilitarianism, RU does not assume cardinal utility or interpersonal comparability. Starting from individual preferences over lotteries, assumed to satisfy the von Neumann–Morgenstern axioms, the axiom system uniquely fixes the interpersonal comparisons. The theorem can be interpreted as providing an axiomatic foundation for the "right" interpersonal comparisons, a problem that has plagued social choice theory for a long time. The axioms are close analogues of Arrow's, restated for preferences over lotteries.
The main theorem shows that RU satisfies all the axioms, and that if the number of individuals is larger than three and the number of candidates is larger than five, then any SWF satisfying the axioms is equivalent to RU whenever there exist at least two individuals whose preferences are neither identical nor exactly opposed.
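The RU normalization itself is elementary; a minimal sketch with assumed names and utility numbers:

```python
# Relative utilitarianism: rescale each individual's vNM utility so its
# minimum over the alternatives is 0 and its maximum is 1, then add.
# The individuals and utility numbers below are assumed for illustration.
utilities = {
    "ann": {"a": 0.0, "b": 8.0, "c": 10.0},
    "bob": {"a": 4.0, "b": 1.0, "c": 0.0},
}

def ru_score(alt):
    score = 0.0
    for person, u in utilities.items():
        lo, hi = min(u.values()), max(u.values())
        score += (u[alt] - lo) / (hi - lo)   # assumes no total indifference
    return score

winner = max(["a", "b", "c"], key=ru_score)  # the compromise "b" wins
```

The normalization is exactly what makes the sum invariant to the affine rescalings under which vNM preferences are defined, so no cardinal or interpersonal information is presupposed.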

Intergenerational equity in policy evaluation

Relative utilitarianism can serve to rationalize using 2% as an intergenerationally fair social discount rate for cost-benefit analysis.
Mertens and Rubinchik show that a shift-invariant welfare function defined on a rich space of policies, if differentiable, has as its derivative a discounted sum of the policy with a fixed discount rate, the induced social discount rate. In an overlapping-generations model with exogenous growth, the relative utilitarian welfare function is shift-invariant when evaluated on policies around a balanced growth equilibrium. When policies are represented as changes in individuals' endowments and the utilities of all generations are weighted equally, the social discount rate induced by relative utilitarianism equals the growth rate of per capita GDP.
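As a numerical illustration of the induced weighting (the numbers are assumed, with g set to the 2% figure above):

```python
# If the induced social discount rate equals per-capita GDP growth g,
# a marginal endowment change t years out receives weight (1 + g) ** -t
# in the welfare derivative. Illustration with assumed g = 2%.
g = 0.02

def weight(t):
    return (1 + g) ** (-t)

# Fifty years out, the weight is still about 37% of today's.
w50 = weight(50)
```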
This is also consistent with current practices described in official cost-benefit guidance.