Quantal response equilibrium (QRE) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey, it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues. In a quantal response equilibrium, players are assumed to make errors in choosing which pure strategy to play. The probability of any particular strategy being chosen is positively related to the payoff from that strategy; in other words, very costly errors are unlikely. The equilibrium arises from the realization of beliefs: a player's payoffs are computed based on beliefs about the other players' probability distributions over strategies, and in equilibrium these beliefs are correct.
Application to data
When analyzing data from the play of actual games, particularly from laboratory experiments such as those with the matching pennies game, Nash equilibrium can be unforgiving. Any non-equilibrium move can appear equally "wrong", but realistically such deviations should not by themselves be used to reject a theory. QRE allows every strategy to be played with non-zero probability, so no observed data set is ruled out entirely.
Logit equilibrium
The most common specification for QRE is logit equilibrium. In a logit equilibrium, player i's strategies are chosen according to the probability distribution

P_{ij} = \frac{\exp(\lambda \, EU_{ij}(P_{-i}))}{\sum_{k} \exp(\lambda \, EU_{ik}(P_{-i}))},

where P_{ij} is the probability of player i choosing strategy j, and EU_{ij}(P_{-i}) is the expected utility to player i of choosing strategy j under the belief that the other players are playing according to the probability distribution P_{-i}. Note that the "belief" density in the expected payoff on the right-hand side must match the choice density on the left-hand side. Thus computing expectations of observable quantities such as payoff, demand, output, etc., requires finding fixed points, as in mean field theory.

Of particular interest in the logit model is the non-negative parameter λ, which can be thought of as the rationality parameter. As λ→0, players become "completely non-rational" and play each strategy with equal probability. As λ→∞, players become "perfectly rational" and play approaches a Nash equilibrium.
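To make the fixed-point character of this definition concrete, the following is a minimal Python sketch (illustrative only, not from McKelvey and Palfrey) that computes a logit equilibrium of a hypothetical asymmetric matching pennies game by iterating the logit response map until the choice densities reproduce the beliefs they are computed from. The payoff matrices, damping factor, and function names are assumptions made for the example; for large λ, simple iteration may fail to converge, and continuation (homotopy) methods are commonly used instead.

import numpy as np

# Hypothetical asymmetric matching pennies game: the row player wants to
# match, the column player wants to mismatch, and the row player's payoff
# from matching on Left is raised from 1 to 4.
A = np.array([[4.0, 0.0],   # row player's payoffs (rows: Top/Bottom)
              [0.0, 1.0]])
B = np.array([[0.0, 1.0],   # column player's payoffs (columns: Left/Right)
              [1.0, 0.0]])

def logit_response(expected_utilities, lam):
    # Logit choice probabilities with rationality parameter lambda.
    z = np.exp(lam * (expected_utilities - expected_utilities.max()))
    return z / z.sum()

def logit_qre(A, B, lam, iterations=5000, tol=1e-10):
    # Damped fixed-point iteration: the belief densities on the right-hand
    # side must reproduce the choice densities on the left-hand side.
    p = np.full(A.shape[0], 1.0 / A.shape[0])   # row player's mixed strategy
    q = np.full(A.shape[1], 1.0 / A.shape[1])   # column player's mixed strategy
    for _ in range(iterations):
        p_new = logit_response(A @ q, lam)      # row's logit response to beliefs q
        q_new = logit_response(B.T @ p, lam)    # column's logit response to beliefs p
        p_next = 0.5 * p + 0.5 * p_new          # mild damping aids convergence
        q_next = 0.5 * q + 0.5 * q_new
        done = max(np.abs(p_next - p).max(), np.abs(q_next - q).max()) < tol
        p, q = p_next, q_next
        if done:
            break
    return p, q

for lam in (0.0, 0.5, 1.0):
    p, q = logit_qre(A, B, lam)
    print(f"lambda={lam:3.1f}  row plays Top with prob {p[0]:.3f}, "
          f"column plays Left with prob {q[0]:.3f}")

In the Nash equilibrium of this example the row player mixes 50-50 regardless of the larger payoff, while the column player plays Left with probability 1/5; in the logit equilibrium the row player's own payoffs shift its mixture toward Top, illustrating how QRE can depart from the Nash prediction.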
For dynamic games
For dynamic games, McKelvey and Palfrey defined agent quantal response equilibrium. AQRE is somewhat analogous to subgame perfection. In an AQRE, each player plays with some error as in QRE. At a given decision node, the player determines the expected payoff of each action by treating their future self as an independent player with a known probability distribution over actions. As in QRE, in an AQRE every strategy is used with nonzero probability.
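As a concrete illustration, the sketch below (a hypothetical Python example, not from McKelvey and Palfrey) computes the logit AQRE of a toy ultimatum game by backward induction; because the game has perfect information, each agent's quantal response can be derived from the choice probabilities of the agents that move after it, so no fixed-point iteration is needed. The pie size, the offer grid, and all function names are assumptions made for the example.

import numpy as np

# Toy ultimatum game (hypothetical payoffs): a proposer offers the responder
# an amount o from a pie of 10; the responder accepts (payoffs 10 - o, o)
# or rejects (both players receive 0).
PIE = 10
OFFERS = np.arange(PIE + 1)

def logit_response(expected_utilities, lam):
    # Logit choice probabilities with rationality parameter lambda.
    z = np.exp(lam * (expected_utilities - expected_utilities.max()))
    return z / z.sum()

def ultimatum_aqre(lam):
    # Responder agents: at each offer o, a logit choice between accepting
    # (payoff o) and rejecting (payoff 0).
    p_accept = np.array([logit_response(np.array([o, 0.0]), lam)[0]
                         for o in OFFERS])
    # Proposer agent: expected payoff of each offer, treating the responder's
    # noisy behavior as a fixed probability distribution over actions.
    proposer_eu = p_accept * (PIE - OFFERS)
    p_offer = logit_response(proposer_eu, lam)
    return p_offer, p_accept

for lam in (0.0, 1.0, 5.0):
    p_offer, p_accept = ultimatum_aqre(lam)
    mean_offer = float(OFFERS @ p_offer)
    print(f"lambda={lam:3.1f}  expected offer = {mean_offer:.2f}  "
          f"P(accept | offer=1) = {p_accept[1]:.3f}")

With finite λ the responder rejects low offers with positive probability, so the proposer's quantal response shifts weight toward more generous offers than the subgame-perfect prediction; as λ grows, behavior approaches the subgame-perfect outcome.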
Applications
The quantal response equilibrium approach has been applied in various settings. For example, Goeree et al. study overbidding in private-value auctions, Yi explores behavior in ultimatum games, Hoppe and Schmitz study the role of social preferences in principal-agent problems, and Kawagoe et al. investigate step-level public goods games with binary decisions.
Critiques
Non-falsifiability
Work by Haile et al. has shown that QRE is not falsifiable in any normal form game, even with significant a priori restrictions on payoff perturbations. The authors argue that the logit QRE (LQRE) concept can sometimes restrict the set of possible outcomes from a game, but may be insufficient to provide a powerful test of behavior without a priori restrictions on payoff perturbations. However, the authors say that "this should not be mistaken for a critique of the QRE notion itself. Rather, our aim has been to clarify some limitations of examining behavior one game at a time and to develop approaches for more informative evaluation of QRE." The non-falsifiability result follows from showing that multiple probability distributions over player strategies may be consistent with the expected values in a QRE; further conditions, such as requiring independent and identically distributed perturbations, are needed to guarantee a unique probability distribution for individual behavior, such as the logit distribution. In this respect the issue is essentially the same as the refinement problem that arises when a game has multiple Nash equilibria.