We investigate equilibrium notions in game theory from the revealed preference approach. For extensive game forms with complete information, we derive a set of independent necessary and sufficient conditions for the observed outcomes to be rationalized by subgame-perfect Nash equilibrium. Journal of Economic Literature Classification Numbers: C72, C92.

In the past two decades we have witnessed a rapid development in noncooperative
game theory. Many new solution concepts, particularly refinements
of Nash equilibrium, have been introduced and successfully applied
to many fields in economics as well as other social sciences. Despite these successes, one important issue has not yet
received much attention in the literature. To model any situation as a game,
we first specify individuals’ preferences and then apply a certain solution concept to predict the outcome. But what if the theoretical prediction differs from
the actual outcome? There could be two possible explanations: either individuals
do not play according to the theory, or individuals’ preferences are
not correctly specified. In fact, even when the theoretical prediction coincides
with the actual outcome, we should not rule out the possibility that
individuals do not play according to the theory if we are not sure about
the specification of individuals’ preferences. The question is, therefore: can
we, by any means, test whether individuals play according to the game
theoretic prediction, solely on the basis of the observations of individuals’
actual moves? If we could not find such a means of testing, then the empirical content of game theory would be virtually void. Surprisingly, the existing literature contains hardly anything on this issue. The purpose of this paper is thus to take a first step in addressing it.

Instead of criticizing the predictive power of the solution concepts, in this paper we try to rationalize the actual outcomes. We ask for the necessary and sufficient conditions under which the observed outcomes can be rationalized
by the equilibrium solution concept. In a sense, our approach is similar to
the classical revealed preference theory initiated by Samuelson (1938).
The general problem under consideration here takes the following form. Let 𝒢 be a collection of related game forms that several individuals play. Suppose that we can observe an outcome O(G) for every game form G in 𝒢. What conditions must the observed outcomes satisfy so that O(G) is the equilibrium outcome of a game associated with G, for every G?
Moreover, what are the necessary and sufficient conditions for the observed outcomes to be rationalized by the equilibrium solution concept?
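In this notation, the question can be restated compactly. The display below is only a schematic summary for the reader's orientation; the orderings ≿_i and the equilibrium-outcome correspondence E are shorthand introduced here, not the formal apparatus of the paper. Rationalizability by a given solution concept asks whether

\[
\text{there exist orderings } \succsim_1,\dots,\succsim_n \text{ on outcomes such that } O(G)\in E\bigl(G;\succsim_1,\dots,\succsim_n\bigr)\ \text{ for every } G\in\mathcal{G},
\]

where E(G; ≿_1, …, ≿_n) denotes the set of equilibrium outcomes of the game obtained by endowing the game form G with those preferences.
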
Notice that in our framework only outcomes are observed, not the preferences
of individuals. This is the important distinction between our work and
studies in experimental game theory. In the experimental studies, players’
preferences are assumed known to the researchers, and hence the theoretical
equilibrium outcome is known as well. Then the individuals’ actual plays
are compared with the theoretical equilibrium outcome. In our model, as
in any other model of revealed preference, only outcomes are observed.

From these observations, we have to construct preferences of individuals
so that the observed outcomes can be rationalized by the equilibrium
notion with these constructed preferences. Also, note that we are imposing
no restriction on the preferences (except that they must be orderings). In
many contexts, it is natural to assume some a priori restrictions—for example,
monotonicity in payoffs when the outcomes are monetary payoffs to
the individuals. Any such restriction would indeed imply further conditions
for rationalizability of solutions.
Since in strategic situations players can move simultaneously or sequentially,
and in each case there are a variety of theoretical equilibrium
concepts, we have a rich class of models to investigate. Clearly, different structures of strategic interactions and different solution concepts will have
different implications for the observed outcomes.
In a recent and independent paper, Sprumont (2000) has taken up exactly
the same issue for normal form games. Sprumont considers finite sets of actions A_i, one for each player i; the product set A is called the set of joint actions. A joint choice function f assigns a nonempty set f(B) to every possible subset B of the set of joint actions. A joint choice function is Nash rationalizable if there exist preference orderings on A such that, for every B, f(B) coincides with the set of Nash equilibria of the game defined by the action set B together with these preferences. Sprumont provides two necessary and sufficient conditions (persistence under expansion and persistence under contraction) for a joint choice function to be Nash rationalizable.
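Spelled out, and reading the admissible sets B as product sets B = B_1 × · · · × B_n so that Nash equilibrium on B is well defined (we state it this way only for concreteness), Nash rationalizability of f requires orderings ≿_1, …, ≿_n on A with

\[
f(B) \;=\; \bigl\{\, a\in B \;:\; a \succsim_i (b_i, a_{-i}) \ \text{ for every player } i \text{ and every } b_i\in B_i \,\bigr\}
\qquad \text{for every admissible } B.
\]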
It is also worth mentioning that other authors, including Peleg and
Tijs (1996), have previously considered consistency conditions to characterize
Nash equilibrium. For any well-specified (normal form) game, in which
the players’ payoff functions are already given, they show that the players
must play Nash equilibria if (i) each player is a utility maximizer and (ii)
all players choose strategies consistently.

In this paper we consider situations in which the players move sequentially
with perfect information. Our paper is therefore a perfect complement
to that of Sprumont. We fix an extensive game form (tree) G with
complete information so that each player knows all of the previous moves
whenever he has to move. A reduced tree, G′, is obtained from G by deleting
some branches of G. Suppose that we observe the players’ actual moves for
all reduced trees. If players always play subgame-perfect Nash equilibria,
then what conditions must these observed outcomes satisfy? We can derive
three necessary conditions (acyclicity of the revealed base relation, internal
consistency, and subgame consistency) and also prove that these three (independent)
conditions together are sufficient for subgame-perfect rationalization.
It should be emphasized that we do not claim here that intelligent
players must in practice play according to these conditions. These conditions
are just logical consequences of subgame perfection. It is possible that
these consistency conditions can be violated in real life situations.
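As an indication of how such conditions might be checked against data, the following sketch (in Python) tests only the first of them, acyclicity, taking as input a revealed base relation that has already been extracted from the observed outcomes and encoded, purely for illustration, as a set of ordered pairs; the construction of that relation from the reduced-tree observations is given in the formal sections and is not reproduced here.

    def is_acyclic(pairs):
        """Return True iff the directed graph whose edges are the given
        (x, y) pairs, read as "x revealed preferred to y", has no cycle."""
        succ = {}
        for x, y in pairs:
            succ.setdefault(x, set()).add(y)
            succ.setdefault(y, set())

        WHITE, GREY, BLACK = 0, 1, 2      # unvisited / in progress / finished
        color = {v: WHITE for v in succ}

        def visit(v):
            # Depth-first search; reaching a GREY vertex means a cycle exists.
            color[v] = GREY
            for w in succ[v]:
                if color[w] == GREY:
                    return False
                if color[w] == WHITE and not visit(w):
                    return False
            color[v] = BLACK
            return True

        return all(visit(v) for v in succ if color[v] == WHITE)

Acyclicity of a finite binary relation is a standard graph-theoretic property, so nothing in this test is specific to games; internal consistency and subgame consistency would require separate checks built on the structure of the reduced trees.
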
In this paper we have considered equilibrium notions for games from the
revealed preference perspective. For game trees with perfect information,
we have derived necessary and sufficient conditions for the observed outcomes
to be rationalized by subgame-perfect Nash equilibrium. This result
extends standard results of revealed preference theory to multiagent decision-making situations. However, in contrast to the classical results in individual revealed
preference theory, the preference relations of agents in a game
context cannot be uniquely determined by the observed choices. If an agent
cannot change the outcome from a to b by a unilateral change of his
action, then his preferences over a and b can never be recovered from
the observed actions.

Our result in this paper deals with a simple case only, and we have not
imposed any restrictions on the outcomes or the preferences. In more concrete
problems, one can think of introducing additional assumptions on the
preferences. For example, if the outcome is just a division of money, then
it may be reasonable to assume that agents’ preferences are monotonic in
their own monetary payoffs. Or if each agent’s actions can be represented
by a convex set in a Euclidean space, then it is often assumed (in the standard
game theory literature) that preferences are continuous and convex
(particularly in the context of normal form games). We would eventually
want to implement our study through some form of experiments. Since all
meaningful experiments are conducted in concrete contexts, we need to
establish results that are sensitive to the specifics of the experiments. Hence
our paper is just the beginning of a broader research program which will
extend the basic result here to cover other structures of strategic interactions
and game theoretic solution concepts.

We require that the outcomes of all possible reduced trees be observable.
There are many simple and interesting game situations, such as dividing-a-dollar, in which the players have infinitely many strategies but in practice only finitely many observations can be made. Hence one would like to know whether our results can be extended to such cases with only limited observations. It is easy to see that our conditions are still necessary for subgame-perfect rationalization, but sufficiency is far from obvious.