Dynamic Interactive Epistemology
Article Code | Publication Year | English Article Pages |
---|---|---|
21647 | 2004 | 32 pages (PDF) |
Publisher: Elsevier - Science Direct
Journal: Games and Economic Behavior, Volume 49, Issue 1, October 2004, Pages 49–80
English Abstract
The epistemic program in game theory uses formal models of interactive reasoning to provide foundations for various game-theoretic solution concepts. Much of this work is based around the (static) Aumann structure model of interactive epistemology, but more recently dynamic models of interactive reasoning have been developed, most notably by Stalnaker [Econ. Philos. 12 (1996) 133–163] and Battigalli and Siniscalchi [J. Econ. Theory 88 (1999) 188–230], and used to analyze rational play in extensive form games. But while the properties of Aumann structures are well understood, without a formal language in which belief and belief revision statements can be expressed, it is unclear exactly what the properties of these dynamic models are. Here we investigate this question by defining such a language. A semantics and syntax are presented, with soundness and completeness theorems linking the two.
English Introduction
It is well established both theoretically and empirically that strategic reasoning requires agents to form not just conjectures about each other’s actions, but also about each other’s knowledge and beliefs, which can then be used to infer what actions they might take. In particular, the implications of common knowledge of rationality, where all the agents are rational, all know they are all rational, all know that they know, and so on, have been extensively analyzed. More recently, epistemic foundations have been provided for game-theoretic solution concepts such as Nash equilibrium (Aumann and Brandenburger, 1995). Comprehensive surveys of work in this area are provided by Dekel and Gul (1997) and Battigalli and Bonanno (1999).

Much of this work is based around the Aumann structure model (see Aumann, 1976), in which each agent’s knowledge is represented by an information partition over a set of states, or possible worlds. For the purposes of the game theorist, however, Aumann structures have several important limitations. First, they describe a very strong concept of knowledge. An implication of modeling agents’ epistemic states with information partitions is that everything they know is true, and that they have complete introspective access to this knowledge, i.e. they know everything they know (positive introspection), and they know everything they do not know (negative introspection). Negative introspection in particular has widely been considered inappropriate. More generally, it has been thought important to analyze agents’ beliefs as well as their knowledge. And beliefs, unlike knowledge, can be false. These issues can be dealt with by replacing the information partitions with possibility correspondences (see e.g. Samet, 1990). Beliefs modeled by possibility correspondences at their most general do not satisfy any of the properties described above. By imposing certain restrictions on the correspondences we can recover these properties one by one.

The second problem with using Aumann structures to model rational play in games is that they are essentially static: the epistemic states that they model are fixed, while in dynamic games agents have a chance to change their beliefs as the game progresses. In particular, conjectures about what strategies one’s opponents might be playing can be revised as moves are observed. A stark illustration of the importance of such revisions is given by Reny (1993), who shows that once the possibility of belief change is taken into account, the game-theoretic wisdom that common knowledge of rationality implies backward induction in games of perfect information is undermined. As long as the information that an agent learns is consistent with what she already knew or believed, this problem can be handled in the existing framework. The agent’s partition (or possibility correspondence) can be refined, in a manner analogous to Bayesian updating of probabilities, to take account of the new information. But, like Bayes’ rule, this process is not well defined when the information learned is incompatible with the agent’s previous beliefs, i.e. she is surprised. And modeling the response to such surprises is crucial: to evaluate the rationality of strategies in a dynamic game, we must have a theory about what the players would believe at every node in the game, even though some of these nodes will typically be ruled out by the players on the basis of the information they possess at the beginning of the game.
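For concreteness, the partition-based knowledge operator and the introspection properties described above can be written out as follows. This is the standard Aumann-structure formulation in generic notation (Ω for the state space, P_i(ω) for the cell of agent i’s partition containing state ω, ¬E for the complement of E); it is a sketch, not an excerpt from the paper.

```latex
% Agent i knows event E at exactly those states where her partition cell lies inside E:
\[ K_iE \;=\; \{\,\omega \in \Omega : P_i(\omega) \subseteq E\,\} \]
% Properties forced by the partitional structure:
\begin{align*}
  K_iE      &\subseteq E              && \text{(truth: everything known is true)}\\
  K_iE      &\subseteq K_iK_iE        && \text{(positive introspection)}\\
  \neg K_iE &\subseteq K_i\neg K_iE   && \text{(negative introspection)}
\end{align*}
```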
Models of dynamic interactive reasoning have thus been developed. Stalnaker (1996) replaces the information partitions of the Aumann structure with plausibility orderings on the set of possible worlds, which encode information not just about each agent’s current beliefs, but also about how these beliefs will be revised as new information is learned, even if this new information is a surprise (e.g. it takes the form of an unexpected move made by one’s opponent). This seems to be a satisfactory resolution to the problem, and models of this kind have been used by Stalnaker and others to analyze rational play in dynamic games.

From a philosophical point of view, however, there is something unsatisfactory about the Aumann structure model and all its extensions, as identified by Aumann (1999) himself: “. . . the whole idea of ‘state of the world,’ and of a partition structure that reflects the players’ knowledge about the other players’ knowledge, is not transparent. What are the states? Can they be explicitly described? Where do they come from?” (p. 264). Fagin et al. (1999) elaborate further: “If we think of a state as a complete description of the world, then it must capture all of the agents’ knowledge. Since the agents’ knowledge is defined in terms of the partitions, the state must include a description of the partitions. This seems to lead to circularity, since the partitions are defined over the states, but the states contain a description of the partitions” (p. 332).

Economists have developed an alternative model of interactive beliefs which seems to avoid this circularity. The hierarchical approach (Mertens and Zamir, 1985; Brandenburger and Dekel, 1993) takes as its starting point a set of states of nature, which describe facts of interest about the physical world, such as which strategy profile will be played. Each agent’s beliefs about the state of nature are represented by a probability distribution over the set of states of nature; their beliefs about these beliefs are then represented by a probability distribution over these distributions and the set of states of nature; and so on. In this way, we build up an infinite hierarchy of beliefs for each player, called her type (after Harsanyi, 1968). In contrast to the Aumann structure approach, where the infinite hierarchy of beliefs is generated implicitly by partitions over obscure states of the world, here it is explicitly constructed from levels of probability distributions over clearly defined states of nature.

The question remains, however, as to whether a state of nature together with a description of each agent’s type provides a satisfactory description of a state of the world. For it is not clear that an agent’s type gives a complete description of her beliefs. Her type specifies what she believes about all the finite-level beliefs of her opponents, but does it actually describe what she believes about their types, what she believes about what they believe about her type, and so on? It turns out that as long as the types satisfy certain coherency conditions, we can answer this question in the affirmative. These coherency conditions amount to assuming that the agents satisfy positive and negative introspection, and guarantee that the belief hierarchies are closed. Furthermore, the hierarchical model can be extended to deal with the problem of belief revision.
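Schematically, the hierarchical construction described above can be written as follows for two agents, with Δ(·) denoting the set of probability measures; the symbols X_k and μ_i^k are generic placeholders rather than the notation of Mertens and Zamir or Brandenburger and Dekel.

```latex
% Spaces of uncertainty, built level by level from the set of states of nature S:
\begin{align*}
  X_0     &= S,\\
  X_{k+1} &= X_k \times \Delta(X_k).
\end{align*}
% A type for agent i is a sequence of beliefs, one at each level:
\[ t_i = (\mu_i^1, \mu_i^2, \ldots), \qquad \mu_i^{k+1} \in \Delta(X_k), \]
% and coherency requires higher-order beliefs to agree with lower-order ones:
\[ \mathrm{marg}_{X_{k-1}}\,\mu_i^{k+1} = \mu_i^{k} \quad \text{for all } k \geq 1. \]
```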
Battigalli and Siniscalchi (1999) have shown how to construct hierarchies of conditional probability systems; the level-0 probability systems describe each agent’s (probabilistic) beliefs about the physical world as before, but they also encode information about how these beliefs are revised. The level-1 systems represent the agents’ beliefs over these level-0 systems, and so on. Again, as long as the appropriate coherency conditions are satisfied, these hierarchies are closed and each agent’s type describes all of her beliefs.

Any extra clarity these hierarchical constructions might bring, however, is paid for at the price of greatly increased complexity. The complexity of these models may well be self-defeating: Aumann (1999) describes them as “cumbersome and far from transparent. . . In fact, the hierarchy construction is so convoluted that we present it here with some diffidence” (pp. 265, 295). In addition, two more specific problems arise. The first concerns the coherency conditions that are required for closure of the hierarchies. As we have already discussed, it may not always be appropriate to assume that agents have complete introspective access to their epistemic states; this remains true even if we are dealing with belief rather than knowledge. In the case of conditional probability systems, the coherency assumption becomes even stronger: here it is assumed that agents have complete introspective access to their belief revision schemes as well. Ideally we would like to have a system that is flexible enough to work with or without positive and negative introspection. The second problem arises when we consider the non-probabilistic analogue of these belief hierarchies, where each level in the hierarchy describes simply which members of the previous level the agent considers possible, rather than assigning probabilities to each (the former is not generally derivable from the latter: a world may be considered possible even if it is assigned zero probability). In this case it turns out that, even with the appropriate coherency conditions, the infinite hierarchy does not in general provide a complete description of an agent’s uncertainty; that is, it does not tell us which types of her opponents she considers possible (Fagin, 1994; Heifetz and Samet, 1998; Brandenburger and Keisler, 1999).

Thankfully there is a path between this Scylla and Charybdis, between the obscurity of Aumann structures and the complexity of belief hierarchies. Epistemic logic is based on a formal language which can express statements about the world and what agents believe about the world and about each other. The language is built up from a set of primitive formulas by means of an inductive rule. The primitive formulas and each step of the inductive process are entirely transparent. Hintikka (1962) showed how Kripke structures (Kripke, 1963) can be used to provide a semantics for this language, i.e. a set of rules for determining the truth or falsity of every sentence or formula in the language. Hence there is no issue about whether or not these structures provide a complete description of the agents’ uncertainty: the language itself defines the limits of what we can and cannot say about the agents’ beliefs.
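To make the contrast concrete, the following is the textbook definition of a basic multi-agent epistemic language and the Kripke-style truth clause for belief (with p ranging over primitive formulas and R_i denoting agent i’s accessibility relation); it is a generic sketch, not the richer language defined in Section 2 of the paper.

```latex
% Formulas are generated inductively from primitive formulas:
\[ \varphi ::= p \;\mid\; \neg\varphi \;\mid\; \varphi \wedge \psi \;\mid\; B_i\varphi \]
% Truth of a belief formula at world w of a Kripke structure M:
\[ (M,w) \models B_i\varphi
   \quad\iff\quad
   (M,v) \models \varphi \ \text{ for every } v \text{ with } w R_i v. \]
```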
There is a very close connection between Kripke structures and Aumann structures: the former are a general version of the latter, where the information partitions are replaced by possibility correspondences (traditionally referred to as accessibility relations), plus the addition of an interpretation which assigns truth values to the primitive formulas. Kripke structures are general enough to model knowledge or belief, with or without the introspection assumptions. Certain properties of Kripke structures correspond to various axioms and rules governing the behavior of formulas in the language: these axioms and rules, jointly referred to as an axiom system, give us a precise characterization of sets of formulas which are true in different types of Kripke structure, and hence an elucidation of the particular concept of knowledge or belief that is being modeled. The axiom system and language form a syntax for the logic.

But there is a gap still to be filled. In order to extend the results just described to structures such as Stalnaker’s, we must develop a language that is richer than that of epistemic logic. In Section 2 of this paper, we define such a language by adding revised belief operators to the standard language. Thus, if B_iφ is a formula of the language, then so is B_i^φψ, to be interpreted “i believes that ψ on learning that φ.” We then present a semantics for this language consisting of belief revision structures, which look much like a generalized version of Stalnaker’s structures. A theorem links these structures to an axiom system which describes how these revised belief operators, and the rest of the language, behave. This axiom system is essentially the most basic axiom system of epistemic logic augmented by additional axioms and rules that correspond to some of the AGM axioms of belief revision (Alchourrón et al., 1985). These axioms are reproduced in Appendix A. Several extensions to the model, including the introduction of introspection and consistency axioms, and common belief operators, are developed in Section 3, and Section 4 comments on some issues which are not treated by our formalism.

Before we start, however, we should comment more carefully on the relevance of this work for game theory. The importance of higher-order beliefs in strategic reasoning is well understood, and in the dynamic setting it is essential to model how these beliefs change as agents learn new information. Battigalli and Siniscalchi (2002) have used their hierarchical models of belief revision to provide an analysis of forward-induction reasoning in its various guises (including the intuitive criterion of Cho and Kreps, 1987), as well as an epistemic characterization of backward induction. The logical approach adopted here, although less direct in application than the hierarchies of Battigalli and Siniscalchi, forms the basis of an alternative, complementary, framework for analyzing dynamic games, and offers simplicity at the same time as transparency. The simplicity comes from the semantic structures that are used to provide truth conditions for formulas of the formal language: these structures are easily adapted to provide epistemic models of games (see Section 5 for an example). Unlike the constructions of Battigalli and Siniscalchi, which are infinite by definition, these models can be very small. And the axiom system and the language itself, which provide the syntax of the logic, are straightforward to interpret and give us transparency.
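As an informal illustration of how the revised belief operator is meant to be read, the minimal sketch below treats an agent’s epistemic state as a plausibility ranking over a finite set of worlds and declares that she believes ψ on learning φ whenever ψ holds at every most plausible φ-world. The function name and the toy model are invented for illustration, in the spirit of Stalnaker-style plausibility orderings rather than the paper’s formal definitions.

```python
# Minimal sketch: a plausibility ranking over worlds determines both current
# beliefs and how they are revised after a (possibly surprising) observation.

def revised_belief(worlds, plausibility, phi, psi):
    """True iff the agent believes psi upon learning phi.

    worlds       -- iterable of possible worlds
    plausibility -- dict mapping each world to a rank (lower = more plausible)
    phi, psi     -- propositions, given as predicates on worlds
    """
    phi_worlds = [w for w in worlds if phi(w)]
    if not phi_worlds:                    # learning a contradiction: vacuously true
        return True
    best = min(plausibility[w] for w in phi_worlds)
    most_plausible = [w for w in phi_worlds if plausibility[w] == best]
    return all(psi(w) for w in most_plausible)


# Toy model: the opponent plays L, M, or R; L is expected, R is a big surprise.
worlds = ["L", "M", "R"]
plausibility = {"L": 0, "M": 1, "R": 2}

# Unconditional belief = belief on learning a tautology: the agent believes L.
print(revised_belief(worlds, plausibility, lambda w: True, lambda w: w == "L"))      # True
# On observing "not L" (a surprise), she comes to believe M, the most
# plausible world consistent with what she has learned.
print(revised_belief(worlds, plausibility, lambda w: w != "L", lambda w: w == "M"))  # True
```

The second call shows the point stressed earlier: even when the observation contradicts the agent’s initial belief, her revised beliefs remain well defined.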
This syntax lays out the “rules of argument” and allows nothing to be hidden in the formalism. Soundness and completeness theorems establish equivalence between what is true in every structure and what can be proved in the axiom system: the notoriously tricky task of proving that a formula can be derived from a given set of axioms and rules is thus reduced to the mathematical problem of checking that our structures have a particular property. This methodology is adopted by Board (2002a) in a companion paper. Other papers which adopt the logical approach to analyze dynamic games include Clausing (2001) and Feinberg (2001); we discuss the logical components of those papers, along with other related literature, in Section 6.
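Schematically, the soundness and completeness theorems referred to here assert an equivalence of the following form, where AX stands for a given axiom system and C for the matching class of structures (both symbols are placeholders, not the paper’s notation):

```latex
\[ \vdash_{\mathrm{AX}} \varphi
   \quad\Longleftrightarrow\quad
   (M,w) \models \varphi \ \text{ for every } M \in \mathcal{C} \text{ and every world } w \text{ of } M. \]
```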
English Conclusion
The aim of this paper has been to develop a dynamic model of interactive reasoning which combines analytical simplicity with clarity of interpretation. Belief revision structures are similar to the models used very successfully by Stalnaker to analyze rational play in extensive form games (Stalnaker, 1996), and to shed light on the forward and backward induction procedures (Stalnaker, 1998). These structures provide truth conditions for a formal language. Soundness and completeness theorems establish tight connections between the formulas that are true in various classes of belief revision structure, and those that are provable in certain axiom systems, thereby giving us a precise understanding of what the structures mean.