Note: This is not to be thought of as the agenda for the event. It's organized chronologically so that you can find the abstracts more readily.
8:30 am -- 9 am: Brief introductions by Jerry Sabloff, Doug Erwin, & David Wolpert
9 am -- 11:30: TUTORIAL on Game Theory by Joel Sobel
12:30 pm -- 3 pm: TUTORIAL on Information Theory, by Andy Fraser
3:30 pm -- 5:30 pm: TUTORIAL summarizing some of what is known about the overlap between information theory and game theory, by Nils Bertschinger and Eckehard Olbrich
9:00 am -- 9:30 am: Kevin Leyton-Brown
"Beyond Equilibrium: Predicting Human Behavior in Normal Form Games"
It is standard in multiagent settings to assume that agents will adopt Nash equilibrium strategies. However, studies in experimental economics demonstrate that Nash equilibrium is a poor description of human players' initial behavior in normal-form games. In this talk, I'll describe a range of widely studied models from behavioral game theory (BGT). For what we believe is the first time, we evaluated each of these models in a meta-analysis, taking as our data set large-scale, publicly available experimental data from the BGT literature. We also analyzed the parameters of the best-performing model and identified ways of modifying it, and indeed simplifying it, to improve performance. In the end, our work demonstrates two surprising facts: one BGT model was consistently the best, and people are smarter than behavioral game theorists had thought.
9:30 am -- 10:30 am: David Wolpert
"Unawareness, Information Theory, and Multiagent Influence Diagrams"
One of the enduring challenges of game theory is how to model unawareness, in which a player does not simply assign low probability to an event, but does not even know that the event is possible ("unknown unknowns"). Clearly, any change in a player's unawareness changes what information they have about the game. Yet without a formal model of unawareness, it is impossible to analyze such changes in their information. Among other things, this makes it impossible to use information theory to compare information that changes a player's unawareness with information that they acquire via conventional information channels.
In this talk I present a way to represent games using Bayes nets, which is related to Multiagent Influence Diagrams. I prove that this new representation is equivalent to the strategic form (and therefore extensive form) representations of games. I also show how to extend it to model player unawareness. I then show how to use this extension to analyze the information theoretic aspects of changes in player awareness. This allows us to use information theory to analyze how information acquired via changes in awareness is related to information acquired via conventional information channels.
Throughout this talk I emphasize some of the issues concerning information in games that are still poorly understood.
11:00 am -- 12:00 pm: Tobias Galla, Doyne Farmer
Complex dynamics in complicated games (joint work with Doyne Farmer) 1: In this work we analyze the outcome of learning in high-dimensional games and show that chaotic motion is a frequent outcome. Using path-integral methods, we are able to distinguish learnable from unlearnable games. Chaotic motion limits the usefulness of conventional concepts such as Nash equilibria, which are not the norm under learning. We therefore argue that conventional game theory is not applicable to a wide range of games and learning algorithms.
The role of noise in game dynamical learning 2: I characterize the outcome of learning in repeated games in the face of stochasticity and imperfect information about the moves made by a player's opponents. This intrinsic noise can affect the outcome of learning, turning fixed points into coherent cycles that are sustained by the noise and that limit the effectiveness of game learning. Methods from statistical physics can be used to characterize these cycles analytically. Similar quasi-oscillations are seen in evolutionary games in finite populations.
Complexity measures, information geometry and multi-particle correlations 3: We study complexity measures based on information geometry, as recently proposed by Olbrich et al., and show that these measures can increase under local transformations. In particular, we show that within this framework multi-particle correlations can be introduced by transformations acting on single particles only, thereby questioning the interpretation of these measures as quantifiers of complexity and correlation. We propose a refined definition and apply it to systems such as coupled chaotic maps. Work on the characterization of multi-particle distributions generated by quantum systems, using information geometry and exponential families, is currently in progress.
1 Complex dynamics in learning complicated games, T. Galla, J. D. Farmer, http://arxiv.org/abs/1109.4250
2 Intrinsic noise in game dynamical learning, T. Galla, Phys. Rev. Lett. 103, 198702 (2009)
3 Complexity measures, emergence, and multiparticle correlations, T. Galla, O. Gühne, Phys. Rev. E 85, 046209 (2012)
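The claim in the first abstract above, that learning need not settle onto Nash equilibria, can be illustrated with a minimal sketch (not the authors' code): exponential-weights learning on cumulative payoffs in matching pennies, where the trajectory cycles around the mixed equilibrium rather than converging. The game, learning rule, step size, and initial perturbation are all illustrative stand-ins; the chaotic regime in the paper requires games with many actions.

```python
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row player's payoffs in matching pennies

def softmax(q):
    z = np.exp(q - q.max())
    return z / z.sum()

def learn(steps=2000, eta=0.5):
    qx = np.array([0.1, 0.0])  # small asymmetry to kick the dynamics off equilibrium
    qy = np.zeros(2)
    traj = []
    for _ in range(steps):
        x, y = softmax(qx), softmax(qy)
        traj.append(x[0])
        qx = qx + eta * (A @ y)    # row player accumulates the payoff of each action
        qy = qy - eta * (A.T @ x)  # zero-sum: column player accumulates the negative
    return np.array(traj)

traj = learn()
print(traj.min(), traj.max())  # the strategy sweeps across most of the simplex
```

Rather than settling at the mixed Nash equilibrium (probability 1/2 on each action), the row player's strategy oscillates persistently, which is the simplest face of the non-convergence the abstract describes.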
"Information and Games of Chance"
1:00 pm -- 1:30 pm:
"Machine Learning, Markets, Information Theory and Games"
One fundamental issue with a minimax approach to games is that it makes unbounded assumptions about the other players: they are assumed to follow optimal paths. As a result, the possibility of suboptimal play is not considered, despite the fact that sub-optimality is generally a computational necessity - often neither computers nor people have the computational capacity for optimal play. To deal with bounded rationality, a game-theoretic model can be augmented with an opponent or player model. For agents to build an opponent model, the opponent state - the set of opponent beliefs and/or action choices - must now form part of the state space for a player. Such models can then be formulated as multi-agent Markov decision processes. However, the massive state expansion involved is often problematic. Independence assumptions regarding the other players help in establishing maximum entropy distributions for opponent play, which can be used in a utility maximising framework. By considering the whole system as a Markov decision process, we can reformulate this as a problem of probabilistic inference via KL divergence minimization. An alternative approach is a bounded communication theory for game-theoretic interaction. If the knowledge of and interaction regarding other agents' action choices are limited to some summary statistics, then, conditioned on those statistics, the agents act independently. This relates to market formalisms: in standard markets the cost of goods is the communication mechanism, but this can be extended to any interaction summaries. In previous work [2, 3] we have established relationships between standard machine learning mechanisms and prediction markets of Arrow-Debreu securities.
In particular, we have established that a market of agents endowed with isoelastic utilities generates alpha mixtures of agent beliefs; alpha mixtures are derived from the minimization of the Rényi divergence between the empirical and target distributions. An individual maximising utility in the absence of other agents will choose a pure strategy. However, a pure strategy imparts very high information about the action the agent will take. In the presence of others who might exploit knowledge of the individual's actions to that individual's detriment, it may make sense to instead choose a mixed strategy, which reduces the potential to be exploited by reducing predictability. The amount of information communicated is measured by the entropy of the mixed strategy, and such an approach results in entropy penalized utility maximization. I will discuss the formalisms in which this idea makes sense.
[1] S. Amari. Integration of stochastic models by minimizing α-divergence. Neural Computation, 19(10):2780-2796, 2007.
[2] A.J. Storkey. Machine learning markets. In Proceedings of Artificial Intelligence and Statistics, volume 15. Journal of Machine Learning Research W&CP, 2011.
[3] A.J. Storkey, J. Millin, and K.J. Geras. Isoelastic agents and wealth updates in machine learning markets. In Proceedings of ICML 2012, 2012.
[4] M. Toussaint, A.J. Storkey, and S. Harmeling. Expectation-maximization methods for solving (PO)MDPs and optimal control problems. In Silvia Chiappa
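The entropy penalized utility maximization mentioned in the abstract above has a well-known closed form that a short sketch can make concrete (the payoffs and temperature parameter below are illustrative, not from the talk): maximising E_p[u] + (1/beta) H(p) over mixed strategies p yields the softmax strategy p_i ∝ exp(beta u_i).

```python
import numpy as np

def entropy_penalised_strategy(u, beta):
    # Closed-form maximiser of E_p[u] + (1/beta) * H(p): a softmax over payoffs.
    z = np.exp(beta * (u - u.max()))
    return z / z.sum()

u = np.array([1.0, 0.8, 0.1])  # illustrative payoffs of three pure actions

for beta in (0.1, 1.0, 10.0):
    print(beta, np.round(entropy_penalised_strategy(u, beta), 3))
# Small beta -> near-uniform play (unpredictable, hard to exploit);
# large beta -> near-pure play (higher expected payoff, but highly predictable).
```

The temperature beta controls exactly the trade-off the abstract describes: how much expected utility the agent sacrifices to keep the entropy of its strategy, and hence its predictability to opponents, low.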
1:30 pm -- 2:30 pm: PANEL DISCUSSION ONE: Chris Sims, Kenneth Arrow, Tali Tishby, John Geanakoplos, (moderator - Simon DeDeo)
5:45 pm: Talk by Jennifer Dunne
"Human Impacts on Ecosystems: A Network Approach"
Networks of complex species interactions reveal the architecture underpinning the flow of energy and resources through ecosystems. While quantitative analysis and modeling of food webs have been around since the 1970s, it is only in the last decade that ecological network research has really taken off. A wide range of scientists are increasingly employing network approaches for a diverse set of questions that relate to the structure, dynamics, evolution, stability and robustness of ecosystems. I will point to some advances in understanding from the last decade, and then highlight a few of the many exciting empirical, analysis and modeling frontiers of the next decade and beyond, with a focus on how food webs are being used to examine human impacts on ecosystems.
9 am -- 10 am: Tali Tishby
"How to Discount Information? On the Connection Between Information and Time in Sequential Sensing and Acting"
Long interactions of an organism with its environment are characterized by a balance between sensory information rate and control complexity. The typical information flows can be described by an extension of the standard Bellman equation which optimizes a linear combination of value and information about the future (analogous to "free energy"). One can distinguish between two types of information exchange. The first, which we call "metabolic information", is the ongoing flow of sensory information required to maintain a stationary interaction with the environment. This type of information flow has a constant average rate determined by the reward rate and the control complexity, and it characterizes autonomous agents with no learning or long-term adaptation. A more interesting information flow is the one associated with exploration and learning. This flow is not stationary in any finite-state environment and grows sub-linearly with time, which poses difficulties for maintaining a constant balance of information and reward. An interesting way to retain the tractable Bellman formulation in this case is a renormalization of time into non-uniform units with equal average information processing rate, which imposes a renormalization of the states and the emergence of sensory and planning hierarchies.
I will describe this formalism and some of its theoretical and possibly cognitive implications.
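One common way to write down such a value-information Bellman recursion is sketched below; this is schematic only, with generic symbols that are not necessarily those used in the talk (a reference policy ρ, a trade-off parameter β, reward r, and transition probabilities p):

```latex
% Schematic value-information ("free energy") Bellman recursion:
% information cost of deviating from a reference policy, traded off
% against reward at rate beta, plus the expected free energy to go.
F^{\beta}(s) = \min_{\pi(a \mid s)} \;
  \mathbb{E}_{\pi}\Big[
      \log \frac{\pi(a \mid s)}{\rho(a)}
      \;-\; \beta\, r(s,a)
      \;+\; \sum_{s'} p(s' \mid s, a)\, F^{\beta}(s')
  \Big]
```

The first term charges for control information, the second rewards value, and β sets the linear combination of the two that the abstract refers to.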
1 pm -- 2 pm: James Wright, Jim Crutchfield
"Evaluating Set-Valued Predictions: An Information Theoretic Approach"
Some solution concepts, including both rationalizability and Nash equilibrium, can be understood as making a qualitative statement about which actions are likely, rather than providing a quantitative probability distribution over actions. In evaluating such solution concepts, we must trade off accuracy, which prefers large sets, against specificity, which prefers smaller sets. Furthermore, solution concepts which consistently report large sets of "likely" actions may be very informative when the actions which are excluded are consistently very rare (as in a single iteration of dominated strategy removal). We explore the use of techniques from information theory to evaluate set-valued predictions quantitatively; in particular, we formulate a scoring rule based on mutual information that appears to have some attractive properties.
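The abstract does not specify its mutual-information scoring rule, so the sketch below shows only a natural information-style baseline for intuition: treat a set prediction S over n actions as the uniform distribution on S, and score an observed action by the bits of specificity gained, log2(n/|S|), when the action falls in S, with a (hypothetical) fixed penalty otherwise. The observed actions and penalty value are illustrative.

```python
import numpy as np

def set_score(S, a, n, miss_penalty=-10.0):
    # Bits gained by narrowing n actions down to the set S, if the prediction hits.
    return float(np.log2(n / len(S))) if a in S else miss_penalty

n = 4
obs = [0, 0, 1, 0, 2]  # hypothetical observed actions
for name, S in [("broad", {0, 1, 2}), ("tight", {0})]:
    avg = np.mean([set_score(S, a, n) for a in obs])
    print(name, round(avg, 3))
```

Running this shows the trade-off the abstract describes: the tight set is far more specific per hit (2 bits vs about 0.415), but its misses are so costly that the broad, more accurate set scores better on average here.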
"Information Dynamics: Evolutionary Innovation, Children's Games, and Iterated Learning"
I'll introduce a theory of sequential causal inference in which learners in a chain estimate a structural model from their upstream “teacher” and then pass samples from the model to their downstream “student”. It extends the population dynamics of genetic drift, recasting Kimura's selectively neutral theory as a special case of a generalized drift process using structured populations with memory. We examine the diffusion and fixation properties of several drift processes and propose applications to learning, inference, and evolution. We also demonstrate how the organization of drift process space controls fidelity, facilitates innovations, and leads to information loss in sequential learning with and without memory.
4 pm -- 5 pm: Melanie Moses
"How Ants Turn Information Into Food, a Case Study in Scalable Distributed Search"
Many distributed search problems require identifying a rare and previously unseen event and producing a rapid response. Biological systems have evolved solutions to distributed search and response under uncertainty. Immune systems and ant colonies efficiently scale up massively parallel search with automated response in highly dynamic environments, and both do so using distributed coordination without centralized control. This talk will focus on field studies and computer simulations that show how ants exploit information to improve foraging for seeds in various spatial distributions. We analyze ants in the field and simulated ants whose behavior was selected to maximize foraging rate using genetic algorithms. In both cases, foraging rates increase systematically as seeds are more clumped. More specifically, the average time to collect a seed is proportional to the amount of information required to specify the distribution of seeds. The increase in foraging rate as food is more clumped was indistinguishable across a wide range of colony sizes. Computer simulations demonstrate how information about seed locations, encoded in communication and in the memory of individual ants, contributes to successful search.
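The quantity the abstract ties foraging time to, the information required to specify the distribution of seeds, can be made concrete with a small sketch (the cell counts are illustrative, not data from the study): the Shannon entropy, in bits, of the distribution of seeds over spatial cells, which is lower when seeds are clumped.

```python
import numpy as np

def seed_entropy_bits(counts):
    # Shannon entropy (bits) of the distribution of seeds over spatial cells.
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

uniform = [4] * 16             # 64 seeds spread evenly over 16 cells
clumped = [32, 32] + [0] * 14  # the same 64 seeds piled into 2 cells

print(seed_entropy_bits(uniform))  # 4.0 bits: any of 16 cells is equally likely
print(seed_entropy_bits(clumped))  # 1.0 bit: a seed is in one of two piles
```

Fewer bits to specify where the seeds are corresponds, on the abstract's account, to less search time per seed, which is why clumped distributions are foraged faster.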
9 am -- 10 am: PANEL DISCUSSION TWO: Doyne Farmer, Ben Golub, Luis Bettencourt, Carl Bergstrom, (moderator - David Wolpert)
1 pm -- 2:30 pm: Ole Peters, John Geanakoplos, David Sherrington
"Utility vs. Dynamics"
Game theory and economics provide numerous set-ups for decision-making under uncertain conditions. In many of these, positivity of expected net changes in wealth is a poor predictor of human decision-making. The problem is classically resolved by introducing non-linear utility functions that translate money into usefulness. The utility function is chosen so as to match human behavior, but its precise form typically cannot be derived from first principles and constitutes a loose end in the formalism. In a few recent publications we have explored a different perspective, where instead of computing expectation values we compute time averages assuming multiplicative dynamics of wealth. This procedure, while conceptually different, turns out to be formally equivalent to using logarithmic utility. It clarifies under what circumstances logarithmic utility is expected to yield realistic predictions and has led to the discovery of various errors in classic treatments of these problems. In this speculative talk I will present the basic rationale behind the new approach and indicate connections to information theory and statistical mechanics. Different dynamics lead to different time-average growth rates and to different equivalent utilities (linear utility = additive dynamics, logarithmic utility = multiplicative dynamics). The growth rates are formally similar to entropies, and I would like to invite the audience to explore the relationship between non-standard entropies, such as those summarized by Hanel, Thurner, and Gell-Mann, and different types of dynamics.
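The central point of the abstract above can be checked numerically with a minimal sketch (the 1.5x/0.6x coin-toss gamble is a standard illustration, not taken from the talk): under multiplicative wealth dynamics the relevant growth rate is the time average E[log r], not log E[r], so a gamble with positive expected gain can still destroy wealth along almost every individual history.

```python
import numpy as np

rng = np.random.default_rng(0)
# Per-round wealth multipliers: fair coin, win -> x1.5, lose -> x0.6.
r = np.where(rng.random(100_000) < 0.5, 1.5, 0.6)

expected_gain = r.mean()            # ensemble average of the multiplier (~1.05 > 1)
time_avg_growth = np.log(r).mean()  # per-round log growth along one long history

print(expected_gain)     # looks favourable in expectation
print(time_avg_growth)   # negative: wealth decays over time (E[log r] ~ -0.053)
```

This is exactly the formal equivalence the abstract notes: evaluating the time average of multiplicative dynamics reproduces logarithmic utility, without having to postulate a utility function at all.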
"Games of Money and Status"
"Use of Dynamical Generating Functional Methods for the Analysis of the Macroscopic Behavior of Multi-agent Games in the Limit of a Large Number of Players."
Methods developed for studies of spin glasses are applicable to the analytic study of multi-agent games in the limit of large numbers of players, where the individual propensities and strategies are drawn from intensive distributions, with or without stochasticity in operation, and with or without detailed balance. Exact mappings for the typical behaviour of the initial many-body systems can be made to effective single-player stochastic ensembles with memory and coloured noise (not just single, representative players), in situations in which the networks on which the players sit, and the distributions of the interactions thereon, are determined without spatial constraints - as, for example, in systems driven by distance-independent information, such as that provided by the internet, radio, television, telephone, news etc.
To date, most work has been performed for systems with a single underlying microscopic timescale, but it is interesting to consider extensions with multiple timescales, to include co-evolution of strategies, group memberships etc., typically on slower timescales (and with possibly different effective stochastic noise levels), along with faster individual game-plays. These can, in principle, be either supervised or unsupervised - either "knowing" what is being aimed for or discovering it without assistance.
2 pm -- 3 pm: SUMMARY TALK, by Simon DeDeo