Overview
The adoption and diffusion of new technologies, including those associated with energy distribution and management, require a deep appreciation of the properties and diversity of social systems. Without an understanding of values, incentives, rewards, and beliefs, any new technology is likely to fail in its intended market. We introduce a new paradigm, Emergent Engineering, whose central objective is to engineer minimal mechanisms (MM) that coordinate with maximal adaptive systems (AS) to achieve a function or objective. The minimal mechanism could be a signal, policy, platform, algorithm, or social structure that is able to align with the incentives and rewards of locally adaptive communities. We introduce the general framework and present five foundational projects and application areas aimed at providing a solid scientific underpinning for efforts to integrate a diversity of local opinions into the pursuit of a robust, sustainable, and equitable society. These include:
Emergent engineering solutions seek to introduce a minimally engineered "classical" device into an adaptive system (see Adaptation in: [8]). The adaptive system then performs much of the desired work. In this way a potentially harmonious division of labor can be achieved. This approach has several distinct advantages over purely classical interventions. The first is that the engineered mechanism, by virtue of potentially well-established principles and its minimality, can be rapidly generated and modified. The second is that the minimal mechanism can be widely distributed and applied with low transfer costs across populations and possibly related application areas. The third is that preexisting properties of the adaptive system exploit the minimal mechanism toward their own ends. Hence misalignments of interest are reduced compared to more cumbersome and complete solutions that are imposed from above and are often "gamed" into obsolescence. The fourth is that the mechanism does not come with a fixed objective but forms one half of a pair - mechanism and adaptive system - each of which responds to the needs of the local adaptive community. Emergently engineered solutions can therefore be proportionately as diverse as the individuals and communities that adopt them.
The implementation differences between classical and emergently engineered technologies can be understood as follows:
- Ideal classical machines move through an appropriate coordinate space over time governed by principled equations of motion. This motion typically conserves a critical quantity, such as energy, and can be described as following a stationary path that extremizes the action. The Euler-Lagrange equation, shown after this list, describes this path.
- Adaptive systems, on the other hand, move through a coordinate space described in terms of complicated fitness landscapes. These are rarely stationary over long intervals of time. Movement is achieved through the mechanics of two nearly irreversible processes - mutation and selection. Together these processes transition the system through a set of metastable states, generating entropy and on average increasing fitness. Whereas in the classical case the action principle, when applicable, implies a nearly singular path of high predictability, in the adaptive case the contingencies of mutation and selection generate considerable uncertainty.
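For concreteness, the stationary path referred to above is the solution of the Euler-Lagrange equation,

$$ \frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot{q}}\right) - \frac{\partial \mathcal{L}}{\partial q} = 0, $$

where $\mathcal{L}(q, \dot{q}, t)$ is the Lagrangian and $q$ a generalized coordinate. No comparably compact description exists for the mutation-selection dynamics of adaptive systems.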
The key idea behind Emergent Engineering is to introduce a classical system into a larger adaptive context and exploit its action to either: (1) shape the adaptive fitness landscape, or (2) promote adaptive variability, whether by exploring a high-dimensional space or by biasing selection toward random variation that follows the desired path "designed" into the mechanism.
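A toy illustration of the first mode, shaping the fitness landscape: in the sketch below a simple mutation-selection dynamic climbs a one-dimensional landscape, and a small additive incentive stands in for the minimal mechanism. The landscape, the incentive term, and all parameters are illustrative assumptions, not a model of any particular system.

```python
# Toy mutation-selection dynamic on a 1D fitness landscape.
# A "minimal mechanism" is modeled as a small additive incentive
# that tilts the landscape and thereby shifts the adapted state.
import random

def fitness(x, incentive=0.0):
    base = -(x - 2.0) ** 2        # the adaptive system's own optimum at x = 2
    return base + incentive * x   # the mechanism tilts the landscape

def evolve(incentive, pop_size=200, steps=500, sigma=0.1):
    pop = [random.gauss(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(steps):
        pop = [x + random.gauss(0.0, sigma) for x in pop]            # mutation
        pop.sort(key=lambda x: fitness(x, incentive), reverse=True)  # selection:
        pop = pop[: pop_size // 2] * 2                               # fitter half reproduces
    return sum(pop) / len(pop)

print(evolve(incentive=0.0))  # population settles near x = 2.0
print(evolve(incentive=1.0))  # tilted landscape shifts the optimum toward x = 2.5
```

The mechanism never dictates a trajectory; it only reweights the landscape, and the adaptive dynamics do the rest.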
Throughout human history, the global urban population has grown continuously. More than half of the global population is currently urbanized, placing cities at the center of human development. It is estimated that by 2030 the number of mega-cities, cities with more than 10 million inhabitants, will increase from 10 to approximately 40. As a result of the concentration of humans in cities, and their associated energy needs, cities are major contributors to climate change. According to a recent estimate by UN Habitat, cities consume 78 percent of the world's energy and produce more than 60 percent of greenhouse gas emissions. This is true despite the fact that cities account for less than 2 percent of the Earth's surface. There is an obvious need for a quantitative and possibly predictive theory of how larger urban areas affect a wide variety of city features, dynamics, and outcomes. Perhaps most critically, we must understand the emergently engineered outcomes of urban policy: how larger cities, understood in classical terms (space and infrastructure), both positively and negatively affect socioeconomic outcomes and the quality of life of individuals.
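In the urban scaling literature such relationships are commonly summarized by a power law,

$$ Y(N) = Y_0 \, N^{\beta}, $$

where $Y$ is an aggregate urban quantity, $N$ is city population, and $\beta$ is the scaling exponent. Socioeconomic outputs such as wages and patents typically scale superlinearly ($\beta > 1$), while material infrastructure scales sublinearly ($\beta < 1$).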
One important aspect of urban features that remains under-explored in the urban scaling framework is economic inequality. Inequality has fundamental implications for individuals' quality of life and the productivity and stability of societies. Past research has heightened the debate about economic inequality and its relationship with economic growth and general welfare. And there is growing concern about the negative effects of urban inequality on political stability, crime, and corruption. It is well documented that greater regional inequality correlates with higher murder rates and reduced economic growth. Economic inequality is usually measured in terms of the dispersion in the distribution of income or wealth, as captured by the Gini coefficient. Some past research has noted that larger cities are correlated with higher Gini coefficients of income distribution, but it remains unclear whether there are systematic relationships between other features of the income distribution and urban area size. Furthermore, characterizing distributions by a single metric may lose important information: for example, does being poor in a bigger city correspond to a higher or lower standard of living than being poor in a smaller city? All of these considerations highlight why understanding the interface of materials, regulations, behavior, and outcomes will be of critical research and practical importance to society in the coming decades. Building on the theory of scaling, we are proposing new methods and models to study the scaling of inequality.
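For reference, the Gini coefficient mentioned above can be computed in a few lines; a minimal sketch, with the example incomes purely illustrative:

```python
# Gini coefficient: 0 for perfect equality, approaching 1 as income
# concentrates in a single individual.
import numpy as np

def gini(income):
    x = np.sort(np.asarray(income, dtype=float))  # ascending incomes
    n = x.size
    ranks = np.arange(1, n + 1)
    # Standard closed form based on the rank-weighted sum of sorted incomes.
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))   # 0.0  (perfect equality)
print(gini([0, 0, 0, 10]))  # 0.75 (one person holds everything, n = 4)
```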
This work seeks to analyze the underlying causes of alarming patterns that we have discovered when analyzing total income scaling in population percentiles. These results include the discovery that income in the least wealthy decile (10%) scales almost linearly with city size, while that in the most wealthy decile scales with a significantly super-linear exponent [23]. This result illustrates that the benefits of larger cities are increasingly unequally distributed, and that for the poorest income deciles, city growth has no positive effect on income growth over the null expectation of a linear increase. We have found that these results hold after adjusting for cost of living as a proxy for housing costs. Cities, therefore, not only pollute disproportionately relative to the small fraction of the Earth's surface they occupy, but are increasingly engines of wealth generation for the few and inequality for the many. Urban planning, redistricting, and growth policies need to take account of possible effects on adaptive social networks and agents, for whom wealth and poverty are largely unanticipated outcomes.
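The decile-specific exponents behind these results can be estimated by ordinary least squares on log-transformed data. The sketch below is a minimal illustration on synthetic data in which the exponents reported in [23] are built in by construction; all names and parameters are assumptions for the example, not the study's actual pipeline:

```python
# Estimate a decile-specific urban scaling exponent beta from
# log Y = log Y0 + beta * log N, one (N, Y) pair per city.
import numpy as np

def scaling_exponent(population, decile_income):
    beta, log_y0 = np.polyfit(np.log(population), np.log(decile_income), deg=1)
    return beta

# Synthetic cities: bottom decile built to scale linearly (beta = 1.00),
# top decile built to scale superlinearly (beta = 1.15), plus noise.
rng = np.random.default_rng(0)
N = rng.uniform(5e4, 5e6, size=200)
bottom = 1e3 * N**1.00 * rng.lognormal(0.0, 0.1, size=200)
top = 1e2 * N**1.15 * rng.lognormal(0.0, 0.1, size=200)
print(scaling_exponent(N, bottom))  # recovers approximately 1.00
print(scaling_exponent(N, top))     # recovers approximately 1.15
```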
Primary Project Objectives:
- Explore the mechanics of urban scaling that increase wealth production while increasing inequality, seeking means of breaking this correlation.
- Explore the behavioral incentives that connect innovation to negative externalities, including pollution and excessive energy use.
- Identify commonalities and differences in the scaling of functional diversity across organizations as a means of classifying the relative effectiveness of top-down versus bottom-up configurations.
Learn more about SFI's ongoing research on Cities, Scaling, and Sustainability
Collective decisions are central to human societies, from small-scale social systems such as families and communities, through to larger scales that include city planners, democratic governments, and international organizations. Some of the most pressing challenges facing humanity, including addressing climate change by adopting new energy sources and technologies, mitigating the heterogeneous threats of global pandemics, and addressing economic inequality, critically depend on collective decisions. A central concern regarding these systems is the large-scale effect of social learners: individuals who adopt other people's opinions and behaviors rather than exploring the optimal options on their own.
In order to be effective when introducing a new technology or new policy, we need to understand how social learning operates at a multitude of different scales - from local organizations and communities through to cities and states. It is not enough to introduce a "better" or more economical option and simply assume that it will be broadly adopted. Research on the effect of social learning on collective decision outcomes has come to mixed conclusions. Some studies find that social learners impair collective performance, some find them beneficial, and some argue that their effect depends on network structure, adaptability, or the level of network effects. The question is challenging to address because collective decision outcomes depend on the insufficiently understood interactions of multiple cognitive and social factors, including cognitive strategies relying on individual or social learning, task properties, and social influence processes. Few attempts have been made at developing overarching mathematical frameworks capable of integrating these complex interactions into parsimonious theories. What is required is an approach that can explicitly compare different assumptions within an overarching mathematical framework that enables exploration of the dynamics underlying collective performance. This needs to include adaptive agents described in terms of cognitive strategies (individual vs. social learning), classical rewards and incentives as encoded in task properties (relative merit of options), and structures supporting the social influence processes (normative vs. informational conformity). Collective decision making is grounded in imperfect, local information that drives local learning outcomes. Engineered mechanisms of consensus generation and information diffusion need to take account of the emergent properties of groups and social circuits.
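A minimal sketch of the kind of model such a framework must subsume: a well-mixed population in which individual learners sample noisy payoffs while social learners copy others. Every parameter below (payoff probabilities, mix of learner types, exploration rate) is an illustrative assumption:

```python
# Minimal individual-vs-social-learning model over two options:
# option 1 pays off with probability p_good, option 0 with p_bad.
import random

def run(n_agents=100, frac_social=0.5, steps=20000,
        p_good=0.6, p_bad=0.4, explore=0.1):
    choices = [random.randint(0, 1) for _ in range(n_agents)]
    is_social = [random.random() < frac_social for _ in range(n_agents)]
    for _ in range(steps):
        i = random.randrange(n_agents)
        if is_social[i]:
            choices[i] = random.choice(choices)  # copy a random individual
        else:
            trial_1 = random.random() < p_good   # noisy sample of option 1
            trial_0 = random.random() < p_bad    # noisy sample of option 0
            if trial_1 != trial_0:
                choices[i] = 1 if trial_1 else 0
            elif random.random() < explore:
                choices[i] = 1 - choices[i]      # occasional exploration
    return sum(choices) / n_agents               # share holding the better option

for f in (0.0, 0.5, 0.9):
    print(f, run(frac_social=f))
```

Even in this stripped-down setting, varying the share of social learners changes both how quickly and how reliably the better option spreads, which is precisely the kind of dependence the mixed empirical literature reports.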
Primary Project Objectives:
- Characterize how social networks and individual beliefs interact so as to increase or reduce the acceptance of new energy policies.
- Discover optimal group sizes, and how they might be encouraged, in order to maximize the efficiency of group decision making.
- Model the adaptive dynamics of learners in communities and their relative success in relation to a variety of different incentives and sources of information provided at different organizational scales.
Two notable characteristics of political and economic life are stasis and transformation. The Annales historian Fernand Braudel described the heterogeneous movements of history as 'what moves rapidly, what moves slowly, and what appears not to move at all.' Over the course of years, some patterns of belief, theories, fashions, firms, and political views are fickle whereas others appear frozen. One key to understanding these dynamic patterns is to think of individuals in relation to the institutions that they create, and how these, in time, come to govern their lives. It is not enough to change public opinion. It is necessary to change the institutional context in which opinions are exchanged and collective decisions reached. In relation to Emergent Engineering, institutions should provide a minimal mechanism enhancing local adaptability and promoting social inclusion.
The idea of the "institution", defined very generally by the Nobel Prize-winning economist Douglass North as the humanly devised constraints that structure political, economic, and social interactions, includes a range of diverse phenomena from social norms and laws to firms, political doctrines, and scientific theories. This space might further be organized into institutions supported by codified laws and policies, including firms and markets, and rule-based abstractions such as scientific theories, fashions, political beliefs, and social norms, encoded largely through collective perception. In an influential 2012 work on institutional economics, Why Nations Fail, the economist Daron Acemoglu and political scientist James Robinson explored the idea that institutional structures are the primary determinant of national welfare. They sought to show how institutions tend to outweigh historical inertia, a diversity of cultural factors, and geographical position in explaining differences in local living standards. For Acemoglu and Robinson, institutions are a set of formal and informal rules and mechanisms for coercing individuals to comply with a larger set of rules that exist in society. And these can either be extractive, thereby excluding certain populations from income, or inclusive, promoting greater fairness. Within this framework social revolution is the primary driver of large-scale change. Therefore, finding some principled means of promoting institutional evolution, without requiring upheavals on the scale of national revolutions, would be of obvious importance.
Institutions are hybrid organizations: part adaptive actor, part legal, economic, and social rule system. Frameworks are required that integrate these components without excessive simplification - such as assumptions of equilibrium - and allow for ongoing learning and reconfiguration. We would like to describe, or perhaps create, institutions in terms of emergently engineered rule systems that promote both variability and adaptability. These institutions need not be coercive but rewarding, and even generative of new societal rules. The question is how to deploy emergent engineering in institution building as a minimal mechanism of sensing and transformation.
Primary Project Objectives:
- Empirical analysis of multi-scale institutional dynamics with a focus on minimal regulatory systems and policies with a demonstrable effect on processes of equitable governance.
- Extension of the theoretical framework to include more explicit models of computation for describing candidate distributed institutional ledgers.
- Normative implications and possible actions to take in order to promote institutional transitions.
Models and algorithms are not crystal balls. Predicting the future needs and behaviors of humans is extremely difficult, whether in the criminal justice system, the progress of an epidemic, or urban growth and neighborhood change. But models can help us articulate the mechanisms behind social phenomena, and the range of possible scenarios that the future might hold. Participatory modeling is a practice where community members build a model together, explore what forces they think are at play, and explore what futures these forces might lead to. This can clarify the discussion of a city's future by asking participants to state, not just what they hope or fear will happen, but what behaviors or incentives will lead to that future, and what interventions might make it more or less likely. In other words, participatory modeling directly engages with the mechanisms generating variability, and clarifies the rewards attached to future actions. Models that focus on the behavior of individual agents, or networks of individuals that interact with each other on an equal basis, are good at explaining "bottom-up" or "self-organizing" phenomena. But they typically fail to include historical and systemic effects such as government policy or the influence of powerful central figures. While formalisms like network theory and agent-based models are currently popular, and of undeniable use, they should always be applied with a grain of salt and a heavy dose of critical thinking. In a network, what (or who) are the nodes, and what are the links between them? In an agent-based model, who are the agents, and what agency do they have over their lives? If agency is limited then the crucial feature of adaptability is lost.
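As an example of the kind of minimal model a participatory session might start from, and interrogate with exactly those questions, here is a sketch of a Schelling-style relocation model of neighborhood change. The grid size, vacancy rate, and tolerance threshold are all illustrative assumptions:

```python
# Schelling-style model: agents of two types relocate to random empty
# cells when too few of their neighbors share their type. Even mild
# individual preferences produce strong segregation at the city scale.
import random

SIZE, P_EMPTY, TOLERANCE = 20, 0.1, 0.3  # grid side, vacancy rate, min same-type share

def neighbors(grid, r, c):
    return [grid[(r + dr) % SIZE][(c + dc) % SIZE]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def unhappy(grid, r, c):
    me = grid[r][c]
    occupied = [n for n in neighbors(grid, r, c) if n is not None]
    if me is None or not occupied:
        return False
    return sum(n == me for n in occupied) / len(occupied) < TOLERANCE

grid = [[None if random.random() < P_EMPTY else random.randint(0, 1)
         for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(50000):  # unhappy agents move to random vacancies
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if unhappy(grid, r, c):
        er, ec = random.randrange(SIZE), random.randrange(SIZE)
        if grid[er][ec] is None:
            grid[er][ec], grid[r][c] = grid[r][c], None

same = [sum(n == grid[r][c] for n in neighbors(grid, r, c) if n is not None) /
        max(1, sum(n is not None for n in neighbors(grid, r, c)))
        for r in range(SIZE) for c in range(SIZE) if grid[r][c] is not None]
print("mean same-type neighbor share:", sum(same) / len(same))
```

Here the participatory questions have sharp answers: the agents are households, their only agency is to move, and policy, landlords, and history are absent, which is exactly the limitation the surrounding discussion warns about.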
We need to democratize the models and algorithms that decision makers use, to make sure they serve our communities as well as possible. Algorithms that are proprietary and opaque should be avoided, especially in sectors such as housing where discrimination is a central concern. More broadly, when city planners, lenders, social service providers, and others collect data, we need to consider whether the data mean what we think they mean, a question already familiar from the criminal justice context. Data collection and modeling are not ends in themselves: the question is not just how best to use them, but if and when to use them. Software-engineered algorithms, parameterized by population data, need to operate within the realistic setting of local populations of diverse adaptive agents. All stakeholders need to understand the strengths and weaknesses, including potential errors, of the models we use, and of those which are used to make decisions about us.
Primary Project Objectives:
- Recognize and measure potential feedback effects in the use of algorithms that advise human decision-making.
- Design mechanisms that restore adaptability in communities concurrent with the introduction of new policies.
- Find ways to make algorithms used in high-stakes decisions such as crime and housing transparent and auditable, and to make their design broadly participatory.
This research is part of a three-year project sponsored by the Robert Wood Johnson Foundation, Grant #81366.