Program Overview
Many challenges in the world today – disease dynamics, collective and artificial intelligence, belief propagation, financial risk, national security, and ecological sustainability – exceed traditional academic disciplinary boundaries and demand a rigorous understanding of complexity. Complexity science aims to quantitatively describe and understand the adaptive, evolvable and thus hard-to-predict behaviors of complex systems. SFI's Complex Systems Summer School has provided early-career researchers with formal and rigorous training in complexity science and integrated them into a global research community. Through this transdisciplinary, highly collaborative experience, participants are equipped to address important questions in a range of topics and find patterns across diverse systems. This program took place June 9 – July 5, 2019 in Santa Fe, New Mexico, USA.
Group Projects
Ian Lim, Housing and Development Board (SG) |
John S. Schuler, George Mason University (US) |
There are many theories of the 2008 financial crisis, which began in the United States and spread throughout the world. The rhetoric around these theories often suggests that they are mutually exclusive, even though there is no logical reason this must be the case. Our goal is to construct an agent-based model in which agents compete for housing, authorities decide how many permits to issue and where, and developers decide to build when it is profitable to do so. Thus, we have an endogenous market to which we can apply policy shocks and compare the outcomes of different policy regimes. The model is intended as a basis on which to add a diversity of preferences and other behavioral properties of agents, possibly motivated by research in psychology and bounded rationality. These results can then be extended to multiple housing markets. Finally, if we add financial markets, we have a platform on which to study the financial crisis.
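To make the setup concrete, the following is a minimal sketch of an endogenous housing market of the kind described above. All class names, parameters, and update rules here are illustrative assumptions for exposition, not the authors' implementation.

```python
import random

class HousingMarket:
    """Toy endogenous housing market: agents bid, authorities issue permits,
    developers build when profitable (illustrative parameters only)."""

    def __init__(self, n_agents=100, initial_stock=80, permit_quota=5, build_cost=90.0):
        self.incomes = [random.uniform(50, 150) for _ in range(n_agents)]
        self.stock = initial_stock          # houses currently available
        self.permit_quota = permit_quota    # permits issued per step by the authority
        self.build_cost = build_cost        # developer's cost per unit
        self.price = 100.0

    def step(self):
        # Demand = agents who can afford the current price.
        demand = sum(1 for inc in self.incomes if inc >= self.price)
        # Price adjusts toward excess demand (simple tatonnement-style rule).
        self.price *= 1 + 0.05 * (demand - self.stock) / max(self.stock, 1)
        # Developers build up to the permit quota when it is profitable to do so.
        if self.price > self.build_cost:
            self.stock += self.permit_quota
        return self.price, self.stock

market = HousingMarket()
for t in range(50):
    price, stock = market.step()
print(f"final price {price:.1f}, housing stock {stock}")
```

A policy shock can then be represented by changing `permit_quota` or `build_cost` mid-run and comparing trajectories across regimes.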
Jacqueline Brown, York University (CA) |
Dakota Murray, Indiana University Bloomington (US) |
Kyle Furlong, MITRE Corporation (US) |
Emily Coco, New York University (US) |
Fabian Dablander, University of Amsterdam (NL) |
A Breeding Pool of Ideas: Analyzing Interdisciplinary Collaborations at the Complex Systems Summer School
Interdisciplinary research is increasingly viewed as necessary to break down disciplinary silos and advance knowledge in areas that span multiple fields of study. Consequently, it is important to understand the factors that facilitate cross-disciplinary collaboration. In this research, we examine the formation of self-organized project groups and the structure of collaboration networks at the Santa Fe Institute’s Complex Systems Summer School for graduate students and professionals from around the world. Our study includes all editions of the summer school from 2005-2019, a dataset comprising 822 participants and 322 projects. We used several methods to evaluate the factors influencing group formation. Our analysis of group homophily by participant discipline suggests that no one discipline is more prone to interdisciplinary collaboration than others. Furthermore, most people tend to work with people from disciplines different from their own, indicating a willingness to engage in interdisciplinary collaboration among participants of all backgrounds. We also used a series of null models to examine how country of study, gender, position, institutional prestige, and discipline of participants influenced group formation. Our results for the latter four were consistent with random mixing; country of study, however, showed evidence of higher-than-expected rates of collaboration between U.S. and non-U.S. participants. In examining the proportion of projects in each discipline in comparison to the number of participants from that discipline, we found that social and behavioral sciences projects were significantly overrepresented. In contrast, physical sciences, engineering, and mathematics and statistics projects were significantly underrepresented. This could be due to several factors, such as a higher baseline level of interest in or knowledge of the social and behavioral sciences, or the common application of methods from the physical sciences, engineering, and mathematics to study topics in other disciplines. Finally, we conducted a survey that returned 167 responses from alumni of the Complex Systems Summer School, the results of which illustrate the program’s profound impact on participants. On the whole, our study suggests that programs such as the Complex Systems Summer School can successfully foster interdisciplinary collaboration by selecting participants from a wide variety of backgrounds and creating space and time for self-organized research.
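The null-model logic described above can be illustrated with a small label-permutation test: shuffle an attribute (here, U.S. vs. non-U.S.) over participants while keeping the observed group structure, and compare the observed level of cross-attribute collaboration to the shuffled distribution. The data below are invented placeholders, not the study's dataset.

```python
import random
from itertools import combinations

# Toy data: 60 participants with a binary attribute, 15 self-organized groups.
participants = {f"p{i}": random.choice(["US", "non-US"]) for i in range(60)}
groups = [random.sample(list(participants), k=random.randint(3, 6)) for _ in range(15)]

def cross_group_pairs(labels):
    """Count within-group pairs whose members have different attribute values."""
    return sum(labels[a] != labels[b]
               for g in groups for a, b in combinations(g, 2))

observed = cross_group_pairs(participants)

# Null distribution: permute attribute labels, keep group memberships fixed.
names, values = list(participants), list(participants.values())
null = []
for _ in range(1000):
    random.shuffle(values)
    null.append(cross_group_pairs(dict(zip(names, values))))

# One-sided p-value for higher-than-expected cross-attribute collaboration.
p_value = sum(n >= observed for n in null) / len(null)
print(observed, p_value)
```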
Travis R. Moore, University of Wisconsin, Madison (US) |
Winnie Poel, Humboldt Universität zu Berlin (DE) |
Glory Dee A. Romo, University of the Philippines Mindanao (PH) |
Ian Lim, Housing and Development Board (SG) |
Robert W. S. Coulter, University of Pittsburgh (US) |
There is an abundance of community-based research literature that incorporates complex systems science concepts and techniques. However, there is currently a gap in understanding how these concepts and techniques are being used and, more broadly, how these two fields complement one another. The debate on how complex systems science meaningfully bolsters the deployment of community-based research has not yet reached consensus; we therefore present a protocol for a new scoping review that will identify characteristics at the intersection of community-based research and complex systems science. This knowledge will enhance the understanding of how complex systems science, a quickly evolving field, is being utilized in community-based research and practice. Note: Helena VonVille, University of Pittsburgh (US), contributed substantially to this project.
Shihui Feng, The University of Hong Kong (HK) |
Kate Wootton, Swedish University of Agricultural Sciences (SE) |
Alec Kirkley, University of Michigan (US) |
Hunter Wapman, University of Colorado, Boulder (US) |
In this study, we explore the structure and dynamics of social interactions among the participants of the 2019 Santa Fe Complex Systems Summer School. Using a unique data set collected through online surveys at five different time intervals, we construct a weighted, directed network based on reported interaction strengths among attendees. Nodal metadata, including relevant occupational and demographic characteristics, were recorded in addition to tie strengths, allowing for a quantitative analysis of mixing patterns in the network over time. We analyze the dynamics of social interactions among all the participants during and after the program to investigate the role of individual characteristics in establishing these social and collaborative research relationships, finding that collaborations are quite diverse and weakly segregated by demographic information. We observe that, despite relatively heterogeneous populations of various attributes, there is relatively weak preferential mixing between participants with respect to these attributes. To extend the work here, it would be interesting to see whether one can use statistical generative models for networks to try to infer missing data from the surveys, which could be tested using cross-validation on a subset of known network data. This preliminary analysis illustrates the strong interdisciplinary character of the program, and we hope that future analysis will yield more insight into the underlying factors driving diversity in research collaborations.
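A standard way to quantify the "weak preferential mixing" described above is attribute assortativity on the interaction network. The sketch below builds a toy weighted, directed network with a categorical node attribute (the fields and parameters are invented) and computes the assortativity coefficient; values near zero indicate weak segregation by that attribute.

```python
import random
import networkx as nx

# Toy weighted, directed interaction network with categorical node metadata.
G = nx.DiGraph()
fields = ["physics", "biology", "social science", "computer science"]
for i in range(40):
    G.add_node(i, field=random.choice(fields))
for _ in range(300):
    u, v = random.sample(range(40), 2)
    G.add_edge(u, v, weight=random.randint(1, 5))

# Assortativity by field: ~0 means weak preferential mixing, >0 means segregation.
r = nx.attribute_assortativity_coefficient(G, "field")
print(f"assortativity by field: {r:.3f}")
```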
Marjoriikka Ylisiurua, University of Helsinki (FI) |
Winnie Poel, Humboldt Universität zu Berlin (DE) |
Many cultural change processes may take decades, so separating any transformative change from “change as normal” typically happens after the fact. Yet if cultures are perceived as self-sustaining organisms subject to constant change, one recurring, transformative change that should be recognizable is the erosion of a culture. In nature, erosion is a gradual process in which external forces dissolve material and move it from one location to another, but it also involves mass wasting, when the gravitational force acting on an eroded soil slope exceeds its resisting force at a critical point. In culture, erosion is usually seen to be caused by external influence that undermines cultural values, as when globalization erodes local or indigenous cultures. Cultural erosion is often marked by a sense of social instability, loss, and anger towards the forces causing the erosion. Yet cultural erosion can also be seen in a neutral light, as a transformation generated by the ordinary operation of structure, a sequence of occurrences that result in transformations of structures, or as "history preparing the sandpile in a state that is far from equilibrium". Online communities appear to be a promising setting in which to assess mechanisms of cultural erosion, but such a focus is scarcely present in previous research. Gradual cultural change is often studied as an outcome of goal-oriented political-social movements with clear ideological boundaries and strategic, contentious mobilization. Yet there are examples of less contentious, gradual online community erosion, such as when online communities lose their users through natural death, so that even Facebook may soon be filled with more dead than living people. In this paper, we present our theoretical ideas, current operationalisation of variables, and interim analysis of mechanisms that lead to cultural reproduction and erosion. We analysed the full history (2001-2017) of the largest Finnish anonymous online discussion board, Suomi24.fi. The board serves as an example of transient and ephemeral online communities that, through lifestyle movements (a less deliberately political mode of sociopolitical coordination), can easily drift into more visible forms of collective action and cultural change. Online communities serve animal activists reviewing vegan restaurants on Yelp, but also individuals asking Python questions on Stack Overflow or following a film star on Instagram or Twitter. Metaphorically, we compare online communities to ecological river banks: transitional boundaries between water and land that frequently change under naturally dynamic conditions. We are intrigued by the potential power and societal function of these threadbarest of social ties, which might serve an integral sustaining function in society.
Jack Shaw, Yale University (US) |
Kate Wootton, Swedish University of Agricultural Sciences (SE) |
Emily Coco, New York University (US) |
Dries Daems, University of Leuven (BE) |
Andrew Gillreath-Brown, Washington State University (US) |
Anshuman Swain, University of Maryland, College Park (US) |
Ecological communities are complex systems, often composed of thousands of interacting species. Advances in mathematical methods are beginning to permit quantitative insights into ecosystem functioning. Network analyses of these interactions, in particular, have unlocked critical features of community structure, stability, and responses to perturbations (e.g., climate change). The geological record offers a multitude of case studies on how life responded to perturbations across different time scales. Studies of ancient interaction networks, as evidenced by fossils, reveal important paleoecological processes. However, the effects of information loss on ancient interaction networks are severely understudied. Many paleoecological studies do not account for key differences between modern and ancient interaction data, including the selective loss of pelagic and soft-bodied taxa during fossilization. To address this topic, we applied an information loss pipeline, modelled on the selective loss of non-biomineralizing taxa during fossilization, to six modern trophic networks (five marine systems and one freshwater system). The structures of these “artificially fossilized” networks were compared with three ancient trophic networks (two marine systems and one lake system) preserved in fossil deposits. A comparison of community-level composition and topology of the food webs, as well as node-specific network metrics, showed that the effects of artificial fossilization on network structure were highly dependent on initial structure and the characteristics of taxa within the network. Some metrics, such as trophic position and omnivory index, displayed unique responses to selective information loss, whereas others, including connectance and degree, were indistinguishable from random loss. It is clear that specific taphonomic biases must be factored into analyses of the ecological structure of food webs based on fossil evidence.
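The core of the "artificial fossilization" pipeline can be sketched as removing assumed non-biomineralizing taxa from a modern food web and comparing a network metric before and after. The toy web, the soft-bodied flags, and the choice of connectance as the example metric are illustrative assumptions, not the study's data.

```python
import networkx as nx

# Toy directed food web (edges point from resource to consumer).
web = nx.DiGraph()
web.add_edges_from([
    ("phytoplankton", "copepod"), ("copepod", "jellyfish"),
    ("copepod", "clam"), ("clam", "crab"), ("crab", "fish"),
    ("jellyfish", "fish"), ("phytoplankton", "clam"),
])
soft_bodied = {"phytoplankton", "copepod", "jellyfish"}  # assumed to rarely fossilize

def connectance(g):
    """Realized fraction of possible directed links."""
    n = g.number_of_nodes()
    return g.number_of_edges() / (n * (n - 1)) if n > 1 else 0.0

fossil_web = web.subgraph(n for n in web if n not in soft_bodied).copy()
print("modern connectance:", round(connectance(web), 3))
print("fossilized connectance:", round(connectance(fossil_web), 3))
```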
Paula Parpart, Warwick Business School (UK) |
Mikaela Akrenius, Indiana University Bloomington (US) |
Economic decisions have often been shown to be inconsistent, sometimes displaying preference reversals, while human perceptual classification performance approaches that of an ideal observer. One recent proposal has been that suboptimal choices may result from efficient coding of decision-relevant information, a strategy that processes expected inputs with higher neural gain than unexpected inputs, using concepts from information theory. In this account, deviations from optimality can be explained by the robust decisions that result from efficient coding. Recent work showed that, in fact, both perceptual and economic decisions can appear suboptimal and context-dependent in environments that are volatile. We propose that the same computational framework, efficient coding, may be able to capture variations in decision-making strategies. Specifically, we argue that the volatility of the environment (stable/variable) may determine whether efficient coding gives rise to decision behaviour that looks more like a full-information strategy (logistic regression) or more like a heuristic (Take-The-Best). We thereby provide a novel explanation for cognitive strategies that relies on a bottom-up approach using only a mechanism (efficient coding) and environmental properties. We propose a simulation study to test the novel predictions that arise, and an experimental learning study to test the account against empirical data. An implication of this work would be that heuristics arise as a natural by-product of the neural code's mechanism. Further, it would imply that robust decision-making need not contradict a Bayes-optimal account.
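For readers unfamiliar with the two strategy classes contrasted above, the sketch below shows how a compensatory full-information rule (a weighted sum over all cues, as a logistic regression effectively uses) can disagree with the Take-The-Best heuristic, which decides on the first discriminating cue in validity order. Cue values and weights are invented for illustration.

```python
# Cues ordered by assumed validity; weights stand in for a fitted full-information model.
cue_order = ["cue_A", "cue_B", "cue_C"]
weights = {"cue_A": 1.0, "cue_B": 0.9, "cue_C": 0.8}

def take_the_best(option_x, option_y):
    """Decide on the first cue (in validity order) that discriminates."""
    for cue in cue_order:
        if option_x[cue] != option_y[cue]:
            return "x" if option_x[cue] > option_y[cue] else "y"
    return "guess"

def full_information(option_x, option_y):
    """Weighted sum over all cues (compensatory strategy)."""
    score = sum(weights[c] * (option_x[c] - option_y[c]) for c in cue_order)
    return "x" if score > 0 else ("y" if score < 0 else "guess")

x = {"cue_A": 0, "cue_B": 1, "cue_C": 1}
y = {"cue_A": 1, "cue_B": 0, "cue_C": 0}
# The strategies disagree when lower-validity cues jointly outweigh the top cue.
print(take_the_best(x, y), full_information(x, y))
```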
Gen Li, California Institute of Technology (US) |
Alec Kirkley, University of Michigan (US) |
Dan Krofcheck, University of New Mexico (US) |
Brennan Klein, Northeastern University (US) |
Mountainous river networks are critical landform systems, hosting 12% of the world’s population and a range of ecosystems, delivering soil and nutrients to downstream floodplains, while also generating hazards that threaten mountain communities. Mountainous rivers typically exhibit tributary channel networks that govern how sediment and fluids are transferred across steep terrains. By adapting concepts from graph theory and information theory, we develop a novel approach to characterize the dynamics of water and sediment transport in mountainous river drainage basins. We calculate the non-local entropy rate (nER), a metric that has been applied to quantify diversity in flux transport in distributive delta networks, for a range of small, first-order river catchments located in different tectonic settings (e.g., Figure 1). We find that river networks in tectonically active regions are characterized by low-to-moderate nER, suggesting that the transport of water and sediment is dominated by a limited set of channels and associated sub-catchments and hillslopes. In contrast, river networks in tectonically quiescent regions have high nER, pointing to more spatially distributed supply and transport behavior. We suggest that this reveals differences in geomorphic processes between these settings: in rapidly uplifting, steep mountains, landslides that occur in only a small proportion of the landscape play a major role in governing fluxes and landform change, whereas in inactive mountains, diffusive transport over the whole landscape controls fluxes and landforms. If this is the case, our findings could be used to delineate and rank areas that are prone to landslide hazards. Overall, this study provides new insights into the delivery pathways and dynamic transport processes of sediment and water in mountainous river systems, as well as a new measure of the form and function of river networks.
Jessica Audrey Lee, Global Viral (US) |
Kirtus Leyba, Arizona State University (US) |
Adam Z. Reynolds, University of New Mexico (US) |
Ritwika VPS, University of California, Merced (US) |
Daniel Borrero, Willamette University (US) |
Pam Mantri, Cognitive Tools Limited LLC (US) |
Evolution of phenotypic diversity and co-operation in a spatially explicit model of microbial populations
Two common explanations for phenotypic heterogeneity in clonal microbial populations are bet-hedging and division of labour. The former posits that it is beneficial for a subset of the population to have reduced fitness in normal environments to increase the probability of survival in stressful environments; a popular example is the existence of antibiotic persister cells. On the other hand, division of labour attributes phenotypic diversity to different subpopulations carrying out different sets of tasks. While bet-hedging can be thought of as a strategy in anticipation of temporally heterogeneous environments, division of labour anticipates heterogeneity in space. However, realistic microbial populations experience both spatial and temporal heterogeneity. In this study, we use a spatially explicit agent-based model (ABM), conceptualised in NetLogo, to investigate the balance between bet-hedging and division of labour as potential drivers of phenotypic heterogeneity in the evolution of a microbial population subjected to periodic toxin pulses as well as dilution and transfer to a new environment. Our phenotype of interest is the rate at which each bacterial cell degrades toxin in the patch that it occupies. Interactions between the agents and their environment are encoded in the form of a noisy toxin signal, which in turn is sensed by the cell with some error (referred to as the cell’s response error), and the rate of toxin diffusion across the simulation space. Each microbial cell is characterised by its health (which declines in the presence of toxin), toxin degradation rate, growth rate, phenotype switching rate, as well as its response error. At each simulation time step, each cell has to apportion energy towards three tasks: sensing the toxin signal (and switching its phenotype appropriately), degrading toxin, and reproducing. Here, we present results from an initial set of in silico experiments on our model, run for 10,000 time steps, to test the effect of three environmental properties – environmental noise ε_e, diffusion rate k, and toxin concentration τ – on the survival and evolution of the microbial population. As expected, toxin concentration is a major predictor of population survival, though populations in low-diffusion environments were better able to survive high toxin concentrations. For populations that survived the entirety of a model run, we found that the toxin degradation rate (largely) stabilised to a single value while the phenotypic switching rate fell to zero, resulting in less diversity in the population than we expected. While more comprehensive parameter sweeps would be necessary to tease out how the evolution of the toxin degradation rate and phenotypic switching rate depends on spatial and temporal heterogeneity, our initial results suggest that it is generally advantageous to have a high toxin degradation rate in high-toxin environments (see figure). In the future, we will refine our model to better study evolution under a larger set of environmental conditions to aid our understanding of what drives the rise of phenotypic diversity.
Erwin Knippenberg, Cooper/Smith (US) |
Andrew Gillreath-Brown, Washington State University (US) |
Dan Krofcheck, University of New Mexico (US) |
Pam Mantri, Cognitive Tools Limited LLC (US) |
Fabian Dablander, University of Amsterdam (NL) |
Alexander Bakus, Shopify (US) |
Over 820 million people currently suffer from hunger, a number that has continued to increase since 2015. Climate change is a major driver of world hunger, particularly in Sub-Saharan Africa. We aim to reconcile resilience of agricultural spaces and resilience of people’s well-being, which are inherently interrelated. To better understand the dynamics that underpin household and community resilience to weather shocks, we relate physical shock dynamics (e.g., flood power law) to social dynamics. We test these concepts using household and village data from southern Malawi (the Chikwawa district). We combine economic, ecological engineering, and archaeological perspectives on resilience to understand the unique conditions under which food security changes through time, and how we might better predict food security. We use statistics, machine learning, remote sensing data, and archaeological data to work on this problem. We focus on the use of the Reduced Coping Strategy Index (rCSI), which is a measure of hunger. A lower rCSI means that a household or village is more food secure, as it has more ways to handle shocks to the system (Figure 1). Since facing climate shocks and other disturbances to a food system, such as conflict and war, is not a new issue, we consider the archaeological record, which provides time depth on different variables that have impacted food security around the world. In Malawi, some people have access to other resources (e.g., shorter distances to markets) and options that allow them to cope when there are shocks to the system. This research is important for understanding what causes high rCSI scores and how the relationship between social and ecological systems impacts rCSI. Thus, as we move forward, we aim to better predict future rCSI in order to mitigate food insecurity and offer solutions for food security.
Mikaela Akrenius, Indiana University Bloomington (US) |
Ethan Nadler, Stanford University (US) |
Mark Chu, Columbia University (US) |
A plethora of work in cognitive science and cognitive psychology has studied the interdependency of perception and higher-level cognition, suggesting that perceptual cues can influence processing at higher levels of cognition and that higher-level cognition is used to organize perception. In addition, it is widely known that training in any domain, such as art, can shape perceptions within that domain. Given these findings, could knowledge in a domain influence the aesthetic perception of knowledge representations in that domain? In other words, would experts perceive visual information differently from non-experts, not only in terms of informational content but also in terms of aesthetics? To study this, we collected a sample of 25 pilot participants (11 male, 13 female, 1 NA; ages 22–58, mean 30.43) recruited from the SFI CSSS19 participant pool and IU Bloomington Cognitive Science and Psychology graduate students. Participants were asked to rate the visual appeal and informational content of 25 images of 2-dimensional strange attractors and to answer a brief background survey that assessed their experience with and education in physics, nonlinear dynamical systems, and art and visual design. The attractors were sampled from five different attractor classes, varied in their informational content, and presented in separate, randomized blocks of trials for the two types of ratings. The majority of participants (18 out of 25) showed a strong positive correlation between aesthetic and informational ratings (ρ ∈ [0.40, 0.90], μ(ρ) = 0.68, p < .05 or lower), indicating that more visually appealing images were also perceived as more informative. This result persisted across participant groups, with no statistically significant difference in the strength of the correlation or in the proportion of participants exhibiting a strong positive correlation between groups. The within-subject variance of aesthetic ratings was significantly higher for participants who had taken a class in nonlinear dynamics (N = 12) than for participants who had not (N = 13), t(22) = –2.27, p = 0.03, and increased with experience in nonlinear dynamics (F(1, 23) = 7.21, p = 0.01) and art and visual design (F(1, 23) = 6.03, p = 0.02), suggesting a potential connection between experience or education and the precision of aesthetic perception. No similar effects were found for the within-subject variance of informational content ratings, nor for the overall magnitude of the ratings. According to individual subject reports, many participants interpreted images in light of their area of expertise (e.g., as topologies or algae formations) and gave both ratings in terms of these interpretations, potentially explaining the positive correlation between ratings regardless of experience with nonlinear dynamics. In addition, actual informational content had a similar correspondence to rated visual appeal and rated informational content (see Figure 1) across all participant groups, suggesting that images of strange attractors may exemplify visual properties that are commonly perceived as both informative and aesthetically pleasing, e.g., due to their resemblance to properties of the natural environment.
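The two main analyses reported above (per-participant rank correlations between the two rating types, and a two-sample t-test on within-subject rating variance) can be reproduced in outline as follows. The ratings and grouping variable are simulated placeholders, not the pilot data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants, n_images = 25, 25
aesthetic = rng.integers(1, 8, size=(n_participants, n_images)).astype(float)
informational = aesthetic + rng.normal(0, 1.5, size=aesthetic.shape)  # correlated by construction

# Per-participant Spearman correlation between aesthetic and informational ratings.
rhos = []
for i in range(n_participants):
    rho, _ = stats.spearmanr(aesthetic[i], informational[i])
    rhos.append(rho)
print("mean Spearman rho:", round(float(np.mean(rhos)), 2))

# Compare within-subject variance of aesthetic ratings between two groups
# (here an invented "took a nonlinear dynamics class" flag).
took_class = rng.random(n_participants) < 0.5
variances = aesthetic.var(axis=1)
t, p = stats.ttest_ind(variances[took_class], variances[~took_class])
print("t =", round(t, 2), "p =", round(p, 3))
```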
Anshuman Swain, University of Maryland, College Park (US) |
Jordi Piñero, Universitat Pompeu Fabra (ES) |
Jack Shaw, Yale University (US) |
Massive extinctions have distinctively shaped the evolutionary history of the biosphere. The fossil record shows that, upon extinction events, many genera undergo a dwarfing process. This phenomenon has been dubbed the Lilliput Effect. In this paper we explore the possible underlying mechanisms behind the trend towards smaller size displayed both at the genus and the family level. We focus on ideas based on current ecological theory, such as metabolic scaling theory and niche models, to explore and explain how extinction events in complex ecosystems might induce the Lilliput effect. We also collect data from multiple previously published studies about size changes and compare the size changes as a function of the severity of the biotic crises.
Henri Kauhanen, The University of Manchester (UK) |
Ritwika VPS, University of California, Merced (US) |
Harun Šiljak, Trinity College Dublin (IE) |
Kenzie Givens, Indiana University (US) |
Pablo M. Flores, University of California, Davis (US) |
Mathematical models of the cultural evolution of language are typically formulated either on a macroscopic level of abstraction, permitting the analytical solution of the model’s governing equations, or, more commonly, on a microscopic level, requiring simulation on the computer. In this paper, we show that such models may be successfully formulated on an intermediate mesoscopic level, where the model remains mathematically tractable (numerically solvable) but complex enough to approximate the complex real-life dynamics of human language in a meaningful way. Our model takes into account four known processes: population dynamics, diffusion dynamics, mutation dynamics and interaction dynamics. We assume a number φ of binary linguistic features, spanning a sequence space of N = 2^φ distinct languages. To model spatial dynamics in a general way, we assume the existence of a spatial network of n nodes: languages reside at the nodes of this network, and spatial diffusion occurs along the network edges. The abundance of language i in node v (roughly, “the number of speakers of this language at this location”) is written as x_iv and is a positive real number. Each node is assumed to have a population carrying capacity K_v depending on local circumstances. The model takes the form of a set of differential equations (one for each language–node combination). These equations feature a logistic growth process governing the population dynamics of the speakers of the language, a diffusion process describing the migration dynamics of the speakers, a single-point mutation term representing endogenous change in language, and a nonlinear interaction dynamics term we developed, whereby linguistic interaction is weighted by the Hamming distance of the interacting languages. To explore the behaviour of the model, we constructed a network representation of habitable places in Australia, with nodes determined by Delaunay triangulation. We then selected a random sample of 200 nodes and solved the system numerically for 32 languages. We observed a quick transition into a steady state of mixed abundances across the nodes, consistent with the linguistic diversity observed in present-day speech communities (Figure 1). These preliminary results demonstrate that a mesoscopic treatment of the complex processes of language dynamics is possible. Future work will include a systematic exploration of the model’s behaviour across the parameter space, and rigorous testing against empirical data. Of particular relevance will be the model’s behaviour when the network is initialized without language apart from a single node with a proto-language. The properties of the transient leading up to the steady state could, in principle, be compared against phylogenetic trees reconstructed in historical linguistics using the comparative method.
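For concreteness, the verbal description above is consistent with governing equations of roughly the following form; this is a hedged sketch in which the rate constants r, δ, μ, α and the weighting function g are illustrative, and the authors' exact functional forms may differ.

```latex
\frac{\mathrm{d}x_{iv}}{\mathrm{d}t}
  = \underbrace{r\, x_{iv}\Bigl(1 - \tfrac{1}{K_v}\textstyle\sum_{j} x_{jv}\Bigr)}_{\text{logistic growth}}
  + \underbrace{\delta \sum_{w:\,(v,w)\in E} \bigl(x_{iw} - x_{iv}\bigr)}_{\text{diffusion along edges}}
  + \underbrace{\mu \sum_{j:\,d_H(i,j)=1} \bigl(x_{jv} - x_{iv}\bigr)}_{\text{single-point mutation}}
  + \underbrace{\alpha \sum_{j} g\bigl(d_H(i,j)\bigr)\, x_{iv}\, x_{jv}}_{\text{Hamming-weighted interaction}}
```

Here x_iv is the abundance of language i at node v, K_v the local carrying capacity, E the edge set of the spatial network, and d_H the Hamming distance between feature vectors of languages i and j.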
John F. Malloy, Arizona State University (US) |
Dakota Murray, Indiana University Bloomington (US) |
Ritwika VPS, University of California, Merced (US) |
Christina Boyce-Jacino, Carnegie Mellon University (US) |
Kyle Furlong, MITRE Corporation (US) |
Mackenzie M. Johnson, University of Texas, Austin (US) |
Andrew Gillreath-Brown, Washington State University (US) |
With the recent rise and popularity of faux-science belief systems, such as the Flat-Earth conspiracy or climate change denial, it is essential that scientists communicate valid information to non-scientists. In addition to the act of communication on the part of scientists, the information must be understood by those receiving it. Here, we investigate how much information is lost, and how information is changed, between scientific papers and various communication methods. We analyze HarriGT, a large database of news stories from the United Kingdom linked to scientific publications, and establish differences in information across three levels: word level, topic level, and semantic level. We operationalize word-level differences using the information-theoretic measures of entropy and K-L divergence, topic-level differences using differences in topic models, and semantic differences using machine learning models of semantic similarity. We will also explore the transfer of information (using the same methods) in a number of case studies, from the dissemination of a single scientific paper to congressional testimonies.
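The word-level measures mentioned above can be illustrated with a minimal sketch: estimate each document's word distribution over a shared vocabulary (with add-one smoothing) and compute Shannon entropy and the K-L divergence between them. The two texts are placeholders, not items from HarriGT.

```python
import math
from collections import Counter

paper = "gene expression regulates protein expression in cells".split()
news = "scientists find genes control proteins in the body".split()

def distribution(tokens, vocab):
    """Smoothed word distribution over a shared vocabulary (add-one smoothing)."""
    counts = Counter(tokens)
    total = len(tokens) + len(vocab)
    return {w: (counts[w] + 1) / total for w in vocab}

vocab = set(paper) | set(news)
p, q = distribution(paper, vocab), distribution(news, vocab)

entropy = -sum(p[w] * math.log2(p[w]) for w in vocab)
kl = sum(p[w] * math.log2(p[w] / q[w]) for w in vocab)
print(f"entropy(paper) = {entropy:.2f} bits, KL(paper || news) = {kl:.2f} bits")
```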
Levi Fussell, University of Edinburgh (UK) |
Kirtus Leyba, Arizona State University (US) |
Jessica A. Lee, Global Viral (US) |
Anshuman Swain, University of Maryland, College Park (US) |
Algorithms that implement information exchange and the optimization of computational processes find a particularly interesting use case in distributed, asynchronous systems with constraints on communication and computational capabilities. There are many instances where such challenges must be addressed, such as the consensus of values in networks with tenuous or even adversarial connections, or the coverage of an environment by mobile autonomous vehicles. It has been shown that many distributed tasks can be unified as the optimization of a specialized form of a more fundamental cost function. The coordination and collective behavior of simplistic robotic agents come with the potential for robust, affordable, and adaptable systems. These challenges fall under the active research field of swarm robotics. Past work has shown that probabilistic models can be adapted to useful control schemes. For example, a swarm might effectively transport objects larger than any single member. While not generally thought of as swarm robots, the related self-organizing particle systems have been shown to effectively optimize various cost functions to achieve a desired behavior even in completely distributed and asynchronous settings. Such behaviors include, but are not limited to, compression and expansion, leader election, and universal coating. These previous approaches are well suited to individual tasks, or to being abstracted to larger, related task classes. Designing an ideal system, a quickly adapting and robust robotic swarm that generalizes to multiple task classes, remains a seemingly daunting problem. For some tasks, tools have been developed to adapt the behavior of the swarm to be specialized for specific environments. The ant-inspired central-place foraging algorithm (CPFA) has been evolved with a genetic algorithm (GA) such that a robotic swarm performs with parameters suited to a specific environment, resource distribution, or environmental noise level. The process of evolving entirely new behaviors instead of the parameters of those behaviors is a difficult challenge, and whether such a method can be successful is still an open question. Biology-inspired optimization of machine behavior has been applied to other systems outside of robotics. Neural networks have successfully been evolved using genetic algorithms in the well-known NEAT project. Furthermore, it has been shown that neural topologies can be effectively evolved in an approach that is agnostic to the weights of the neural networks. Genetic algorithms that operate in a structured space have had success in certain problem cases. A major challenge in this project is to design an effective evolutionary algorithm that operates in real time and accounts for the distributed nature of the robot team. This project investigates the question of how a robot population with limited computational and communication capabilities can simultaneously optimize individual performance and share effective behaviors with other robots. This challenge has many applications in robotics, such as pattern formation in swarms, rendezvous algorithms in localization and mapping, and convergence and consensus algorithms in distributed and noisy environments. We look to biology for inspiration to design a system that has several potentially useful traits. The robots are behaviorally heterogeneous, opening up the possibility of implicit task allocation in multi-task problems. The robots also optimize themselves via simulated evolution.
Our approach must account for the fact that the robot population does not grow or shrink, mechanisms that are important in biological evolution. Additionally, because genetic patterns are mixed into the population, the system has redundancies that may improve robustness to robot failures and sensor error. In this work, we seek to explore such scenarios using inspiration from the biological phenomenon of horizontal gene transfer. Horizontal gene transfer (HGT) refers to the sharing of genetic material between organisms that are not in a parent–offspring relationship (where transmission of genes from parent to offspring is termed "vertical transmission").
Dries Daems, University of Leuven (BE) |
Doug Reckamp, National Defense University (US) |
Ethan Nadler, Stanford University (US) |
Human beings are constantly confronted with new stimuli, information and noise. If we were to actively engage with all of this input, our brains would instantly overload. Instead, we filter incoming information in order to create predictable and understandable patterns that allow us to make sense of the world around us. Science operates in much the same way. To gain knowledge of whatever it is we want to study, we use metaphors and models as pattern-seeking devices. Metaphors and models should not be seen as radically different or mutually exclusive elements. Instead, they are heuristic devices that can be placed along a spectrum of formalism and specificity, from vague and general-purpose to detailed and formalised. The difference is therefore one of degree rather than kind. Metaphors are commonly used to express descriptive views of reality, often in colourful language and using a high degree of abstraction. Models, on the other hand, are more often expressed in mathematical or quantitative terms, using a high degree of formalism to express (causal) relationships. However, as they operate along the same spectrum rather than being qualitatively different, this also means that metaphors can be translated into models when causal relationships are specified and subjected to empirical tests. LINK
Mikaela Akrenius, Indiana University Bloomington (US) |
Elissa Cohen, American University (US) |
Ahyan Panjwani, Yale University (US) |
The contraction of the leverage cycle played a significant role in the collapse of the United States housing market. How consumers drove the expansion and ultimate collapse of this market, however, is less understood. This paper explores how investors’ perceptions about the future state of the housing market impacted the stability of the housing market. We accomplish this by incorporating psychological probability weighting functions into a partial equilibrium leverage cycle model. From our model, we demonstrate how misperceptions about the probability of a downturn resulted in larger declines in equilibrium market prices. Our model also suggests these misperceptions may have reduced the extent of the leverage contraction in the market during the crash. Policy prescriptions based on macroprudential policymaking and dynamic limits on leverage are also discussed.
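The "psychological probability weighting functions" mentioned above are a standard ingredient of prospect theory; one widely used functional form is that of Tversky and Kahneman (1992), sketched below. Whether this specific form and parameter value are what the model uses is an assumption made here for illustration.

```python
def tversky_kahneman_weight(p, gamma=0.61):
    """Probability weighting function of Tversky & Kahneman (1992).

    With gamma < 1, small probabilities are overweighted and moderate-to-large
    probabilities are underweighted, the kind of misperception of downturn
    probabilities discussed above. gamma=0.61 is their published estimate for gains.
    """
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.10, 0.50, 0.90):
    print(p, round(tversky_kahneman_weight(p), 3))
```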
Jessica Brumley, National Research Council (US) |
Catherine Brinkley, University of California, Davis (US) |
Gen Li, California Institute of Technology (US) |
Ian Lim, Housing and Development Board (SG) |
Water use and urban development are tightly coupled, but the scaling properties of this relationship have not been explored despite the sustainability implications for food supply and urban growth. We show that water withdrawals (Q_0) scale with both population and agricultural demand, the latter of which scales with available area (r^2 = 0.4976; F(2, 2009) = 997.1, p < 0.0001). Available flow rate (Q_p) scales with rainfall depth, mean rainfall and county population (r^2 = 0.6511; F(2, 2009) = 1878, p < 0.0001). Where Q_0 and Q_p intersect, we find the limits to available water use. We find that the average absolute upper bound (the intersection of Q_0 and Q_p) for the nation was (12.379, 9.616). This indicates that the nationwide upper bounds for water use and availability have not yet been surpassed. Certain counties, however, have reached or exceeded their thresholds. Such counties rely more on rapidly depleting groundwater sources. Our findings point to important bounds and tradeoffs for growing food production and urban areas in the context of limited fresh water supplies.
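Scaling relationships of the kind reported above are typically estimated by fitting a power law in log-log space. The sketch below shows that step on simulated county data; it is not the USGS withdrawal data used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated county populations and withdrawals with a built-in power-law relation.
population = rng.lognormal(mean=10, sigma=1.5, size=2000)
withdrawals = 0.002 * population**0.9 * rng.lognormal(0, 0.3, size=2000)

# Ordinary least squares on log-transformed data recovers the scaling exponent.
slope, intercept = np.polyfit(np.log(population), np.log(withdrawals), 1)
print(f"estimated scaling exponent beta ≈ {slope:.2f}")
```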
Alexander Bakus, Shopify (US) |
Erwin Knippenberg, Cooper/Smith (US) |
Chris Quarles, University of Michigan (US) |
Patrick Steinmann, Wageningen University and Research (NL) |
Kate Wootton, Swedish University of Agricultural Sciences (SE) |
Food webs represent the flow of energy and material along food chains. Humans are often omitted from food webs because they affect the interactions of other entities (for example, by protecting livestock from predators) or interact with other entities in complex ways, such as milking, animal husbandry or sustainable harvesting. Such actions do not fit into conventional webs, but their omission makes it difficult to investigate the ecological impacts of humans in complex ecosystems. We propose extending conventional unipartite food webs, which represent entities, to bipartite webs, which represent both entities and actions. This allows non-consumptive actions to be represented and quantified using many common food web metrics. We demonstrate our proposed method with a case study based on synthetic data from a popular video game.
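A minimal sketch of the proposed representation: one node set for entities and one for actions, with edges linking each action to the entities involved in it. The example entities, actions, and links are invented for illustration.

```python
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
entities = ["human", "cow", "wolf", "grass"]
actions = ["milking", "grazing", "protecting", "predation"]
B.add_nodes_from(entities, bipartite=0)
B.add_nodes_from(actions, bipartite=1)
B.add_edges_from([
    ("human", "milking"), ("cow", "milking"),
    ("cow", "grazing"), ("grass", "grazing"),
    ("human", "protecting"), ("wolf", "protecting"),
    ("wolf", "predation"), ("cow", "predation"),
])

# Projecting back onto entities recovers a conventional unipartite web for comparison.
projected = bipartite.projected_graph(B, entities)
print(sorted(projected.edges()))
```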
April S. Kleppe, University of Münster (DE) |
Mackenzie M. Johnson, University of Texas, Austin (US) |
Ludvig Holmér, Stockholm School of Economics (SE) |
Keith Smith, University of Edinburgh (UK) |
Anshuman Swain, University of Maryland, College Park (US) |
Brennan Klein, Northeastern University (US) |
Laura Stolp, University of Amsterdam (NL) |
Douglas Reckamp, National Defense University (US) |
In evolutionary biology, much attention has been given to how adaptive evolution alters traits. In a classic understanding of adaptive evolution, mutations gradually alter existing features over long time spans. However, any alteration to a functioning system also carries the risk of ruining the system at hand. There is an evolutionary balancing act, between maintaining functionality and simultaneously acquiring novelty, in order to adapt to constantly changing environments. Much work has been dedicated to disentangling how evolutionary novelty may emerge from existing features without wrecking the cellular environment already in place. However, while much research addresses where novel features emerge from (e.g., gene duplication, de novo origination), little attention has been given to how novelty may become integrated as part of the cellular system. Whether or not a protein is able to engage with a given network, without ruining the network's biological function, should at least in part depend on the network's topological resilience. In other words, the evolutionary success of a novel protein should in part be reflected by the system-level properties of the existing network. We make use of network science to infer the protein interaction network's resilience (see attached figure). Following a computational study by Zitnik et al. (2019), we predict that biological resilience enables tolerance to perturbations. We compute the change in the resilience of the networks in the presence of newly added nodes, under three different node addition mechanisms. We show that adding nodes in a biologically inspired manner (as opposed to random or degree-based attachment) preserves the original resilience of the network structure. Further, this holds in the three species regardless of (i) the different distributions of gene expression values and (ii) different network community organization. These findings introduce a network-general notion of prospective resilience, which highlights the key role that network structure can play in building our understanding of the evolvability of a given phenotypic trait of a species.
Arta Cika, University of Oxford (UK) |
Elissa Cohen, American University (US) |
Germán Kruszewski, Facebook (US) |
Luther Seet, Centre for Liveable Cities (SG) |
Patrick Steinmann, Wageningen University and Research (NL) |
Wenqian Yin, The University of Hong Kong (HK) |
Complex systems can exhibit autopoiesis - a remarkable capability to reproduce or restore themselves to maintain existence and functionality. We explore the resilience of autopoietic patterns - their ability to recover from shocks or perturbations - in a simplified form in Conway’s Game of Life. We subject a large number of autopoietic patterns in the Game of Life to various perturbations, and record their responses using multiple resilience metrics. Our results show that while resilience is rare, we are able to identify structural features improving patterns’ resilience. We also draw several parallels between the resilience of patterns in the Game of Life to real-world complex systems. Our work may be useful both for improved searching for resilient patterns in the Game of Life, and for exploring specific and general resilience in complex systems.
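The perturbation experiment described above can be sketched in a few lines: run a pattern under the Game of Life rules, flip one cell, and check whether the original configuration is restored within a fixed horizon. The block still life and the single-cell perturbation below are illustrative; the study's patterns and resilience metrics are richer.

```python
import numpy as np

def step(grid):
    """One synchronous Game of Life update on a toroidal (wrapping) grid."""
    n = sum(np.roll(np.roll(grid, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

grid = np.zeros((10, 10), dtype=int)
grid[4:6, 4:6] = 1                      # a 2x2 block (a still life)
reference = grid.copy()

perturbed = grid.copy()
perturbed[4, 6] = 1                     # single-cell perturbation next to the block

for _ in range(10):
    perturbed = step(perturbed)
# Report whether the perturbed pattern returned to the original within 10 steps.
print("recovered:", np.array_equal(perturbed, reference))
```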
Fabian Dablander, University of Amsterdam (NL) |
Anton Pichler, University of Oxford (UK) |
Arta Cika, University of Oxford (UK) |
Andrea Bacilieri, University of Oxford (UK) and Institute for New Economic Thinking |
Dynamical systems such as lakes and the climate can exhibit sudden shifts from one stable state to another, and recent work points to the existence of supposedly generic early warning signals that precede such shifts. There has been renewed interest in a dynamical systems perspective from psychologists who have begun to conceptualize mental disorders such as depression as an alternative stable state. Several researchers have argued that early warning signals — most prominently critical slowing down — might hold exceptional promise for predicting transitions into depression or other psychiatric disorders. Moreover, dynamic resilience indicators, which are intimately tied to critical slowing down, have been proposed. The theory behind critical slowing down is nuanced, however, and explanations are scattered across the ecology and physics literature. Moreover, the statistical application of early warning signals is complicated and lacks standards. In this paper, we (a) provide a gentle explanation of the theory behind critical slowing down as well as its limitations; (b) review statistical challenges in applying early warning signals in practice and provide guidelines; (c) re-evaluate previous claims supporting critical slowing down in psychopathology in light of this statistical background; and (d) study the potential of various early warning signals to anticipate critical transitions by simulating from a bi-stable dynamical system, varying crucial features such as sampling frequency, length of baseline data, and speed of approaching the tipping point. In the process, we suggest methodological improvements compared to the standard application of early warning signals. In light of our results, we discuss the challenges and opportunities of applying early warning signals in psychology.
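The standard early warning signals discussed above, rising lag-1 autocorrelation and variance as a tipping point is approached, can be illustrated with a simulated time series whose restoring force slowly weakens. The dynamics and window size below are illustrative, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
control = np.linspace(1.0, 0.0, n)       # stability slowly erodes over time
x = np.zeros(n)
for t in range(1, n):
    # Noisy linear relaxation toward 0 whose restoring rate weakens as control -> 0.
    x[t] = x[t - 1] - control[t] * x[t - 1] * 0.1 + rng.normal(0, 0.05)

window = 200
ac1 = [np.corrcoef(x[i - window:i - 1], x[i - window + 1:i])[0, 1]
       for i in range(window, n)]
var = [x[i - window:i].var() for i in range(window, n)]
# Both indicators rise toward the end of the series: the signature of critical slowing down.
print("autocorrelation early vs late:", round(ac1[0], 2), round(ac1[-1], 2))
print("variance early vs late:", round(var[0], 4), round(var[-1], 4))
```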
William Braasch, Dartmouth College (US) |
Pavel Chvykov, Massachusetts Institute of Technology (US) |
Levi Fussell, University of Edinburgh (UK) |
Xin Ran, Dartmouth College (US) |
Chiara Semenzin, The University of Edinburgh (UK) |
While agent-based models (ABMs) are becoming a dominant tool for studying the emergence of social phenomena, they still seem inaccessibly far from capturing the immense complexity of social realities. Thus, ever more complicated ABMs are developed, hoping to bridge this gap by better accounting for the complexity of individuals. In this work, we illustrate an opposing view, suggesting that complexity can arise from the appropriate choice of collective variables to study, even in the simplest ABMs -- shifting the focus from generating the data to interpreting the data. In particular, we simulate a very simple ABM -- similar to the Axelrod model, but with no homophily -- and rather than analyzing the data directly, focus on the semantic network emergent in the culture. We explore how social network topology affects the emergence of ideological structures, and entertain questions regarding complex concept formation, "memetic" evolution, and the influence of globalization. In this way, we illustrate that a variety of rich sociological questions can be addressed even in a simple model by taking the appropriate perspective on the data. LINK
Douglas Guilbeault, The University of Pennsylvania (US) |
Ethan O. Nadler, Stanford University (US) |
Mark Chu, Columbia University (US) |
Donald Ruggiero Lo Sardo, Medical University of Vienna (AT) |
Aabir Abubaker Kar, Independent |
Bhargav Srinivasa Desikan, University of Chicago (US) |
Color is used in everyday language to describe both concrete and abstract concepts, yet extant methods in automated content analysis focus almost exclusively on textual associations and are therefore limited in their ability to detect semantic associations between color and abstract concepts. The few approaches that incorporate image data in word comparisons leverage many (often uninterpretable) image features and operate in standard colorspaces that do not reflect human perception. Here we develop a novel unsupervised method for automated content analysis that associates words with the color distributions of their Google Image search results. Crucially, we measure color using a transformation of the colorspace that emulates human color perception. We find that words within three semantic domains—academic disciplines, emotions, and music genres—cluster in a statistically significant fashion according to their Google Image color distributions, and that statistical relationships among color distributions capture the underlying semantic structure of each domain. Moreover, using the lexical database WordNet, we show that this clustering is semantically coherent. In particular, we find that images associated with more abstract words exhibit higher variability in colorspace, and that semantically similar words have more similar color distributions. These findings support recent theories of multimodal cognition, which invoke the use of color in common linguistic metaphors to argue that humans harness sensory information like color to represent abstract concepts.
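One common way to approximate human color perception is to work in CIELAB, where Euclidean distances track perceived color differences better than in RGB; the sketch below uses it as a stand-in for the study's own colorspace transformation, which may differ. The "images" are random arrays rather than Google Image results.

```python
import numpy as np
from skimage.color import rgb2lab

rng = np.random.default_rng(3)
# Placeholder stacks of pixels standing in for images retrieved for two concepts.
image_concept_a = rng.random((50, 50, 3))
image_concept_b = rng.random((50, 50, 3))

# Convert RGB pixels to CIELAB and summarize each concept by its mean Lab color.
lab_a = rgb2lab(image_concept_a).reshape(-1, 3).mean(axis=0)
lab_b = rgb2lab(image_concept_b).reshape(-1, 3).mean(axis=0)

# A simple perceptually motivated distance between the two concepts' color summaries.
print("Lab distance:", float(np.linalg.norm(lab_a - lab_b)))
```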
Ernest Aigner, Vienna University of Economics and Business (AT) |
Jackie Brown, York University (CA) |
Kyle Furlong, MITRE Corporation (US) |
David Gier, University of California, Davis (US) |
Ludvig Holmér, Stockholm School of Economics |
Ritwika VPS, University of California, Merced (US) |
We simulate climate change belief dynamics on a social network of agents connected via a degree-corrected stochastic block model. Belief dynamics are modeled with internal updating rules in which an individual’s propensity to change their belief, the magnitude of social influences, and the particular social rules that individuals use to consider new beliefs can all be varied. Informed by research into the determinants of belief in climate change, we consider different model scenarios. Under each scenario we examine the impact of extreme weather events on the average belief in climate change. The results show that different network structures strongly influence simulation outcomes, and climate shocks cause a noticeable increase in belief in all cases, which is persistent for the random-copying social rule and fades for the average-belief social rule.
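The setup can be sketched as follows: agents on a block-structured network repeatedly move their belief toward their neighborhood average, with a one-off "extreme weather" shock. networkx's plain stochastic block model stands in for the degree-corrected SBM used in the study, and all parameters are illustrative.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
# Two communities with dense within-block and sparse between-block ties.
G = nx.stochastic_block_model([50, 50], [[0.15, 0.01], [0.01, 0.15]], seed=4)
belief = rng.random(G.number_of_nodes())            # belief in climate change, in [0, 1]

for t in range(100):
    new = belief.copy()
    for i in G.nodes():
        nbrs = list(G.neighbors(i))
        if nbrs:
            # Average-belief social rule: move partway toward the neighborhood mean.
            new[i] = 0.8 * belief[i] + 0.2 * belief[nbrs].mean()
    belief = new
    if t == 50:                                      # one-off extreme weather shock
        belief = np.clip(belief + 0.2, 0, 1)

print("mean belief after simulation:", round(float(belief.mean()), 3))
```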
Andrew Gillreath-Brown, Washington State University (US) |
Jeongki Lim, Parsons School of Design (US) |
Harun Siljak, Trinity College Dublin (IE) |
Science fiction is immensely popular, particularly over the last two decades; more than half of the top domestic-grossing movies of the 2010s were science fiction. Many scientists, including the authors, share this enthusiasm for Sci-fi, so why not apply various research techniques to different Sci-fi worlds? We want to create a Sci-fi computational community, where we can explore different worlds and understand how different phenomena emerge in these worlds. The Sci-Fi Agent-based Modeling Anthology is an open-source and fully volunteer-based project; thus, we also hope to find new collaborators who can take on different stories with their unique approach. Participants explore their favorite science fiction stories with new media such as agent-based modeling or even differential equations. We believe that this project will get the attention of a new audience and bring them into the science fiction genre. Here, we explore our favorite science fiction stories (i.e., Fahrenheit 451 and 1984) with these new media and hope to bring them to a new audience. At the beginning of the 1984 model, there are big spikes in thoughtcrime, which could easily bring about successful revolutions (Fig. 1). However, as time passes, the system becomes much more stable, and the probability of a big unrest decreases. So, if we wake up in a 1984 world, we must act immediately. In the Fahrenheit 451 model, the amount of surveillance and the number of people defecting can have a major impact on the number of bibliophiles. We hope that the application of computational complexity science to these Sci-fi dystopian worlds will help us learn more about our own world, our own histories, or even potential future trajectories. For example, how easily could our society turn into a 1984 world, or vice versa? These models give you a sense of control: you can change the variables. You are no longer passively watching science fiction; you are actively engaging with it, changing it, seeking to understand something meaningful. LINK
Pam Mantri, Cognitive Tools Limited LLC (US) |
Glory Dee Romo, University of the Philippines Mindanao (PH) |
Wenqian Yin, The University of Hong Kong (HK) |
Heretofore, the complexity in complex systems (physical, biological, or socio-technical) has predominantly been studied as intrinsic to the evolving system. It could, however, also be considered an epistemological transient. This project surveys Prof. Novak's Concept Mapping approach [1] as an epistemological means of taming complexity. Concept Mapping may also be viewed from a Complex Adaptive Systems framework [2]. Examples that illustrate the concept mapping approach include chapters 1-7 of Prof. Mitchell's book Complexity: A Guided Tour [3], the biochemistry of respiration and photosynthesis, electrical power systems, small-business management, and value-chain modelling. With better tooling, that which is complex today may one day be child's play.
Christopher Quarles, University of Michigan (US) |
Wenqian Yin, The University of Hong Kong (HK) |
J. Pablo Franco, University of Melbourne (AU) |
Jordi Piñero, Universitat Pompeu Fabra (ES) |
Brennan Klein, Northeastern University (US) |
Every entity has a limited capacity to process information. So, when there is too much information, entities need to make conscious or unconscious decisions about what information to exclude. At various times throughout natural and human history, the amount of information available has increased substantially. For example, in the past 100 years, we have moved from a more place-based society to a more connected one. This project explores what happens to a group of people when there are increases in the amount of information available. As available information increases, we expect individuals will need to be more selective about receiving the information that is most useful. In addition, individuals need to decide whom to trust [1], which makes the decision-making process even more complex. Does this filtering lead to increased segregation and/or specialization in social and/or biological systems? We examine these questions using a network model in which nodes update their beliefs. Our results suggest that beliefs converge in the long run. However, the rate of convergence is highly related to the communication structure in the group.
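Belief convergence on a network can be illustrated with a DeGroot-style sketch in which each node repeatedly averages its belief with those of the nodes it listens to; the random communication structure and trust weights below are invented, and whether this is the exact updating rule used in the project is an assumption.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
A = (rng.random((n, n)) < 0.1).astype(float)        # who listens to whom (random structure)
np.fill_diagonal(A, 1.0)                             # everyone also listens to themselves
W = A / A.sum(axis=1, keepdims=True)                 # row-stochastic trust weights

belief = rng.random(n)
for step in range(200):
    belief = W @ belief                              # repeated weighted averaging

# The spread of beliefs shrinks over time; how fast depends on the communication structure.
print("belief spread after 200 steps:", float(belief.max() - belief.min()))
```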
Travis Moore, University of Wisconsin, Madison (US) |
Anton Pichler, University of Oxford (UK) |
Xin Ran, Dartmouth College (US) |
Keith Smith, University of Edinburgh (UK) |
Yuka Suzuki, Okinawa Institute of Science and Technology Graduate School (JP) |
The Erdős-Rényi (E-R) random graph ensemble is of fundamental importance in modern graph theory and network science. It is constructed by selecting m links uniformly at random across n nodes. It has straightforward statistical properties, and it has been shown that, for large enough n, realisations of the E-R random graph ensemble are uniform over the graph isomorphism classes with n nodes and m links. Given this, one quite surprising observation about this ensemble is that there is a strong degree of agreement between topological measurements of any given realisation and the expected topological value over the entire ensemble, even though it is possible to construct, for given n and m, specific examples of graphs with wildly varying topological measurements. This suggests that the vast majority of isomorphism classes of graphs take up a tiny amount of ‘interesting’ topological space and, contrarily, that most interesting graphs, such as lattices, regular graphs, star graphs, and real-world networks, are highly unusual. In light of this, a view of graph topology concerned with interesting and practically relevant topological traits, indexed by topological metrics, is commonly adopted. With respect to this view, the E-R random graph is not topologically uniform at all but, it can be deduced, rather biased towards selecting topologies from a very small region of interesting topological space. In contrast to its narrow topology, determining missing links of an E-R random graph is a worst-case scenario, since all links are determined independently and uniformly at random. That is, if one were to remove a link from an E-R random graph and ask someone to guess where the missing link is, every single non-existent link would be equally likely, simply from the probabilistic definition of the ensemble. Here, we detail an ensemble with precisely the opposite attributes to the E-R random graph. That is, we construct an ensemble whose realisations have highly variable topologies while having collectively determined links, such that if one were to remove a link, the generative mechanism of the ensemble would provide the guesser with the information necessary to precisely locate from where it was taken. The ensemble is based on a binary sequence encoding the Fibonacci sequence, which is truncated to fit the number of possible edges in a given graph with n nodes. To generate a graph, a random point of this sequence is chosen and the sequence is cut like a deck of cards: the entries after the chosen point are moved to the front of the sequence. The new sequence is then folded into a graph adjacency matrix of size n. The outcome is a graph for which a single missing link can be completely determined by checking the adjacency matrix, and yet whose topology is highly dissimilar to the other members of the ensemble based on standard topological metrics (Fig. 1). We shall extend this work by considering the topological diversity of graph subsamples and the accuracy of graph subsample topology for explaining global graph topology, applying these concepts to aid our understanding of the diversity across and within real-world networks.
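The cut-and-fold mechanism described above can be sketched as follows. How exactly the Fibonacci sequence is encoded as a binary string is an assumption here (concatenated binary expansions of successive Fibonacci numbers); the truncation, random cut, and folding into an adjacency matrix follow the verbal description.

```python
import random
import numpy as np

def fibonacci_bits(length):
    """Concatenated binary expansions of Fibonacci numbers, truncated to `length` bits."""
    bits, a, b = "", 1, 1
    while len(bits) < length:
        bits += format(a, "b")
        a, b = b, a + b
    return bits[:length]

def sample_graph(n, seed=None):
    rng = random.Random(seed)
    m = n * (n - 1) // 2                      # number of possible undirected edges
    seq = fibonacci_bits(m)
    cut = rng.randrange(m)
    seq = seq[cut:] + seq[:cut]               # cut the deck at a random point
    A = np.zeros((n, n), dtype=int)
    A[np.triu_indices(n, k=1)] = [int(c) for c in seq]   # fold into the upper triangle
    return A + A.T                            # symmetrize to get an undirected graph

A = sample_graph(20, seed=7)
print("edges:", int(A.sum()) // 2)
```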
Kunaal JoshiPurdue University (US) |
Anshuman SwainUniversity of Maryland, College Park (US) |
Kazuya HoribeOsaka University (JP) |
The mechanisms by which cells control their size, maintain size homeostasis and decide the time of division are important biological problems that remain unsolved. In past literature, cell-size homeostasis and cellular division have been explored through two dominant viewpoints: 'sizer', where cells actively track their size and trigger the cell cycle once they cross a certain critical size, and 'timer', where cells grow for a stipulated time before dividing. These ideas, along with the 'growth law' and quantitative models of the bacterial cell cycle, inspired numerous theoretical models and experimental investigations, from predictions and exploration of growth to linking the cell cycle and size control. However, the experimental evidence involved difficult-to-verify assumptions or population-averaged data, which allowed different interpretations or limited conclusions. In particular, population-averaged data and correlations are inconclusive, as the averaging process masks causal effects at the cellular level. Recent experiments have shown that most cells follow neither of these models and that their dependence on initial size is not trivial. This dependence is very important for maintaining cell-size homeostasis, but these models cannot adequately explain the ecological and biological reasons behind cell division and what advantage these strategies bring to the cells. Moreover, different cell types show different dependences on initial size in the decision to divide, and we currently do not know why these differences exist. In this work, we aim to find out why these different models for division arise in different cell types and whether cells obtain some evolutionary advantage from following them in their natural environment. To achieve this objective, we simulated different scenarios corresponding to the natural environments of different cell types and used growth and metabolism laws collected from the existing literature to simulate the growth of cells in these environments. To model the division decision, we used a neural-network decision-making scheme that can mutate upon cell division. In particular, we explore the final size of a cell at division as a function of its initial size. We expect that, after a long time, the surviving cells will be the fittest in the given environment, and their division dynamics will help shed light on our problem.
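As an illustration only, and under assumptions not stated in the abstract (exponential growth, symmetric division, Gaussian weight mutations), a lineage with a heritable, mutating neural-network division rule might be sketched as follows; selection within an environment is omitted.

```python
# Minimal sketch (assumptions, not the project's implementation): a tiny
# neural network maps birth size to a target division size, and the network
# weights mutate at each division.
import numpy as np

rng = np.random.default_rng(0)

def decide_division_size(birth_size, w):
    """One hidden layer mapping birth size -> target size at division."""
    h = np.tanh(w["W1"] @ np.array([birth_size, 1.0]))
    return birth_size + np.exp(w["W2"] @ h)      # added size is kept positive

def mutate(w, scale=0.05):
    return {k: v + rng.normal(0.0, scale, v.shape) for k, v in w.items()}

def simulate_lineage(generations=20):
    w = {"W1": rng.normal(size=(4, 2)), "W2": rng.normal(size=(4,))}
    size = 1.0
    for _ in range(generations):
        target = decide_division_size(size, w)
        size = target / 2.0                      # symmetric division
        w = mutate(w)                            # heritable mutation of the decision rule
        yield size

print([round(s, 2) for s in simulate_lineage()])
```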
Levi FussellUniversity of Edinburgh (UK) |
Emily CocoNew York University (US) |
Anshuman SwainUniversity of Maryland, College Park (US) |
Leaves serve an important function by providing an interface between plants and their environment for gas exchange, light exposure and thermoregulation, and therefore contribute substantially to plant fitness. This intricate relationship between a leaf and its environment has led to massive diversification in leaf morphology, which results from balancing the maximization of energy uptake and resource utilization against the minimization of damage from environmental stresses. Leaf shape varies among species that are subjected to different environmental conditions. For instance, the extent of leaf-margin dissection has long been found to correlate inversely with mean annual temperature, such that paleobotanists have used models based on leaf shape to predict paleoclimate from fossil flora. Leaf growth is not only dependent on temperature but is also regulated by many other environmental factors, such as light quality, nutrient availability and humidity, along with many genetic factors. These morphological patterns are usually codified manually and are standalone ways of trying to depict interconnected and complex parameters. In this work, we utilize complexity measures, which can compress many features of leaf shape into one statistic, and use them to further delve into how these morphological changes affect, or are affected by, plant traits. These statistics have mostly been used in the past for differentiating leaf specimens at the species level and can thus serve as an identification metric in the field. The ability of these statistics, especially fractal dimensionality, to inherently distinguish various qualities of leaf morphology could enable us to explore the ecology and evolution of some of these shapes in relation to the environmental factors that might have played some role in determining them. In this work, we find spatial, phylogenetic and trait correlations with the fractal dimensionality of leaves taken from trees across the continental United States.
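The abstract does not state which estimator of fractal dimensionality was used; the sketch below shows the standard box-counting estimate applied to a binary leaf-outline image, purely for illustration.

```python
# Hedged sketch: box-counting estimate of the fractal dimension of a binary
# outline image. This is the generic estimator, not the project's pipeline.
import numpy as np

def box_counting_dimension(outline, box_sizes=(2, 4, 8, 16, 32)):
    """`outline` is a 2-D boolean array marking margin pixels."""
    counts = []
    for s in box_sizes:
        # Trim so the image tiles evenly into s x s boxes, then count boxes
        # containing at least one outline pixel.
        h, w = (outline.shape[0] // s) * s, (outline.shape[1] // s) * s
        boxes = outline[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    # Slope of log(count) versus log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy example: the outline of a disk should give a dimension close to 1.
yy, xx = np.mgrid[:256, :256]
disk = (xx - 128) ** 2 + (yy - 128) ** 2 < 100 ** 2
outline = disk ^ np.roll(disk, 1, axis=0)     # crude one-pixel margin
print(round(box_counting_dimension(outline), 2))
```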
Merveille Koissi SaviaZentrum für Entwicklungsforschung (DE) |
Bhartendu PandeyYale University (US) |
Anshuman SwainUniversity of Maryland, College Park (US) |
Jeongki LimParsons School of Design (US) |
In the last two decades, tremendous efforts have been made by governments and private investors in West African (WA) endemic regions to reduce the malaria incidence rate. Due to the high rates of urbanization in the region and known associations between urbanization and malaria epidemiology, we suspect a change in the disease pattern. However, the association between urbanization and the spatio-temporal pattern of malaria is insufficiently documented. Ghana is located in the endemic WA region and is rapidly urbanizing. This study aims to assess the influence of urbanization on malaria incidence in Ghana and to inform current and future decision-making. We used a time-series dataset of self-reported malaria cases (2015-2018) from the District Health Information Management System, aggregated by sex and age group at the district level. We then applied a series of aspatial and spatial quantitative analysis methods to the dataset. Our results show significant heterogeneity in malaria incidence across time and space. We find that the number of malaria cases is increasing at an average rate of 3,061 incidences per month. Self-reported cases are highest among children under five in most districts in the country. In contrast, we find that in large urban centers, such as Kumasi and the Greater Accra Metropolitan Region, females aged between 20 and 34 had the highest number of cases. Our results show a statistically significant correlation between the degree of urbanization, measured using satellite and census data, and both self-reported total cases and total cases for the 20-34 age group. Additionally, we find that inter-age and inter-age-sex disparities in self-reported cases are lower in urban areas. This points to the greater efficiency of urban areas in serving the healthcare needs of the population. However, as Ghana's population urbanizes, it also suggests that urban healthcare systems may become more stressed unless urban healthcare infrastructure expands accordingly. In summary, our study stresses that cities in Ghana will need to adapt to changing social and physical environments to address the increasing complexities of malaria disease dynamics.
Jeongki LimParsons School of Design (US) |
Christina Boyce-JacinoCarnegie Mellon University (US) |
Dakota MurrayIndiana University Bloomington (US) |
John F. MalloyArizona State University (US) |
Ignacio GarnhamParsons School of Design (US) |
Pablo M. FloresUniversity of California, Davis (US) |
Douglas ReckampNational Defense University (US) |
Ethnographers, designers, and other qualitative researchers often analyze small-scale text content, such as interview transcripts. However, tools for text analysis are geared towards the study of large-scale textual corpora and so have not been applied to qualitative text analysis. Here, we propose a method using pre-trained word2vec (an algorithm for generating continuous vector representations of words from large corpora) for the visualization of small texts. Specifically, we use these embeddings to construct semantically organized networks. These network visualizations are intuitive, tuneable, and offer a new means by which to gain insights about the semantic structure of a text. We demonstrate the utility of this approach by applying it to the text output of a design workshop hosted at the Santa Fe Institute in the summer of 2019. The resulting semantically organized network reveals the key structure of the topics and terms that emerged from the workshop, which would not have been apparent through traditional means. We compare the semantically organized network to spatial embeddings and find an advantage in the intuitive relational structure of the network. We discuss how this technique might be applied beyond design to the visualization of interviews, political speeches, online content analysis, and more. LINK
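A minimal sketch of the general construction, under the assumption that edges link term pairs whose word2vec vectors exceed a cosine-similarity threshold; the toy embeddings dictionary below stands in for vectors loaded from a pre-trained word2vec model, and the thresholding rule is illustrative rather than the authors' exact pipeline.

```python
# Hedged sketch: connect terms whose (pre-trained) word2vec embeddings are
# similar, yielding a semantically organized network of a small text.
import numpy as np
import networkx as nx

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_network(terms, embeddings, threshold=0.4):
    """Nodes are terms; edges link pairs above a cosine-similarity threshold."""
    g = nx.Graph()
    g.add_nodes_from(terms)
    for i, a in enumerate(terms):
        for b in terms[i + 1:]:
            sim = cosine(embeddings[a], embeddings[b])
            if sim >= threshold:
                g.add_edge(a, b, weight=sim)
    return g

# Toy usage with random vectors standing in for word2vec embeddings.
rng = np.random.default_rng(0)
terms = ["design", "prototype", "network", "community"]
embeddings = {t: rng.normal(size=300) for t in terms}
g = semantic_network(terms, embeddings, threshold=0.0)
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```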
Alex SchaeferUniversity of Arizona (US) |
Dries DaemsUniversity of Leuven (BE) |
Ignacio GarnhamParsons School of Design (US) |
Kazuya HoribeOsaka University (JP) |
In this paper, we discuss the convoluted field of cultural evolution and competing hypotheses regarding the main drivers of social complexity through evolutionary means. We compare and discuss the core tenets of two main schools, the Sociobiological Position and the Autonomous Culture Position, which argue respectively for biological and cultural primacy as the driver of complexity. Each of these positions offers different predictions as to the pace and structure of cultural evolution. To explore the possibility space of various hypotheses related to this topic, we draw upon a recently compiled dataset from the Seshat Project that measures and compares the social complexity of various societies. We use this dataset to discuss three elements in detail: (1) the pace of change in complexity; (2) patterns of cultural transmission; (3) cultural variance. Although this paper does not undertake the task of rigorous hypothesis testing, it does draw on the social complexity dataset, as well as evolutionary theory, to propose various hypotheses worth examining at a further stage. We propose that an in-depth analysis of these three topics would allow scientists to adjudicate between competing visions of cultural evolution. A first assessment of the data provided by the Seshat Project appears to suggest that the Autonomous Culture Position provides a better fit for the observed patterns. However, further in-depth analysis of the data is still needed. The Seshat dataset provides a unique and exciting opportunity for quantitative analysis of the drivers and long-term patterns of socio-cultural complexity and evolution, and its full potential has not yet been tapped. LINK