Cristopher Moore
Professor
Cris works on problems at the interface of physics, computer science, and mathematics. This interdisciplinary nexus is where phase transitions live, like when a disease jumps from small outbreaks to an epidemic, or when a computational problem suddenly becomes hard or impossible to solve. Phase transitions occur in data science and machine learning as well: when data becomes too noisy, the “landscape” of possible models for it becomes rugged and hard to navigate, with local optima separated by high barriers. Using techniques from spin glass theory, network theory, and computational complexity, Cris seeks to locate these transitions and understand the hardness they create. This work is both negative and positive: what are the limits of algorithms, and how can we design optimal algorithms that reach these limits?
Cris also works on transparency in AI and its use in “consequential decisions” that affect people’s lives and rights, such as risk assessment in the criminal justice system. At its worst, AI is an unaccountable black box: it can be biased, inaccurate, or both. But at its best, AI can shine a light on human decision-making and create new forms of accountability where biases can be detected and corrected. To play this positive role, AI needs to be transparent and auditable, so that we can have a democratic discussion about whether, when, and how to use it.
Cris follows these analogies and connections wherever they lead: from AI policy to quantum computing, from games and puzzles to the stability of financial markets, and from blockchains to the three-body problem.