*This event is closed.*

**Meeting Description**: While modern datasets such as social and biological networks are massive, they are also noisy and incomplete. The amount of data is large, but so is the number of variables or parameters we are trying to infer. For instance, in genomics, we often have a large number of genes that might be correlated with a disease, but a relatively small number of independent observations of each one. Similarly, in sparse networks, we want to learn something about n vertices based on only O(n) links.

In settings like these, it can be difficult to find the underlying pattern in the high-dimensional space of possible patterns, or even to tell whether there is any pattern at all — increasing the danger of overfitting, and contributing to the current crisis of irreproducibility. This is especially important in network analysis, where we need to know whether the communities produced by a community detection algorithm are statistically significant (something about which most such algorithms give no guarantees), and in cybersecurity, where we need to avoid false positives as much as possible.
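To make the significance problem concrete, here is a small illustrative sketch (my own example, not part of the meeting description): if we generate an Erdős–Rényi random graph, which has no community structure by construction, and then greedily optimize the modularity of a two-way split, the optimized score lands clearly above zero. Optimizing a quality function over partitions "finds" structure in pure noise, which is why significance must be judged against a null model rather than taken at face value.

```python
# Sketch: community detection finds illusory structure in pure noise.
import random

def modularity(edges, degree, m, side):
    """Newman modularity Q for the two-way partition side[v] in {0, 1}."""
    within = [0.0, 0.0]   # edges with both endpoints in group c
    deg = [0.0, 0.0]      # total degree in group c
    for u, v in edges:
        if side[u] == side[v]:
            within[side[u]] += 1
    for v, d in degree.items():
        deg[side[v]] += d
    return sum(within[c] / m - (deg[c] / (2 * m)) ** 2 for c in (0, 1))

random.seed(1)
n, p = 100, 0.04          # mean degree about 4; no planted communities
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if random.random() < p]
m = len(edges)
degree = {v: 0 for v in range(n)}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Greedy single-node moves: flip a node whenever it strictly improves Q.
side = {v: random.randrange(2) for v in range(n)}
q_opt = modularity(edges, degree, m, side)
improved = True
while improved:
    improved = False
    for v in range(n):
        side[v] ^= 1
        q_new = modularity(edges, degree, m, side)
        if q_new > q_opt:
            q_opt = q_new
            improved = True
        else:
            side[v] ^= 1  # revert the move
print(f"optimized modularity of a pure random graph: {q_opt:.3f}")
```

In practice, a detected partition's score should be compared against its distribution over a null model (for instance, degree-preserving rewirings of the observed graph), not against zero.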

Over the past few years, a growing body of work has shown that many problems of this kind possess phase transitions: critical values of the signal-to-noise ratio at which finding the ground truth, even approximately, suddenly becomes impossible or infeasible. Some of these transitions are information-theoretic, where the dataset ceases to convey enough information about the ground truth. Others are computational, leading to regimes where inference is possible in principle but is conjectured to require an exponential amount of computation.
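A well-known instance of such a transition, sketched here as my own illustration (the function names are mine): in the stochastic block model with two equal-sized groups, within-group connection probability c_in/n, and between-group probability c_out/n, detecting the planted communities better than chance is possible exactly when the Kesten–Stigum condition (c_in − c_out)² > 2(c_in + c_out) holds — a threshold conjectured by Decelle, Krzakala, Moore, and Zdeborová, and since proven rigorously for the two-group case.

```python
# Kesten-Stigum detectability threshold for the two-group stochastic
# block model (illustrative helper functions; names are mine).

def snr(c_in: float, c_out: float) -> float:
    """Effective signal-to-noise ratio c * lambda^2; detection is
    feasible when this exceeds 1."""
    c = (c_in + c_out) / 2.0                 # average degree
    lam = (c_in - c_out) / (c_in + c_out)    # second eigenvalue ratio
    return c * lam * lam

def detectable(c_in: float, c_out: float) -> bool:
    """Equivalent to (c_in - c_out)**2 > 2 * (c_in + c_out)."""
    return snr(c_in, c_out) > 1.0

# Strong structure: within-degree 8, between-degree 2 -> 36 > 20, above
# the threshold, so the groups can be recovered better than chance.
print(detectable(8, 2))   # True
# Weaker structure: 6 vs 4 still differ, yet 4 < 20 puts it below the
# threshold, and no algorithm can beat a random guess.
print(detectable(6, 4))   # False
```

Note that in the second example the two degree parameters genuinely differ, yet the structure is information-theoretically invisible: this is the kind of sharp limit the working group aims to characterize.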

Much of this work began with compelling but non-rigorous techniques from statistical physics. Some of these techniques have now been made mathematically rigorous, but many open questions remain. This working group will bring together a select group of people from mathematics, physics, and theoretical computer science to resolve these questions, and to produce both positive and negative results: to understand the fundamental limits on our ability to find patterns in noisy data, and to devise optimal algorithms that succeed all the way up to these limits.