In July, the National Science Foundation announced a $100 million investment in AI research, spread over six Research Institutes. Three SFI researchers will participate as leaders and collaborators in two of these groups.
Developing science for safe AI assistants in mental health
In the U.S., more than one in five people lives with a mental health condition. While effective treatments exist, barriers to access, from high cost to limited geographic availability to social stigma, prevent many people from getting help. For better or for worse, patients are already turning to artificial intelligence chatbots to fill the gap. While AI assistants may one day play an important role in making mental healthcare more accessible and effective, we don’t yet have the science to back them up.
SFI Professor Melanie Mitchell and External Professor Melanie Moses (UNM) are part of a new collaboration of researchers, funded by a $20 million National Science Foundation grant, that aims to develop that science over the next five years.
Led by researchers at Brown University, the AI Research Institute on Interaction for AI Assistants (ARIA) will engage experts from a dozen universities and institutions in fields spanning computer science and machine learning, cognitive and behavioral science, law, philosophy, and education to establish rigorous methods for evaluating AI being used in mental-health contexts, and to investigate approaches for improving such systems.
“Our goal is to establish strong scientific evidence about the capabilities, benefits, and risks of using AI in mental-health contexts,” says Mitchell, who serves as scientific co-director for the project. “If patients and their mental-healthcare providers decide they want to incorporate AI, we want those tools to have been built on sound science and to be safe and effective.”
Researchers have already begun exploring ways that AI could help human mental-health providers better diagnose and treat illness. The consortium does not propose that AI chatbots replace human therapists, but rather will explore ways in which AI systems can augment human capabilities. Existing chatbots have severe limitations: they can be unpredictable and untrustworthy, lack metacognition, and veer between being overly sycophantic and, at the other extreme, encouraging harmful behavior and generating biased information.
Effective mental-health AI tools would need to resolve those concerns, and also be regulated, employ strong privacy guardrails, and be subject to an accepted standard for evaluating their benefits and harms.
Moses will work with other researchers from the University of New Mexico on ARIA’s questions about AI systems’ understanding of human reasoning, community standards, and principles of justice.
“The law is how we address conflicts in our society, but it is difficult for the law to keep up with the rapid pace of change in computing and AI,” Moses said in a statement issued by UNM. “In this project, we have the opportunity to design trustworthy AI using computational methods, while considering the social and legal implications from the start.”
Developing the framework for a safe and effective AI system that can respond to an individual’s needs and operate within legal guidelines is an inherently interdisciplinary undertaking.
“Any AI system that interacts with people, especially those who may be in states of distress or other vulnerable situations, needs a strong understanding of the human it’s interacting with, along with a deep causal understanding of the world and how the system’s own behavior affects that world,” Ellie Pavlick, ARIA project lead and Brown University associate professor of computer science, said in a statement. “At the same time, the system needs to be transparent about why it makes the recommendations that it does in order to build trust with the user. Mental health is a high-stakes setting that embodies all the hardest problems facing AI today. That’s why we’re excited to tackle this and figure out what it takes to get these things absolutely right.”
Foundations of machine learning for AI accuracy and reliability
A separate $20 million grant offers continued funding for the Institute for Foundations of Machine Learning (IFML) at the University of Texas at Austin. SFI Professor Cris Moore will continue his role as senior personnel, offering expertise on connections between machine learning and statistical physics.
The IFML first received NSF funding in 2020. Over the past five years, the IFML has worked to lay the groundwork for more accurate next-gen AI systems, “from the mathematics of diffusion models to denoise images, to algorithms that improve the speed and accuracy of magnetic resonance imaging (MRI), to biotech innovations set to revolutionize drug discovery and therapeutics,” according to a UT Austin statement.
These diffusion models have been foundational to major public-facing generative AI tools like Stable Diffusion 3 and Flux. This renewed funding will support work in new domains, with applications in protein engineering, clinical imaging, and other health contexts.