Noyce Conference Room
Workshop

All day

Our campus is closed to the public for this event.

This workshop will build on a 2018 SFI workshop entitled “Artificial Intelligence and the Barrier of Meaning,” which focused on several questions related to the ability of AI systems to “understand” or “extract meaning” in a humanlike way. In the four years since the original workshop, AI has been transformed by the rise of so-called large language models (LLMs). Many in the AI community are now convinced that humanlike language understanding by machines (as well as understanding of the physical and social situations described by language) has either already been achieved or will be achieved in the near future due to the scaling properties of LLMs. Others argue that LLMs cannot possess understanding, even in principle, because they have no experience or mental models of the world; their training in predicting words in vast collections of text has taught them the form of language but not its meaning.

The key questions of the debate about understanding in LLMs are the following: (1) Is talk of understanding in such systems simply a category error, namely, that these models are not, and will never be, the kind of things that can understand? Or, conversely, (2) do these systems actually create something like the compressed “theories” that are central to human understanding, and, if so, does scaling these models create ever better theories? Or (3) if these systems do not create such compressed theories, can their unimaginably large systems of statistical correlations produce abilities that are functionally equivalent to human understanding, that is, “competence without comprehension”?

Organizers

Melanie Mitchell, SFI Davis Professor of Complexity (Fractal Faculty), Science Board Co-chair & Science Steering Committee
Tyler Millhouse, Program Postdoctoral Fellow