Melanie Mitchell

In 1986, the mathematician and philosopher Gian-Carlo Rota wrote, “I wonder whether or when artificial intelligence will ever crash the barrier of meaning” (Rota 1986). Here, the phrase “barrier of meaning” refers to a belief about humans versus machines: Humans are able to actually understand the situations they encounter, whereas even the most advanced of today’s artificial intelligence systems do not yet have a humanlike understanding of the concepts we are trying to teach them. This lack of understanding may underlie current limitations on the generality and reliability of modern artificial intelligence systems. In October 2018, the Santa Fe Institute held a three-day workshop, organized by Barbara Grosz, Dawn Song, and myself, called Artificial Intelligence and the Barrier of Meaning. Thirty participants from a diverse set of disciplines — artificial intelligence, robotics, cognitive and developmental psychology, animal behavior, information theory, and philosophy, among others — met to discuss questions related to the notion of understanding in living systems and the prospects for such understanding in machines. In the hope that the results of the workshop will be useful to the broader community, this article summarizes the main themes of discussion and highlights some of the ideas that emerged there.