Santa Fe Institute



Could AI ever truly "understand"?

Goldfish swimming in an aquarium of binary code (image: Abha Eli Phoboo/Craiyon)
April 10, 2023

ChatGPT knows how to use the word “tickle” in a sentence, but it cannot feel the sensation. Can it then be said to understand the meaning of “tickle” the same way we humans do?

In an ongoing debate, AI researchers are teasing apart whether large language models (LLMs) like ChatGPT and Google’s PaLM understand language in any humanlike sense. One open question concerns the relationship between embodiment and understanding; another, the nature of intelligence and understanding themselves. Should concepts of meaning, understanding, and intelligence be revisited to draw a distinction between how humans and machines understand the world?

SFI researchers Melanie Mitchell and David C. Krakauer survey this debate in their paper “The debate over understanding in AI’s large language models,” published in the Proceedings of the National Academy of Sciences on March 21 (also available on arXiv). The authors examine the characteristics that make LLMs impressive yet susceptible to unhumanlike errors, and they note the “fascinating divergence” emerging in how we humans think about understanding in intelligent systems.

“Humans do all kinds of experiments to learn about the world. Our embodiment is fundamental to our intelligence,” says Mitchell. “Large language models have the appearance of understanding but do not have experiences.”

LLMs are pre-trained on large text datasets; human understanding, by contrast, is built on mental concepts that we form through experience as we interact with the world. This underlines the stark difference between models that rely on statistical correlations, as LLMs do, and those that rely on causal mechanisms.
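The point about statistical correlation can be made concrete with a toy sketch (an illustration of the general idea, not code from the paper): even a minimal bigram model predicts plausible next words purely from co-occurrence counts in its training text, with no concept of what any word refers to.

```python
from collections import Counter, defaultdict

# Toy "training corpus" — the model will learn only word-adjacency statistics.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" — it follows "the" most often here
```

The model's "knowledge" is nothing but a table of correlations; it can continue text convincingly within its corpus while having no causal model of cats, mats, or fish. Modern LLMs are vastly more sophisticated, but the contrast with experience-grounded, causal understanding is the one the authors draw.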

“Large language models are fact-rich like a big library and more autonomous than an abacus. And like an abacus, they are tools that can be used to augment our intelligence — a kind of steampunk mechanical library. But we cannot confuse having this tool with having an understanding,” says Krakauer. 

The paper also takes into account the many threads of debate in the AI research community, including the familiar human tendency to “attribute understanding and agency to machines with even the faintest hint of humanlike language and behavior” and the mystery behind how LLMs are able to give the appearance of humanlike reasoning.

“We really wanted to report on what people are talking about, to summarize the different modes of discussions. It is apparent that we need a new vocabulary to talk about it,” says Mitchell. 

Read the paper "The debate over understanding in AI’s large language models" in PNAS (March 21, 2023): https://doi.org/10.1073/pnas.2215907120

####

Templeton World Charity Foundation Grant Award No. 2021-20650 "Building Diverse Intelligences Through Compositionality and Mechanism Design"




