What is intelligence?
Alison Gopnik: It's like asking, is the University of California Berkeley library smarter than I am? Well, it definitely has more information in it than I do, but it just feels like that's not really the right question.
[THEME MUSIC]
Abha Eli Phoboo: From the Santa Fe Institute, this is Complexity.
Melanie Mitchell: I’m Melanie Mitchell.
Abha: And I’m Abha Eli Phoboo.
[THEME MUSIC FADES OUT]
Abha: Today’s episode kicks off a new season for the Complexity podcast, and with a new season comes a new theme. This fall, we’re exploring the nature and complexity of intelligence in six episodes — what it means, who has it, who doesn’t, and whether machines that can beat us at our own games are as powerful as we think they are. The voices you’ll hear were recorded remotely, across different countries, cities, and workspaces. But first, I’d like you to meet our new co-host.
Melanie: My name is Melanie Mitchell. I’m a professor here at the Santa Fe Institute. I work on artificial intelligence and cognitive science. I’ve been interested in the nature of intelligence for decades. I want to understand how humans think and how we can get machines to be more intelligent, and what it all means.
Abha: Melanie, it’s such a pleasure to have you here. I truly can’t think of a better person to guide us through what, exactly, it means to call something intelligent. Melanie’s book, Artificial Intelligence: A Guide for Thinking Humans, is one of the top books on AI recommended by The New York Times. It’s a rational voice among all the AI hype in the media.
Melanie: And depending on whom you ask, artificial intelligence is either going to solve all humanity’s problems, or it’s going to kill us. When we interact with systems like Google Translate, or hear the buzz around self-driving cars, or wonder if ChatGPT actually understands human language, it can feel like artificial intelligence is going to transform everything about the way we live. But before we get carried away making predictions about AI, it’s useful to take a step back. What does it mean to call anything intelligent, whether it’s a computer or an animal or a human child?
Abha: In this season, we’re going to hear from cognitive scientists, child development specialists, animal researchers, and AI experts to get a sense of what we humans are capable of and how AI models actually compare. And in the sixth episode, I’ll sit down with Melanie to talk about her research and her views on AI.
Melanie: To kick us off, we’re going to start with the broadest, most basic question: what really is intelligence, anyway? As many researchers know, the answer is more complicated than you might think.
[DRUM TRANSITION]
Melanie: Part One: What is intelligence?
[DRUM TRANSITION]
Alison: I'm Alison Gopnik. I'm a professor of psychology and affiliate professor of philosophy and a member of the Berkeley AI research group. And I study how children manage to learn as much as they do, particularly in a sort of computational context. What kinds of computations are they performing in those little brains that let them be the best learners we know of in the universe?
Abha: Alison is also an external professor with the Santa Fe Institute, and she’s done extensive research on children and learning. When babies are born, they’re practically little blobs that can’t hold up their own heads. But as we all know, most babies become full-blown adults who can move, speak, and solve complex problems. From the time we enter this world, we’re trying to figure out what the heck is going on all around us, and that learning sets the foundation for human intelligence.
Alison: Yeah, so one of the things that is really, really important about the world is that some things make other things happen. So everything from thinking about the way the moon affects the tides to just the fact that I'm talking to you and that's going to make you change your minds about things. Or the fact that I can pick up this cup and spill the water and everything will get wet. Those really basic cause and effect relationships are incredibly important. And they're important partly because they let us do things. So if I know that something is gonna cause a particular effect, what that means is if I wanna bring about that effect, I can actually go out in the world and do it. And it underpins everything from, again, as I say, just our everyday ability to get around in the world, even for an infant, to the most incredible accomplishments of science. But at the same time, those causal relationships are kind of mysterious and always have been. How is it? After all, all we see is that one thing happens and another thing follows it. How do we figure out that causal structure?
Melanie: So how do we?
Alison: Yeah, good question. So that's been a problem philosophers have thought about for centuries. And there's basically two pieces. And anyone who's done science will recognize these two pieces. We analyze statistics. So we look at what the dependencies are between one thing and another. And we do experiments. We go out, perhaps the most important way that we understand about causality is you do something and then you see what happens and then you do something again and you say, wait a minute, that happened again. And part of what I've been doing recently, which has been really fun is just look at babies, even like one year olds. And if you just sit and look at a one year old, mostly what they're doing is doing experiments. I have a lovely video of my one-year-old grandson with a xylophone and a mallet.
Abha: Of course, we had to ask Alison to show us the video. Her grandson is sitting on the floor with the xylophone, while his grandfather plays an intricate song on the piano. Together, they make a strange duet.
[VIDEO AUDIO FADES UP]
Alison: And it's not just that he makes the noise. He tries turning the mallet upside down. He tries with his hand a bit. That doesn't make a noise. He tries with the stick end. That doesn't make a noise. Then he tries it on one bar and it makes one noise. Another bar, it makes another noise. So when the babies are doing the experiments, we call it getting into everything. But I increasingly think that's like their greatest motivation.
Abha: So babies and children are doing these cause and effect experiments constantly, and that’s a major way that they learn. At the same time, they’re also figuring out how to move and use their bodies, developing a distinct intelligence in their motor systems so they can balance, walk, use their hands, turn their heads, and eventually, move in ways that don’t even require much thinking at all.
Melanie: One of the leading researchers on intelligence and physical movement is John Krakauer, a professor of neurology, neuroscience, physical medicine, and rehabilitation at the Johns Hopkins University School of Medicine. John’s also in the process of writing a book.
John Krakauer: I am. I've been writing it for much longer than I expected, but now I finally know the story I want to tell. I've been practicing it.
Melanie: Well, let me ask, I just want to mention that the subtitle is Thinking versus Intelligence in Animals, Machines and Humans. So I wanted to get your take on what is thinking and what is intelligence.
John: Yes. Yes. My gosh, thanks Melanie for such an easy softball question.
Melanie: [Laughs] Well, you're writing a book about it.
John: Well, yes, so… I think I was very inspired by two things. One was how much intelligent adaptive behavior your motor system has even when you're not thinking about it. The example I always give is when you press an elevator button: before you lift your arm to press the button, you contract your gastrocnemius in anticipation, because your arm is sufficiently heavy that if you didn't do that, you'd fall over, because your center of gravity has shifted. So there are countless examples of intelligent behaviors. In other words, they're goal-directed and accomplish the goal below the level of overt deliberation or awareness. And then there's a whole field of what are called long latency stretch reflexes, which occur faster than voluntary movement but are sufficiently flexible to deal with quite a lot of variation in the environment and still get the goal accomplished, but they're still involuntary.
Abha: There’s a lot that we can do without actually understanding what’s happening. Think about the muscles we use to swallow food, or balance on a bike, for example. Learning how to ride a bike takes a lot of effort, but once you’ve figured it out, it’s almost impossible to explain it to someone else.
John: And so it's what Daniel Dennett, who recently passed away but was very influential for me, called competence with comprehension versus competence without comprehension. And, you know, I think he also was impressed by how much competence there is in the absence of comprehension. And yet along came this extra piece, the comprehension, which added to competence and greatly increased the repertoire of our competences.
Abha: Our bodies are competent in some ways, but when we use our minds to understand what’s going on, we can do even more. To go back to Alison’s example of her grandson playing with a xylophone, comprehension allows him, or anyone playing with a xylophone, to learn which end of the mallet makes a sound and that each bar makes a different note. If you or I saw a xylophone for the first time, we would need to learn what a xylophone is, what a mallet is, how to hold it, and which end might make a noise if we knocked it against a musical bar. We’re aware of that learning as it happens. Over time we internalize these observations so that every time we see a xylophone mallet, we don’t need to think through what it is and what the mallet is supposed to do.
Melanie: And that brings us to another, crucial part of human intelligence: common sense. Common sense is knowing that you hold a mallet by the stick end and use the round part to make music. And if you see another instrument, like a marimba, you know that the mallet is going to work the same way. Common sense gives us basic assumptions that help us move through the world and know what to do in new situations. But it gets more complicated when you try to define exactly what common sense is and how it’s acquired.
John: Well, I mean, to me, common sense is the amalgam of stuff that you're born with. So you, you know, any animal will know that if it steps over the edge, it's going to fall. Right. Then there's what you've learned through experience that allows you to do quick inference. So in other words, you know, an animal, it starts raining, it knows it has to find shelter. Right? So in other words, presumably it learns that you don't want to be wet, and so it makes the inference it's going to get wet, and then it finds a shelter. It's a common sense thing to do in a way. And then there's the thought version of common sense. Right? It's common sense that if you're approaching a narrow alleyway, your car's not gonna fit in it, right? Or if you go to a slightly less narrow one, your door won't open when you try to open it. Countless interactions between your physical experience, your innate repertoire, and a little bit of thinking. And it's that fascinating mixture of fact and inference and deliberation. And then we seem to be able to do it over a vast number of situations, right? In other words, we just seem to have a lot of facts, a lot of innate understanding of the physical world, and then we seem to be able to think with those facts and those innate awarenesses. That, to me, is what common sense is. It's this almost language-like flexibility of thinking with our facts and thinking with our innate sense of the physical world and combinatorially doing it all the time, thousands of times a day. I know that's a bit waffly. I'm sure Melanie can do a much better job at it than me, but that's how I see it.
Melanie: No, I think that's actually a great exposition of what it means. I totally agree. I think it is fast inference about new situations that combines knowledge and sort of reasoning, fast reasoning, and a lot of very basic knowledge that's not really written down anywhere that we happen to know because we exist in the physical world and we interact with it.
Melanie: So, observing cause and effect, developing motor reflexes, and strengthening common sense are all happening and overlapping as children get older.
Abha: And we’re going to cover one more type of intelligence that seems to be unique to humans, and that’s the drive to understand the world.
John: It turns out, for reasons that physicists have puzzled over, that the universe is understandable, explainable, and manipulable. The side effect of the world being understandable is that you begin to understand sunsets and why the sky is blue and how black holes work and why water is a liquid and then a gas. It turns out that these are things worth understanding, because you can then manipulate and control the universe. And it's obviously advantageous, because humans have taken over entirely.
I have a fancy microphone that I can use to have a Zoom call with you. An understandable world is a manipulable world. As I always say, an arctic fox trotting very well across the arctic tundra is not going, hmm, what's ice made out of? It doesn't care. Now, at some point between chimpanzees and us, we started to care about how the world worked. And it obviously was useful, because we could do all sorts of things. Fire, shelter, blah blah blah.
Abha: And in addition to understanding the world, we can observe ourselves observing, a process known as metacognition. If we go back to the xylophone, metacognition is thinking, “I’m here, learning about this xylophone. I now have a new skill.” And metacognition is what lets us explain what a xylophone is to other people, even if we don’t have an actual xylophone in front of us. Alison explains more.
Alison: So the things that I've been emphasizing are these kind of external exploration and search capacities, like going out and doing experiments. But we know that people, including little kids, do what you might think of as sort of internal search. So they learn a lot, and now they just intrinsically, internally want to say, what are some things, new conclusions I could draw, new ideas I could have based on what I already know?
And that's really different from just what are the statistical patterns in what I already know. And I think two capacities that are really important for that are metacognition and also one that Melanie's looked at more than anyone else, which is analogy. So being able to say, okay, here's all the things that I think, but how confident am I about that? Why do I think that? How could I use that learning to learn something new? Or saying, here's the things that I already know. Here's an analogy that would be really different, right? So I know all about how water works. Let's see, if I think about light, does it have waves the same way that water has waves? So actually learning by just thinking about what you already know.
John: I find myself constantly changing my position. On the one hand, there's this human capacity to sort of look at yourself computing, a sort of metacognition, which is consciousness not just of the outside world and of your body, it's consciousness of your processing of the outside world and your body. It's almost as though you used consciousness to look inward at what you were doing. Humans have computations and feelings. They have a special type of feeling and computation which together is deliberative. And that's what I think thinking is, it's feeling your computations.
Melanie: What John is saying is that humans have conscious feelings — our sensations such as hunger or pain — and that our brains perform unconscious computations, like the muscle reflexes that happen when we press an elevator button. What he calls deliberative thought is when we have conscious feelings or awareness about our computations. You might be solving a math problem and realize with dismay that you don’t know how to solve it. Or, you might get excited if you know exactly what trick will work. This is deliberative thought — having feelings about your internal computations. To John, the conscious and unconscious computations are both “intelligent,” but only the conscious computations count as “thinking”.
Abha: So Melanie, having listened to John and Alison, I'd like to go back to our original question with you. What do you think is intelligence?
Melanie: Well, let me recap some of what Alison and John said. Alison really emphasized the ability to learn about cause and effect. What causes what in the world, and how we can predict what's going to happen. And she pointed out that the way we learn this, adults and especially kids, is by doing little experiments, interacting with the world and seeing what happens, and learning about cause and effect that way. She also stressed our ability to generalize, to make analogies, to see how situations might be similar to each other in an abstract way. And this underlies what we would call our common sense, that is, our basic understanding of the world.
Abha: Yeah, that example of the xylophone and the mallet, that was very intriguing. As both John and Alison said, humans seem to have a unique drive to gain an understanding of the world, you know, via experiments, making mistakes, trying things out. And they both emphasized this important role of metacognition, or reasoning about one's own thinking. What do you think of that? You know, how important do you think metacognition is?
Melanie: It's absolutely essential to human intelligence. It's really what underlies, I think, our uniqueness. John, you know, made this distinction between intelligence and thinking. To him, you know, most of our, what he would call our intelligent behavior is unconscious. It doesn't involve metacognition. He called it competence without comprehension. And he reserved the term thinking for conscious awareness of what he called one's internal computations.
Abha: Even though John and Alison have given us some great insights about what makes us smart, I think both would admit that no one has come to a full, complete understanding of how human intelligence works, right?
Melanie: Yeah, we're far from that. But in spite of that, big tech companies like OpenAI and DeepMind are spending huge amounts of money in an effort to make machines that, as they say, will match or exceed human intelligence. So how close are they to succeeding? Well, in part two, we'll look at how systems like ChatGPT learn and whether or not they're even intelligent at all.
[DRUM TRANSITION]
Abha: Part two: How intelligent are today’s machines?
[DRUM TRANSITION]
Abha: If you’ve been following the news around AI, you may have heard the acronym LLM, which stands for large language model. It’s the term that’s used to describe the technology behind systems like ChatGPT from OpenAI or Gemini from Google. LLMs are trained to find statistical correlations in language, using mountains of text and other data from the internet. In short, if you ask ChatGPT a question, it will give you an answer based on what it has calculated to be the most likely response, based on the vast amount of information it’s ingested.
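To make “the most likely response” a little more concrete, here is a minimal Python sketch. It is not how any real LLM is built — real systems learn billions of neural-network parameters rather than a lookup table — and the tiny corpus and helper names are invented for illustration; it only shows the basic loop of predicting a likely next token from statistics over training text.

```python
# Toy illustration only: a bigram "model" that predicts the next token
# from counts over a tiny corpus, standing in for an LLM's learned
# statistics. The corpus and function names are invented for the example.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which token tends to follow which.
follower_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follower_counts[prev][nxt] += 1

def predict_next(token):
    """Return the most frequently observed next token, or '.' if unseen."""
    followers = follower_counts.get(token)
    return followers.most_common(1)[0][0] if followers else "."

# "Inference": generate a continuation one token at a time.
text = ["the"]
for _ in range(5):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # a continuation stitched together purely from corpus statistics
```

Real models replace the lookup table with a neural network and sample from a probability distribution rather than always taking the single most likely token, but the generate-one-token-at-a-time loop is the same idea.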
Melanie: Humans learn by living in the world — we move around, we do little experiments, we build relationships, and we feel. LLMs don’t do any of this. But they do learn from language, which comes from humans and human experience, and they’re trained on a lot of it. So does this mean that LLMs could be considered to be intelligent? And how intelligent can they, or any form of AI, become?
Abha: Several tech companies have an explicit goal to achieve something called artificial general intelligence, or AGI. AGI has become a buzzword, and everyone defines it a bit differently. But, in short, AGI is a system that has human-level intelligence. Now, this assumes that a computer, like a brain in a jar, can become just as smart as, or even smarter than, a human with a feeling body. Melanie asked John what he thought about this.
Melanie: You know, I find it confusing when people like Demis Hassabis, who's one of the co-founders of DeepMind, say this. He said in an interview that AGI is a system that should be able to do pretty much any cognitive task that humans can do.
John: Yes.
Melanie: And he said he expects that there's a 50% chance we'll have AGI within a decade. Okay, so I emphasize that word cognitive task because that term is confusing to me. But it seems so obvious to them.
John: Yes, I mean, I think it's the belief that everything non-physical at the task level can be written out as a kind of program or algorithm. I just don't know... and maybe it's true when it comes to, you know, ideas, intuitions, creativity.
Melanie: I also asked John if he thought that maybe that separation, between cognition and everything else, was a fallacy.
John: Well, it seems to me, you know, it always makes me a bit nervous to argue with you of all people about this, but I would say, I think there's a difference between saying, can we reach human levels of intelligence when it comes to common sense, the way humans do it, versus can we end up with the equivalent phenomenon, without having to do it the way humans do it. The problem for me with that is that we, like this conversation we're having right now, are capable of open-ended, extrapolatable thought. We go beyond what we're talking about.
I struggle with it but I'm not going to put myself in this precarious position of denying that a lot of problems in the world can be solved without comprehension. So maybe we're kind of a dead end. We — comprehension is a great trick, but maybe it's not needed. But if comprehension requires feeling, then I don't quite see how we're going to get AGI in its entirety. But I don't want to sound dogmatic. I'm just practicing my… my unease about it. Do you know what I mean? I don't know.
Abha: Alison is also wary of over-hyping our capacity to get to AGI.
Alison: And one of the great old folk tales is called Stone Soup.
Abha: Or you might have heard it called Nail Soup — there are a few variations. She uses this stone soup story as a metaphor for how much our so-called “AI technology” actually relies on humans and the language they create.
Alison: And the basic story of Stone Soup is that, there's some visitors who come to a village and they're hungry and the villagers won't share their food with them.
So the visitors say, that's fine. We're just going to make stone soup. And they get a big pot and they put water in it. And they say, we're going to get three nice stones and put them in. And we're going to make wonderful stone soup for everybody. They start boiling it. And they say, this is really good soup. But it would be even better if we had a carrot or an onion that we could put in it. And of course, the villagers go and get a carrot and onion. And then they say, this is much better. But you know, when we made it for the king, we actually put in a chicken and that made it even better. And you can imagine what happens. All the villagers contribute all their food. And then in the end, they say, this is amazingly good soup and it was just made with three stones. And I think there's a nice analogy to what's happened with generative AI. So the computer scientists come in and say, look, we're going to make intelligence just with next token prediction and gradient descent and transformers. And then they say, but you know, this intelligence would be much better if we just had some more data from people that we could add to it. And then all the villagers go out and add all of the data of everything that they've uploaded to the internet. And then the computer scientists say, no, this is doing a good job at being intelligent. But it would be even better if we could have reinforcement learning from human feedback and get all you humans to tell it what you think is intelligent or not. And all the humans say, OK, we'll do that. And then they say, you know, this is really good. We've got a lot of intelligence here. But it would be even better if the humans could do prompt engineering to decide exactly how they were going to ask the questions so that the systems could give intelligent answers. And then at the end of that, the computer scientists would say, see, we got intelligence just with our algorithms. We didn't have to depend on anything else. I think that's a pretty good metaphor for what's happened in AI recently.
Melanie: The way AGI has been pursued is very different from the way humans learn. Large language models, in particular, are created by shoving tons of data into the system over a relatively short training period, especially when compared to the length of human childhood. The stone soup method uses brute force to shortcut our way to something akin to human intelligence.
Alison: I think it's just a category mistake to say things like, are LLMs smart? It's like asking, is the University of California Berkeley library smarter than I am? Well, it definitely has more information in it than I do, but it just feels like that's not really the right question. Yeah, so one of the things about humans in particular is that we've always had this great, this great capacity to learn from other humans. And one of the interesting things about that is that we've had different kinds of technologies over history that have allowed us to do that. So obviously language itself, you could think of as a device that lets humans learn more from other people than other creatures can do.
Alison: My view is that the LLMs are kind of the latest development in our ability to get information from other people. But again, this is not trivializing or debunking it. Those changes in our cultural technology have been among the biggest and most important social changes in our history. So writing completely changed the way that we thought and the way that we functioned and the way that we acted in the world. At the moment, as people have pointed out, the fact that I have in my pocket a device that will let me get all the information from everybody else in the world mostly just makes me irritated and miserable most of the time. We would have thought that that would have been like a great accomplishment. But people felt that same way about writing and print when they started too. The hope is that eventually we'll adjust to that kind of technology.
Melanie: Not everyone shares Alison’s view on this. Some researchers think that large language models should be considered to be intelligent entities, and some even argue that they have a degree of consciousness. But thinking of large language models as a type of cultural technology, instead of sentient bots that might take over the world, helps us understand how completely different they are from people. And another important distinction between large language models and humans is that they don’t have an inherent drive to explore and understand the world.
Alison: They're just sort of sitting there and letting the data waft over them rather than actually going out and acting and sensing and finding out something new.
Melanie: This is in contrast to the one-year-old saying —
Alison: Huh, the stick works on the xylophone. Will it work on the clock or the vase or whatever else it is that you're trying to keep the baby away from? That's a kind of internal basic drive to generalize, to think about, okay, it works in the way that I've been trained, but what will happen if I go outside of the environment in which I've been trained? We have caregivers who have a really distinctive kind of intelligence that we haven't studied enough, I think, who are looking at us, letting us explore. And caregivers are very well designed to, even if it feels frustrating when you're doing it, we're very good at kind of getting this balance between how independent should the next agent be? How much should we be constraining them? How much should we be passing on our values? How much should we let them figure out their own values in a new environment? And I think if we ever do have something like an intelligent AI system, we're going to have to do that. Our role, our relationship to them should be this caregiving role rather than thinking of them as being slaves on the one hand or masters on the other hand, which tends to be the way that we think about them. And as I say, it's not just in computer science; in cognitive science too, probably for fairly obvious reasons, we know almost nothing about the cognitive science of caregiving. So that's actually what I'm going to do, I just got a big grant for it, for my remaining grandmotherly cognitive science years.
Abha: That sounds very fascinating. I've been curious to see what comes out of that work.
Alison: Well, let me give you just a very simple first pass, our first experiment. If you ask three- and four-year-olds, here's Johnny and he can go on the high slide or he can go on the slide that he already knows about. And what will he do if mom's there? And your intuitions might be maybe the kids will say, well, you don't do the risky thing when mom's there because she'll be mad about it, right? And in fact, it's the opposite. The kids consistently say, no, if mom is there, that will actually let you explore, that will let you take risks, that will let you…
Melanie: She's there to take you to the hospital.
Alison: Exactly, she's there to actually protect you and make sure that you're not doing the worst thing. But of course, for humans, a cue to how important caregiving is for our intelligence is that we have a much wider range of people investing in much more caregiving. So not just mothers, but, my favorite, post-menopausal grandmothers, and fathers, older siblings, what are called alloparents, just people around who are helping to take care of the kids. And it's having that range of caregivers that actually seems to really help. And again, that should be a cue for how important this is to our ability to do all the other things we do, like be intelligent and have culture.
Melanie: If you just look at large language models, you might think we’re nowhere near anything like AGI. But there are other ways of training AI systems. Some researchers are trying to build AI models that do have an intrinsic drive to explore, rather than just consume human information.
Alison: So one of the things that's happened is that, quite understandably, the success of these large models has meant that everybody's focused on the large models. But in parallel, there's lots of work that's been going on in AI that is trying to get systems that look more like what we know that children are doing. And I think actually if you look at what's gone on in robotics, we're much closer to thinking about systems that look like they're learning the way that children do. And one of the really interesting developments in robotics has been the idea of building intrinsic motivation into the systems. So to have systems that aren't just trying to do whatever it is that you programmed them to do, like open up the door, but systems that are looking for novelty, that are curious, that are trying to maximize this value of empowerment, that are trying to find out all the range of things they could do that have consequences in the world. And I think at the moment, the LLMs are the thing that everyone's paying attention to, but I think that route is much more likely to be a route to really understanding a kind of intelligence that looks more like the intelligence that's in those beautiful little fuzzy heads. And I should say we're trying to do that. So we're collaborating with computer scientists at Berkeley who are exactly trying to see what would happen if we say, give an intrinsic reward for curiosity. What would happen if you actually had a system that was trying to learn in the way that the children are trying to learn?
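As a rough sketch of what “an intrinsic reward for curiosity” can look like in code, here is one simple, hypothetical version: a count-based novelty bonus added to the task reward, so states the agent has rarely visited are worth more. This is only one common formulation, not necessarily what Alison’s collaborators at Berkeley are building, and the toy environment and parameter names are invented for illustration.

```python
# Minimal, hypothetical sketch of intrinsic motivation: total reward is
# the external task reward plus a novelty bonus that shrinks as a state
# becomes familiar. Real curiosity-driven agents use richer novelty signals.
import random
from math import sqrt
from collections import Counter

visit_counts = Counter()

def novelty_bonus(state, scale=1.0):
    """Count-based bonus that decays as a state is revisited."""
    visit_counts[state] += 1
    return scale / sqrt(visit_counts[state])

def step(state, action):
    """Toy environment: a walk on the integers with a task reward at state 10."""
    next_state = state + (1 if action == "right" else -1)
    task_reward = 1.0 if next_state == 10 else 0.0
    return next_state, task_reward

state = 0
for t in range(20):
    action = random.choice(["left", "right"])
    state, task_reward = step(state, action)
    total_reward = task_reward + novelty_bonus(state)
    print(t, state, round(total_reward, 2))
```

An agent trained on this total reward, rather than the task reward alone, keeps getting paid for probing unfamiliar states, which is the spirit of the curiosity-driven systems Alison describes.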
Melanie: So are Alison and her team on their way to an AGI breakthrough? Despite all this, Alison is still skeptical.
Alison: I think it's just again, a category mistake to say we'll have something like artificial general intelligence, because we don't have natural general intelligence.
Melanie: In Alison’s view, we don’t have natural general intelligence because human intelligence is not really general. Human intelligence evolved to fit our very particular human needs. So, Alison likewise doesn’t think it makes sense to talk about machines with “general intelligence”, or machines that are more intelligent than humans. Instead —
Alison: What we'll have is a lot of systems that can do different things, that might be able to do amazing things, wonderful things, things that we can't do. But that kind of intuitive theory that there's this thing called intelligence that you could have more of or less of, I just don't think it fits anything that we know from cognitive science. It is striking how different the view of the people, not all the people, but some of the people who are also making billions of dollars out of doing AI, is from the view of the people who are actually studying biological intelligences. I mean, I think it's sincere, but it's still true.
Melanie: John suspects that there’s one thing that computers may never have: feelings.
John: It's very interesting that I always used pain as the example, in other words, what would it mean for a computer to feel pain? And what would it mean for a computer to understand a joke? So I'm very interested in these two things, we have this physical, emotional response. We laugh, we feel good, right? So when you understand a joke, where should the credit go? Should it go to understanding it? Or should it go to the laughter and the feeling that it evokes? And to my sort of chagrin or surprise or maybe not surprise, Daniel Dennett wrote a whole essay in one of his early books on why computers will never feel pain. He also wrote a whole book on humor. Right? So in other words, he understood the size of the mystery and the problem. And I agree with him, if I understood his pain essay correctly, and it's influential on what I'm going to write, I just don't know what it means for a computer to feel pain, be thirsty, be hungry, be jealous, have a good laugh. To me, it's a category error. Now, if thinking is the combination of feeling… and computing, then there's never going to be deliberative thought in a computer.
Abha: When we talked to John, he frequently referred to pain receptors as the example of how we humans feel with our bodies. But we wanted to know: what about the more abstract emotions, like joy, or jealousy, or grief? It’s one thing to stub your toe and feel pain radiate up from your foot. It’s another to feel pain during a romantic breakup, or to feel happy when seeing an old friend. We usually think of those as all in our heads, right?
John: You know, I'll say something kind of personal. A close friend of mine called me today to tell me… that his younger brother had been shot and killed in Baltimore. Okay. I don't want to be a downer. I'm saying it for a reason. And he was talking to me about the sheer overwhelming physicality of the grief that he was feeling. And I was thinking, what can I say with words to do anything about that pain? And the answer is nothing. Right? Other than just to try. But seeing that kind of grief and all that it entails, even more than seeing the patients that I've been looking after for 25 years, is what leads to a little bit of testiness on my part when one tends to downplay this incredible mixture of meaning and loss and memory and pain. Right? And to know that this is a human being who knows, forecasting into the future, that he'll never see this person again. It's not just now. Part of that pain is into the infinite future. Now, all I'm saying is we don't know what that glorious and sad amalgam is, but I'm not going to just dismiss it away and explain it away as some sort of peripheral computation that we will solve within a couple of weeks, months or years. Do you see? I find it just slightly enraging, actually. And I just feel that… as a doctor and as a friend, we need to know that we don't know how to think about these things yet. Right? I just don't know. And I am not convinced of anything yet. So I think that there is a link between physical pain and emotional pain, but I can tell you from the losses I felt, it's physical as much as it is cognitive. So grief, I don't know what it would mean for a computer to feel grief. I just don't know. I think we should respect the mystery.
Abha: So Melanie, I noticed that John and Alison are both a bit skeptical about today's approaches to AI. I mean, will it lead to anything like human intelligence? What do you think?
Melanie: Yeah, I think that today's approaches have some limitations. Alison put a lot of emphasis on the need for an agent to be actively interacting in the world as opposed to passively just receiving language input. And for an agent to have its own intrinsic motivation in order to be intelligent. Alison interestingly sees large language models more like libraries or databases than like intelligent agents. And I really loved her stone soup metaphor where her point is that all the important ingredients of large language models come from humans.
Abha: Yeah, it's such an interesting illustration because it sort of tells us everything that goes on behind the scene, you know, before we see the output that an LLM gives us. John seemed to think that full artificial general intelligence is impossible, even in principle. He said that comprehension requires feeling or the ability to feel one's own internal computations. And he didn't seem to see how computers could ever have such feelings.
Melanie: And I think most people in AI would disagree with John. Many people in AI don't even think that any kind of embodied interaction with the world is necessary. They'd argue that we shouldn't underestimate the power of language. In our next episode, we'll go deeper into the importance of this cultural technology, as Alison would put it. How does language help us learn and construct meaning? And what's the relationship between language and thinking?
Steve: You can be really good at language without having the ability to do the kind of sequential, multi-step reasoning that seems to characterize human thinking.
Abha: That’s next time, on Complexity.
Complexity is the official podcast of the Santa Fe Institute. This episode was produced by Katherine Moncure. Our theme song is by Mitch Mignano, and additional music from Blue Dot Sessions.
I’m Abha, thanks for listening.