What is the relationship between a physical, embodied experience and awareness of what it is like to be? An SFI working group explores the possibility of embodied agents gaining this kind of awareness. (image: Anton Maksimov 5642.su/Unsplash)

Generative AI has garnered widespread attention over the last year with the viral success of ChatGPT. Those selling the hype and doomers alike suggest that these AI models will soon be conscious or self-aware. Cognitive scientists, however, are not convinced.

“A key feature that is currently missing in AI is the causal understanding of the world that we as humans have,” says SFI External Professor Nihat Ay (Hamburg University of Technology). Unlike current AI models, humans understand the cause and effect of their interactions with the physical world.

What separates the two is embodied, or extended, intelligence: the human brain offloads some of its computation to the body and the environment. It shows up, among other ways, as the feeling in your gut, and it is why some people think better while walking.

Embodied agents bring this kind of intelligence to the world of AI. Think of robots or virtual assistants that interact with the physical world. As with humans, the mental constructs of these agents are shaped by their bodies.

But can these technologies become conscious one day?

Ay and his graduate students Carlotta Langer and Jesse van Oostrum are organizing an SFI working group called “The Active Self,” scheduled for March 11–15, which will focus on the possibility of embodied agents gaining an awareness of what it is like to be. The meeting is inspired by recent research on the free-energy principle, a concept that posits that the brain uses internal models to reduce uncertainty in any situation.

According to the principle, an embodied agent leverages its understanding of its own body to select the best action from a pool of choices. It considers the physical consequences of its actions, including the consequences for itself. During the working group, researchers will explore the link between how embodied agents make choices and what subjective experiences they can gain, including a sense of self.
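As a loose illustration only (not the working group's actual formalism), the sketch below shows how an agent with a simple internal model of its own position might score candidate actions by their predicted uncertainty plus their predicted distance from a preferred outcome, and then act to minimize that score. The toy world, the scoring function, and all parameter values are assumptions made for this example.

```python
import numpy as np

# Toy sketch of free-energy-style action selection. The agent believes it
# occupies one of N positions in a tiny world and can move left, stay, or
# move right. Everything here is illustrative, not a reference implementation.

N_STATES = 5                      # possible body positions
ACTIONS = {"left": -1, "stay": 0, "right": 1}
GOAL = 4                          # the observation the agent "prefers"

# Internal model: a belief distribution over the agent's own position.
belief = np.full(N_STATES, 1.0 / N_STATES)

def predict(belief, shift):
    """Predict the belief after moving by `shift` (clipped at the walls)."""
    predicted = np.zeros_like(belief)
    for s, p in enumerate(belief):
        s_next = min(max(s + shift, 0), N_STATES - 1)
        predicted[s_next] += p
    return predicted

def expected_free_energy(predicted):
    """Toy score: remaining uncertainty (entropy) plus expected distance from the goal."""
    entropy = -np.sum(predicted * np.log(predicted + 1e-12))
    expected_cost = np.sum(predicted * np.abs(np.arange(N_STATES) - GOAL))
    return entropy + expected_cost

# The agent picks the action whose predicted bodily consequences score lowest.
scores = {name: expected_free_energy(predict(belief, d)) for name, d in ACTIONS.items()}
best_action = min(scores, key=scores.get)
print(scores, "->", best_action)
```

Run as-is, the agent chooses "right," because moving toward the preferred position both reduces its uncertainty about where its body will end up and brings the predicted outcome closer to what it expects.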

Ay says that the behavioral patterns resulting from embodied intelligence are more interesting than those traditionally studied in reinforcement learning, a machine-learning approach in which an agent learns by trial and error, typically in a highly simplified world. This could pave the way for artificial systems whose intelligence is closer to human intelligence than that of existing AI models.