AI, embodiment and neuroscience
From pushing an object to see if it falls, to talking to our friends, we learn about the world through our interactions with it. Some argue that the human experience depends on movement – it's our entire output, whether that is speech or getting where we are going.
No AI currently learns this way, and most AI systems have no body at all. We asked leading neuroscientists, all previous SWC seminar speakers, for their thoughts on the current rise of AI and what AI might be missing when it comes to embodiment.
Interviewees also discuss advances in AI and how these compare with what we know about the human brain, even though the two operate in such different contexts.
Purpose
"Some people I know, whose work I admire, have shared this perspective, and I tend to agree: We humans need purpose or agency in our lives. While large models like GPT-4, trained on vast amounts of human knowledge, are incredibly capable and can even generate creative solutions, they excel primarily at solving problems that can be verbalised. And they excel because they were literally trained on us!
Animals like mice or birds clearly possess purpose in their lives. They exhibit remarkable intelligence; for example, crows can use tools creatively to solve novel problems they've never encountered before. One can find examples of this on YouTube, such as crows using tools to retrieve food or even crafting hooks. These animals have goals, whether it's obtaining food or engaging in play. This is what I think makes them intelligent in a special way. In contrast, computers are essentially algorithms that we execute. They lack inherent goals or agency.
Part of me believes that having a physical body is also crucial for artificial human-like general intelligence. But on the other hand, if you exist in a super realistic digital realm, you can still find purpose there—whether it's acquiring knowledge or gaining power, as depicted in movies like Terminator or The Matrix, where entities seek to dominate or control the world.
Perhaps in such digital environments, one might not need a physical body, but I think one would still need goals and a world to interact with. The need for a body is something that I often think about when people ask me during public talks whether consciousness alone can be uploaded. I believe that our physical bodies play a significant role in shaping our experiences and emotions. For example, it's remarkable how our physical state can influence our mental state, our emotional well-being, and even our decisions. If we were to transfer our consciousness into a purely digital realm, we might lose this vital aspect of our existence and stop being “ourselves”, or at least the selves we are with a body."
Dr Juan Alvaro Gallego, Imperial College London
AI can’t generalise
"I think AI is certainly missing embodiment. It lacks many fundamental features crucial for understanding an environment, which could be addressed through embodiment. Having multiple modalities of sensing is essential. We learn about the world through different senses, such as vision and touch, which contribute to our shared understanding of the environment.
Training models solely on text or images misses out on this deeper understanding of how these modalities interact. Embodiment, such as placing neural networks in robots, could greatly enhance their understanding by allowing them to navigate and learn from real-world experiences.
When it comes to meaning, AI algorithms primarily identify statistical regularities, but they may struggle to generalise well without embodiment. Another key difference between humans/animals and AI is the amount of training data required for good performance.
Unlike AI, we don't need vast amounts of data to learn language or understand the world around us. This raises interesting questions about how to make AI more efficient and relatable to animal brains. Insights from neuroscience, including network construction and information encoding, will play a vital role in achieving this goal."
Dr Matthew Whiteway, Columbia University
"While AI systems are trained on vast amounts of human history and possess extensive knowledge, they still lack flexibility and struggle to mimic real social interactions effectively. Despite their knowledge base, AI systems often fail to utilise this information appropriately, highlighting a significant gap in their ability to interact like humans.
The question arises: how generative can AI truly be? While advanced AI systems can generate content, they still struggle to reach the level of human cognition and language proficiency. Studies comparing human performance to AI performance in high-level cognitive tasks reveal fundamental limitations in current AI capabilities. There is ongoing debate surrounding whether deep neural networks accurately model the functioning of the human brain or if they represent an oversimplification.
Developmentally, there are disparities between how humans learn language and how AI systems acquire linguistic abilities. Humans demonstrate a faster rate of learning and adaptability compared to AI systems. Thus, while AI output may resemble human behaviour superficially, the underlying processes and mechanisms may differ significantly, suggesting that current AI models may not fully capture the intricacies of human cognition and language development."
Professor Steve Chang, Yale University
"It's difficult to make definitive conclusions about whether AI models accurately represent the brain, as there are several factors to consider. Firstly, AI models are often trained on specific tasks and environments, such as identifying the content of images or driving on roads, which may not fully capture the adaptability and versatility of human or animal behaviour in novel environments.
Humans, for example, can navigate unfamiliar terrain without specific training, showcasing our ability to generalise knowledge across varied contexts.
Additionally, AI models are typically engineered with specific goals in mind, such as driving safely or identifying objects, which may result in representations that diverge from those observed in real brains.
From a neuroscience perspective, it's essential to examine how closely AI representations align with biological systems. While some similarities may exist, such as in basic functionalities like object recognition, there are also significant differences, particularly in sensitivity to noise and other factors.
Therefore, while AI is a valuable engineering tool, its utility as a model for understanding the brain is still unclear, in my mind.
Anecdotal evidence from interactions with AI developers underscores the importance of real-world neuroscience research. For example, a conversation I had with a CTO of an AI company revealed that their spatial-navigation model assumed that certain neuronal types, like place cells, would exhibit specific characteristics. However, when presented with our data showing multi-field, multi-scale representations in the brain, they realised their assumptions were inaccurate, highlighting the importance of real-world neuroscience data in informing AI development.
In summary, while AI has great strengths as an engineering tool, its fidelity as a model for understanding the brain remains uncertain. Real-world neuroscience research is crucial for providing relevant data and insights that can inform AI development and ensure that AI models better reflect the complexities of biological systems."
Professor Nachum Ulanovsky, Weizmann Institute of Science
Constraints mean AI and brains are fundamentally different
"I think it’s great to have all this excitement in AI development and potential applications for neuroscience. It’s great to have this new technology. But do we see a difference between artificial intelligence and biological intelligence? Absolutely. It’s implemented with different constraints. So just because there is a new algorithm in AI, that doesn’t mean the brain functions the same way. That’s probably my prediction at this time. But I guess time will tell.
This discussion also reminds me – and it may in part reflect my ignorance of the field – that AI is mostly an engineering problem, where you want to develop more efficient, better artificial intelligence to solve a particular problem. This doesn’t mean the solution is the same as the one that came about during evolution.
An analogy would be an aeroplane – for long-haul flights you need a big jet that flies across continents. Planes can carry so much weight going from one continent to another in about 10 hours. They are probably better than any bird that exists in nature, but that doesn’t mean that is the way birds implement their flight. They serve a different purpose. The purpose of planes is to carry passengers from one country to another and the purpose of birds was to adapt to the environment through evolution. Maybe that reflects the difference between AI and biological intelligence. The constraints are different."
"I saw a talk by Wei Ji Ma and he said something non-controversial in many ways, that AI networks are not trying to mimic human learning or memory.
They’re trying to optimise performance on a given task. But those tasks are rarely, if ever, embodied in motor output. Human and other biological brains, in contrast, do learn in an embodied way, and some people argue the whole human experience depends upon motor actions. Whether it’s through speaking or moving, our way of interacting with the world is through those actions.
That tight linking between the brain and motor output or embodiment of those cognitive processes is essential for us to understand how we learn and remember, how we adaptively behave. I’m by no means an expert in AI – I know there are a lot of people thinking about it – but that is not the core goal right now of some of the AI research. I think that will lead to differences in interpretation and understanding."
Dr Kishore Kuchibhotla, Johns Hopkins University
The consequences of embodied AI
"It's fascinating how science fiction is becoming reality. I recently saw a new film about where AI was embodied, The Creator (Director, Gareth Edwards), and it portrayed a world where Asian countries embraced AI, while the USA saw AI as a threat, leading to a world war. The AI creatures in the film were living alongside humans in a friendly way, yet there was tension. It felt potentially realistic, even though it was science fiction.
The Creator made me think about the possibility of embodied AI becoming a reality. It's something we have seen glimpses of in films before, like the 1982 film Blade Runner (Director, Ridley Scott), where androids live among humans. I do believe we'll have embodied AI one day. It's both fascinating and scary to think about, but it's not impossible. The old Blade Runner movie is still relevant and worth watching, capturing these themes in an intriguing way.
There are many technical and ethical issues to overcome. To operate in a humanoid form, AI will have to use a lot more data than just vision – for example, proprioceptive data about body position. Of course, the ethical risk is that AI operating in a form like the human body could make mistakes or misinterpretations, and the consequences could be lethal."
Professor William Wisden, Imperial College London
Similarities and differences between artificial and biological intelligence
"It’s a really exciting time to think about the implications of AI in many contexts. If we focus on their implications on neuroscience, there are a number of interesting things we can say about it. Much neuroscience research, including my lab’s, has been inspired by trying to identify the similarities and differences between the brain and AI. For instance, we have a paper that is just about to come out where we compared and found striking similarities between the mouse brain learning to perform a particular task and an AI agent trained to perform a very similar task. This gives us a starting point and inspiration to try and study how the brain does certain tasks by looking at AI that does a very similar task.
AI has the great benefit of being easy to manipulate. So the experiments are much easier. We can manipulate some aspects of AI and see what happens to the function of the AI. That can inspire different animal experiments in neuroscience.
On a similar note, one opportunity offered by AI that can mimic the brain is the ability to look at the mechanism. A lot of the time, AI is a black box. But there is also effort going into understanding how the connectivity of nodes within the AI might underlie its complex functions. These are the types of things that we are trying to understand in the brain. But in AI, it’s much easier to open it up and find information about its organisation and structure. So we have the opportunity to really understand the mechanism there. You can do all the dream experiments in AI that are not possible in animals.
It’s also very interesting to think about the differences between AI and the brain, especially the human brain. For instance, it’s well known that the human brain is very good at learning over long timescales. You integrate your experiences over many years to develop a general knowledge set. This is sometimes called continual learning.
But a lot of very specific AI agents have been trained to perform very specific tasks. When they are trying to learn something new, they tend to forget the original task – in what is called catastrophic forgetting. What is special about the brain is that it seems to be really good at balancing the act of learning new things without forgetting old things. This is just one example of the potential differences between AI and the brain that can inspire different investigations in neuroscience."
Dr Takaki Komiyama, University of California San Diego
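The catastrophic forgetting described above is easy to reproduce even in the simplest models. The sketch below is a purely illustrative toy example on synthetic data (a two-parameter logistic-regression model, far simpler than the networks discussed in these interviews): the same weights are trained on one task and then on a second, conflicting task, and accuracy on the first task collapses because nothing in plain gradient descent protects what was learned earlier.

import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true, n=500):
    # Toy binary classification task whose labels depend on a single direction
    X = rng.normal(size=(n, 2))
    y = (X @ w_true > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.5, steps=200):
    # Plain gradient descent on the logistic loss; nothing protects old knowledge
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == y)

# Two tasks whose decision boundaries conflict
task_a = make_task(np.array([1.0, 0.0]))
task_b = make_task(np.array([0.0, 1.0]))

w = np.zeros(2)
w = train(w, *task_a)
print("task A accuracy after learning A:", accuracy(w, *task_a))  # close to 1.0

w = train(w, *task_b)
print("task A accuracy after learning B:", accuracy(w, *task_a))  # drops towards chance
print("task B accuracy after learning B:", accuracy(w, *task_b))  # close to 1.0

Continual-learning research, inspired in part by the brain's ability to balance new and old knowledge, looks for mechanisms that protect the parameters supporting earlier tasks while new ones are learned.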
"There's a recent trend in using deep neural networks as models for complex brain processes, and it's been quite exciting to witness. This trend gained momentum after the 2012 publication of AlexNet, the first deep convolutional neural network for natural image classification. Prior to this breakthrough, robust image classification and object detection remained uniquely human, beyond the reach of machines. But AlexNet demonstrated that machines could perform these tasks effectively.
The question then arose: could we use these artificial systems as models for complex brain processes? Unlike biological brains, artificial systems offer the advantage of being readily accessible to low-level inspection and inquiry. Researchers can analyse the internal representations of images as they traverse the visual processing pathway, examine connectivity patterns, and conduct experiments that are beyond the current capabilities of our experimental technologies.
One notable study compared the responses of an artificial system with biological neural responses to the same set of images. Surprisingly, they found that the responses were similar, suggesting that artificial systems could indeed perform complex tasks and produce representations that closely resemble those of biological brains, like those of macaques. One was a linear transformation of the other. This was an “aha” moment.
But challenges remain. Different artificial architectures can produce similar representations, raising questions about the level of detail at which meaningful insights can be gleaned about the neurobiology. Researchers are grappling with this problem and striving to identify falsifiable, testable hypotheses that can be translated from artificial systems to animal preparations.
This is all to say, I think modern AI does already look and act more like the brain than you think, at least for some subset of information processing tasks. But obviously, the brain does more than just image classification. And the way in which the brain learns, including how little labelled data it requires for learning effectively, is still quite different from current artificial neural network implementations. I am partial to the idea that more powerful and general AI will need to be trained on large, unlabelled, multimodal datasets."
Dr Timothy Dunn, Duke University
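The "linear transformation" comparison described above is typically made by fitting a regularised linear map from a model layer's activations to recorded neural responses and asking how much held-out variance it explains. The sketch below uses synthetic stand-ins for both the model features and the neural data (the sizes and the generated data are illustrative assumptions, not results from the studies mentioned), so it only demonstrates the analysis itself.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: rows are images, columns are model units or recorded neurons
n_images, n_units, n_neurons = 200, 100, 40
model_features = rng.normal(size=(n_images, n_units))  # e.g. activations of one network layer
true_map = rng.normal(size=(n_units, n_neurons)) * 0.3
neural_responses = model_features @ true_map + rng.normal(scale=0.5, size=(n_images, n_neurons))

# Split the images into a fitting set and a held-out set
fit_idx, test_idx = np.arange(150), np.arange(150, 200)
X_fit, Y_fit = model_features[fit_idx], neural_responses[fit_idx]

# Ridge-regularised linear map from model features to every neuron's response
lam = 1.0
W = np.linalg.solve(X_fit.T @ X_fit + lam * np.eye(n_units), X_fit.T @ Y_fit)

# Held-out explained variance per neuron: high values mean the neural responses
# are well approximated by a linear transformation of the model's representation
pred = model_features[test_idx] @ W
resid = neural_responses[test_idx] - pred
r2 = 1.0 - resid.var(axis=0) / neural_responses[test_idx].var(axis=0)
print("median held-out R^2 across neurons:", round(float(np.median(r2)), 3))

Here the synthetic neural data are constructed to be a noisy linear function of the model features, so the explained variance is high by design; with real recordings, the size of that number is exactly what is at stake in the comparisons described above.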
Banner image by Gil Costa and Joana Carvalho