Will AI ever be as flexible as human intelligence?

14 March 2023

By April Cashin-Garbutt and Hyewon Kim

From search engines to how we work, artificial intelligence is on the rise in many aspects of our daily lives. At SWC, we are interested in how the brain produces flexible behaviours in response to changing environments. Do leading neuroscientists think AI will ever be as flexible as human intelligence? We spoke with 17 SWC seminar speakers to find out. Here are their thoughts.

1. No

“I don’t think AI will ever be as flexible as humans, because it is made by humans. We are trying to replicate what we know we can do but don’t know how it works. If we don’t understand everything first, I doubt we can put it outside ourselves.” Teresa Guillamón Vivancos, Instituto de Neurociencias de Alicante

2. Not yet…

“Certainly not at the moment. A rapidly developing area is the use of artificial neural network models that are based on tasks that the brain has to solve. Deep learning using model architectures that are informed by the anatomy and physiology of the brain is a key part of some of those developments. These studies have been, and undoubtedly will continue to be, extremely useful in identifying principles that may be relevant to what happens in the brain and which can then be tested experimentally.

For example, deep neural networks have successfully reproduced the way receptive fields change from the retina in the eye all the way to the visual cortex. They are useful because it is much easier to tinker with the artificial network to understand what happens when you take out certain components than it is in the real brain.

But in terms of their capacity to match the human brain, then there is a clear difference. For example, artificial neural networks trained to recognise speech are reasonably good, but nowhere near as good as the real brain. This tells us they are missing key elements of what the brain does.

Whether that is just because we need to learn much more about what happens in the brain and to extend the artificial networks accordingly, or whether they simply operate on different principles that are not fully representative of what happens in the brain, is a matter of debate at the moment.

But there’s no question that artificial devices don’t replicate the capacity of the human brain to understand speech in different listening conditions. Some of the principles of adaptation that we and others are studying will undoubtedly be important to implement in AI devices in order to improve their flexibility.” Andrew King, University of Oxford
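
As a concrete illustration of the kind of ‘tinkering’ King describes – taking out a component of an artificial network and measuring the downstream effect – a minimal sketch might look like the following. This example is not from the interview; it assumes PyTorch, and the tiny network and random input are toy placeholders rather than a model of the visual system.

```python
# Minimal sketch (not from the interview): ablate one channel of a small
# artificial network and measure how the downstream responses change.
# Assumes PyTorch; the network and input are toy placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny two-stage convolutional network standing in for a visual hierarchy.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # "early" stage
    nn.ReLU(),
    nn.Conv2d(8, 4, kernel_size=3, padding=1),  # "later" stage
    nn.ReLU(),
)
model.eval()

x = torch.randn(1, 1, 28, 28)  # a stand-in "image"

with torch.no_grad():
    baseline = model(x)

    # "Take out" one component: zero the weights and bias of a single
    # early-stage channel, which is trivial in silico.
    model[0].weight[2].zero_()
    model[0].bias[2].zero_()
    ablated = model(x)

# Quantify the effect of the ablation on later-stage responses.
effect = (baseline - ablated).abs().mean().item()
print(f"Mean change in later-stage responses after ablation: {effect:.4f}")
```

In a real study the ablated network would be compared against recorded neural data; even this toy version shows why such manipulations are easy in artificial networks and far harder in the brain.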

3. Potentially in the future…

“I cannot rule out that possibility. But remember, we always programme AI. Let’s say we observe a human and look at all the variations of what a human does with its motor system. I can imagine there will be a robot that can do the same. But we are always the ones that programme the robot to do whatever it does. So whether an AI robot can create another AI robot with its own design, I would highly doubt.” Botond Roska, Institute of Molecular and Clinical Ophthalmology (Basel)

“I think so. What I think is going to be more difficult will be for AI to have motivation. So far, we can only tell an AI model to do something. But for me, the singularity moment that will create a point of no return or define a very different concept of AI will be when we don’t need to tell the AI what we want it to do.

For now, even if we create the best AI that can do the most amazing things, we are still controlling it, so I don’t see so much of a difference from simply smarter computer programmes. But the moment when AI is motivated on its own, I will be scared!” Nuria Vendrell-Llopis, University of California, Berkeley

4. Why not!

“My intuition tells me not for a very long time. But in principle why not! The reason I say not for a very long time is that one of the things I recognise in people is that we have many layers of biology that have a role to play. I don’t think these biological variables can be fully abstracted away. The genomic changes, epigenetic changes, which molecules are translated into proteins and which ones are not, the repressing mechanisms and so on, inject noise and latent variables and provide for all kinds of unpredictable but nonetheless experience-based consequences in a complex dynamical system that AI would be in some sense foolish to try to mimic. In fact, the whole point of AI is to abstract away all of that stuff.

My sense is that all of that dirty cell biology that we try to abstract away has a real and consequential role to play in the operations of a system. Maybe you won’t need it to drive a car, but it might help you to understand how to be a better political leader, or to educate people to be better citizens. There is real biology in there that is meaningful and it isn’t easy to abstract away.” André Fenton, New York University

“Yes.” Dileep George, DeepMind and Vicarious AI

“Yes. I think it is possible to completely replicate everything that a human does with artificial intelligence. We are not there yet but I think we will get there some day.” Dmitriy Aronov, Columbia University

5. Even if it is… would that be useful?

“I find this a very interesting topic. When deep neural networks became popular, I played around with them a lot and even built one to annotate my images for me! But I always had to correct it afterwards. Knowing the recent advances in AI, like AI systems learning from each other and deep reinforcement learning, I think there is a possibility. I wouldn’t rule it out because the scalability of this appears to be near limitless. At some point, there will be computational boundaries, but other than that, AI could extract statistical regularities in the world with much more precision because it can get much more experience than humans.

Regarding flexibility, an artificial general intelligence that can solve any task thrown at it should be possible. For AI, it would certainly be possible to be better than a human at certain tasks, or at multiple tasks at the same time. I do not necessarily know if this is really needed or what we should want, however. What I find very interesting, for instance, is AI that can solve protein folding. But whether that same AI later on needs to drive me home – I’m not so sure that’s needed, although it could be fun of course.

Therefore, whether it would be possible – probably; whether it would always be useful – I don’t know. One would also need to consider that a smart protein-folding machine that could also drive a car would, if it didn’t have wheels, still always stand still. This leads you to think in the direction of robotics, androids and so on, and it maybe gets too science-fictional. Anyway, taking flexibility to the extreme might create a lot of practical problems, making it probably not really the right solution. However, to a certain degree it would be great to see and I think certainly feasible.” Pieter Goltstein, Max Planck Institute for Biological Intelligence (Munich)

“I was talking about this in class recently and it’s something we’ve had conversations about with the US Department of Defense. The increasing presence of robots and automated agents on battlefields, and in other realms of life, creates important questions for the coming decades about human-AI or human-robot teaming. How can we create robots that will respond to humans in a way that generates a sense of connection and trust? If synchrony is one of the mechanisms, what is the equivalent that you have to create in a robot to synchronise with a human? Or is it enough to mimic it, to move together, so that the human feels like they’re in sync with an autonomous agent?” Michael Platt, University of Pennsylvania

“Do I think AI will ever become flexible enough to engage in what appears to be decision-making? Absolutely. I have seen groups use machine learning to try to simulate neural population activity, especially in sensory regions. They do a fantastic job of simulating spiking outputs in a visual area – but often with very little mechanistic understanding of how those spikes got there. I think there’s going to be a ceiling of performance in these models. Until we put the entire puzzle together with cell types, dendritic processing, and all the stuff that systems and cellular neuroscientists have been thinking about for decades, we will be chasing the smoke without understanding the fire.” Michael Long, New York University

“At this time, AI is only as good as the training sets we provide it. Practically, AI is extremely useful to scientists as a method to study the brain or behaviour. In its use for society in general, I have a much more negative view. Even if AI were as flexible as human beings, I’m not entirely sure that it will be used in a way that will help us. But that’s a different question – it’s not a scientific question, but more of a political or sociological one.

The way in which I see this technology evolve does not make me very optimistic, sadly. It seems that companies that develop AI are not thinking enough about the social consequences of these tools, which you see politically in the world already. I’m just expressing my worry.” Gilles Laurent, Max Planck Institute for Brain Research

6. AI may be more flexible than humans

“I actually think they can be better. This is because the human brain, through evolution, has certain limitations when it comes to adaptive control of action. For example, the physical structure already determines things like not being able to run as fast as a leopard. It’s simply not possible. Human muscle, the heart and other organs were not built to do that. So my argument is that through evolution, humans adapted to particular environments for human societies, which could be seen as intelligent from one perspective, but has limitations from another.

If we were able to figure out how the brain learns and adaptively controls actions, then we could build the same intelligence as the brain in machines, and at the same time, get rid of those physical limitations that we humans have. This includes how much oxygen our brains need, or how fast our heart can beat, or how much weight our bones can sustain. We can get rid of all this. Even right now, AlphaGo has beaten the world’s best Go masters. AlphaGo never gets tired, but we humans do. If you play for a sustained period of time, you are bound to get mentally and physically tired, so your performance declines. The AI’s performance does not.

That is why I totally believe AI will be as intelligent and flexible as humans if not more. The key is probably not in computer science, but I think more in neuroscience. Obviously, I am biased. The challenge is that we first have to understand how the brain does its job, then, we can surpass it. Now, the reality is that we actually do not know enough, or in fact know very little about how the brain can be so flexible and reliable at the same time, hence AI is still somehow too ‘artificial’.” Xin Jin, East China Normal University

“I think in certain fields it probably will be at least as flexible, if not more. I suspect the thing that will be complicated for AI is to regulate when it does and doesn’t need to be flexible across different domains.

For example, think about those tales of famous mathematicians that study one problem for decades. You may think – should they have given up? Obviously, the ones that you hear about are those that eventually succeeded and got a prize. But surely, any other kind of rational system would have given up?

I wonder whether AI would be able to have that ability to regulate its degree of flexibility in different situations and across different domains.” Mark Walton, University of Oxford

7. Important to consider energy costs

“I think so. But one of the key components in whether we’ll be able to effectively use that AI is energy cost. We will bump up against this in the next few years because one thing that AI does to improve performance is just to throw more training and more examples at the problem. All of that is incredibly energy-intensive. We can improve things, but as the questions get more specific, and you need multiple algorithms – solving language recognition, pattern recognition, trying to devise smart heating systems, etc. – you need to train these kinds of networks. Eventually, we will run up against how much energy we consume for all these myriad processes. That’s one of the major limits of AI right now – how to make these systems energy efficient, which is a problem the brain has solved.

Another problem we’ll run into in AI is that these networks and systems might break. They will not fix themselves. There are so many different components, and some interface or mechanical component may break. There might be some update, and your keyboard doesn’t work. It doesn’t fix itself. Biological systems, on the other hand, are amazing because they have their own instructions not only to create themselves, but also to repair themselves! That is another milestone that AI should reach. AI might be able to come up with digital workarounds for things, but I don’t know if they’re going to be able to come up with a physical solution to repairing components that are, for example, damaged by the weather.” Alison Barth, Carnegie Mellon University

“Something that I think is a really interesting idea is whether computers would benefit from a sleep-like state. One of the big problems with computing is power. One of the things that is preventing miniaturisation of your phone is that it still needs a big battery. It takes so much energy to mine bitcoin. So, a big challenge for computation is the energy required. I know that some people in the world of computers are starting to look to the brain for energy efficiency tips, because the brain does lots of highly complex computations with a lot less energy. And it has things you would not expect, like synaptic release failures, and lower firing rates than would be expected by computational drive alone.

So, if computing can take those kinds of tips from the brain, why not take other tips? Firstly, we would need to know what sleep is doing, but maybe there is an AI model that sleeps – what is that doing to the model? Can you make an AI model that spontaneously sleeps and see how it reorganises itself? Maybe overall we could reduce the power required for computing this way. This is something I have a deep personal curiosity about.” Julia Harris, The Francis Crick Institute

8. Hard to know

“Maybe if we pair some of the advances that we’re seeing in AI with advances in computing architectures, we’ll get there. But it seems like we’re still a very long way away. There are so many biological computations occurring at different scales within the brain that scientists haven’t even really begun to study yet. So, it’s hard to know. I think we’ll be able to create computers that are very smart. But will they ever be as intelligent as humans? I don’t know – maybe smarter than me, but I’m not sure about humanity more generally.” Andrew Alexander, Boston University

“Do I think AI will ever behave as flexibly as humans? The sceptic in me says probably not. Some years ago, I would have said AI doesn’t have the creative power of a human – AI systems are designed to only give back what they have been designed for. But there have been lots of developments in AI in the last few years, and we have been seeing AI produce images or texts that strongly remind us of the output of a creative process. I honestly have no idea, also because we humans are ourselves a bunch of physical systems. This question has very deep consequences – either we think our mind is something else beyond just the physical elements, or, if we think it is indeed just that, then once we understand how the physical elements work there is no reason why we should not implement it in an AI system. At the same time, I know that we are never going to understand the mind completely – so this will never allow us to reproduce the neural code. All in all, I think this is a controversial question.” Eleonora Russo, Johannes Gutenberg University of Mainz (Germany)