
Do parrots really speak? Lessons in lexical access from budgerigars
An interview with Professor Michael A. Long, NYU, conducted by April Cashin-Garbutt
Have you ever struggled to find the right word? Perhaps it was on the tip of your tongue, or your mind went blank altogether. This momentary lapse highlights a process known as ‘lexical access’ – the brain’s mechanism for retrieving words. Professor Michael A. Long is striving to understand how this process works. In his lab at the NYU Grossman School of Medicine, Long studies a type of parrot known as the budgerigar. But what can parrots really teach us about human communication?
Following his captivating SWC seminar on “How parrots speak,” Professor Long shares further insights from his research, revealing how these birds not only mimic sounds but engage in meaningful vocal exchanges, develop cultural dialects, and offer a surprisingly elegant neural model for studying speech and language.
How much do you think we can learn about human communication from studying parrots?
The dogma in the field has long been that language is unique to humans. But over the past few years of studying the parrot brain and analysing their behaviour quantitatively, I’ve seen evidence of very high-level computations happening in these animals.
I come to this from two different pillars that are quite far apart. On one side is the human brain, which generates speech and language; on the other is the zebra finch, which invests enormous effort in learning just one song in its entire life.
Parrots, by contrast, seem to effortlessly learn hundreds or even thousands of distinct vocal elements. That’s incredibly exciting. What’s even more compelling is how their brains manage this: the strategy is surprisingly elegant compared to the zebra finch’s.
In humans, I’ve been particularly fascinated by lexical access – the process of finding a word. We’ve identified a brain region that we call the interactive planning hub, which we think is critical for conversational exchange. The area becomes active immediately before someone responds – during the planning process – and we think it is important for finding a word.
This includes what is traditionally thought to be Broca’s region and the middle frontal gyrus, and it’s where we see intense electrical activity during word retrieval and speech planning.
When we disrupt this region with electrical stimulation, people often say, “My mind went blank.” That’s a powerful phenomenon and understanding it could help us address conditions like autism, aphasia, and age-related dementia. Studying the parrot brain might give us a model for lexical access that may also help us tackle these challenges.
Parrots are well-known for mimicking sounds, but do they really “speak”?
Yes, I think they do, but of course we are still at the outset of testing this empirically. There are a few criteria I’d argue are necessary for speech. For instance, it must be communicative. Parrots appear to use their vocalisations in an intensely social manner, and they produce distinct vocal objects that seem to carry meaning. We’ve identified brain regions that assign each vocal object a unique neural signature.
Now we’re working to understand what these sounds signify in the environment. Parrots often name specific objects, and we’re filming them to see if we can decipher what they’re referring to. We’re also studying how parrots interact with other birds, and even robots, to see if they use these sounds appropriately in context. It looks like they do. They have unique conversational exchanges, not just repetitive mimicry.

Photo of a budgerigar with a robot. Credit: Andrew Bahle, postdoc in the Long lab.
If parrots have language, do they also have culture?
Absolutely. We’re not allowed to house parrots alone as they’re highly social. In cages of five to seven birds, each group quickly develops its own dialect. You can discriminate which bird came from which cage based on its “accent” – like an Edinburgh versus a Cockney accent. It’s a clear example of cultural transmission.
How do you overcome the noisy budgerigar environments to study this?
Noisy environments are exactly where they thrive! Parrots want to be at a cocktail party, surrounded by others. If they’re alone, they rarely make a sound.
To isolate the sound from individual birds, we use piezoelectric microphones attached directly to the birds. These mics do an incredible job of filtering out background noise. I remember one time a student was playing loud rock music nearby, and the mic still picked up only the bird’s voice.
Three male budgerigars communicating with one another. Credit: Andrew Bahle and Zetian Yang, postdocs in the Long lab.
Were you surprised to find that the forebrain motor map can form ‘words’ in budgerigars?
Yes, completely. It just seemed to be such an elegant solution.
In zebra finches, the neural code is quite degenerate, which means we struggle to predict what they’re singing based on brain activity. But in the equivalent area in parrots, specific notes like B-flat or E correspond to distinct neural ensemble activity. Even with access to the activity of just five neurons, we’d be able to reconstruct the parrot’s song.
This is amazing and mirrors what our group and others have seen in humans, where decoding speech from motor cortex activity is helping restore communication for people who’ve lost their voice. That decoded activity can drive a prosthetic device, an approach that groups like Eddie Chang, Krishna Shenoy and others have pioneered. Seeing similar capabilities in parrots is truly exciting, and the link to human speech processes may be more profound than in any species studied to date.
Do you have plans to explore this further in any other species?
We’re now studying the cowbird, which sings seasonally and performs elaborate wing displays during its songs – essentially song and dance at the same time. Interestingly, the brain regions that control song don’t seem to encode the dance, suggesting separate systems.
This raises questions about how humans coordinate speech with hand gestures. Communication isn’t just vocal – it’s multimodal. For example, it is often difficult to know the intentions of someone on the telephone as you can’t see their body language.
The cowbird is going to help us study how things get coordinated at a millisecond timescale. Behaviours – including those critical for social communication – often require the coordination of many different muscular effectors coming together.
We are also continuing to focus on the human brain, which keeps us grounded in the world of clinical importance. Nearly 15% of the population suffers from communication disorders.
I have two nephews with autism and a father-in-law who lost his voice after a stroke. I’m driven to find out how we can try to build this back.
What’s one misconception people often have about animal communication that your research helps clarify?
People often don’t appreciate the role of learning in communication. These animals are trying to form memories of past experiences and use them to guide future behaviours. Animal vocalisations are not just hardwired; vocal learning is a labour-intensive, memory-driven process. Birds are especially powerful models because you can observe the learning process as it happens.
About Professor Michael A. Long
Michael A. Long received his Ph.D. from the Department of Neuroscience at Brown University where he used electrophysiology to explore the function of electrical synapses across a variety of neural circuits, including the hypothalamus and the neocortex under the tutelage of Barry Connors. For his postdoctoral work, Michael worked with Michale Fee, a former high energy physicist turned neurobiologist. During this time, Michael established novel methods of manipulating and recording the dynamics within a song-related premotor region of the songbird brain. After starting his own laboratory in 2010, Michael has continued this spirit of innovation by developing approaches that adapt modern imaging and electrophysiological tools to the study of neural circuits underlying the production of skilled motor behaviours across a variety of model systems (e.g., songbirds, parrots, nontraditional rodent species) as well as human speech.
Banner image by Christopher Auger-Dominguez, commissioned by the Long lab.