From music to neuroscience: decoding the brain through birdsong

25 June 2025

Why do birds sing - and how do they learn to do it? For decades, researchers have turned to songbirds to explore the mysteries of vocal learning, a rare and complex behaviour shared with humans. At the centre of this work is the question of how brains transform sound into behaviour, and how social and sensory cues shape that process. 

In this interview, SWC Seminar speaker Richard Mooney, Professor at the Duke University School of Medicine, reflects on a path that led him from classical guitar to cutting-edge auditory research, and explores what birdsong can teach us about learning, communication, and the intricate relationship between sound and behaviour.

What drew you to study the neural basis of communication, especially birdsong?

I grew up really fascinated with natural history and biology; I spent a lot of time outdoors fishing and collecting butterflies. And at the same time, I just loved music. My family is pretty musical. Classical guitar was the formal focus, but I listened to a lot of music. 

I studied biology and evolution, but I knew nothing about how our nervous system works, and that motivated me to try to understand it better. By the time I got to college, I became very intrigued by how we hear and the biological basis of hearing. 

I then studied at the San Francisco Conservatory of Music. But realistically, I couldn't pay the bills playing guitar, so after that I ended up getting a job as a research technician, first at Stanford and then, really in a stroke of luck, at a lab at the UC San Francisco medical campus. 

I worked with Jim Hudspeth there, and I learned more in those few months than I probably ever learned since. He is a brilliant experimentalist.

He's a biophysicist and was making exquisitely precise measurements of the tiny, tiny movements of the hair bundles of cochlear hair cells that transduce sound energy into neural impulses. It was transformative for me. I saw what science really looks like.

Then I moved to Caltech to work with Mark Konishi, where I was able to use my extensive knowledge of general biology. It was Mark who said that this foundational knowledge of organisms is a really good basis for learning about the brain. 

This is kind of lost today, but I think it's still a really helpful organising principle. He asked me, if you take an organ-based approach to the nervous system and you say well – we know the heart is the organ of circulation and the lungs are the organs of respiration – then what is the brain? I couldn't really give him an answer. And he said, I think the best answer is behaviour. 

He said behaviour, at the time, was defined as the presence or absence of movement. In any animal other than ourselves, the only way we can interrogate behaviour is by measuring movement. So that’s the ultimate purpose of the nervous system. Though he did acknowledge that we don't know about internal states, and we don't know what the subjective experience of another animal is. I mean, it is hard enough with another human.

At that point I was set, because that got me into studying birdsong, which was a major focus of his lab at the time. The galvanising thing for me was an organism that produces an acoustic behaviour. That really resonated with my interest in music. It does it spontaneously. It learns how to do it. It's a really complex skill. And the older I get, the more dumbstruck I am by it.

Your work has shown that birds learn to sing in a way that mirrors how humans learn to speak. What do you think this parallel reveals about the nature of learning across species? 

It's certainly convergent in the sense that the evolutionary gulf between songbirds and humans is vast. Almost none of the intermediate species display vocal learning, so the capacity for it was derived from a neural substrate that existed in a common ancestor but was exploited for vocal learning independently in humans and in birds. 

The neural substrate that's exploited for birdsong learning includes the midbrain dopamine system, which is highly conserved across birds and mammals, and a song-specialised region of the basal ganglia. We know a lot less about that in humans than we do in birds, but it looks like the foundational architecture is similar: dysfunction in the dopamine system in humans affects speech, and we also have a specialised part of our basal ganglia that functions to facilitate orofacial movements. 

But there are really important differences too. Birdsong isn’t a language with semantic content. It doesn't serve the same social function, exactly. 

Although we speak to be identified (as birds do when they sing), we also speak declaratively to communicate information about ourselves that goes beyond identity. Our speech facilitates social communication and social function. There is abstraction and meaning.  

At the end of the day, a juvenile songbird spontaneously “assembles” a song behaviour based on a song model that is demonstrated to it by an adult tutor. That's about as heavy as it can get.

My guess is, at least in mammals, that this kind of learning is more common than we think. Skills like hunting seem innate, but can animals improve by watching others? Maybe. Measuring that would be hard.

Social context seems to play a role in shaping vocal behaviour, both in birds and humans. There is a performance element. Do we know how the brain represents an audience?

This is another way in which I think songbirds are human-like. They're both extremely visual and extremely auditory. 

The cue for an audience in birdsong, certainly in courtship, is visual. When a male sees a female, he will sing to her. In territorial songbirds in England and other temperate zones worldwide, the male will sing even by himself in a broadcast style that's designed to attract females and repel rival males. Then, when he sees a female, he’ll perform a more elaborate courtship display, of which the song is just part. He might chase her, or some birds do a little dance. Bowerbirds make very decorative nests and parade around in front of them.  

Those visual triggers are really, really important. In one part of my group's work now, we're intrigued by how that visual information is transduced into song, and that is a fundamental question.

What are the methodological innovations that have most influenced your work?

Well, going back to behaviour again, I would say the computational analysis of vocal behaviour has really been transformative for my field. 

The juvenile bird learning to sing is an amazing accomplishment, and it's a very, very heavy lift. It takes weeks. It sings the song hundreds of thousands of times, and for the experimentalist, those are massive data sets.

Historically, a meticulous scientist could tape those recordings and, by selective editing of the tapes, produce sonograms – a very laborious process. You could get an understanding of what was going on qualitatively, but to quantify it and to align it with neural data, that wasn't going to happen. 

You could do all the neurophysiology you wanted, but making sense of it without the behavioural framework – it just isn’t meaningful. Modern computing has changed that – we can analyse those big data sets now.
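The kind of computational analysis described here typically begins by converting a raw song recording into a sonogram (spectrogram) that can be quantified and aligned with neural data. Below is a minimal Python sketch of that first step; the file name and analysis parameters are illustrative assumptions, not details from Mooney's lab.

```python
# Minimal sketch: turn a recorded song bout into a sonogram (spectrogram)
# so it can be quantified and aligned with neural data.
# The file name and window settings below are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs, song = wavfile.read("zebra_finch_song.wav")  # hypothetical recording
if song.ndim > 1:                                # collapse stereo to mono
    song = song.mean(axis=1)

# Short-time Fourier transform: short, heavily overlapping windows give the
# time-frequency resolution needed to resolve individual song syllables.
freqs, times, power = spectrogram(song, fs=fs, nperseg=256, noverlap=192)

plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Sonogram of one song bout")
plt.show()
```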

The older I get, the more strongly I feel that despite all the other technological advancements, at the end of the day, getting back to this organ theory of the brain, the behaviour has to rule all of it. If you don't have the behaviour, well, what do you have? 

Are there aspects of vocal learning that you think current tools, even the computational power that we've got now, still fail to capture? If you could build the perfect experiment, what would it be?

We still have a really poor idea of what the bird is after. We can measure what it ends up doing, but why is it doing that?

So I think what motivates the bird is still one of the main questions. There’s not a machine or a technology that can measure that, but I would say a framework for understanding the bird’s perceptions, and what it's after, would be the goal. 

Part of the answer to that question rests in understanding what other birds recognise in a song. In the case where a male bird's song is attracting females, how does the female discriminate between one song and another? Which male’s song is more desirable?

This gets at the concept of sexual selection. Does the song represent the male’s reproductive fitness, and therefore how likely the male’s offspring would be to reproduce? Or are the genes responsible for producing song linked to genes that determine fitness, and so a song is a proxy for potential reproductive success?

That is a deep question. I think if we had a way to measure something about the qualities in the song that reports something actionable on the part of the female, that would be interesting.

Or, if we could understand something about the linkage between the genes that confer the capacity for vocal learning and improved reproductive fitness – that would be informative.

That would require a larger framework that steps outside of neuroscience, and it would be very interesting to me. 

The vast majority of animals don't learn to vocalise. There's something special there.

Coming back to being a musician and a neuroscientist - how do those identities interact? Does your interest in music influence your research in the lab?

In a way. What has come to dominate my interest as I get older is the neural repercussions of not being able to hear. I've lost a lot of my high-frequency hearing. I don't wear hearing aids, but I should. My doctor tells me I could undergo cognitive decline quickly if I don't hear; perhaps I already have. And there is a second-order effect from social isolation and hearing loss; that's a real issue for people cognitively as they age.

I'm pretty social, but the truth is I really don't hear well, and that has motivated a curiosity about what the brain is doing in the absence of hearing.

We have an ongoing study analysing auditory cortical connectivity in mice. We have transgenic littermates that are either hearing or deaf. Remarkably, much of their neural connectivity is exactly the same. However, in congenitally deaf mice, there are both expanded inputs to the auditory cortex from visual cortical regions and diminished inputs from the auditory thalamus.

One reason I'm interested is clinical: if you try to rescue hearing with a cochlear implant, what are you rescuing, or what is it that you can build on? What is the default connectivity of the auditory system even in the absence of auditory stimulation? 

But I’m also interested in the normal function of the auditory system. It’s called the auditory system because if you record neural activity there, and you play sounds to the individual, you get neural responses. But I've come to believe that's kind of a backwards way of understanding how the brain is organised. It's true, it does that. But getting back to the service of behaviour - stimuli are only available or valuable to an animal if they influence action. 

A lot of sensory processing depends on movement itself. You move your eyes, you scan the visual scene. We don't think about it in hearing as much because we don't move our ears like cats or dogs do. 

I don't think the auditory cortex is primarily tasked with just listening in the sense of detecting sound. I think it's doing something more complicated that has to do with associating signals across the acoustic spectrum, both in frequency and time, but also integrating a lot of non-auditory information. For example, when I speak, I can integrate information about the motor commands that generate sound and move my mouth. 

I'm very interested in the functional organisation of auditory regions of the brain. 

Biography

Richard Mooney is the Geller Professor of Neurobiology in the Department of Neurobiology in the Duke University School of Medicine. Motivated by a longstanding interest in neuroscience and music, he and his colleagues study the brain mechanisms that enable birdsong learning and, more generally, vocal communication. He obtained his BS in Biology from Yale University, his Ph.D. in Neurobiology from Caltech, and pursued postdoctoral training at Stanford University before joining the Duke faculty as an assistant professor in 1994. Dr. Mooney has received the Moore Visiting Fellowship at Caltech, Dart Foundation Scholar’s Award, McKnight Investigator Award, Sloan Research Fellowship, Klingenstein Research Fellowship, and the Helen Hay Whitney Fellowship. He was also honoured to receive the Master Teaching Award, the Davison Teaching Award, and the Langford Prize from Duke University. He was elected to the American Academy of Arts and Sciences in 2020 and to the National Academy of Sciences in 2024.