Supporting adaptive changes in sound processing

19 November 2018

An interview with Andrea Hasenstaub, PhD, an Assistant Professor in the Coleman Memorial Laboratories in the Department of Otolaryngology-Head and Neck Surgery (OHNS) at the University of California, San Francisco, conducted by April Cashin-Garbutt, MA (Cantab)

Flexibility of the auditory cortex is key to enabling highly dynamic responses to sounds. I caught up with Andrea Hasenstaub to discuss her work combining in vivo and in vitro recordings with computational models to uncover how interactions between numerous cortical cell types support adaptive changes in sound processing.

You have worked on both visual and auditory primary neocortex. Are there common principles and key differences in how these brain regions process information?

Even though the visual and auditory primary neocortex have fundamentally different functions, there are common principles in how these regions process information. They both use the same machinery; the same basic parts are there in the same combinations.

The key difference is that the auditory cortex does not work in the way the standard model of cortex would predict. The standard model, where input from the thalamus comes into layer 4, then projects up to layer 2/3 (feedforward), and 2/3 projects down to layer 5 (feedback), was built mostly from visual cortex and a little from barrel cortex. This model doesn’t exactly apply to the auditory cortex, because there are no layer 4 stellate cells receiving the input; instead, information arrives on pyramidal cells deep inside layer 3.

These differences are fascinating. For example, in the visual cortex there is one stage where the stellate cells have received information from the thalamus without being contaminated by long-range connections; whereas in the auditory cortex the first synapse from the thalamus is on a pyramidal cell with an apical dendrite that goes up to layer 1 and listens to all of the multimodal communication and feedback coming in. That means in the auditory system there is never really a point at which this information reaches your cortex without being susceptible to modulatory feedback or contextual influences.

This difference is one reason why the auditory cortex is so great to work on. On the one hand, it is a primary cortex, in that a division of the thalamus sends massive inputs to it; and yet, cytoarchitectonically, it looks much more like secondary visual cortex or an association cortex than a primary area like V1 or S1. This means we have the control and experimental power to dissect the circuitry, and yet it is also a slightly closer model to the pieces of circuitry we are really interested in, such as the association cortices.

Why is it important for responses to sounds to be highly dynamic?

Partly because the behaviours that you need to generate in response to sounds are very dynamic. If somebody claps their hands in your face once, you probably want to duck out of the way, but if somebody claps their hands at you ten times in a row, you are going to do something different.

How does this compare to visual responses?

The degree of surprise sensitivity, or onset sensitivity, is greater in the auditory cortex than it is in the visual cortex. Part of that may reflect differences in the way that vision and sound are coded.

Activity driven by visual stimulation is usually a little slower and less temporally precise, whereas activity driven by a sound has a very sharp temporal onset and offset, aimed at detecting edges in the sound.

What are the main questions your research focuses on?

My research focuses on how sounds are represented in the brain and how those representations are used to guide actions. Flexibility is really important so that you can deal with changes in the environment and in behavioural context: different aspects of sounds may be relevant in some contexts but irrelevant in others, and your brain has to pick out what matters and suppress what doesn’t.

The auditory cortex is heavily involved in this adaptiveness. We’ve known, since the time of Cajal, that the cortex is made from these beautiful cells, and that they come in many types with different specialisations. What we want to do now is understand what the different cell types are doing and how they work together to support hearing, especially adaptive changes in hearing function.

And then, what is it about each cell type that lets it perform its computational role? Which of the different specialisations in cells are really meaningful, and which are not? If they’re meaningful, how are they helpful, and why do the cells need to be built the way they are?

We do a lot of mouse work because the technology available in mice is just phenomenal, but we also compare certain properties and responses between mice and humans, as ultimately we want to understand how the human brain works.

Can you please explain how you combine in vivo and in vitro recordings with computational models to answer these questions?

When you do in vivo recordings, it is often hard to make sense of all the data, and that is where models are very helpful: they allow you to make all your assumptions explicit and turn them into numbers, which you can work through. Using models can therefore help you overcome difficulties, or reveal areas where you thought you were thinking clearly but things didn’t quite add up. For us, modelling is a way of guiding ourselves and thinking about our data.

How does sound processing vary by species and what does this tell us about the human auditory cortex?

There are so many ways in which sound processing varies by species, for example, the ranges of sounds different species can hear. There are also differences at the level of the cortex in terms of how the cortex represents information about sounds.

In a previous study we compared the responses to sounds in mice versus squirrel monkeys, which are very vocal communicators and have a lot of the same molecular specialisations in their brains that all primates do.

We looked at envelopes because the waveform of sound is very complicated and contains a lot of irrelevant information. For example, you can take white noise and modulate it with the amplitude envelope of speech and you’ll still be able to understand the words, even though it quite clearly isn’t a person talking. Understanding how envelopes are processed is one of the key questions in understanding how speech gets processed.
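For readers who want to hear this effect for themselves, the sketch below shows one way to build that kind of envelope-modulated noise in Python. It assumes a mono speech recording in a file called speech.wav; the file name, envelope cutoff, and filter settings are illustrative choices, not parameters from the study.

```python
# A minimal sketch of single-band envelope-modulated noise: extract the
# amplitude envelope of a speech recording and use it to modulate white noise.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, butter, filtfilt

rate, speech = wavfile.read("speech.wav")        # assumed mono recording
speech = speech.astype(np.float64)
speech /= np.max(np.abs(speech))                 # normalise to [-1, 1]

# Amplitude envelope: magnitude of the analytic signal, low-pass filtered
envelope = np.abs(hilbert(speech))
b, a = butter(4, 30 / (rate / 2), btype="low")   # ~30 Hz envelope cutoff (assumed)
envelope = filtfilt(b, a, envelope)

# Replace the fine structure with white noise but keep the envelope
noise = np.random.randn(len(speech))
vocoded = noise * envelope
vocoded /= np.max(np.abs(vocoded))

wavfile.write("envelope_noise.wav", rate, (vocoded * 32767).astype(np.int16))
```

Playing envelope_noise.wav next to the original recording gives a rough sense of how much of speech intelligibility survives in the envelope alone.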

In our study we looked at how amplitudes are coded in the cortex of mice and squirrel monkeys, and what we saw was that they used the same general strategies to encode information, in that both species encoded more information in spike timing than in overall spike rate or spike number. However, when you look at exactly which timing resolution is most informative, that was very different in squirrel monkeys versus mice. In squirrel monkeys the optimal timing resolution for communicating the most information was about 10 milliseconds, whereas in mice it was much slower, around 40 milliseconds.

So, on the one hand, there are aspects of the representation that are consistent between the two species, particularly that they use the same basic code, but the details, such as the temporal window over which you have to integrate in order to get the code right, are very different between the two species.
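To make the idea of "information at a given timing resolution" concrete, here is a toy sketch, not the analysis from the study: simulated spike trains are binned at different bin widths, and a simple plug-in estimator measures how much the binned response pattern tells you about which of two hypothetical stimuli was presented. The stimuli, firing rates, and latencies are all made up for illustration.

```python
# Toy demonstration: the same spike trains can carry different amounts of
# information depending on the bin width used to read them out.
import numpy as np
from collections import Counter

def plugin_mutual_information(stimuli, responses):
    """Plug-in estimate of I(stimulus; response) in bits from paired samples."""
    n = len(stimuli)
    p_s, p_r = Counter(stimuli), Counter(responses)
    p_sr = Counter(zip(stimuli, responses))
    mi = 0.0
    for (s, r), c in p_sr.items():
        p_joint = c / n
        mi += p_joint * np.log2(p_joint / ((p_s[s] / n) * (p_r[r] / n)))
    return mi

def binned_word(spike_times_ms, bin_ms, window_ms=200):
    """Turn spike times (ms) into a binary response 'word', one bit per bin."""
    edges = np.arange(0, window_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    return tuple(counts.clip(0, 1))

rng = np.random.default_rng(0)
stimuli, trains = [], []
for trial in range(2000):
    s = int(rng.integers(2))                     # two hypothetical stimuli
    latency = 20 if s == 0 else 50               # same rate, different latency
    times = latency + rng.exponential(10, size=rng.poisson(3))
    stimuli.append(s)
    trains.append(times)

for bin_ms in (10, 40, 100):
    words = [binned_word(t, bin_ms) for t in trains]
    mi = plugin_mutual_information(stimuli, words)
    print(f"{bin_ms:>3} ms bins: {mi:.3f} bits")
```

Because the two simulated stimuli differ only in response latency, coarser bins blur that timing difference away and the estimated information drops, which is the sense in which a species' "optimal timing resolution" can be read off from such curves.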


About Assistant Professor Andrea Hasenstaub

Andrea Hasenstaub, PhD, is an Assistant Professor in the Coleman Memorial Laboratories in the Department of Otolaryngology-Head and Neck Surgery (OHNS) at the University of California, San Francisco. She received her BS in Mathematics and Engineering from the California Institute of Technology in Pasadena, California; an MPhil in Biological Anthropology from Cambridge University, England; and a PhD in Neurobiology from Yale University in New Haven, Connecticut, followed by a fellowship at the Salk Institute in La Jolla, California.