Abstract Browser 2021
Lizabeth Romanski
Topic areas: neuroethology/communication, neural coding, multisensory processes
Thursday, 10/22 10:00AM - 11:00AM | Keynote
Abstract
Informal discussion to follow at 11:15 am EDT (GMT-4) on Zoom (link below).
Srivatsun Sadagopan
Topic areas: correlates of behavior/perception, memory and cognition
Friday, 11/5 10:00AM - 10:30AM | Young Investigator Spotlight
Abstract
The auditory system effortlessly recognizes complex sounds such as speech or animal vocalizations despite immense variability in their production and additional variability imposed by the listening environment. Recent imaging and electrophysiological data from human subjects indicate that hierarchical computations in auditory cortex underlie such robust sound recognition. However, the rationale for how higher auditory processing stages must represent information, the computations performed in primary (A1) and subsequent auditory cortical areas to achieve robust recognition, as well as the underlying circuit mechanisms, remain largely unknown. Our central premise is that to reveal these mechanisms, we must engage the auditory cortex with complex and behaviorally relevant sounds. Using animal vocalizations (calls) as a model of such sounds, I will describe our lab’s approach to understanding cortical circuits underlying robust call recognition. First, I will briefly outline a hierarchical computational model that learns informative features of intermediate complexity from spectrotemporally dense input representations to enable efficient categorization. Supporting this model, I will describe electrophysiological data recorded from awake guinea pigs demonstrating that the transformation from a dense spectral-content-based representation to a sparse feature-based representation occurs in the superficial layers of A1. Next, I will describe ongoing work in which we are extending this model to achieve noise-invariant categorization by incorporating biologically feasible gain-control mechanisms. I will show that such a model approaches the behavioral performance of animals engaged in call categorization tasks. Finally, I will discuss future directions for utilizing this framework to further understand complex sound processing in normal and hearing-impaired subjects. Informal discussion to follow at 11:15 am EDT (GMT-4) on Zoom (link below).
Kasia M. Bieszczad
Topic areas: hierarchical organization, neural coding, thalamocortical circuitry/function
Friday, 11/5 12:15PM - 12:45PM | Young Investigator Spotlight
Abstract
Advances in modern molecular neuroscience have shown that "epi-genetic" mechanisms of regulation above the genome, such as chromatin modification and remodeling, are important for learning and behavior. In adult brains, they set a permissive state for activity-dependent gene expression at the foundation of lasting synaptic change and memory, which my lab has investigated in the auditory system. In this talk, I will highlight the chromatin modifier histone deacetylase 3 (HDAC3), which has been called a "molecular brake pad" on memory formation because it normally obstructs the activity-dependent transcription required for long-term memory. Blocking HDAC3 in adult animals learning about the behavioral relevance of sound "releases the brakes" on physiological plasticity in the auditory cortex, allowing it to remodel in an unusually robust and multidimensional way by expanding the cortical representation of training sound features while also "tuning in" those receptive fields to respond to training sound cues more selectively. Furthermore, neural changes are mirrored in behavior: the same animals are also more likely to respond selectively to the precise sound features of trained cues (vs. novel sound cues). This brain-to-behavior relationship of increased sound cue-selectivity mediated by HDAC3 has been corroborated in several studies, extending also to subcortical auditory processing and to tasks that require discrimination of simple and complex acoustic signals or inhibitory associations, as in tone-cued extinction. HDAC3 thus appears to facilitate sound cue-selective neural and behavioral responsivity in an experience-dependent manner, opening the door to investigating the epigenetic processes that control the dynamics of the auditory system from genes to behavior. Informal discussion to follow at 3:00 pm EDT (GMT-4) in Gathertown, Discussion Area 1 (link below).
Jonas Obleser, Charlie Schroeder, Molly Henry, Christoph Kayser, Noelle O'Connell
Topic areas: memory and cognition, neural coding, hierarchical organization
Thursday, 11/4 3:00PM - 4:00PM | Special Presentation
Abstract
Peter Lakatos passed away earlier this year. This symposium will present a tour d'horizon of his pioneering work and its impact on auditory neuroscience. At the end of the hour, we will raise a glass in Peter's memory. Do prepare your favourite beverage and feel free to turn on your video for a communal "Cheers!"
Pradeep Dheerendra, Nicolas Barascud, Sukhbinder Kumar, Tobias Overath and Timothy D Griffiths
Topic areas: correlates of behavior/perception
Auditory object, change detection, statistical learning
Thu, 11/4 11:15AM - 12:15PM | Virtual poster
Abstract
Auditory object analysis requires the fundamental perceptual process of detecting boundaries between auditory objects. However, the dynamics underlying the identification of discontinuities at object boundaries are not known. We investigated the cortical dynamics underlying this process using a synthetic stimulus composed of frequency-modulated ramps known as "acoustic textures", in which boundaries were created by changing the underlying spectro-temporal coherence. We collected magnetoencephalographic (MEG) data from 14 subjects in a 275-channel CTF scanner. We observed a very slow (less than 1 Hz) drift in the neuromagnetic signal that started 430 ms after the boundary between textures and lasted for 1330 ms before decaying to the baseline (no-boundary) condition. This drift signal was source-localized bilaterally to Heschl's gyrus, which a previous BOLD study (Overath et al., 2010) showed to be involved in the detection of auditory object boundaries. Time-frequency analysis demonstrated suppression in the alpha and beta bands following the drift signal.
Chenggang Chen and Xiaoqin Wang
Topic areas: neural coding
Sound Localization, Stimulus Context, Nonhuman Primate
Thu, 11/4 11:15AM - 12:15PM | Virtual poster + podium teaser
Abstract
Responses of neurons in auditory cortex are influenced by stimulus context. Compared with spectral and temporal contextual effects, much less is known about spatial contextual effects. In this study, we explored how spatial contextual modulations evolve over time by stimulating neurons in awake marmoset auditory cortex with sequences of sounds delivered either randomly from various spatial locations (equal probability mode) or repeatedly from a single location (high probability mode). To our surprise, instead of inducing adaptation as expected from the well-documented stimulus-specific adaptation (SSA) literature, repetitive stimulation in the high probability mode from spatial locations away from the center of a neuron's spatial receptive field evoked lasting facilitation, observed in both extracellular and intracellular recordings from single neurons in auditory cortex. Nearly half of the sampled neuronal population exhibited this spatial facilitation, irrespective of stimulus type and visibility of the test speaker. Facilitation of longer duration occurred when the repetitive stimulation was delivered from speakers whose firing rates ranked lower than that of the best speaker under the equal probability mode. The extent of the facilitation decreased with decreasing presentation probability of the test speaker. Interestingly, the induced facilitation did not change the spatial tuning selectivity, tuning preference, or spontaneous firing rate of the tested neurons. Taken together, our findings reveal a location-specific facilitation (LSF), rather than SSA, in response to repetitively presented sound stimuli, a phenomenon that has not previously been observed in auditory cortex. This form of spatial contextual modulation may play an important role in supporting functions such as auditory streaming and segregation.
Mohsen Alavash, Malte Wöstmann and Jonas Obleser
Topic areas: memory and cognition, correlates of behavior/perception, novel technologies
connectivity, neural oscillations, auditory attention, source EEG
Thu, 11/4 11:15AM - 12:15PM | Virtual poster
Abstract
Recent advances in network neuroscience suggest that human brain resting-state functional connectivity is driven by bursts of high-amplitude co-fluctuations of hemodynamic responses measured using fMRI (Faskowitz et al., 2020). We here build on these new insights and ask (1) whether connectivity of intrinsic neural oscillations manifests such high-amplitude co-fluctuations, and (2) whether and how these connectivity events unfold during the deployment of auditory attention. We reanalyzed two recently published EEG data sets, one of resting state (N = 154; Alavash, Tune, Obleser, 2021) and one of an auditory attention task (N = 33; Wöstmann, Alavash, Obleser, 2019). During the task, an auditory spatial cue asked participants to attend to one of two concurrent tone streams and judge the pitch difference (increase/decrease) in the target stream. We source-localized the EEG and unwrapped power-envelope correlations across time and cortex within the alpha and beta frequency ranges. This revealed connectivity events at sub-second resolution during which high-amplitude power-envelope co-fluctuations occurred. In line with recent fMRI work, alpha and beta connectivity derived from these events showed high similarity with static connectivity during both rest and the auditory task. Importantly, during the task, frontoparietal beta connectivity events occurred during processing of the spatial cue, while posterior alpha connectivity events occurred during processing of the tone streams. Our results suggest that high-amplitude power-envelope co-fluctuations drive connectivity of alpha and beta oscillations. Critically, these connectivity events appear to underlie the ongoing deployment of auditory attention in a functionally distinct manner, that is, anticipation of stimuli versus selective stimulus processing.
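The co-fluctuation approach referenced above (Faskowitz et al., 2020) amounts to taking the element-wise product of z-scored signals and flagging high-amplitude samples. As a rough illustration only, here is a minimal NumPy sketch of that computation applied to two band-limited power envelopes; the function name, percentile threshold, and toy data are ours, not the authors'.

```python
import numpy as np

def cofluctuation_events(env_a, env_b, pct=95):
    """Element-wise co-fluctuation of two (z-scored) power envelopes, returning
    the co-fluctuation time series and the indices of high-amplitude samples."""
    za = (env_a - env_a.mean()) / env_a.std()
    zb = (env_b - env_b.mean()) / env_b.std()
    cofluct = za * zb                        # moment-to-moment "edge" time series
    thresh = np.percentile(cofluct, pct)     # illustrative percentile cutoff
    return cofluct, np.flatnonzero(cofluct > thresh)

# toy alpha-band power envelopes from two sources (10 s at 100 Hz)
rng = np.random.default_rng(0)
t = np.arange(1000) / 100.0
shared = np.abs(np.sin(2 * np.pi * 0.3 * t)) * rng.random(t.size)
env1 = shared + 0.5 * rng.random(t.size)
env2 = shared + 0.5 * rng.random(t.size)
cofluct, events = cofluctuation_events(env1, env2)
print(f"{events.size} high-amplitude co-fluctuation samples")
```

The time average of this product is simply the Pearson correlation, which is why connectivity estimated from the high-amplitude events alone can closely resemble static connectivity.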
Carolina Fernandez Pujol, Elizabeth Blundon and Andrew Dykstra
Topic areas: memory and cognition, correlates of behavior/perception, neural coding, thalamocortical circuitry/function
auditory cortex, perception, magnetoencephalography, neural circuits, laminar
Fri, 11/5 1:15PM - 2:15PM | Virtual poster
Abstract
Electroencephalography (EEG) and magnetoencephalography (MEG) are excellent mediums for capturing human neural activity on a millisecond time scale, yet little is known about their underlying laminar and biophysical basis. Here, we used a reduced but realistic cortical circuit model - Human Neocortical Neurosolver (HNN) - to shed light on the laminar specificity of brain responses associated with auditory conscious perception under multitone masking. HNN provides a canonical model of a neocortical column circuit, including both excitatory pyramidal and inhibitory basket neurons in layers II/III and layer V. We found that the difference in event-related responses between perceived and unperceived target tones could be accounted for by additional input to supragranular layers arriving from either the non-lemniscal thalamus or cortico-cortical feedback connections. Layer-specific spiking activity of the circuit revealed that the additional negative-going peak that was present for detected but not undetected target tones was accompanied by increased firing of layer-V pyramidal neurons. These results are consistent with current cellular models of conscious processing and help bridge the gap between the macro and micro levels of analysis of perception-related brain activity.
Isma Zulfiqar, Elia Formisano, Sriranga Kashyap, Peter de Weerd and Michelle Moerel
Topic areas: correlates of behavior/perception, multisensory processes
multisensory, periphery, temporal modulation, laminar, high-resolution fMRI
Fri, 11/5 11:15AM - 12:15PM | Virtual poster
Abstract
Recent evidence supports the existence of multisensory processing in early auditory regions of the cortex. To study the source of these multisensory modulations, we investigated visual influences on the auditory cortex in a cortical depth-dependent manner using high resolution functional MRI at 7 Tesla. Specifically, given the reciprocal connectivity between early visual cortex representing the periphery and auditory cortex, we set out to explore audiovisual integration of peripherally presented stimuli. For 10 subjects, we collected anatomical data at 0.6 mm and functional data at 0.8 mm isotropic resolution. In a blocked design, the participants were presented with unisensory and audiovisual stimuli. Attention was directed either towards the auditory stimulus, or away from both stimuli. Our preliminary results showed multisensory enhancement in a cortical network comprising early sensory sites (auditory and visual), insula, and ventrolateral prefrontal cortex. Multisensory enhancement was present in the primary auditory cortex and increased along the auditory cortical hierarchy. These findings confirm that the primary auditory cortex is not uniquely unisensory. Additionally, we observed a task-dependent attentional modulation of multisensory enhancement in deep layers of the auditory belt. This suggests a top-down origin of attention that stems from long-range cortico-cortical feedback. Future analyses will include cortical depth-dependent connectivity analysis which may help discriminate between the frontal regions and visual cortex as sources of the observed context-dependent multisensory enhancement in deep layers of the auditory belt, and a multivariate analysis to increase sensitivity of our analysis by examining distributed multisensory effects.
Beate Wendt, Jörg Stadler and Nicole Angenstein
Topic areas: auditory disorders, speech and language
cochlear implant, duration processing, frequency processing, just noticeable differences, serial order judgement, speech processing
Thu, 11/4 11:15AM - 12:15PM | Virtual poster
Abstract
The perception of speech requires the processing of basic acoustic parameters such as frequency, duration and intensity. If this fundamental processing is inefficient, it may lead to problems in speech perception. The present study investigates low-level auditory processing in adult cochlear implant (CI) users in the inexperienced and experienced states. Frequency, duration and intensity processing and serial order judgement were tested by stimulating the ear with a CI, and the just noticeable differences were determined. Alternative forced-choice measurements were performed shortly after the first fitting of the CI and again around two years or more later. Furthermore, speech processing was tested with German standard speech tests (the Oldenburg sentence test (OLSA) and the Freiburg monosyllabic and multisyllabic word tests). In addition, the perception of consonants and vowels was tested. As expected, speech processing clearly improved over time. However, there was no significant improvement in low-level auditory processing. Correlations of performance between the different low-level tests and between the different speech tests were observed. In addition, a few correlations between the speech tests and low-level test performance were detected, e.g., between the OLSA and frequency processing only in the inexperienced state. Correlations between the word tests and the recognition of consonants and vowels were present in both the inexperienced and experienced states, but were particularly pronounced in the experienced state. The results might have implications for the rehabilitation of CI users.
Rachid Riad, Julien Karadayi, Anne-Catherine Bachoud-Levi and Emmanuel Dupoux
Topic areas: correlates of behavior/perception, novel technologies
spectro-temporal modulations, auditory neuroscience, interpretability of deep neural networks, audio signal processing
Thu, 11/4 11:15AM - 12:15PM | Virtual poster
Abstract
Deep learning models have become potential candidates for auditory neuroscience research, thanks to their recent successes in a variety of auditory tasks, yet these models often lack the interpretability needed to fully understand the exact computations they perform. Here, we propose a parametrized neural network layer that computes specific spectro-temporal modulations based on Gabor filters [learnable spectro-temporal filters (STRFs)] and is fully interpretable. We evaluated this layer on speech activity detection, speaker verification, urban sound classification, and zebra finch call type classification. We found that models based on learnable STRFs are on par with the state of the art for all tasks and obtain the best performance for speech activity detection. Because this layer remains a Gabor filter, it is fully interpretable, and we used quantitative measures to describe the distribution of the learned spectro-temporal modulations. The filters adapted to each task and focused mostly on low temporal and spectral modulations. The analyses show that the filters learned on human speech have spectro-temporal parameters similar to those measured directly in the human auditory cortex. Finally, we observed that the tasks organized themselves in a meaningful way: the human vocalization tasks lay close to each other, while bird vocalizations lay far from both the human vocalization and urban sound tasks.
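Since the layer described here is, at its core, a two-dimensional Gabor filter over a time-frequency representation, a small sketch can make the idea concrete. The following NumPy/SciPy snippet builds one fixed Gabor spectro-temporal filter and applies it to a toy spectrogram; in the actual model the Gabor parameters are learned by gradient descent, and all parameter values below are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_strf(n_freq=32, n_time=32, spec_mod=0.1, temp_mod=0.05,
               sigma_f=6.0, sigma_t=6.0):
    """2D Gabor spectro-temporal filter: a Gaussian envelope multiplied by a
    sinusoidal carrier in the (frequency, time) plane. spec_mod is in
    cycles/channel and temp_mod in cycles/frame (illustrative units)."""
    f = np.arange(n_freq) - n_freq / 2
    t = np.arange(n_time) - n_time / 2
    F, T = np.meshgrid(f, t, indexing="ij")
    envelope = np.exp(-(F**2) / (2 * sigma_f**2) - (T**2) / (2 * sigma_t**2))
    carrier = np.cos(2 * np.pi * (spec_mod * F + temp_mod * T))
    return envelope * carrier

# apply the filter to a toy spectrogram (frequency channels x time frames)
rng = np.random.default_rng(1)
spectrogram = rng.random((64, 200))
activation = convolve2d(spectrogram, gabor_strf(), mode="same")
print(activation.shape)   # (64, 200): one activation map for this filter
```

Interpretability then follows directly: each learned filter is fully described by its spectral and temporal modulation frequencies and bandwidths, which can be compared with modulation tuning measured in auditory cortex.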
Michelle Moerel, Agustin Lage-Castellanos, Omer Faruk Gulban and Federico De Martino
Topic areas: memory and cognition, correlates of behavior/perception, neural coding
frequency-based attention, ultra-high field fMRI, population receptive field mapping, auditory cortex
Fri, 11/5 11:15AM - 12:15PM | Virtual poster
Abstract
Electrophysiological studies suggest that auditory attention induces rapid changes in neuronal feature preference and selectivity. Functional magnetic resonance imaging (fMRI) studies of human auditory cortex have revealed an increased BOLD response in neuronal populations tuned to attended sound features. Because fMRI studies have typically not been able to characterize the influence of attention on neuronal population receptive field properties, it is still unclear how the results obtained with fMRI in humans relate to the electrophysiological findings in animal models. We used ultra-high field fMRI to examine auditory processing while participants performed a detection task on ripple sounds. By manipulating the chance of target occurrence, participants alternately attended to low-frequency (300 Hz) or high-frequency (4 kHz) ripple sounds. Responses to natural sounds, rather than the ripples, were used to compute neuronal population receptive fields (pRFs). We observed faster reaction times to noise bursts in attended compared to unattended ripples. In contrast with previous fMRI studies, the auditory cortical response to attended ripple sounds was lower than to unattended ones. Maps of frequency preference (best frequency; BF) and selectivity (tuning width; TW) were similar across attentional conditions, with the exception of a narrower TW with attention in parabelt locations with a BF close to the attended frequency. The narrower tuning width in voxels whose preferred frequency matches the attended one could underlie the observed lower response to attended, compared to non-attended, ripple sounds. The difference between our results and previous fMRI studies may be explained by differences in experimental design, and suggests that fundamentally different mechanisms may underlie different attentional settings.
Ying Fan and Huan Luo
Topic areas: memory and cognition
auditory sequence memory, content, structure
Fri, 11/5 11:15AM - 12:15PM | Virtual poster
Abstract
Two forms of information – frequency (content) and ordinal position (structure) – have to be stored when retaining a sequence of auditory tones in working memory (WM). However, the neural representations and coding characteristics of content and structure, particularly during WM maintenance, remain elusive. Here, in two electroencephalography (EEG) studies in human participants performing a delayed-match-to-sample task with a retrocue, by transiently perturbing the 'activity-silent' WM retention state and decoding the reactivated WM information, we demonstrate that content and structure are stored in a dissociable manner with distinct characteristics throughout the WM process. First, each tone in the sequence is associated with two codes in parallel, characterizing its frequency and its ordinal position, respectively. Second, during retention, a structural retrocue successfully reactivates structure but not content, whereas a following neutral white noise triggers content but not structure. Meanwhile, a content retrocue is able to reactivate both content and structure information, while a subsequent neutral visual impulse makes the maintained structure information detectable. Third, the structure representation remains stable whereas the content code undergoes a dynamic transformation as memory progresses. Finally, the neutral-impulse-triggered content and structure reactivations during retention correlate with WM behavior on frequency and ordinal position, respectively. Overall, our results support distinct content and structure representations in auditory WM and provide efficient approaches for accessing silently stored WM information (both content and structure) in the human brain.
Danna Pinto, Anat Prior and Elana Zion-Golumbic
Topic areas: memory and cognition, speech and language
Statistical Learning, Language, EEG, Frequency Tagging
Thu, 11/4 11:15AM - 12:15PM | Virtual poster
Abstract
Statistical Learning (SL) is hypothesized to play an important role in language development. However, the behavioral measures typically used to assess SL, particularly at the level of individual participants, are largely indirect and often have low sensitivity. Recently, a neural metric based on frequency tagging has been proposed as an alternative and more direct measure for studying SL. Here we tested the sensitivity of frequency-tagging measures for studying SL in individual participants in an artificial language paradigm, using non-invasive EEG recordings of neural activity in humans. Importantly, we used carefully constructed controls in order to address potential acoustic confounds of the frequency-tagging approach. We compared the sensitivity of EEG-based metrics to both explicit and implicit behavioral tests of SL, and the correspondence between these presumed converging operations. Group-level results confirm that frequency tagging can provide a robust indication of SL for an artificial language, above and beyond potential acoustic confounds. However, this metric had very low sensitivity at the level of individual participants, with significant effects found in only 30% of participants. Conversely, the implicit behavioral measures indicated that SL had occurred in 70% of participants, which is more consistent with the proposed ubiquitous nature of SL. Moreover, there was low correspondence between the different measures used to assess SL. Taken together, while some researchers may find the frequency-tagging approach suitable for their needs, our results highlight the methodological challenges of assessing SL at the individual level and the potential confounds that should be taken into account when interpreting frequency-tagged EEG data.
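Frequency tagging of statistical learning rests on a simple spectral measurement: if the continuous syllable stream is chunked into learned words, a peak emerges in the EEG spectrum at the word rate in addition to the syllable rate. The sketch below illustrates one common way to quantify such a peak (amplitude relative to neighboring frequency bins); the sampling rate and the syllable/word rates are assumptions for the toy example, not values taken from this study.

```python
import numpy as np

def tagged_snr(eeg, fs, target_freq, neighbor_bins=5):
    """Amplitude at target_freq expressed relative to the mean of neighboring
    frequency bins, a typical frequency-tagging signal-to-noise measure."""
    spec = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - target_freq))
    neighbors = np.r_[spec[idx - neighbor_bins:idx], spec[idx + 1:idx + 1 + neighbor_bins]]
    return spec[idx] / neighbors.mean()

fs = 250                                   # Hz, illustrative sampling rate
t = np.arange(60 * fs) / fs                # one minute of toy data
rng = np.random.default_rng(2)
syll_rate, word_rate = 4.0, 4.0 / 3        # assumed rates for a tri-syllabic stream
eeg = (np.sin(2 * np.pi * syll_rate * t) + 0.3 * np.sin(2 * np.pi * word_rate * t)
       + rng.standard_normal(t.size))
print("syllable-rate SNR:", round(tagged_snr(eeg, fs, syll_rate), 2))
print("word-rate SNR:", round(tagged_snr(eeg, fs, word_rate), 2))
```

The acoustic-confound controls discussed in the abstract matter precisely because a word-rate peak can also arise from low-level regularities in the stimulus itself rather than from learning.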
Lingyun Zhao, Alexander Silva and Edward Chang
Topic areas: speech and language
Speech, Stopping, Frontal cortex
Fri, 11/5 1:15PM - 2:15PM | Virtual poster
Abstract
An important capacity for normal speech is the ability to stop ongoing production quickly when required. This ability is crucial for maintaining smooth conversations, and deficits in it are often indicative of speech disorders. Previous studies have investigated the neural control of canceling speech and other motor outputs before their onset. However, it is largely unknown how the brain controls speech termination when one has already started speaking. Here we studied this question by directly recording neural activity from the human cortex while participants started and stopped speaking following visual cues. We found increased high-gamma activity near the end of production in the premotor cortex during cued stopping, which was not observed in the self-paced, natural finish of a sentence. Across single trials, a subset of premotor regions was activated according to the time of the stop cue or stop action. Activity in single electrodes and across populations distinguished whether stopping occurred before an entire word was finished. In addition, we asked how the neural process for stopping relates to concurrent articulatory control. We found that stop activity existed in regions largely separate from those in the sensorimotor cortex encoding articulator movements. Finally, we found that areas where stimulation induced speech arrest overlapped with areas showing stop activity, suggesting that speech arrest may be caused by inhibition of speech production. Together, these data provide evidence that neural activity in the premotor cortex may act as an inhibitory control signal that underlies the stopping of ongoing speech production.
Lixia Gao, Xinjian Li and Xiaohui Wang
Topic areas: subcortical processing
Inferior colliculus, Cortical inactivation, Temporal representation, Rate representation, Time-varying signal
Thu, 11/4 11:15AM - 12:15PM | Virtual poster
Abstract
Temporal processing is crucial for auditory perception and cognition, especially for communication sounds. Previous studies have shown that the auditory cortex and thalamus use temporal and rate representations to encode slowly and rapidly changing time-varying sounds. However, how the inferior colliculus (IC) encodes time-varying sounds at the millisecond scale remains unclear. In the present study, we investigated temporal processing by IC neurons in awake marmosets using Gaussian click trains with varying inter-click intervals (2–100 ms). Strikingly, we found that 28% of IC neurons exhibited a rate representation with non-synchronized responses, in sharp contrast to the current view that the IC uses only a temporal representation to encode time-varying signals. Moreover, IC neurons with a rate representation exhibited response properties distinct from those with a temporal representation. We further demonstrated that reversible inactivation of the primary auditory cortex modulated 17% of the stimulus-synchronized responses and 21% of the non-synchronized responses of IC neurons, revealing that cortico-collicular projections play a role, but not a crucial one, in temporal processing in the IC. Our findings fill a gap in our understanding of auditory temporal processing in the IC of awake animals and provide new insights into temporal processing from the midbrain to the cortex.
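Classifying responses to click trains as "stimulus-synchronized" (temporal representation) versus "non-synchronized" (rate representation) is conventionally done with phase-locking measures such as vector strength, typically combined with a Rayleigh significance test. The toy sketch below shows the vector-strength computation itself; it is a generic illustration of the metric, not the analysis pipeline of this particular study.

```python
import numpy as np

def vector_strength(spike_times, inter_click_interval):
    """Vector strength of spike times relative to a periodic click train:
    1 indicates perfect phase locking, 0 indicates no locking."""
    phases = 2 * np.pi * (np.asarray(spike_times) % inter_click_interval) / inter_click_interval
    return np.abs(np.mean(np.exp(1j * phases)))

# toy example: spikes locked to a 10-ms inter-click interval vs. random spikes
rng = np.random.default_rng(3)
ici = 0.010                                          # 10 ms inter-click interval
locked = np.arange(0, 1.0, ici) + rng.normal(0, 0.0005, 100)
random_spikes = np.sort(rng.uniform(0, 1.0, 100))
print("locked VS:", round(vector_strength(locked, ici), 2))        # near 1
print("random VS:", round(vector_strength(random_spikes, ici), 2)) # near 0
```

A neuron that responds robustly to short inter-click intervals but with near-zero vector strength would, under this kind of criterion, be assigned to the rate-representation group.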
Luis Rivera-Perez, Julia Kwapiszewski and Michael Roberts
Topic areas: subcortical processing
Inferior colliculus, neuromodulation, acetylcholine, nicotinic acetylcholine receptors, VIP neurons
Fri, 11/5 11:15AM - 12:15PM | Virtual poster
Abstract
The inferior colliculus (IC), the midbrain hub of the central auditory system, receives extensive cholinergic input from the pontomesencephalic tegmentum (PMT). Activation of nicotinic acetylcholine receptors (nAChRs) in the IC can enhance auditory performance by altering the excitability of neurons. However, how nAChR activation affects the excitability of specific neuron classes in the IC remains unknown. Our lab identified a distinct class of glutamatergic principal neurons in the IC that expresses vasoactive intestinal peptide (VIP). Using VIP-Cre x Ai14 mice and immunofluorescence, we found that cholinergic terminals are commonly located in close proximity to the somas and dendrites of VIP neurons. By using whole-cell electrophysiology, we found that acetylcholine (ACh) drives a strong, long-lasting excitatory effect in VIP neurons. Application of nAChR antagonists revealed that ACh excites VIP neurons via the activation of α3β4* nAChRs, a subtype that is relatively rare in the brain. Furthermore, we determined that cholinergic excitation of VIP neurons occurs by activating post-synaptic nAChRs located in VIP neurons themselves and does not require activation of presynaptic inputs. Finally, we found that trains of ACh puffs elicited temporal summation in VIP neurons, suggesting that cholinergic inputs can affect activity in the IC for prolonged periods in an in vivo setting. These results uncover the first cellular-level mechanisms of cholinergic modulation in the IC and a novel role for α3β4* nAChRs in the auditory system, and suggest that cholinergic inputs from the PMT can strongly affect auditory processing in the IC by increasing the excitability of VIP neurons.
Jian Carlo Nocon, Howard J. Gritton, Xue Han and Kamal Sen
Topic areas: neural coding
Parvalbumin, Cortical code, Temporal code, Rate code, Spike timing, Sparse coding, Cocktail party problem, Amplitude modulation, Spatial tuning
Thu, 11/4 1:15PM - 2:15PM | Virtual poster
Abstract
Cortical coding of sensory stimuli plays a critical role in our ability to analyze complex scenes. Cortical coding can depend on both rate and spike timing-based coding. However, cell type-specific contributions to cortical coding are not well understood. Parvalbumin (PV) neurons play a fundamental role in sculpting cortical responses, yet their specific contributions to rate- vs. spike timing-based codes have not been directly investigated. Here, we address this question in auditory cortex using a cocktail party-like paradigm, integrating electrophysiology, optogenetic manipulations, and a family of spike-distance metrics to dissect the contributions of PV neurons to rate- vs. timing-based coding. We find that PV neurons improve discrimination performance by enhancing lifetime sparseness, rapid temporal modulations, and spike timing reproducibility. These findings provide novel insights into the specific contributions of PV neurons to auditory cortical discrimination in the cocktail party problem via enhanced rate modulation and spike timing-based coding in cortex.
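Spike-distance metrics quantify the dissimilarity of spike trains with a timescale parameter that interpolates between timing-based and rate-based readouts. As a hedged illustration of how such a metric supports discrimination, the sketch below implements the van Rossum distance (one common member of this family, not necessarily the exact metrics used here) together with a nearest-template decision on toy spike trains.

```python
import numpy as np

def van_rossum_distance(train_a, train_b, tau=0.01, dt=0.001, t_max=1.0):
    """van Rossum distance: convolve each spike train with an exponential
    kernel (time constant tau) and take the L2 distance between the traces."""
    t = np.arange(0, t_max, dt)
    def filtered(train):
        trace = np.zeros_like(t)
        for s in train:
            trace += (t >= s) * np.exp(-np.maximum(t - s, 0.0) / tau)
        return trace
    diff = filtered(train_a) - filtered(train_b)
    return np.sqrt(np.sum(diff**2) * dt / tau)

# toy discrimination: is an unlabeled response closer to stimulus-A or stimulus-B?
template_a = [0.1, 0.3, 0.5]
template_b = [0.2, 0.6, 0.8]
test = [0.11, 0.29, 0.52]       # a jittered "A-like" response
d_a = van_rossum_distance(test, template_a)
d_b = van_rossum_distance(test, template_b)
print("assigned to:", "A" if d_a < d_b else "B")
```

Sweeping the kernel time constant tau from a few milliseconds up to the stimulus duration moves the readout from a spike timing-dominated code toward a pure rate code, which is how families of spike-distance metrics are typically used to dissect the two.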
Jennifer Lawlor, Melville Wohlgemuth, Cynthia Moss and Kishore Kuchibhotla
Topic areas: correlates of behavior/perception, neural coding, subcortical processing
Two-photon calcium imaging, Echolocation, Inferior Colliculus
Thu, 11/4 11:15AM - 12:15PM | Virtual poster
Abstract
Navigating our everyday world requires parsing relevant information from constantly evolving sensory flows. How the brain processes and sorts sensory inputs is a central ongoing question in systems neuroscience. Here, we take advantage of a model long studied for its expert auditory sensing of the world: the echolocating bat. The echolocating bat produces ultrasonic vocalizations and listens to returning echoes to determine the identity and location of objects in the environment. While traditional electrophysiology techniques have provided key insights into network-level activity, they are limited in their ability to reveal micro-functional architecture and cell type-specific activity. We developed two-photon calcium imaging in the awake big brown bat, Eptesicus fuscus, to assay the activity of populations of neurons with cellular and sub-cellular resolution. We expressed GCaMP6f in the excitatory population of the inferior colliculus (IC), while using a head-fixation and thinned-skull surgical approach to longitudinally monitor the same local populations. We assessed the functional auditory properties of thousands of neurons in awake, passively listening bats (n=3) by presenting pure tones, white noise, frequency sweeps, echolocation and social calls, and other stimuli, parametrically controlling features including the duration and delay of relevant stimuli. In preliminary analyses, we observe a novel fine-scale tonotopy in the superficial layers of the IC. In addition, presentation of 'prey capture' echolocation sequences varying in their spectral content ('natural' vs. 'artificial') elicited rotational population dynamics reflecting the temporal structure of the call sequence, while separate manifolds captured the spectral information.
Vibha Viswanathan, Barbara Shinn-Cunningham and Michael Heinz
Topic areas: speech and language, correlates of behavior/perception, neural coding, subcortical processing
scene analysis, temporal coherence, consonant confusions, comodulation masking release, cross-channel processing, wideband inhibition, computational modeling, speech intelligibility, cochlear nucleus
Fri, 11/5 1:15PM - 2:15PM | Virtual poster
Abstract
Temporal coherence of sound fluctuations across different frequency channels is thought to aid auditory grouping and scene segregation, as in comodulation masking release. Although most prior studies focused on the cortical bases of temporal-coherence processing, neurophysiological evidence suggests that temporal-coherence-based scene analysis may start as early as the cochlear nucleus (the first auditory region supporting cross-channel processing over a wide frequency range). Accordingly, we hypothesized that aspects of temporal-coherence processing that could be realized in early auditory areas may shape speech understanding in noise. We explored whether physiologically plausible computational models could account for results from a behavioral experiment that measured consonant categorization in different masking conditions. Specifically, we tested whether within-channel masking of target-speech modulations predicted consonant confusions across the different conditions, and whether predictions were improved by adding across-channel temporal-coherence processing mirroring the computations known to exist in the cochlear nucleus. Consonant confusions provide a rich characterization of error patterns in speech categorization, and are thus crucial to rigorously test models of speech perception; however, to the best of our knowledge, they have not been utilized in prior studies of scene analysis. We find that within-channel modulation masking can reasonably account for category confusions, but that it fails when temporal fine structure cues are unavailable. However, the addition of across-channel temporal-coherence processing significantly improves confusion predictions across all tested conditions. Our results suggest that temporal-coherence processing strongly shapes speech understanding in noise, and that physiological computations that exist early along the auditory pathway may contribute to this process.
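The core quantity in across-channel temporal-coherence processing is the correlation between modulation envelopes in different frequency channels: comodulated channels cohere, which supports grouping of a target against a masker. The sketch below computes that quantity for a toy comodulated mixture using generic band-pass filters and Hilbert envelopes; it is a conceptual stand-in, not the physiologically detailed cochlear-nucleus model described in the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def channel_envelopes(signal, fs, center_freqs, bandwidth=0.3):
    """Band-pass the signal around each center frequency and extract the
    Hilbert envelope, a crude stand-in for peripheral filtering."""
    envs = []
    for cf in center_freqs:
        lo, hi = cf * (1 - bandwidth / 2), cf * (1 + bandwidth / 2)
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envs.append(np.abs(hilbert(sosfiltfilt(sos, signal))))
    return np.array(envs)

def across_channel_coherence(envs):
    """Pairwise correlation of channel envelopes: comodulated channels
    yield high off-diagonal values."""
    return np.corrcoef(envs)

fs = 16000
t = np.arange(fs) / fs
rng = np.random.default_rng(5)
modulator = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))          # shared 4-Hz modulation
mix = modulator * (np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 2000 * t))
mix += 0.1 * rng.standard_normal(t.size)
envs = channel_envelopes(mix, fs, center_freqs=[500, 1000, 2000])
print(np.round(across_channel_coherence(envs), 2))
```

In a within-channel-only model, masking is determined by envelope overlap inside each channel; adding a stage that exploits correlations like the one above is the kind of across-channel computation the abstract argues improves consonant-confusion predictions.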
Lucas Vattino, Maryse Thomas, Carolyn Sweeney, Rahul Brito, Cathryn Macgregor and Anne Takesian
Topic areas: correlates of behavior/perception, neural coding, thalamocortical circuitry/function
Interneurons, Network, Correlation
Fri, 11/5 11:15AM - 12:15PM | Virtual poster
Abstract
Inhibitory interneurons in neocortical layer 1 (L1) convey behaviorally relevant information by integrating sensory-driven inputs with neuromodulatory signals. Their activity is known to regulate moment-to-moment encoding of the sensory environment in a context-dependent manner and can drive cortical plasticity mechanisms, both during postnatal development and in adulthood. We and others have shown that these interneurons are heterogeneous and can be subdivided into two major classes defined by the expression of either neuron-derived neurotrophic factor (NDNF) or vasoactive intestinal peptide (VIP). It has previously been demonstrated that L1 interneurons make recurrent synaptic contacts and are also connected electrically through gap junctions, suggesting that they might form coordinated inhibitory networks. However, the connectivity patterns of specific L1 interneuron subtypes and their in vivo functional implications remain unclear. We performed fluorescence-guided whole-cell electrophysiology in slices of the mouse primary auditory cortex (A1) while optogenetically activating VIP or NDNF interneurons. We found that GABA-A-mediated synaptic connections between NDNF interneurons were significantly stronger than those between VIP interneurons or other L1 interneurons, suggesting a robust NDNF recurrent inhibitory network. Next, we performed two-photon calcium imaging in A1 of awake, behaving mice to monitor the spontaneous and sound-driven activity of networks of VIP and NDNF interneurons. These results will reveal how behavioral context impacts the in vivo coordinated activity of these L1 inhibitory networks. Together, our findings suggest that distinct connectivity patterns among NDNF and VIP interneurons may underlie specialized functions in sensory encoding and cortical plasticity.
Rafay A. Khan, Brad Sutton, Yihsin Tai, Sara Schmidt, Somayeh Shahsavarani and Fatima T. Husain
Topic areas: auditory disorders
Tinnitus, Hearing loss, Neuroimaging, Tractography, Connectivity
Fri, 11/5 11:15AM - 12:15PM | Virtual poster
Abstract
Tinnitus has been associated with both anatomical and functional plasticity. In white matter, tinnitus-associated changes have been reported in a range of regions, alongside negative findings. Some of this variation in findings may be attributable to small sample sizes and to the different methodologies employed by different groups. A further layer of complication is added by the unknown relationship between tinnitus and hearing loss. To evaluate whole-brain, network-level changes in structure, we investigated anatomical connectivity via fiber tractography. High-resolution diffusion imaging data were collected from 97 participants, who were divided into four groups: normal-hearing controls (CONNH, n=19), hearing-loss controls (CONHL, n=17), tinnitus sufferers with normal hearing (TINNH, n=17), and tinnitus sufferers with hearing loss (TINHL, n=44). Group-level differences in connectivity were inspected in three nodes: the precuneus (representing the default mode network; DMN) and the bilateral auditory cortices (nodes of the auditory network). Three measures of connectivity were calculated: mean strength, local efficiency, and clustering coefficient. ANOVA revealed significant group differences for all three measures in the precuneus, but none that reached statistical significance in either auditory cortex. Post-hoc analysis revealed that the group differences were primarily driven by the CONNH > TINHL and TINNH > TINHL contrasts, suggesting that DMN connectivity shows altered integration and segregation associated with tinnitus, which can be differentiated from connectivity changes associated with hearing loss. This study demonstrates the feasibility of studying tinnitus-related neural plasticity using fiber tractography, and the results provide an anatomical analog for findings previously reported in the functional connectivity literature.
Kate Christison-Lagay, Noah Freedman, Christopher Micek, Aya Khalaf, Sharif Kronemer, Mariana Gusso, Lauren Kim, Sarit Forman, Julia Ding, Mark Aksen, Ahmad Abdel-Aty, Hunki Kwon, Noah Markowitz, Erin Yeagle, Elizabeth Espinal, Jose Herrero, Stephan Bickel, James Young, Ashesh Mehta, Kun Wu, Jason Gerrard, Eyiyemisi Damisah, Dennis Spencer and Hal Blumenfeld
Topic areas: memory and cognition, correlates of behavior/perception
auditory perception, intracranial EEG, human
Thu, 11/4 11:15AM - 12:15PM | Virtual poster + podium teaser
Abstract
Much recent work towards understanding the spatiotemporal dynamics of the neural mechanisms of conscious perception has focused on visual paradigms. To determine whether there are shared mechanisms for perceptual consciousness across sensory modalities, we developed an auditory task in which target sounds (calibrated to 50% detection) were embedded in noise. Participants (patients undergoing intracranial electroencephalography for intractable epilepsy; n=31) reported whether they perceived the sound, as well as the sound's identity. Participants' perception rate was 58.0% (2.0% SEM) when a target was present; the false positive rate was 8.5% (1.4%). For perceived trials, they correctly identified the target in 89.2% (1.4%) of trials; identification accuracy for non-perceived trials was 40.2% (2.0%) (chance: 33%). Recordings from < 2,800 grey matter electrodes were analyzed for power in the high-gamma range (40-115 Hz). We performed cluster-based permutation analyses to identify significant activity across perceived and not-perceived conditions. For not-perceived trials, significant activity was restricted to auditory regions. Perceived trials also showed activity in auditory regions, but this was accompanied by activity in the right caudal middle frontal gyrus and non-auditory thalamus. Consistent with visual findings, in perceived trials we found that (1) early auditory activity is followed by a wave that sweeps through auditory association regions into parietal and frontal cortices, and (2) activity decreases below baseline in orbital frontal and rostral inferior frontal cortex. In summary, we found a broad network of cortical and subcortical regions involved in auditory perception that is similar to the networks observed with vision, suggesting shared general mechanisms for conscious perception.
Heidi Bliddal, Christian Bech Christensen, Cecilie Møller, Peter Vuust and Preben Kidmose
Topic areas: correlates of behavior/perception, novel technologies
Ear-EEG, EEG, beat perception
Fri, 11/5 11:15AM - 12:15PM | Virtual poster
Abstract
Ear-EEG is a promising novel technology that records electroencephalography (EEG) from electrodes inside the ear, allowing discreet and mobile recording of EEG. Nozaradan et al. (2011) used scalp EEG to study neural responses to an isochronous sequence of sounds under three conditions: a control condition and two imagery conditions in which participants were instructed to imagine accents on every second (march) or third (waltz) beat. A significant peak was found at the frequency of the imagined beat only in the matching imagery conditions. Since no physical accents were present in the stimulus, the peaks at beat-related frequencies indicate higher-order processing of the sound sequence. The aim of the present combined scalp- and ear-EEG study (n = 20) was to determine whether neural correlates of beat perception can be measured using ear-EEG. To investigate this, we used an adapted version of the Nozaradan paradigm. Three different electrode reference configurations were tested: a literature-based reference, an in-ear reference, and an in-between-ears reference. The results showed that, with the literature-based reference or the in-between-ears reference, a significantly greater peak was found at the march-related frequency in the march imagery condition and at the waltz-related frequency in the waltz imagery condition, compared with the other imagery condition (p < .02). In conclusion, it is possible to measure the neural correlates of beat perception using ear-EEG despite the markedly different electrode placement. The present study therefore brings us one step closer to using neural feedback to improve hearing aid algorithms.
Carla Griffiths, Joseph Sollini, Jules Lebert and Jennifer Bizley
Topic areas: memory and cognition, speech and language, correlates of behavior/perception, neural coding
Auditory cortex, Perceptual invariance, Auditory object, Encoding, Electrophysiology, Neural decoding, Ferret
Thu, 11/4 11:15AM - 12:15PM | Virtual poster
Abstract
Perceptual invariance, the act of recognising auditory objects across identity-preserving variation and in the presence of other auditory stimuli, is critical to listening. To test perceptual invariance, we trained four ferrets in a Go/No-Go water reward task where ferrets identified a target word ("instruments") from a stream drawn from 54 other British English words (distractors). We then manipulated the mean fundamental frequency (F0) within and across trials. The ferrets identified the target word (chance=33% hit rate) when the F0 was roved within a trial with hit rates (Female/Male speaker) of 60%/40% for F1702, 68%/38% for F2002, 43%/38% for F1803, and 48%/52% for F1815. For whole trial modified F0, the hit rate was 62%/42% (F1702), 48%/44% (F2002), 44%/35% (F1803), and 57%/40% (F1815). We recorded neural activity from auditory cortex in one ferret and considered sites with a sound-onset response for Euclidean distance decoding. We computed a decoding score for pairwise discrimination of the target word from seven high-occurrence distractors, the target word reversed, and pink noise equal in duration and spectrally matched to the target word. We found neural responses that discriminated target from distractor responses across variation in F0. Moreover, in most cases, these responses did not carry F0 information either in target or distractor responses. Our preliminary results suggest that auditory objects are represented in the AC and that these responses are resistant to F0 change. Future work will incorporate hippocampal recordings to determine whether temporal coherence between the hippocampus and AC is required for auditory object recognition.
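Euclidean-distance decoding of the kind described here typically compares each held-out response to trial-averaged templates for the two stimuli being discriminated. Below is a minimal leave-one-out sketch of such a pairwise decoding score on toy data; the matrix shapes, trial counts, and signal levels are illustrative, not those of the recorded ferret data.

```python
import numpy as np

def pairwise_decoding_score(resp_a, resp_b):
    """Leave-one-out nearest-template decoding of two stimuli from
    trial-by-feature response matrices, using Euclidean distance."""
    correct, total = 0, 0
    for trials, others in ((resp_a, resp_b), (resp_b, resp_a)):
        for i in range(len(trials)):
            own_template = np.delete(trials, i, axis=0).mean(axis=0)
            other_template = others.mean(axis=0)
            d_own = np.linalg.norm(trials[i] - own_template)
            d_other = np.linalg.norm(trials[i] - other_template)
            correct += d_own < d_other
            total += 1
    return correct / total

# toy data: 20 trials x 50 time bins for a target word and one distractor
rng = np.random.default_rng(6)
target = rng.normal(1.0, 1.0, (20, 50))
distractor = rng.normal(0.0, 1.0, (20, 50))
print("decoding score:", pairwise_decoding_score(target, distractor))
```

Running the same score on responses grouped by F0 rather than by word identity is one simple way to ask, as the abstract does, whether a site's responses carry F0 information in addition to word identity.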
Jules Lebert, Carla Griffiths, Joseph Sollini and Jennifer Bizley
Topic areas: correlates of behavior/perception, neural coding, thalamocortical circuitry/function
Auditory Scene Analysis, Stream segregation, Auditory Cortex, Ferret, Electrophysiology, Behavior
Fri, 11/5 11:15AM - 12:15PM | Virtual poster
Abstract
Listening in the real world involves making sense of mixtures of multiple overlapping sounds. The brain decomposes such scenes into individual objects, and a sequence of related auditory objects forms a stream. We are investigating the role of the auditory cortex in the formation and maintenance of auditory streams. The temporal coherence theory (Shamma et al., 2011) has provided one explanation for stream formation, postulating that the brain creates a multidimensional representation of sounds along different feature axes and groups them based on their temporal coherence to form streams. Supporting this idea, neural correlates of the differences in perception elicited by synchronous and alternating tones have been found in the primary auditory cortex of behaving ferrets (Lu et al., 2017). However, the temporal coherence theory has yet to be tested with more naturalistic sounds composed of multiple streams. To this end, we trained ferrets to detect a target word in a stream of repeating distractor words, spoken by the same talker, played in a background of spatially separated noise (white, pink, and speech-shaped noise). Preliminary data were collected in the auditory cortex of behaving ferrets. Neural population-level analyses are being implemented to identify correlation structures. We hypothesize that when the animal can successfully segregate the noise and speech streams, the neural population will show a uniform correlation structure early in the trial that evolves into two distinct correlation structures. Stimulus reconstruction will be performed on the different clusters of neurons to investigate whether they encode different auditory streams.
Joan Belo, Maureen Clerc and Daniele Schon
Topic areas: memory and cognition, correlates of behavior/perception, novel technologies
Auditory Attention Detection, EEG, Auditory Attention, Cognitive Functions
Thu, 11/4 11:15AM - 12:15PM | Virtual poster
Abstract
EEG-based Auditory Attention Detection (AAD) methods have become popular in the field of auditory neuroscience because they can help us better understand how the brain processes naturalistic auditory stimuli, including speech and music. Such methods also open promising avenues for clinical applications, since they could be used to develop neuro-steered auditory aids. However, one major limitation of AAD is that its performance (i.e., reconstruction and decoding accuracy) varies greatly across individuals. We hypothesize that part of this inter-individual variability is due to general cognitive abilities that are necessary to process complex auditory scenes, including inhibition, working memory (WM), and sustained attention. In this study, we assess whether inhibition, WM, and sustained attention abilities correlate with the reconstruction and decoding accuracies of a backward linear AAD method. More precisely, we postulate that the better these cognitive functions are, the higher the reconstruction and decoding accuracies of the method will be. To test this hypothesis, 30 participants were enrolled in an experimental paradigm in which they had to 1) actively listen to several dichotic stimuli while their neural activity was recorded and 2) complete online behavioral tests, from home, to measure the aforementioned cognitive abilities. Using a backward linear AAD method, we reconstructed the stimulus envelopes from the EEG data and obtained subject-specific reconstruction and decoding accuracies. We then correlated these performances with the behavioral data collected during the online cognitive tests.
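A backward (stimulus-reconstruction) AAD model maps time-lagged EEG onto the attended speech envelope, usually with ridge regression, and decides which stream was attended by comparing the correlations between the reconstruction and the two candidate envelopes. The sketch below shows that generic pipeline on toy data; the lag range, regularization, and decision rule are standard choices rather than necessarily those of this study, and training and testing on the same data is done here only for brevity.

```python
import numpy as np

def lagged_design(eeg, max_lag):
    """Stack time-lagged copies of each EEG channel: (samples, channels*(max_lag+1))."""
    n_samples, n_ch = eeg.shape
    X = np.zeros((n_samples, n_ch * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[lag:, lag * n_ch:(lag + 1) * n_ch] = eeg[:n_samples - lag]
    return X

def train_backward_model(eeg, envelope, max_lag=16, alpha=1.0):
    """Ridge regression mapping lagged EEG onto the attended-speech envelope."""
    X = lagged_design(eeg, max_lag)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)

def decode(eeg, w, env_attended, env_unattended, max_lag=16):
    """Reconstruct the envelope and pick the stream with the higher correlation."""
    recon = lagged_design(eeg, max_lag) @ w
    r_att = np.corrcoef(recon, env_attended)[0, 1]
    r_unatt = np.corrcoef(recon, env_unattended)[0, 1]
    return ("attended" if r_att > r_unatt else "unattended"), round(r_att, 2), round(r_unatt, 2)

# toy data: 16-channel EEG weakly driven by the attended envelope
rng = np.random.default_rng(7)
n_samples, n_ch = 4000, 16
env_att = np.abs(rng.standard_normal(n_samples))
env_unatt = np.abs(rng.standard_normal(n_samples))
eeg = 0.3 * np.outer(env_att, rng.random(n_ch)) + rng.standard_normal((n_samples, n_ch))
w = train_backward_model(eeg, env_att)
print(decode(eeg, w, env_att, env_unatt))
```

The "reconstruction accuracy" discussed in the abstract corresponds to r_att above, and "decoding accuracy" to the proportion of trials on which the comparison picks the correct stream.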
Maryse Thomas, Carolyn Sweeney, Esther Yu, Kasey Smith, Wisam Reid and Anne Takesian
Topic areas: correlates of behavior/perception, thalamocortical circuitry/function
auditory cortex, interneuron, NDNF, frequency discrimination, layer 1
Fri, 11/5 1:15PM - 2:15PM | Virtual poster
Abstract
A growing understanding of auditory cortical interneuron circuits has highlighted the role of specific inhibitory interneurons in shaping frequency tuning and discrimination acuity. In auditory cortex, neuron-derived neurotrophic factor (NDNF) interneurons reside primarily within neocortical layer 1 and are known to inhibit the distal dendrites of excitatory neurons while simultaneously disinhibiting their somata via projections to parvalbumin-positive interneurons. Here, we aim to investigate the function of NDNF interneurons in frequency tuning and behavioral frequency discrimination in mice. Using in vivo two-photon calcium imaging, we first characterized the tuning properties of NDNF interneurons in response to pure tones of varying frequencies and intensities. We found that a subset of NDNF interneurons exhibit robust frequency and intensity tuning comparable to excitatory neurons in layer 2/3. We next developed a behavioral frequency discrimination paradigm in which mice distinguished trains of repeating pure tones from trains of two alternating tones allowing us to establish reliable frequency discrimination thresholds by varying the frequency of the alternating tone. We subsequently recorded the responses of both NDNF interneurons and layer 2/3 excitatory neurons to the training stimuli under both passive and behaving contexts. We identified individual neurons in both classes capable of discriminating between repeating or alternating tones. Finally, ongoing experiments using chemogenetic silencing of NDNF interneurons will demonstrate the necessity of their activity for task performance. This work will help establish the function of NDNF interneurons in frequency discrimination and further elucidate the contribution of specific inhibitory cortical circuits to sound processing.
James Baldassano and Katrina MacLeod
Topic areas: cross-species comparisons, neural coding, subcortical processing
Cochlear Nucleus, Avian auditory brainstem, Electrophysiology, Intrinsic physiology
Fri, 11/5 11:15AM - 12:15PM | Virtual poster
Abstract
The avian cochlear nucleus angularis (NA) plays a diverse role in encoding intensity information, including the acoustic temporal envelope. Recent work shows that the intrinsic properties of NA neurons may contribute to this functional diversity. The five electrophysiological neuron types present in NA (tonic 1, tonic 2, tonic 3, single spike, and damped) show distinct levels of temporal sensitivity when stimulated in vitro in a manner that simulates natural biological input, ranging from pure integrators to temporally sensitive neurons known as differentiators. We investigated the role of low-threshold-activated Kv1 channels in driving this operating diversity by blocking them with the specific antagonist dendrotoxin (DTX). We found that Kv1 channels shaped the electrophysiological phenotypes of NA neuron types, particularly the single-spike, tonic 1, and tonic 2 neurons. When spike-time reliability and fluctuation sensitivity were measured in DTX-sensitive NA neurons, we found that the temporal sensitivity of their responses to rapid fluctuations in their inputs was reduced by the drug. Finally, we show that DTX reduced spike threshold adaptation in these neurons. These results suggest that Kv1 channels act as high-pass filters and could be a driving force behind the temporal sensitivity of NA neurons.
Sophie Bagur, Jacques Bourg, Alexandre Kempf, Thibault Tarpin, Etienne Gosselin, Sebastian A Ceballo, Khalil Bergaoui, Yin Guo, Allan Muller, Jérôme Bourien, Jean Luc Puel and Brice Bathellier
Topic areas: hierarchical organization, neural coding, subcortical processing, thalamocortical circuitry/function
Auditory cortex, Auditory thalamus, Inferior colliculus, Neural coding, Population coding, Electrophysiology, Two photon imaging
Thu, 11/4 11:15AM - 12:15PM | Virtual poster + podium teaser
Abstract
Auditory perception relies on spectrotemporal pattern separation. Yet the step-by-step transformations implemented by the auditory system remain elusive, partly due to the lack of systematic comparison of representations at each stage. To address this, we compared population representations from extensive two-photon calcium imaging and electrophysiology recordings in the auditory cortex, thalamus, and inferior colliculus of mice, and from a detailed cochlea model. Using a noise-corrected correlation metric, we measured the similarity of activity patterns generated by diverse canonical spectrotemporal motifs (pure tones, broadband noise and chords, amplitude and frequency modulations). This measure revealed a decorrelation of sound representations that was maximal in the cortex, where the information carried by temporal and rate codes converged. This decorrelation was accompanied by response sparsening. The thalamus stood out from this global trend with a very dense code that recorrelated responses, possibly because it represents an anatomical bottleneck. The gradual decorrelation of time-independent representations found in the auditory system could be reproduced by a deep network trained to detect in parallel basic perceptual attributes of sounds (frequency and intensity ranges, temporal modulation types). In contrast, networks trained on tasks that ignore some perceptual attributes failed to decorrelate the related acoustic information as observed in the biological system. Finally, we tested the impact of introducing a thalamus-like bottleneck and found that it could account for the recorrelation of representations. Together, these results establish that the mouse auditory system makes information about diverse perceptual attributes accessible in a rate-based population code, and that correctly constrained neural networks reproduce these properties.
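A common way to build a noise-corrected similarity measure is to normalize the correlation between trial-averaged response patterns by each dataset's own split-half reliability, so that the ceiling imposed by trial-to-trial noise is factored out. The sketch below illustrates that idea on toy data; it is one standard estimator of this type, offered as a hedged illustration rather than the authors' exact metric.

```python
import numpy as np

def split_half_reliability(resp, seed=0):
    """resp: repeats x stimuli. Correlate the mean patterns of two random halves."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(resp.shape[0])
    half1 = resp[idx[:len(idx) // 2]].mean(axis=0)
    half2 = resp[idx[len(idx) // 2:]].mean(axis=0)
    return np.corrcoef(half1, half2)[0, 1]

def noise_corrected_correlation(resp_a, resp_b):
    """Correlation between trial-averaged representations of two regions,
    normalized by the geometric mean of their split-half reliabilities."""
    r_ab = np.corrcoef(resp_a.mean(axis=0), resp_b.mean(axis=0))[0, 1]
    return r_ab / np.sqrt(split_half_reliability(resp_a) * split_half_reliability(resp_b))

# toy data: 20 repeats x 30 stimuli for two noisy regions sharing a signal
rng = np.random.default_rng(8)
signal = rng.standard_normal(30)
region_a = signal + 0.8 * rng.standard_normal((20, 30))
region_b = signal + 0.8 * rng.standard_normal((20, 30))
print(round(noise_corrected_correlation(region_a, region_b), 2))   # close to 1 despite noise
```

Without the reliability correction, noisier recordings would look artificially decorrelated, which is exactly the bias such a metric is designed to remove when comparing stations along the pathway.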
Koun Onodera and Hiroyuki Kato
Topic areas: thalamocortical circuitry/function
auditory cortex, deep layer, optogenetics
Fri, 11/5 10:45AM - 11:00AM | Short talk
Abstract
Revealing the principles governing interactions between the six layers of the cortex is fundamental for understanding how these intricately woven circuits work together to process sensory information. The information flow in the sensory cortex has been described as a predominantly feedforward sequence with deep layers as the output structure. Although recurrent excitatory projections from layer 5 (L5) to superficial L2/3 have been identified by anatomical and physiological studies, their functional impact on sensory processing remains unclear. Here, we use layer-selective optogenetic manipulations in the primary auditory cortex to demonstrate that feedback inputs from L5 suppress the activity of superficial layers, contrary to the prediction from their excitatory connectivity. This suppressive effect is predominantly mediated by translaminar circuitry through intratelencephalic (IT) neurons, with an additional contribution of subcortical projections by pyramidal tract (PT) neurons. Furthermore, L5 activation sharpened tone-evoked responses of superficial layers in both frequency and time domains, indicating its impact on cortical spectro-temporal integration. Together, our findings challenge the classical view of feedforward cortical circuitry and reveal a major contribution of inhibitory recurrence in shaping sensory representations. Informal discussion to follow at 11:15 am EDT (GMT-4) on Zoom (link below).
Eyal Kimchi, Yurika Watanabe, Devorah Kranz, Tatenda Chakoma, Miao Jing, Yulong Li and Daniel Polley
Topic areas: memory and cognition correlates of behavior/perception multisensory processes
acetylcholine neuromodulator cortex photometry
Thu, 11/4 1:15PM - 2:15PM | Virtual poster
Abstract
Acetylcholine is an important neuromodulator of cortical function that guides neural plasticity in mammalian auditory cortex (ACtx). However, the neural effects of acetylcholine in ACtx have been primarily studied using exogenous pharmacologic modulation or electrical stimulation of cholinergic inputs. We still do not know when acetylcholine release occurs endogenously in ACtx. Here, we used fiber photometry to monitor a genetically encoded fluorescent indicator to measure acetylcholine release in ACtx in awake mice (N = 12). We compared drivers and correlates of acetylcholine release in ACtx with those in visual cortex (VCtx, N = 8) and medial prefrontal cortex (mPFC, N = 4). While sensory-evoked acetylcholine release in ACtx was expected for loud, novel, or behaviorally relevant sounds, we found that even moderate-intensity (<50 dB SPL) tones or spectrotemporal ripples elicited strong, non-habituating acetylcholine release. Surprisingly, we observed similar results in VCtx, where auditory stimuli drove stronger acetylcholine release than visual stimuli. In contrast, acetylcholine levels in mPFC were less sensitive to passively presented audiovisual stimuli, yet were highly responsive to behavioral reinforcers including liquid reward, air puff, or electric shock. Acetylcholine levels in all regions were also correlated with pupil dilation and orofacial movements. These findings demonstrate that distributed acetylcholine levels are dynamic and regionally specialized, with shared associations in sensory cortices of different modalities that are distinct from mPFC. Knowing the drivers and correlates of endogenous acetylcholine levels shapes our understanding of physiologic neural plasticity and unveils opportunities for noninvasive monitoring or manipulation of endogenous neuromodulator release.
Ying Yu, Seung-Goo Kim and Tobias Overath
Topic areas: correlates of behavior/perception
MTurk Music perception Temporal processing Tonal structure 12-tone serialism
Thu, 11/4 1:15PM - 2:15PM | Virtual poster
Abstract
Introduction: Temporal integration windows help us parse continuous information. They have been studied for speech perception but remain relatively unknown for music, especially with respect to their role in appreciating musical structure via tonality. Here, we tested whether two types of highly structured musical styles (Western tonal music and 12-tone atonal music) are analyzed with different temporal integration windows. Method: Stimuli were created using a quilting algorithm (Overath et al., 2015) that controlled the temporal structure (segment lengths were log-spaced multiples of 60 ms, up to 3840 ms). The tonal and atonal music selections were taken from J.S. Bach’s Violin Sonatas and Partitas and E. Krenek’s Sonata for Solo Violin No. 2, respectively. Naturalness ratings for each 12-s stimulus were collected via Amazon MTurk. Results: Analysis of the data using two-level GLMs revealed that naturalness ratings significantly increased with increasing segment length. On average, tonal music (Bach) was rated higher than atonal music (Krenek). The interaction between music type and segment length revealed that the effect of segment length was greater in tonal than in atonal music. Importantly, the naturalness ratings for atonal music plateaued at 1920 ms, but continued to increase for tonal music. Conclusions: The results show a greater sensitivity to the temporal structure of tonal music than of non-tonal music in the general population with minimal musical training. Further studies are ongoing to determine the relationship between temporal windows of integration for speech and music.
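As a rough illustration of the quilting manipulation described above, the sketch below cuts a signal into fixed-length segments and reassembles them in random order; the published algorithm (Overath et al., 2015) additionally matches segment-boundary statistics, and the octave-spaced segment lengths (60, 120, ..., 3840 ms) are inferred from the description in the abstract.

```python
import numpy as np

# Assumed segment lengths: doublings of 60 ms up to 3840 ms.
SEGMENT_LENGTHS_MS = 60 * 2 ** np.arange(7)  # 60, 120, ..., 3840

def simple_quilt(signal, fs, segment_ms, seed=0):
    """Toy quilt: split `signal` (1-D array, sampling rate `fs` Hz) into
    segments of `segment_ms` milliseconds and concatenate them in a random
    order. This omits the boundary-matching step of the real algorithm."""
    rng = np.random.default_rng(seed)
    seg_len = int(round(fs * segment_ms / 1000))
    n_segs = len(signal) // seg_len
    segments = [signal[i * seg_len:(i + 1) * seg_len] for i in range(n_segs)]
    order = rng.permutation(n_segs)
    return np.concatenate([segments[i] for i in order])
```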
Kelly Jahn, Jenna Browning-Kamins, Kenneth Hancock, Jacob Alappatt and Daniel Polley
Topic areas: auditory disorders correlates of behavior/perception
central gain tinnitus hyperacusis hypersensitivity loudness emotion
Fri, 11/5 11:15AM - 12:15PM | Virtual poster
Abstract
Our experience of sound is deeply interwoven with emotion. While we have well-developed approaches to study the neural encoding of sound as it relates to acoustic sources and sound perception, rigorous quantitative approaches to assess the affective qualities of sound have received far less attention. Whether in the context of normal aging, sensorineural hearing loss, or combat PTSD, subjects often report aversive emotional reactions to sounds that are experienced as neutral by normal listeners. Here, we developed a comprehensive battery of objective physiological biomarkers that can quantitatively dissociate complaints of enhanced loudness perception and sound-evoked distress in human subjects, and which have the potential to evolve into a new class of diagnostic tools for evaluating sound intolerance. To quantify sound-related distress, we assess complementary behavioral and objective indices of arousal including sound-evoked changes in pupil diameter, skin conductance, and facial micro-movements. We quantify neural sound-level growth using cortical electroencephalography (EEG) alongside parallel psychophysical measures of loudness growth. To date, we have tested 23 subjects with normal hearing, 11 with tinnitus, and 3 with hyperacusis. Across subjects, sounds that elicit negative emotional reactions also lead to elevated physiological arousal (e.g., larger changes in pupil diameter and skin conductance responses) relative to neutral sounds. We also find that neural sound-level growth is steepest for individuals with tinnitus and in subjects who report subjective hypersensitivity. The development and validation of a comprehensive battery to quantify the multifaceted aspects of sound tolerance will prove valuable in diagnosing and managing disorders with core hypersensitivity phenotypes.
Gavin Mischler, Menoua Keshishian, Stephan Bickel, Ashesh Mehta and Nima Mesgarani
Topic areas: speech and language
Adaptation Computational Modeling Dynamic STRF Auditory Representations
Thu, 11/4 1:15PM - 2:15PM | Virtual poster
Abstract
The human auditory pathway displays a robust capacity to adapt to sudden changes in background noise. While this neural ability has been identified and characterized, its mechanism is not well understood due to the difficulty of interpreting such nonlinear behavior. Traditional linear models are highly interpretable but fail to capture the nonlinear dynamics of adaptation. To overcome this, we employ convolutional neural networks (CNNs) with ReLU activations, which can be readily interpreted as dynamic STRFs (DSTRFs), since a linear function equivalent to the CNN can be found for each stimulus instance. We seek to interpret the nonlinear operations of these models, which have been trained to mimic the brain’s responses, in order to understand how the brain may deal with changing noise conditions. We recorded intracranial EEG (iEEG) from neurosurgical patients who attended to speech with background noise that switched among several categories. We first demonstrate that a feedforward CNN trained to predict neural responses can produce the same rapid adaptation phenomenon as the auditory cortex, a feat that linear models cannot achieve. We further analyze the computations performed by these models and show how some neural regions react to a sudden change in background noise by altering their filters to deal with the new sound statistics. By examining how these filters change over time, particularly when adapting to a new background noise, we provide evidence that can explain how the brain achieves noise-robust speech perception in real-world environments.
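The dynamic-STRF idea can be sketched briefly: because a CNN with ReLU activations is piecewise linear, the linear filter it applies to any particular stimulus instance equals the gradient of its output with respect to the input window. The PyTorch snippet below assumes a model that maps a (frequency x time) spectrogram window to a scalar predicted response; it is a generic illustration, not the authors' implementation.

```python
import torch

def dynamic_strf(model, stimulus, t, window):
    """Locally linear filter (dynamic STRF) of a ReLU CNN at time point t.

    stimulus: tensor of shape (n_freq, n_time); `model` takes a batch of
    (n_freq, window) inputs and returns a scalar prediction per item. The
    gradient of the output with respect to the input window is the
    stimulus-dependent linear filter in effect at that moment.
    """
    x = stimulus[:, t - window:t].clone().detach().requires_grad_(True)
    y = model(x.unsqueeze(0)).squeeze()  # predicted response at time t
    y.backward()
    return x.grad.detach()               # (n_freq, window): the DSTRF at t
```

Tracking how this per-instance filter changes after a switch in background noise is one way to visualize the adaptation the abstract describes.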
Timothy Olsen and Andrea Hasenstaub
Topic areas: neural coding
Sound offset response Short-term plasticity Interneurons PV SST
Thu, 11/4 1:15PM - 2:15PM | Virtual poster
Abstract
The neural response to sound is dependent on stimulus history, with sound-evoked responses subject to various forms of short-term plasticity (STP). In addition to spiking in response to sound, many neurons in the central auditory system fire action potentials in response to the cessation of a sound (so-called “offset responses”). Sound-offset responses are essential for sound processing, including the encoding of sound duration and frequency-modulated sweeps, which in turn are essential for the understanding of continuous sounds such as speech. Whether offset responses are also subject to STP is currently unknown. We presented awake mice with a series of repeating noise bursts, recorded through the layers of primary auditory cortex (A1) with a silicon probe, and measured dynamics of spiking response magnitudes from putative pyramidal (broad spiking: BS), PV and SST cells. BS and PV cells showed transient sound-evoked responses and frequent sound-offset responses. In contrast, SST cells generally exhibited sustained sound-evoked responses and few offset responses. Sound-evoked responses from BS cells typically depressed, whereas their sound-offset responses were more likely to remain stable or facilitate. In comparison, PV cells showed depressive responses for both sound-evoked and sound-offset responses, whereas SST cells showed increased sound-evoked facilitation and similar sound-offset STP. Across the population of all recorded cells, sound-evoked STP did not predict sound-offset STP, and vice-versa. This study reveals that sound-offset responses in A1 are subject to short-term plasticity, with different cell types in A1 showing varying amounts of sound-evoked and sound-offset adaptation and facilitation.
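One simple way to quantify the short-term plasticity described above is an index comparing the response to the last versus the first burst in the train; the sketch below assumes per-burst spike counts for a single unit and is a generic example rather than the measure necessarily used in the study.

```python
import numpy as np

def stp_index(spike_counts):
    """Short-term plasticity index for a repeated-burst train.

    spike_counts: array of shape (n_trials, n_bursts) with the evoked (or
    offset) spike count of one unit for each burst. Values < 1 indicate
    depression of the response across the train; values > 1 indicate
    facilitation.
    """
    mean_per_burst = spike_counts.mean(axis=0)
    return mean_per_burst[-1] / mean_per_burst[0]
```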
Benjamin Auerbach, Xiaopeng Liu, Kelly Radziwon and Richard Salvi
Topic areas: auditory disorders correlates of behavior/perception
autism fragile x syndrome hyperacusis
Thu, 11/4 1:15PM - 2:15PM | Virtual poster
Abstract
Auditory processing impairments are a defining feature of autism spectrum disorders (ASD), most notably manifesting as extreme sensitivity and/or decreased tolerance to sound (i.e. hyperacusis). Fragile X syndrome (FX) is the leading inherited cause of ASD and, like the greater autistic population, a majority of FX individuals present with hyperacusis. Despite this prevalence and centrality to the autistic phenotype, relatively little is known about the nature and mechanisms of auditory perceptual impairments in FX and ASD. Using a combination of novel operant and innate perceptual-decision making paradigms, we found that a Fmr1 KO rat model of FX exhibits behavioral evidence for sound hypersensitivity in the form of abnormally fast auditory reaction times, increased sound avoidance behavior, and altered perceptual integration of sound duration and bandwidth. Simultaneous multichannel in vivo recordings from multiple points along the auditory pathway demonstrated that these perceptual changes were associated with sound-evoked hyperactivity and hyperconnectivity in the central auditory system. These results suggest that increased auditory sensitivity in FX is due to central auditory hyperexcitability and disrupted temporal and spatial integration of sound input. This novel symptoms-to-circuit approach has the potential to uncover fundamental deficits at the core of FX and ASD pathophysiology while also having direct clinical implications for one of the most disruptive features of these disorders.
Stephen Town, Katherine Wood and Jennifer Bizley
Topic areas: correlates of behavior/perception hierarchical organization
Auditory cortex Cortical inactivation Behavior Vowel discrimination Sound localization Spatial release from masking Distributed coding Functional specialization
Fri, 11/5 11:15AM - 12:15PM | Virtual poster + podium teaser
Abstract
A central question in auditory neuroscience is how far brain regions are functionally specialized for processing specific sound features such as sound location and identity. In auditory cortex, correlations between neural activity and sounds support both the specialization of distinct cortical subfields and the encoding of multiple sound features within individual cortical areas. However, few studies have tested the causal contribution of auditory cortex to hearing in multiple contexts. Here we tested the role of auditory cortex in both spatial and non-spatial hearing. We reversibly inactivated the border between middle and posterior ectosylvian gyrus using cooling as ferrets (n=2) discriminated vowel sounds in clean and noisy conditions. The same subjects were then retrained to localize noise-bursts from six locations and retested with cooling. In both ferrets, cooling impaired both sound localization and vowel discrimination in noise, but not discrimination in clean conditions. We also tested the effects of cooling on vowel discrimination in noise when vowel and noise were colocated or spatially separated. Here, cooling exaggerated deficits in discriminating vowels in colocated noise, resulting in increased performance benefits from spatial separation of sounds and thus stronger spatial release from masking during cortical inactivation. Together, our results show that an auditory cortical area may contribute to both spatial and non-spatial hearing, consistent with single-unit recordings in the same brain region. The deficits we observed did not reflect general impairments in hearing, but rather were specific to more realistic behaviors that require the use of information about both sound location and identity.
Sam Watson, Torsten Dau and Jens Hjortkjær
Topic areas: speech and language correlates of behavior/perception neural coding subcortical processing
Amplitude modulation EEG Modulation Filterbank Neural correlates
Thu, 11/4 11:15AM - 12:15PM | Virtual poster
Abstract
Critical information for the perception of speech and other natural sounds is encoded within the envelope. Consequently, the auditory system accurately tracks envelope modulations at multiple rates, and a ‘modulation filterbank’ model has seen some success in accounting for modulation detection and discrimination behaviour, as well as some speech intelligibility data. The present study investigates whether there is a neural basis for such a modulation filterbank, detectable in far-field envelope following responses (EFRs). It utilises novel stimuli to simultaneously interrogate EFR signatures of temporal and rate-based coding of amplitude modulation in humans, and the stability of this coding during modulation masking. A target amplitude modulation (AM) of fixed rate is imposed upon a carrier, along with a noise-band AM masker at intervals ranging ±2 octaves around the target AM rate. The presence of the target is switched on/off periodically at a slow (~2 Hz) rate to create an EEG readout of rate-based encoding pathways, while temporal coding is captured at the target AM rate. Under the theorised modulation filterbank, modulation-domain masking should increase with the proximity of the AM masker to the target. Masking is measured as a reduction in the EEG steady-state response at the target or switching rate as a function of masker position. Comparing behavioural and EFR modulation masking curves, preliminary data suggest that neural temporal coding of envelopes is unaffected by the presence of a competing random-noise modulation masker.
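A minimal sketch of this stimulus design is given below, with band-limited Gaussian noise standing in for the AM masker; the carrier frequency, AM rates, masker bandwidth, and switching rate are placeholder values, not the settings used in the study.

```python
import numpy as np

def bandlimited_noise(n, fs, lo_hz, hi_hz, rng):
    """Gaussian noise restricted to [lo_hz, hi_hz] by zeroing FFT bins."""
    spec = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec[(freqs < lo_hz) | (freqs > hi_hz)] = 0.0
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))

def masked_am_stimulus(dur=10.0, fs=16000, carrier_hz=4000.0,
                       target_am_hz=40.0, masker_offset_oct=1.0,
                       switch_hz=2.0, seed=0):
    """Carrier with a fixed-rate target AM gated on/off at `switch_hz`, plus a
    third-octave noise-band AM masker centred `masker_offset_oct` octaves from
    the target AM rate (the bandwidth is an arbitrary choice here)."""
    rng = np.random.default_rng(seed)
    n = int(dur * fs)
    t = np.arange(n) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    gate = (np.sin(2 * np.pi * switch_hz * t) > 0).astype(float)
    target_env = 0.5 * (1.0 + np.sin(2 * np.pi * target_am_hz * t)) * gate
    masker_rate = target_am_hz * 2.0 ** masker_offset_oct
    masker_env = 0.5 * (1.0 + bandlimited_noise(
        n, fs, masker_rate / 2 ** (1 / 6), masker_rate * 2 ** (1 / 6), rng))
    env = 1.0 + 0.5 * target_env + 0.5 * masker_env
    out = carrier * env
    return out / np.max(np.abs(out))
```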
Fangchen Zhu, Sarah Elnozahy, Jennifer Lawlor and Kishore Kuchibhotla
Topic areas: subcortical processing
cholinergic system axonal imaging two-photon imaging
Fri, 11/5 1:15PM - 2:15PM | Virtual poster
Abstract
The cholinergic basal forebrain (CBF) projects extensively to the auditory cortex (ACx). To date, however, little is known about the intrinsic sensory-evoked dynamics of the CBF. Here, we used simultaneous two-color, two-photon imaging of CBF axon projections and cortical neurons in the ACx to examine stimulus-evoked responses in head-fixed mice passively listening to a suite of auditory stimuli. We observed striking, non-habituating, phasic responses of CBF axons to neutral auditory stimuli that were correlated with tonic cholinergic activity – a known neural correlate of brain state. However, we observed no evidence of tonotopy in CBF axons; instead, there was a coarse population tuning to low-to-mid frequencies that was homogeneous across the ACx. Interestingly, individual axon segments exhibited heterogeneous tuning within imaging sites, allowing the CBF to respond to all frequencies presented. Despite this microscopic heterogeneity, the tuning of axon segments and nearby cortical neurons was uncoupled. Finally, using chemogenetic inactivation of the ACx and auditory thalamus while imaging CBF axons in the ACx, we demonstrated that inactivation of the auditory thalamus, but not ACx, disrupted the frequency tuning of CBF axons and significantly dampened their responsiveness. Our work proposes a novel, non-canonical function for the CBF in which the basal forebrain receives auditory input from the auditory thalamus, modulates these signals based on brain state, and then projects the multiplexed signal to the ACx. These signals are temporally synchronous with cortical responses but differ in their underlying tuning, providing a potential mechanism to influence cortical sensory representations during learning and task engagement.
Rose Ying and Melissa Caras
Topic areas: correlates of behavior/perception subcortical processing
inferior colliculus perceptual learning task-related plasticity electrophysiology
Thu, 11/4 1:15PM - 2:15PM | Virtual poster
Abstract
Sensory stimuli that are alike in nature, such as tones with similar frequencies or slightly different shades of the same color, can be difficult to differentiate at first. However, training can lead to improvement in one’s ability to discriminate between the stimuli, a process called perceptual learning. Auditory perceptual learning is important for language learning, and it can also improve the use of assisted listening devices (Fu & Galvin, 2007). Auditory cortex is an important cortical hub for sensory information that also receives functional inputs from frontal regions associated with higher-order processing. Previous research has shown that perceptual learning strengthens the top-down modulation of auditory cortex. However, it is unclear whether these learning-related changes first emerge elsewhere in the ascending auditory processing pathway and are inherited by the auditory cortex, or arise in the cortex de novo. The inferior colliculus (IC) has been shown to display spectrotemporal task-related plasticity (Ryan & Miller, 1977; Slee & David, 2015), making it an attractive candidate region for the target of top-down projections that modulate neural improvement in perceptual learning. To explore this possibility, single-unit recordings were obtained from the IC of awake, freely-moving Mongolian gerbils during perceptual learning on an amplitude modulation detection task. Our results will determine whether the gerbil IC displays task-related plasticity, and whether learning-related plasticity occurs in the IC during perceptual training. These findings will contribute to a deeper understanding of the circuits behind perceptual learning, which can aid translational research in hearing loss and cochlear implant use.
Meredith Schmehl and Jennifer Groh
Topic areas: correlates of behavior/perception multisensory processes neural coding subcortical processing
multisensory integration sound localization inferior colliculus macaque
Fri, 11/5 1:15PM - 2:15PM | Virtual poster
Abstract
Visual cues can influence brain regions that are sensitive to auditory space (Schmehl & Groh, Annual Review of Vision Science 2021). However, how such visual signals in auditory structures contribute to perception is poorly understood. One possibility is that visual inputs help the brain distinguish among different sounds, allowing better localization of behaviorally relevant sounds in noisy environments (i.e., the cocktail party phenomenon). Our lab previously reported that when two sounds are present, auditory neurons may switch between encoding each individual sound across time (Caruso et al., Nature Communications 2018). We sought to study how pairing a light with one of two sounds might change these time-varying responses (e.g., Atilgan et al., Neuron 2018). We trained a rhesus macaque to localize one or two sounds in the presence or absence of accompanying lights. While the monkey performed this task, we recorded extracellularly from single neurons in the inferior colliculus (IC), a critical auditory region that receives visual input and has visual and eye movement-related responses. We found that pairing a light and sound can change an IC neuron's response to that sound, even if the neuron is unresponsive to light alone. Further, when two sounds are present, pairing a light with one of the sounds can change a neuron's time-varying response to the two sounds. Together, these results suggest that the IC alters its sound representation in the presence of visual cues, providing insight into how the brain combines visual and auditory information into a single perceptual object.
Jong Hoon Lee and Xiaoqin Wang
Topic areas: neural coding
vocalizations auditory belt marmoset auditory cortex high-density electrodes
Fri, 11/5 1:15PM - 2:15PM | Virtual poster
Abstract
In early stages of auditory processing, neurons faithfully encode acoustic features of sounds. As we move up the auditory pathway, the auditory system is thought to represent behaviorally relevant stimuli (e.g., species-specific vocalizations) in a manner that is invariant to differences or changes in their basic acoustic parameters when such differences fall within the range of natural stimuli. This invariance is believed to be the underlying mechanism for perceptual phenomena such as categorical perception or the perceptual magnet effect that have been documented in human speech perception (Kuhl 1991). Although there have been studies investigating the emergence of such invariance, there is much to discover in terms of where and how it is represented. In this study we investigated how core and belt areas of marmoset auditory cortex encode changes in center frequency of both pure tones and synthesized marmoset phee calls and compared the neural responses with corresponding behavioral measures. Marmoset is a highly vocal non-human primate species with a hearing range similar to that of humans. We observed that in the core area, responses to a given pair of stimuli (pure tones or phees) reflect their differences in center frequency. In the belt area, however, this was the case for the responses to pure tones, but not to phees. In particular, offset responses to frequency-shifted phees were invariant to changes in center frequency when the frequency shift was within the range of the center frequency of phee population samples recorded in our colony (Agamaite et al., 2015).
Ryan Calmus, Benjamin Wilson, Yukiko Kikuchi, Zsuzsanna Kocsis, Hiroto Kawasaki, Timothy D. Griffiths, Matthew A. Howard and Christopher I. Petkov
Topic areas: memory and cognition speech and language neural coding
Sequence learning Computational modelling Relational code Binding ECoG Electrophysiology Cognition Human
Fri, 11/5 11:15AM - 12:15PM | Virtual poster
Abstract
Understanding how the brain represents and binds information distributed over time benefits from neurocomputationally informed approaches. Language exemplifies the temporal binding problem, where syntactic knowledge facilitates mental restructuring of words/phrases, yet the problem is broadly relevant. For instance, various animal species can learn auditory sequencing dependencies in Artificial Grammar Learning (AGL) tasks, and in primates fronto-temporal regions including frontal operculum and areas 44/45 are implicated. We recently proposed a neurocomputational model, VS-BIND, which triangulates reported findings across frontal and auditory areas to define site-specific roles and interactions (Calmus et al., 2019). We are testing this model with human and primate intracranial recordings and here report tests undertaken on AGL data in neurosurgery patients being monitored for epilepsy treatment. In the AGL task, 12 patients listened to speech sequences containing adjacent and non-adjacent dependencies and were then tested on their ability to distinguish novel "grammatical" and "ungrammatical" sequences. Analysis of the intracranial data using traditional methods demonstrated fronto-temporal engagement, and we subsequently undertook novel multivariate analyses to reveal the representational geometry of regional relational encodings and inter-regional causal flow. Results revealed that prefrontal and auditory areas interact to integrate relational information, including ordinal positions of items in a speech sequence, concordant with predictions of VS-BIND. We observed causal flow consistent with expectation-driven prefrontal feedback predictions to primary auditory cortex and feedforward auditory information flow. These results indicate critical fronto-temporal roles in transforming the auditory sensory world into mental structures, and show how the neural system integrates key speech sequence features into relational codes.
Rebecca Krall, Megan Arnold, Callista Chambers, Mara Frum and Ross Williamson
Topic areas: correlates of behavior/perception neural coding subcortical processing
Auditory categorization Corticofugal Sensory-guided behavior
Fri, 11/5 1:15PM - 2:15PM | Virtual poster
Abstract
Sensory-guided behavior is ubiquitous in everyday life. During sensory-guided behavior, distinct populations of neurons encode relevant sensory information and transform it into an appropriate behavioral response. An open problem is identifying which neural circuits contribute to such behaviors. In the auditory system, information is propagated through a feed-forward hierarchy that runs from the cochlea to the primary auditory cortex (ACtx). Corticofugal neurons in the ACtx send projections to multiple nodes of this ascending pathway and target distinct downstream regions associated with decision making, action, and reward. Through these projections, corticofugal neurons are positioned to broadcast behaviorally-relevant information and shape auditory representations across brain-wide networks. We hypothesized that distinct classes of ACtx projection neurons differentially mediate auditory-guided behaviors. To test this hypothesis, we developed a head-fixed behavioral task, where mice are trained to categorize amplitude-modulated noise bursts by licking one of two spouts yoked to the modulation frequency. To determine the role of ACtx in this behavior, we optogenetically silenced excitatory neurons during stimulus presentations using soma-targeted Guillardia theta anion-conducting channelrhodopsins (GtACRs). We found that inactivating ACtx excitatory neurons in 20% of trials did not disrupt task learning, allowing us to assess the contribution of ACtx neural populations longitudinally. We found that inhibition of excitatory neurons in the ACtx altered task performance in a manner dependent on the extent of learning and task difficulty. Our ongoing research seeks to characterize the specific contributions of distinct ACtx projection neuron classes using selective expression of GtACRs to inhibit activity across task acquisition and expression.
Justin Yao, Klavdia Zemlianova and Dan Sanes
Topic areas: correlates of behavior/perception
parietal cortex temporal integration alternative forced choice neural manifolds
Fri, 11/5 11:15AM - 12:15PM | Virtual poster + podium teaser
Abstract
The transformation of sensory evidence into decision variables is fundamental to forming perceptual choices. We asked how the neural representation of acoustic information is transformed in the auditory-recipient parietal cortex, a region that is causally associated with sound-driven perceptual decisions (Yao et al., 2020). Neural activity was recorded wirelessly from parietal cortex while gerbils performed an alternative forced choice auditory temporal integration task, or during passive listening to the identical acoustic stimuli. Gerbils were required to discriminate between two amplitude modulated (AM) noise rates, 4 versus 10 Hz, as a function of signal duration (100-2000 ms). Task performance improved with increasing duration, and reached an optimum at ≥600 ms. We found that population activity from simultaneously recorded parietal neurons represented acoustic information (4 vs 10 Hz AM). A principal component analysis fit to trial-averaged neural responses revealed low-dimensional encoding of acoustic information, with neural trajectories (i.e., neural manifolds) that differentiated across stimulus conditions. During task performance, decoded population activity reflected psychometric performance, which was consistent with low-dimensional encoding of acoustic information (seen in passive listening) and behavioral choices (left versus right). At stimulus onset, neural trajectories started at a similar position, but began to diverge toward the relevant decision subspace after ~300 ms of acoustic stimulation. Neural trajectories of incorrect trials tended to course along the opposing decision subspace, reflecting lapse rate or failed evidence accumulation. Taken together, our findings demonstrate that parietal cortex leverages the encoded auditory information to guide sound-driven perceptual decisions over a behaviorally-relevant time course.
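The trajectory analysis described above can be illustrated as a principal component analysis fit to trial-averaged population activity, with each stimulus condition then re-expressed as a low-dimensional trajectory over time; the scikit-learn sketch below assumes a (conditions x time x neurons) array and is a generic illustration, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA

def neural_trajectories(trial_avg, n_components=3):
    """Project trial-averaged activity onto its leading principal components.

    trial_avg: array of shape (n_conditions, n_timepoints, n_neurons), e.g.
    responses to 4 Hz vs. 10 Hz AM at each duration. PCA is fit on all
    condition-time samples pooled together; the result has shape
    (n_conditions, n_timepoints, n_components), one trajectory per condition.
    """
    n_cond, n_time, n_neurons = trial_avg.shape
    pooled = trial_avg.reshape(n_cond * n_time, n_neurons)
    pca = PCA(n_components=n_components).fit(pooled)
    return pca.transform(pooled).reshape(n_cond, n_time, n_components)
```

Divergence of the per-condition trajectories after stimulus onset would correspond to the separation toward decision subspaces described in the abstract.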
Keith Kaufman, Rebecca Krall, Megan Arnold and Ross Williamson
Topic areas: correlates of behavior/perception neural coding
auditory cortex arousal pupil brain state corticofugal neurons frequency tuning
Thu, 11/4 11:15AM - 12:15PM | Virtual poster
Abstract
Fluctuations in behavioral state, such as attention and arousal, persist during wakefulness and influence both behavior and sensory information processing. Strong correlations between pupil diameter, a biomarker of arousal state, and evoked neural activity in multiple sensory cortices are well-documented. Lin et al. (2019) found that arousal modulates both the frequency tuning and response magnitude of L2/3 pyramidal neurons in the primary auditory cortex (ACtx). ACtx pyramidal neurons massively innervate many downstream targets, both within and outside of the central auditory pathway. Notably, classes of corticofugal neurons (i.e., intratelencephalic (IT), extratelencephalic (ET), and corticothalamic (CT)) differ regarding their anatomy, morphology, and intrinsic and synaptic properties. The distinct anatomy and connectivity profiles of these cells lead us to hypothesize that arousal states may differentially modulate their sensory tuning properties. To investigate this, we recorded neural activity from ACtx with two-photon calcium imaging in awake mice while simultaneously capturing facial movements and pupil dilations. We drove GCaMP8s expression in IT/ET/CT populations through Cre-dependent viral transfection. Similar to Lin et al. (2019), we found arousal-dependent changes in the tuning properties of L2/3 IT cells, evidenced by an increase in bandwidth and response magnitude coinciding with higher arousal states. Our preliminary data suggest that the effect of arousal differs in other projection classes, with response magnitudes peaking at intermediate levels of arousal, following the classic inverted-U dependence (Yerkes-Dodson curve). The characterization of these state-dependent effects on distinct excitatory populations provides valuable insight into how sensory information is shared brain-wide to guide perception and action.
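The inverted-U relationship mentioned above can be summarized with a simple quadratic fit of response magnitude against normalized pupil diameter; the sketch below is a minimal illustration under that assumption, not the authors' analysis.

```python
import numpy as np

def inverted_u_fit(pupil, response):
    """Fit response = a*pupil**2 + b*pupil + c and report the arousal level
    at which the fitted response peaks (only meaningful if the fit is concave,
    i.e. a < 0, as expected for a Yerkes-Dodson-style profile)."""
    a, b, c = np.polyfit(pupil, response, deg=2)
    peak = -b / (2 * a) if a < 0 else np.nan
    return (a, b, c), peak
```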
Megan Arnold, Rebecca Krall and Ross Williamson
Topic areas: hierarchical organization subcortical processing
corticofugal extratelencephalic auditory projection
Fri, 11/5 1:15PM - 2:15PM | Virtual poster
Abstract
Excitatory projection neurons in primary auditory cortex (ACtx) propagate sensory information brain-wide to inform emotion, attention, decision-making, and action. These neurons fall into three broad classes: intratelencephalic (IT), extratelencephalic (ET), and corticothalamic (CT). Of these classes, ET cells form the only direct connection between ACtx and myriad sub-cortical targets. Their distinct morphology, with prominent apical dendrites and diverse axonal targets, puts them in a privileged position to broadcast sensory signals to multiple downstream targets simultaneously. However, the extent of their axonal collateralization, the spatial organization of their projections, and whether these distinct organizational motifs receive differential synaptic input remain unknown. To address these questions, we characterized the input/output circuitry of ACtx ET cells and compared their anatomical organization to that of IT and CT populations. We drove selective viral expression of a fluorophore in distinct ET sub-populations, allowing us to quantify downstream projection densities and identify local and long-range synaptic input through monosynaptic rabies tracing. Our preliminary results indicate that many ET neurons collateralize to the non-lemniscal regions of the inferior colliculus and thalamus, confirming previous reports. Monosynaptic rabies tracing demonstrated widespread synaptic inputs to ET, IT, and CT cells from many cortical and subcortical areas, including the thalamus, contralateral ACtx, and ipsilateral visual, somatosensory, parietal, and retrosplenial cortices. Our ongoing experiments are focused on extending these findings to distinct ET organizational motifs. This work will provide a foundation for understanding how brain-wide interactions between distinct areas cooperate to orchestrate sensory perception and guide behavior.
Zsuzsanna Kocsis, Rick L. Jenison, Thomas E. Cope, Peter N. Taylor, Bob McMurray, Ariane E. Rhone, Mccall E. Sarrett, Yukiko Kikuchi, Phillip E. Gander, Christopher K. Kovach, Fabien Balezeau, Inyong Choi, Jeremy D. Greenlee, Hiroto Kawasaki, Timothy D. Griffiths, Matthew A. Howard and Christopher I. Petkov
Topic areas: speech and language correlates of behavior/perception neural coding
surgical disconnection of the ATL diaschisis speech prediction frontal-auditory neural signals
Thu, 11/4 11:15AM - 12:15PM | Virtual poster + podium teaser
Abstract
The strongest level of causal evidence for the neural role of a brain hub is to measure the network-level effects of its disconnection. Here we present rare data from two patients who underwent surgical disconnection of the anterior temporal lobe (ATL) as part of a clinical procedure to treat intractable epilepsy. During the surgery, we obtained pre- and post-resection intraoperative electrocorticographic (ECoG) recordings while the patients were awake and performing a speech-sound perceptual prediction task. We also obtained pre- and post-operative magnetic resonance imaging (MRI) including T1 and T2 structural and diffusion-weighted scans. Diffusion MRI tractography from ATL seed regions confirmed disconnection of the temporal pole from other cortical areas. Post-disconnection neurophysiological responses to the speech sounds showed a striking dissociation from the pre-disconnection signal in the form of (1) magnified responses in auditory cortex (Heschl’s gyrus) across oscillatory frequency bands (3-150 Hz), and (2) disrupted oscillatory responses in prefrontal cortex (inferior frontal gyrus, IFG). Moreover, after the disconnection, auditory cortical mismatch responses and theta-gamma coupling to the speech sounds were disrupted, and neural responses to different speech sounds became less segregable (i.e., more similar). State-space conditional Granger causality analyses revealed substantial changes in neural information flow between auditory cortex and IFG post-disconnection. Overall, we demonstrate diaschisis, whereby the loss of ATL neural signals results in an immediate change in activity and connectivity in intact frontal and auditory cortical areas, potentially reflecting incomplete compensation for processing and predicting speech sounds.
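For orientation, a plain bivariate Granger-causality test between two recorded time series can be run as below with statsmodels; the study itself used state-space conditional Granger causality, which additionally conditions on the other recorded signals, so this is only a simplified stand-in with hypothetical variable names.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def pairwise_granger(x_target, x_source, maxlag=10):
    """Test whether the past of `x_source` (e.g., an IFG signal) improves
    prediction of `x_target` (e.g., an auditory-cortex signal) beyond the
    target's own past. Returns the F-test p-value at each lag."""
    data = np.column_stack([x_target, x_source])
    results = grangercausalitytests(data, maxlag=maxlag)
    return {lag: res[0]["ssr_ftest"][1] for lag, res in results.items()}
```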
Amy LeMessurier, Kathleen Martin, Cheyenne Oliver and Robert Froemke
Topic areas: memory and cognition correlates of behavior/perception neural coding
auditory cortex perceptual learning inhibition interneurons
Fri, 11/5 1:15PM - 2:15PM | Virtual poster
Abstract
Auditory perceptual learning is associated with changes in the tonotopic organization of frequency tuning in auditory cortex, as well as modulation of tuning depending on behavioral context. This modulation depends on activity in inhibitory circuits. Stimulation of neuromodulatory centers projecting to cortex can speed perceptual learning and induce tonotopic map plasticity. Interneurons in layer 1 (L1) receive input from long-range neuromodulatory and intracortical inputs, and target the dendrites of pyramidal cells and layer 2/3 interneurons, positioning them to gate integration of neuromodulatory and sensory input. We hypothesize that plasticity in L1 is a crucial component of auditory perceptual learning. We trained 5 mice on an appetitive, 2-alternative forced-choice tone recognition task while measuring activity in NDNF interneurons on each day of training using chronic 2-photon calcium imaging. Each mouse initially achieved > 80% correct discriminating a center tone and a foil before the introduction of additional foils surrounding the target frequency. After 28 +/- 7 days of training, each mouse correctly identified tones on 76 +/- 4% of trials. NDNF neurons displayed a variety of tuning profiles for tones used in the task, including suppression of responses to the target tone in some neurons and enhanced responses in others. Additionally, tuning in NDNF neurons was task modulated – tuning curves differed within most cells between the task context and passive tone presentation. These results suggest that tuning in NDNF interneurons may be plastic over the course of training, and that activity in these neurons may shape context-dependent activity in downstream neurons.
Hannah M. Oberle, Alexander F. Ford, Jordyn Czarny and Pierre F. Apostolides
Topic areas: hierarchical organization neural coding subcortical processing