Beauchamp Lab

The Beauchamp Lab studies the neural mechanisms of multisensory integration and visual perception in human subjects, with a special interest in human communication. When conversing with someone, we use both visual information from the talker's face and auditory information from the talker's voice. While multisensory speech perception engages a broad network of brain areas, the most important is the superior temporal sulcus. Multisensory integration is particularly beneficial for understanding speech when the auditory modality is degraded, such as in a noisy room. To study these neural mechanisms, we use a variety of methods, including intracranial electroencephalography (iEEG) and blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI). Through these studies, we hope to unlock one of nature's great mysteries: how the brain performs remarkable computational feats, such as understanding speech, that allow us to make sense of the auditory and visual world around us. Every advance in our knowledge of these processes is not only exciting in its own right but will also help children and patients with language and perceptual difficulties. Link to Beauchamp Lab wiki.

Another major project is the development of RAVE, advanced software for the visualization and analysis of iEEG data. Learn more about RAVE.

Photo of lab members with MRI scanner.
Beauchamp Lab photo, August 2017. Left to right: Patrick Karas, M.D. (Neurosurgery resident); Kira Wegner-Clemens (post-bac full-time research assistant); Muge Ozker, Ph.D. (recently graduated Ph.D. student); Michael Beauchamp, Ph.D. (PI); Kristen Smith (undergraduate part-time research assistant); John Magnotti, Ph.D. (Assistant Professor); Lin Zhu (M.D./Ph.D. student); Johannes Rennig, Ph.D. (postdoctoral fellow); Jacqunae Mays (graduate student who completed a lab rotation).