Neural systems in speech and language

We use a combination of behavioral, functional neuroimaging, and computational studies to understand the brain systems supporting the encoding of sound sequences in perception, working memory, and production. The GODIVA model (19583476) provides a computational explanation for how the inferior frontal cortex, medial premotor areas, and basal ganglia are used to plan and produce syllable sequences. Our work also attempts to explain interactions between these representations, as observed, for example, in functional connectivity measurements from fMRI.

We are also involved in work that attempts to uncover the genetic underpinnings of developmental language disorders (see our database here (23949335)). This includes collaborative efforts to analyze how genetic variation gives rise to specific language-related traits, as well as how gene expression patterns in the healthy brain may point to genes that are particularly important for language.

Large-scale brain architecture

We study the global architecture of the brain at multiple scales using different data modalities. In particular, this includes the study of brainwide gene expression patterns (e.g., (19733241)) as well as functional connectivity networks derived from fMRI and EEG. We employ multivariate data analysis techniques in an effort to provide an integrative view of the architecture of individual brain regions and circuits. We are using EEG data to better understand the temporal dynamics of brain networks at rest and the role of the so-called “default mode network.”
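As a rough illustration of what a functional connectivity network means in practice, the sketch below computes a correlation-based connectivity matrix from regional time series. The data here are synthetic placeholders, not our actual fMRI or EEG recordings, and real analyses involve considerably more preprocessing; this is only a minimal example of the pairwise-correlation idea.

```python
import numpy as np

# Synthetic stand-in for preprocessed fMRI data:
# rows are timepoints (e.g., TRs), columns are brain regions.
rng = np.random.default_rng(0)
timeseries = rng.standard_normal((200, 4))  # 200 timepoints, 4 regions

# Functional connectivity as pairwise Pearson correlations
# between regional time courses (regions x regions matrix).
fc_matrix = np.corrcoef(timeseries, rowvar=False)

print(fc_matrix.shape)  # (4, 4)
```

Graph-theoretic measures (degree, modularity, and the like) can then be computed on such a matrix, which is one way large-scale network structure, including the default mode network, is characterized.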

We have also been key collaborators in the Brain Architecture Project (19325892), an initiative aimed at systematically mapping anatomical connectivity patterns in the mouse brain.
Our research has been supported by the NSF Center of Excellence for Learning in Education, Science, and Technology (CELEST), National Institute of Mental Health (NIMH), National Institutes of Health Big Data to Knowledge (BD2K) Initiative, American Speech-Language-Hearing Foundation, the Rafik B. Hariri Institute for Computing and Computational Science and Engineering, and the Dudley Allen Sargent Research Fund.