Below is a selection of updates on ongoing and recently completed projects and collaborations:
Preprint on deep neural network models of the auditory system is out
We released the preprint of our work on how deep neural networks (DNNs) for audio can account for brain responses in the human auditory cortex. This project is co-led with Jenelle Feather, in collaboration with Dana Boebinger and Josh McDermott.
We evaluated brain-model correspondence for 19 DNNs (9 publicly available models and 10 models trained by us, spanning four tasks) on two fMRI datasets (n=8, n=20), using two different evaluation metrics (regression and representational similarity analysis, RSA).
We make the following five main claims: 1) Most DNN models (but not all!) outperformed traditional models of the auditory cortex. Results were highly consistent between datasets and evaluation metrics. The overall best DNN model was trained on multiple tasks (word, speaker, and environmental sound recognition). 2) This brain-DNN similarity depended critically on task optimization: DNNs with permuted weights (which destroys the structure learned during model training) performed below the baseline model. 3) Most DNNs exhibited systematic correspondence with the hierarchical organization of the auditory cortex, with earlier DNN stages best matching primary auditory cortex and later stages best matching non-primary cortex. This was not true for permuted networks. 4) The task a DNN model is trained on influences its match to the brain, with, e.g., speech-trained models best matching cortical speech responses. 5) Finally, in light of recent discussion suggesting that the dimensionality of a model’s representation correlates with regression-based brain predictions, we evaluated how the effective dimensionality (ED) of each network stage correlated with both the regression and RSA metrics. There was a modest correlation between ED and brain-model similarity, but it was significantly weaker than the correlation between the two datasets or between the two similarity measures. Thus, ED does not seem to explain most of the variance across DNNs in our datasets.
Overall, we demonstrate that many, but not all, DNN models account for responses in the human auditory cortex and exhibit hierarchical stage-region correspondence, and we provide some hints of how to improve brain-model matches for future models.
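For readers unfamiliar with the two metrics, here is a minimal sketch on toy data of how brain-model similarity can be quantified with regression and RSA. This is not the paper's actual pipeline; all shapes and values are made up for illustration.

```python
# Minimal sketch of the two evaluation metrics on toy data (not the
# paper's actual pipeline; all sizes and values are illustrative).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr, spearmanr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sounds, n_units, n_voxels = 165, 512, 1000            # hypothetical sizes
activations = rng.standard_normal((n_sounds, n_units))  # one DNN stage
voxels = rng.standard_normal((n_sounds, n_voxels))      # fMRI responses

# Metric 1: cross-validated ridge regression from model units to voxels.
X_tr, X_te, y_tr, y_te = train_test_split(activations, voxels, test_size=0.2, random_state=0)
pred = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(X_tr, y_tr).predict(X_te)
r = np.median([pearsonr(pred[:, v], y_te[:, v])[0] for v in range(n_voxels)])
print(f"median regression r: {r:.3f}")

# Metric 2: RSA -- correlate the stimulus-by-stimulus dissimilarity
# structure of the model stage with that of the voxel responses.
rsa = spearmanr(pdist(activations, "correlation"), pdist(voxels, "correlation")).correlation
print(f"RSA (Spearman rho): {rsa:.3f}")
```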
The preprint can be found here: Tuckute, G.*, Feather, J.*, Boebinger, D., & McDermott, J. (2022). Many but not all deep neural network audio models capture brain responses and exhibit hierarchical region correspondence, bioRxiv 2022.09.06.506680; doi: https://doi.org/10.1101/2022.09.06.506680.
The LanA Language Atlas is published
Our probabilistic language atlas, LanA, is now published in Scientific Data and can be openly accessed here! We also have a website, http://evlabwebapps.mit.edu/langatlas/, that provides easy access to data downloads, visualizations, and additional information.
In brief, the LanA language atlas provides the probability that any location in the brain (volume/surface) is language-selective. The atlas was derived from >800 individuals based on functional localization (a contrast between processing of sentences and a linguistically/acoustically degraded condition, such as non-word strings).
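Once downloaded (via the website above), the volumetric atlas can be queried along the following lines. This is a minimal sketch using nibabel; the filename and example coordinate are hypothetical.

```python
# Minimal sketch of querying the volumetric atlas with nibabel.
# The filename below is hypothetical -- use the file downloaded from the
# website above.
import numpy as np
import nibabel as nib

atlas = nib.load("LanA_atlas.nii.gz")      # hypothetical filename
probs = atlas.get_fdata()                  # voxelwise P(language-selective)

# Map an MNI coordinate (in mm) to a voxel index via the image affine.
mni = np.array([-50, 20, 16, 1])           # e.g., left inferior frontal cortex
i, j, k, _ = np.round(np.linalg.inv(atlas.affine) @ mni).astype(int)
print(f"P(language-selective) at MNI {mni[:3]}: {probs[i, j, k]:.2f}")
```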
Citation: Benjamin Lipkin, Greta Tuckute, Josef Affourtit, Hannah Small, Zachary Mineroff, Hope Kean, Olessia Jouravlev, Lara Rakocevic, Brianna Pritchett, Matthew Siegelman, Caitlyn Hoeflin, Alvincé Pongos, Idan Blank, Melissa Kline Struhl, Anna Ivanova, Steven Shannon, Aalok Sathe, Malte Hoffmann, Alfonso Nieto-Castañón, Evelina Fedorenko (2022): LanA (Language Atlas): Probabilistic atlas for the language network based on precision fMRI data from >800 individuals. Sci Data 9, 529; doi: https://doi.org/10.1038/s41597-022-01645-3.
Conference on Cognitive Computational Neuroscience 2022
The GAC (Generative Adversarial Collaboration) workshop takes place Friday, August 26 (1.30-4.15pm PT) and aims to tackle how we can optimally use neuroscience data to guide the next generation of brain models. Current use of data is often limited to post-hoc model evaluation or vague ‘inspiration’ for model development. Here, we ask: Can we use neuroscience data more efficiently for model development? Is it even the right time in neuroscience to do this? How much data is enough? What type of data should we collect?
The GAC team (and speakers) includes Ko Kar (York University, MIT), Joel Zylberberg (York University), SueYeon Chung (NYU), Alona Fyshe (University of Alberta), Ev Fedorenko (MIT), Konrad Kording (University of Pennsylvania), Nikolaus Kriegeskorte (Columbia University), Jacob Yates (UC Berkeley), and Kalanit Grill-Spector (Stanford University).
I will be giving a talk on how to optimize data collection for model development in the language domain. Specifically, I will discuss why many existing neuroscience datasets on language are not ideal for model development – and I will propose ways forward.
I will be presenting a poster on Friday, August 26 (7.30-9.30pm PT) on our work (with Jenelle Feather*, Dana Boebinger, and Josh McDermott) on how auditory networks with diverse architectures, trained on a variety of tasks, capture human brain responses to natural sounds. The poster will focus on how robust our findings are to the model evaluation metric of interest (regression versus representational similarity analysis), as well as how our findings might be affected by latent variables such as the effective dimensionality of network activations.
Intrinsically memorable words have unique associations with their meanings
This project is the result of a big joint effort with Kyle Mahowald (co-lead), Phillip Isola, Aude Oliva, Edward Gibson, and Ev Fedorenko.
PINEAPPLE, LIGHT, HAPPY, AVALANCHE, BURDEN
Some of these words are consistently remembered better than others. Why is that? In this project, we provide a simple Bayesian account and show that it explains >80% of the variance in word memorability.
Building on past work suggesting that words are encoded by their meanings, we hypothesize that words that uniquely pick out a meaning in semantic memory (i.e., unambiguous words with no/few synonyms) are more memorable. We evaluated our account in two behavioral experiments (each with >600 participants and 2,222 target words), similar to past work on image memorability. Participants viewed a sequence of words and pressed a button whenever they encountered a repeat (critical memory repeats occurred 91-109 words apart). Key findings: 1) Words are as memorable as images. In our experiments, the hit rate was ~68% and the false alarm rate was ~10%, which is on par with images (e.g., Isola et al., 2011 CVPR). There does not appear to be a memory advantage for images over words. 2) Certain words are consistently remembered better than others across participants – so although individuals differ in the amount and kinds of linguistic information they are exposed to across their lifetimes, memorability is largely an intrinsic word property. 3) Critically, the most memorable words have a one-to-one relationship with their meaning (such as PINEAPPLE or AVALANCHE). They uniquely pick out a particular meaning in semantic memory, in contrast to ambiguous words (e.g., LIGHT, which could mean a fixture in a house, the opposite of heavy, a cigarette lighter, etc.) or words with many synonyms (e.g., HAPPY, with synonyms CHEERFUL, JOYFUL, GLAD, etc.). The number of synonyms was a more important predictor than the number of meanings.
Given that our critical predictors (number of synonyms and number of meanings) can be estimated from language corpora, this simple account provides a scalable model that can make predictions about the memorability of newly encountered words in any language where large corpora are available. Memorability can be used to answer cool questions about how the mind and brain prioritize and organize information during semantic memory encoding. Understanding which words lead to longer-lasting memory traces can be leveraged to enable more effective information sharing.
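To make the idea concrete, here is a rough sketch of how the two predictors could be estimated with WordNet via NLTK. This is my own illustration of the idea, not the paper's exact procedure.

```python
# Rough illustration of the two critical predictors, estimated from WordNet.
# Not the paper's exact procedure -- just a sketch of the idea.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

def n_meanings(word):
    """Number of senses, i.e., how ambiguous the word is."""
    return len(wn.synsets(word))

def n_synonyms(word):
    """Distinct lemmas across all senses, excluding the word itself."""
    lemmas = {l.name().lower() for s in wn.synsets(word) for l in s.lemmas()}
    lemmas.discard(word.lower())
    return len(lemmas)

for w in ["pineapple", "light", "happy", "avalanche", "burden"]:
    print(f"{w:10s} meanings={n_meanings(w):3d} synonyms={n_synonyms(w):3d}")
```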
The preprint can be found here: Tuckute, G.*, Mahowald, K.*, Isola, P., Oliva, A., Gibson, E., Fedorenko, E. (2022). Intrinsically memorable words have unique associations with their meanings, PsyArXiv; doi: https://doi.org/10.31234/osf.io/p6kv9. (This is a revival of a project that got started back in 2011, and we are excited to share a new and improved version of the manuscript, along with the data and analysis scripts.)
SentSpace: Large-scale benchmarking and evaluation of text using cognitively motivated lexical, syntactic, and semantic features
SentSpace would not exist without Aalok Sathe* (co-lead), Mingye (Christina) Wang, Harley Yoder, Cory Shain and Ev Fedorenko.
Imagine that you want to quantify a sentence using a large set of interpretable features. Maybe you are interested in features that relate to the sentiment of the sentence, or in features that are known to cause language processing difficulty (such as frequency or age of acquisition). With SentSpace, we introduce such a system: we enable streamlined evaluation of any textual input. SentSpace characterizes textual input using diverse lexical, syntactic, and semantic features derived from corpora and psycholinguistic experiments. These features fall into two main domains (sentence spaces, hence the name): lexical and contextual. Lexical features operate on individual lexical items (words) and include features such as concreteness, age of acquisition, lexical decision latency, and contextual diversity. Because several properties of a sentence cannot be attributed to individual words, the contextual module quantifies the sentence as a whole, with features such as syntactic storage and integration cost, center embedding depth, and sentiment.
Hence, SentSpace provides an interpretable sentence embedding with features that have been shown to affect language processing.
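To give a flavor of what a lexical-module-style lookup involves, here is a toy sketch. This is not SentSpace's actual interface (see the package links below for that); word frequency comes from the `wordfreq` package, and the norm values are invented for illustration.

```python
# Toy sketch of a lexical-module-style feature lookup. Not SentSpace's actual
# API. The norm values below are invented for illustration (real norms come
# from published psycholinguistic datasets).
import numpy as np
from wordfreq import zipf_frequency

NORMS = {  # hypothetical per-word norms
    "the":       {"concreteness": 1.4, "age_of_acquisition": 2.7},
    "pineapple": {"concreteness": 5.0, "age_of_acquisition": 5.3},
    "rolled":    {"concreteness": 3.9, "age_of_acquisition": 4.9},
    "downhill":  {"concreteness": 4.2, "age_of_acquisition": 6.1},
}

def lexical_features(sentence):
    """Average per-word features over the sentence."""
    words = sentence.lower().split()
    feats = {"zipf_frequency": np.mean([zipf_frequency(w, "en") for w in words])}
    for name in ("concreteness", "age_of_acquisition"):
        vals = [NORMS[w][name] for w in words if w in NORMS]
        feats[name] = float(np.mean(vals)) if vals else float("nan")
    return feats

print(lexical_features("The pineapple rolled downhill"))
```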
SentSpace allows for the quantification and comparison of different types of text and can be useful for answering questions like: How does text generated by an artificial language model compare to text produced by humans? How do utterances produced by neurotypical individuals compare to those of individuals with communication disorders? What psycholinguistic information do high-dimensional vector representations from artificial language models capture?
Aalok and I will be demonstrating the current (first!) version of SentSpace at NAACL 2022 in Seattle, July 10-15 (System Demonstrations poster session, July 12). We would love feedback, so please don't hesitate to reach out! The proceedings paper can be found here: Tuckute, G.*, Sathe, A., Wang, M., Yoder, H., Shain, C., and Fedorenko, E. (2022). SentSpace: Large-Scale Benchmarking and Evaluation of Text using Cognitively Motivated Lexical, Syntactic, and Semantic Features. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations. Association for Computational Linguistics.
The SentSpace Python package can be accessed at sentspace.github.io/sentspace and the hosted frontend website at sentspace.github.io/hosted.
LanA (Language Atlas): A probabilistic atlas for the language network based on fMRI data from >800 individuals
This work is a massive effort (14 years of data collection!) in collaboration with Benjamin Lipkin (lead), Ev Fedorenko, and a bunch of brilliant current/former lab members of EvLab.
Given any location in the brain, what is the probability that that particular location is selective for language? We present a probabilistic language atlas (LanA) that allows us to answer exactly this question: for any 3D pixel (voxel in the volume, vertex on the surface), how likely is that pixel to fall within the language network? The atlas was obtained from >800 individuals based on functional localization (a contrast between processing of sentences and a linguistically/acoustically degraded condition, such as non-word strings). Thus, across these >800 individuals, we provide a group-average map that makes it possible to quantify and visualize where ‘the average’ language network resides.
Examples of use cases of LanA include: 1) a common reference frame for analyzing group-level activation peaks from past/future fMRI studies, 2) interpreting lesion locations in individual brains, 3) localizing electrodes in intracranial ECoG/SEEG investigations, 4) functional mapping during brain surgery when fMRI is not possible, and others (please see the paper’s introduction). The atlas will be made publicly available (along with individual contrast/significance maps and demographic data) upon publication.
The preprint can be found here: Benjamin Lipkin, Greta Tuckute, Josef Affourtit, Hannah Small, Zachary Mineroff, Hope Kean, Olessia Jouravlev, Lara Rakocevic, Brianna Pritchett, Matthew Siegelman, Caitlyn Hoeflin, Alvincé Pongos, Idan Blank, Melissa Kline Struhl, Anna Ivanova, Steven Shannon, Aalok Sathe, Malte Hoffmann, Alfonso Nieto-Castañón, Evelina Fedorenko (2022): LanA (Language Atlas): A probabilistic atlas for the language network based on fMRI data from >800 individuals. bioRxiv 2022.03.06.483177; doi: https://doi.org/10.1101/2022.03.06.483177.
Hierarchical layer-region correspondence of deep neural networks for audition
This work is a great collaboration with Jenelle Feather*, Dana Boebinger, and Josh McDermott.
An overarching aim of neuroscience is to build quantitatively accurate computational models of sensory systems. Deep neural networks provide such candidate models. To consider these neural networks as serious candidate models, they must at least 1) Perform a task that is relevant to the real world, 2) Be predictive of brain data, and 3) Be mappable (meaning that earlier layers of the network map onto earlier parts of the cortical hierarchy in the brain, and later layers onto later parts).
Such models are relatively well explored in vision (convolutional neural networks trained for image classification; e.g., Yamins et al., 2014), but less explored in audition. Kell et al. (2018) showed that a particular neural network architecture was predictive of brain responses and had a degree of correspondence between model stages and brain regions. However, it is unclear whether these results generalize to other neural network models. In our work, we evaluated brain-model correspondence for publicly available audio neural network models along with in-house models trained on five different tasks. We used two independent datasets (Norman-Haignere et al., 2015, n=8; Boebinger et al., 2021, n=20) of participants listening to natural sounds in the fMRI scanner. Most tested models were more predictive of brain responses than traditional spectrotemporal models of auditory cortex, and exhibited a systematic relationship between the model layer hierarchy and the cortical hierarchy in the human brain. However, this was not true across the board: not all state-of-the-art models were predictive or mappable. This work helps us understand which parameters are necessary to yield a quantitatively accurate model of the human auditory cortex and strengthens our knowledge of the hierarchical organization of the auditory cortex.
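A schematic of the mappability analysis, on toy values: for each region along the auditory hierarchy, find the best-predicting model stage, and ask whether that stage increases along the hierarchy. The region names, sizes, and values here are illustrative, not the actual results.

```python
# Schematic of the layer-region correspondence analysis on toy values.
# predictivity[s, r] would, in the real analysis, be the cross-validated
# prediction accuracy of model stage s for brain region r.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_stages = 10
regions = ["primary", "lateral", "anterior", "posterior"]  # ordered by hierarchy
predictivity = rng.random((n_stages, len(regions)))        # toy values

best_stage = predictivity.argmax(axis=0)   # best-predicting stage per region
for region, stage in zip(regions, best_stage):
    print(f"{region:9s} auditory cortex -> best predicted by stage {stage}")

# Hierarchical correspondence: best stage should increase along the hierarchy.
rho = spearmanr(np.arange(len(regions)), best_stage).correlation
print(f"stage-region correspondence (Spearman rho): {rho:.2f}")
```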
I will be discussing these findings and other aspects of the work at Cosyne 2022 in Lisbon, Portugal, March 17-20 (poster session 2).
The neural architecture of language: Integrative modeling converges on predictive processing
Our paper on artificial neural networks (ANNs) as models of language comprehension is now out in PNAS, and it received some nice coverage, for instance by Scientific American. I want to emphasize two points from this paper: 1) We show that better-performing language models (based on next-word prediction) also match the brain better. Critically, this link did not hold for performance on other linguistic benchmarks (GLUE), suggesting that a drive to predict future inputs may shape human language processing. Thus, both the human language system and successful ANNs seem to be optimized for prediction to efficiently extract meaning. 2) Model architecture alone (untrained models with random initialization weights) can reliably predict brain activity, possibly suggesting that these untrained representational spaces already provide enough structure to constrain and predict a given input.
I think these two points open up multiple exciting research questions: Given that better-performing models are more brain-like, how can we engineer more brain-inspired models? Most state-of-the-art language models are inefficient (requiring billions of parameters and training samples, resulting in massive energy expenditure), not robust (they can be fooled by adversarial input), and not very interpretable (making it challenging to localize the causes of successes/unwanted capabilities). How can we exploit principles from the human brain that allow us to process language efficiently and robustly? Can we modularize or constrain language model representations using human data? In which scenarios do interpretability and performance go hand in hand? Lastly, which human and ANN benchmarks would be most meaningful for evaluating some of the aforementioned questions?
The paper can be found here:
Schrimpf, M., Blank, I.*, Tuckute, G.*, Kauf, C.*, Hosseini, E. A., Kanwisher, N., Tenenbaum, J.^, Fedorenko, E.^ (2021): The neural architecture of language: Integrative modeling converges on predictive processing, PNAS Vol. 118, Issue 45; doi: https://doi.org/10.1073/pnas.2105646118.
Can we use transformer models to drive language regions in the brain?
I gave an informal 'poster' presentation at the Boston/Cambridge CogSci 2021 meet-up on exploiting transformer language models to drive regions in the human brain. I presented ideas and preliminary data on whether and how that is feasible, and if so, what we can learn from it. Thanks for the great discussions! This is ongoing work with Mingye Wang, Elizabeth Lee, Martin Schrimpf, Noga Zaslavsky, and Ev Fedorenko. More soon!
Frontal language areas do not emerge in the absence of temporal language areas
This work is a joint effort and brilliant collaboration with Alexander Paunov, Hope Kean, Hannah Small, Zachary Mineroff, Idan Blank, and Ev Fedorenko.
High-level language processing is supported by a left-lateralized fronto-temporal brain network. In this work, we investigated whether frontal language areas can emerge in the absence of temporal language areas. To do so, we examined language processing in the brain of an individual (EG) born without a left temporal lobe. Using fMRI, we established that EG’s right-hemisphere language network is similar to the left-hemisphere language network in controls. The critical question, however, was whether EG’s intact left lateral frontal lobe contained language-responsive areas. We found no reliable response to language in EG’s intact left frontal lobe, suggesting that temporal language areas are a prerequisite for the emergence of language areas in the frontal lobe.
The paper can be found here: Tuckute, G., Paunov, A., Kean, H., Small, H., Mineroff, Z., Blank, I., and Fedorenko, E. (2021): Frontal language areas do not emerge in the absence of temporal language areas: A case study of an individual born without a left temporal lobe, bioRxiv 2021.05.28.446230; doi: https://doi.org/10.1101/2021.05.28.446230.
Real-time decoding of visual attention using closed-loop EEG neurofeedback
Happy to share that my MSc thesis work from DTU is now published (with Sofie T. Hansen, Troels W. Kjaer and Lars K. Hansen).
Neurofeedback is a powerful tool for linking neural states to behavior. In this project, we asked i) whether we can decode covert states of visual attention using a closed-loop EEG system, and ii) whether a single neurofeedback training session can improve sustained attention abilities. We implemented in EEG an attention training paradigm designed by deBettencourt et al. (2015). In a double-blinded design, we trained twenty-two participants on the attention paradigm within a single neurofeedback session, with behavioral pretraining and posttraining sessions.
We demonstrate that we are able to decode covert visual attention in real time. First, we report a mean classifier decoding error rate of 34.3% (chance = 50%). Second, we link this decoding performance to behavioral states: within the neurofeedback group, there was a greater level of task-relevant attentional information decoded from a participant's brain before a correct behavioral response than before an incorrect response (not evident in the control group; interaction p=7.23e−4). This indicates that we achieved a meaningful real-time measure of subjective attentional state, one that tracked participants' behavior during the neurofeedback session. Lastly, we did not find conclusive evidence that a single neurofeedback session per se produces lasting effects on sustained attention abilities.
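For intuition, here is a simplified offline analogue of the decoder on toy data. The dimensions and labels are assumptions for illustration; the actual real-time closed-loop framework (and sample data) is linked below.

```python
# Simplified offline analogue of the attention decoder on toy data.
# The real closed-loop framework and sample data are on GitHub (see below).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 300, 32, 50                # assumed dimensions
X = rng.standard_normal((n_trials, n_channels * n_times))  # flattened EEG epochs
y = rng.integers(0, 2, n_trials)                           # attend-face vs. attend-scene

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"decoding error rate: {1 - accuracy:.1%} (chance = 50%)")
```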
The paper can be found here: Tuckute, G., Hansen, S.T., Kjaer, Troels W., Hansen, L. K. (2021): Real-Time Decoding of Attentional States Using Closed-Loop EEG Neurofeedback, Neural Computation Vol. 33, Issue 4; doi: https://doi.org/10.1162/neco_a_01363.
A video of the neurofeedback system is available here.
The code and sample data for the neurofeedback framework are available on GitHub.
AMI amputation preserves proprioceptive sensorimotor neurophysiology
This work is a great collaboration with Shriya Srinivasan (lead), Jasmine Zou, Samantha Gutierrez-Arango, Hyungeun Song, Robert L. Barry, and Hugh Herr.
The brain undergoes marked changes in function after limb loss and amputation. In this work, we investigate individuals with a traditional lower-limb amputation, individuals with no amputation, and individuals who underwent a novel amputation procedure (the agonist-antagonist myoneural interface, AMI) that preserves physiological central-peripheral signaling mechanisms. We demonstrate that the proprioceptive signaling enabled by the novel amputation procedure restores sensorimotor feedback in the brain. We also investigate changes in functional connectivity in the brain and show that the lack of proprioceptive feedback results in strong coupling between visual and sensorimotor networks, suggesting a heavy reliance on visual information when no sensory feedback is available, possibly as a compensatory mechanism. In conclusion, we demonstrate that closed-loop proprioceptive feedback can enable desired neuroplastic changes toward improved neuroprosthetic capability.
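A minimal sketch of the functional-connectivity measure on toy data: pairwise correlations between region-averaged BOLD time series. The sizes and network indices here are hypothetical.

```python
# Minimal sketch of functional connectivity: pairwise Pearson correlations
# between region-averaged BOLD time series (toy data, hypothetical regions).
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_regions = 200, 7                      # assumed sizes
ts = rng.standard_normal((n_timepoints, n_regions))   # ROI time series

fc = np.corrcoef(ts, rowvar=False)                    # region x region matrix
VISUAL, SENSORIMOTOR = 0, 1                           # hypothetical indices
print(f"visual-sensorimotor coupling: {fc[VISUAL, SENSORIMOTOR]:.3f}")
```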
The paper can be found here: Srinivasan, S. S., Tuckute, G., Zou, J., Gutierrez-Arango, S., Song, H., Barry, R. L., Herr, H. (2020): AMI Amputation Preserves Proprioceptive Sensorimotor Neurophysiology, Science Translational Medicine, Vol. 12, Issue 573; doi: 10.1126/scitranslmed.abc5926.
ANNs as models of language processing in the brain
I gave a workshop talk at the Center for Cognitive and Behavioral Brain Imaging (CCBBI) at The Ohio State University on artificial neural networks as models of language processing. Part of the talk was based on the work by Schrimpf et al. (2020), while another part focused on methodological considerations in comparing neural network models to brain representations. The talk can be found on OnNeuro.
Linguistic and Conceptual Processing are Dissociated During Sentence Comprehension
This work is a great collaboration with Cory Shain, Idan A. Blank, Mingye Wang, and Ev Fedorenko.
The human mind stores a vast array of linguistic knowledge, including word meanings, word frequencies, co-occurrence patterns, and syntactic constructions. These different kinds of knowledge have to be efficiently accessed during incremental language comprehension. In this work, we ask how dissociable the memory stores and processing mechanisms for these different types of knowledge are. Moreover, do the representation and processing of different types of knowledge rely on language-specific networks in the human brain, domain-general networks, or both? To address these questions, we used representational similarity analysis (RSA) to relate measures of linguistic knowledge and processing to neural data.
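In outline, the RSA logic looks something like the following toy sketch: build a representational dissimilarity matrix (RDM) per linguistic feature space and per neural dataset, then ask which feature spaces share dissimilarity structure with the neural patterns. The feature sets and sizes here are hypothetical, not the project's actual predictors.

```python
# Toy sketch of the RSA logic: one RDM per linguistic feature space, one for
# the neural patterns, then correlate their condensed (upper-triangle) forms.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_sentences = 100
feature_spaces = {                                     # hypothetical features
    "lexical":   rng.standard_normal((n_sentences, 20)),
    "syntactic": rng.standard_normal((n_sentences, 20)),
}
neural = rng.standard_normal((n_sentences, 500))       # voxel patterns

rdm_neural = pdist(neural, "correlation")
for name, feats in feature_spaces.items():
    rho = spearmanr(pdist(feats, "correlation"), rdm_neural).correlation
    print(f"{name:9s} RDM vs. neural RDM: rho = {rho:.3f}")
```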
I will be presenting this ongoing work (poster) at SNL 2020 in October. Poster Session: A, Board #: 29, Wednesday, October 21, 12:00 pm PDT.
Artificial neural networks accurately predict language processing in the brain
This work is a great collaboration with Martin Schrimpf (lead), Idan A. Blank, Carina Kauf, Eghbal A. Hosseini, supervised by Nancy Kanwisher, Josh Tenenbaum and Ev Fedorenko.
In recent years, great progress has been made in modeling sensory systems with artificial neural networks (ANNs) to provide mechanistic accounts of brain processing. In this work, we investigate whether we can exploit ANNs to inform us about higher-level cognitive functions in the human brain – specifically, language processing. Here, we ask which language models best capture human neural (fMRI/ECoG) and behavioral responses. Moreover, we investigate how this links to computational accounts of predictive processing. Lastly, we examine the contribution of intrinsic model architecture to brain predictivity.
We tested 43 state-of-the-art language models spanning a diverse set of embedding, recurrent, and transformer models. In brief, certain transformer families (GPT-2) demonstrate consistently high predictivity across all neural datasets investigated. These models’ performance on neural data correlates with language modeling performance (next-word prediction) – but not with performance on other benchmarks from the General Language Understanding Evaluation (GLUE) suite – suggesting that a drive to predict future inputs may shape human language processing. Thus, both the human language system and successful ANNs seem to be optimized for prediction to efficiently extract meaning. Lastly, model architecture alone (random weights, no training) can reliably predict brain activity, possibly suggesting that these untrained representational spaces already provide enough structure to constrain and predict a given input, analogous to evolution-based optimization.
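To illustrate the trained versus architecture-only comparison, here is a sketch assuming the HuggingFace `transformers` package. The last-token hidden state is one common sentence readout, not necessarily the paper's exact choice.

```python
# Sketch of the trained vs. untrained (architecture-only) comparison,
# assuming the HuggingFace `transformers` package.
import torch
from transformers import GPT2Config, GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
trained = GPT2Model.from_pretrained("gpt2").eval()
untrained = GPT2Model(GPT2Config()).eval()   # same architecture, random weights

def sentence_embedding(model, sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, n_tokens, 768)
    return hidden[0, -1]                             # last-token representation

emb_trained = sentence_embedding(trained, "The dog chased the ball.")
emb_untrained = sentence_embedding(untrained, "The dog chased the ball.")
# Each embedding would then be regressed against neural responses, and the
# resulting predictivity compared across the two model variants.
```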
The preprint can be found here: Schrimpf, M., Blank, I., Tuckute, G., Kauf, C., Hosseini, E. A., Kanwisher, N., Tenenbaum, J., Fedorenko, E. (2020): Artificial Neural Networks Accurately Predict Language Processing in the Brain, bioRxiv 2020.06.26.174482; doi: https://doi.org/10.1101/2020.06.26.174482.
Martin Schrimpf will also be presenting this work (slide) at SNL 2020 in October (SNL 2020 Merit Award Honorable Mention).