Greta Tuckute

GitHub gretatuckute
Scholar Greta Tuckute
Twitter @GretaTuckute

Hi, I am Greta. Thank you for visiting my page. I am a PhD candidate in the Department of Brain and Cognitive Sciences at MIT, working with Dr. Ev Fedorenko. I completed my BSc and MSc degrees in Molecular Biomedicine at KU/DTU (neuroscience/computer science focus), with coursework and research at MIT/Caltech/Hokkaido University. I work at the intersection of neuroscience, artificial intelligence, and cognitive science. I am passionate about semantic processing and how representations learned by artificial systems compare to those learned by humans – specifically, in the domain of language. I also enjoy thinking about memory representations, neurofeedback, and control theory. When I am not doing science, I enjoy photography, high altitudes, mornings, writing toolboxes, and magic realism books.

Below is a subset of updates on ongoing/recently finished projects and collaborations:

We link behavioral task performance to neural EEG states (the effect was significant only in the neurofeedback group, not in controls).

Real-Time Decoding of Visual Attention Using Closed-Loop EEG Neurofeedback

March 2021 Happy to share that my MSc thesis work from DTU is now published (with Sofie T. Hansen, Troels W. Kjaer and Lars K. Hansen).
Neurofeedback is a powerful tool for linking neural states to behavior. In this project, we asked i) whether we can decode covert states of visual attention using a closed-loop EEG system, and ii) whether a single neurofeedback training session can improve sustained attention abilities. We adapted the attention training paradigm of deBettencourt et al. (2015) to EEG. In a double-blinded design, we trained twenty-two participants on the attention paradigm within a single neurofeedback session, with behavioral pre-training and post-training sessions.
We demonstrate that covert visual attention can be decoded in real time. First, we report a mean classifier decoding error rate of 34.3% (chance = 50%). Second, we link decoding performance to behavioral state: within the neurofeedback group, a greater amount of task-relevant attentional information was decoded from a participant's brain before a correct behavioral response than before an incorrect one (not evident in the control group; interaction p = 7.23e−4). This indicates that we obtained a meaningful real-time measure of subjective attentional state and could use it to control participants' behavior during the neurofeedback session. Lastly, we did not find conclusive evidence that a single neurofeedback session per se produces lasting improvements in sustained attention abilities.
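The closed-loop idea can be illustrated with a toy sketch: train a decoder on labeled attention-state feature windows, then classify each incoming window one at a time, as a real-time system would. Everything here is simulated and simplified (a nearest-centroid classifier standing in for the actual pipeline, random Gaussian "band-power" features, invented dimensions) – it is not the published implementation, which is available on GitHub.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated feature windows (e.g. band power per channel) for two covert
# attentional states; dimensions and separations are made up for illustration.
n_train, n_test, n_feat = 200, 100, 8
attended = rng.normal(0.5, 1.0, (n_train // 2, n_feat))
unattended = rng.normal(-0.5, 1.0, (n_train // 2, n_feat))
X_train = np.vstack([unattended, attended])
y_train = np.array([0] * (n_train // 2) + [1] * (n_train // 2))

# Train a nearest-centroid decoder: one centroid per attentional state.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def decode(window):
    """Classify a single feature window, as a real-time loop would."""
    dists = np.linalg.norm(centroids - window, axis=1)
    return int(np.argmin(dists))

# Simulated session: decode each incoming window as it "arrives".
X_test = np.vstack([rng.normal(-0.5, 1.0, (n_test // 2, n_feat)),
                    rng.normal(0.5, 1.0, (n_test // 2, n_feat))])
y_test = np.array([0] * (n_test // 2) + [1] * (n_test // 2))
preds = np.array([decode(w) for w in X_test])

error_rate = float(np.mean(preds != y_test))
print(f"decoding error rate: {error_rate:.3f} (chance = 0.5)")
```

In the real system, the decoded attentional state is fed back to the participant on every trial, closing the loop between brain state and stimulus.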

The paper can be found here: Tuckute, G., Hansen, S. T., Kjaer, T. W., Hansen, L. K. (2021): Real-Time Decoding of Attentional States Using Closed-Loop EEG Neurofeedback, Neural Computation, Vol. 33, Issue 4; doi:
A video of the neurofeedback system is available here. The code and sample data for the neurofeedback framework are available on GitHub.

Correlation of connectivity among brain networks and phantom limb sensation. We show that individuals with a low degree of phantom sensation (i.e., low neuroprosthetic controllability) exhibit strong connectivity between visual and sensorimotor networks, possibly as a compensatory mechanism.

Biological Closed-Loop Feedback Preserves Proprioceptive Sensorimotor Signaling

December 2020 This work is a great collaboration with Shriya Srinivasan (lead), Jasmine Zou, Samantha Gutierrez-Arango, Hyungeun Song, Robert L. Barry, and Hugh Herr.
The brain undergoes marked functional changes after limb loss and amputation. In this work, we investigated individuals with a traditional lower-limb amputation, individuals without an amputation, and individuals who underwent a novel amputation procedure that preserves physiological central-peripheral signaling mechanisms. We demonstrate that the proprioceptive signaling enabled by the novel procedure restores sensorimotor feedback in the brain. Investigating changes in functional connectivity, we show that a lack of proprioceptive feedback results in strong coupling between visual and sensorimotor networks. This suggests a heavy reliance on visual information when no sensory feedback is available, possibly as a compensatory mechanism. Taken together, we demonstrate that closed-loop proprioceptive feedback can enable desired neuroplastic changes toward improved neuroprosthetic capability.
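Functional connectivity between two networks is commonly quantified as the correlation of their mean activity time series. The toy sketch below illustrates just that computation on simulated data (the shared component, noise level, and series length are all invented), where a built-in coupling term mimics the visual-sensorimotor coupling seen in the absence of proprioceptive feedback:

```python
import numpy as np

rng = np.random.default_rng(3)

n_tr = 200  # number of simulated fMRI time points

# Hypothetical mean time series for two brain networks. The sensorimotor
# series contains a shared component with the visual series, mimicking the
# coupling observed when proprioceptive feedback is absent.
visual = rng.normal(size=n_tr)
sensorimotor = 0.6 * visual + rng.normal(size=n_tr)

# Functional connectivity as the Pearson correlation of the two series.
fc = float(np.corrcoef(visual, sensorimotor)[0, 1])
print(f"visual-sensorimotor functional connectivity: {fc:.3f}")
```

With no shared component, the expected correlation is zero; stronger coupling pushes it toward 1.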

The paper can be found here: Srinivasan, S. S., Tuckute, G., Zou, J., Gutierrez-Arango, S., Song, H., Barry, R. L., Herr, H. (2020): AMI Amputation Preserves Proprioceptive Sensorimotor Neurophysiology, Science Translational Medicine, Vol. 12, Issue 573; doi: 10.1126/scitranslmed.abc5926.

ANNs as Models of Language Processing in the Brain

October 2020 I gave a workshop talk at the Center for Cognitive and Behavioral Brain Imaging (CCBBI) at The Ohio State University on artificial neural networks as models of language processing. Part of the talk was based on the work by Schrimpf et al., 2020, while another part focused on methodological considerations when comparing neural network models to brain representations. The talk can be found on OnNeuro.

Linguistic and Conceptual Processing are Dissociated During Sentence Comprehension

September 2020 This work is a great collaboration with Cory Shain, Idan A. Blank, Mingye Wang, and Ev Fedorenko.
The human mind stores a vast array of linguistic knowledge, including word meanings, word frequencies, and co-occurrence patterns, as well as syntactic constructions. These different kinds of knowledge have to be efficiently accessed during incremental language comprehension. In this work, we ask how dissociable the memory stores and processing mechanisms for these different types of knowledge are. Moreover, do different types of knowledge representation and processing rely on language-specific networks in the human brain, on domain-general networks, or on both? To address these questions, we used representational similarity analysis (RSA) to relate measures of linguistic knowledge and processing to neural data.
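At its core, RSA compares two representational spaces by correlating their representational dissimilarity matrices (RDMs). The sketch below is a minimal, self-contained illustration on simulated data – the item counts, feature dimensions, and the linear model-to-neural mapping are all invented, and the 1 − Pearson dissimilarity with a Spearman comparison is one common convention, not necessarily the exact choices of this project:

```python
import numpy as np

rng = np.random.default_rng(1)

n_items, n_model_feat, n_voxels = 20, 10, 50
# Hypothetical model features and neural responses for the same 20 items;
# the neural data is a noisy linear transform of the model features.
model_feats = rng.normal(size=(n_items, n_model_feat))
neural = (model_feats @ rng.normal(size=(n_model_feat, n_voxels))
          + 0.5 * rng.normal(size=(n_items, n_voxels)))

def rdm(X):
    """Representational dissimilarity matrix: 1 - Pearson correlation."""
    return 1.0 - np.corrcoef(X)

def upper(m):
    """Flatten the upper triangle (excluding the diagonal)."""
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

def spearman(a, b):
    """Spearman correlation via ranks (values assumed distinct)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

rsa_score = spearman(upper(rdm(model_feats)), upper(rdm(neural)))
print(f"model-brain RSA (Spearman): {rsa_score:.3f}")
```

Because RSA operates on pairwise dissimilarities rather than raw features, it sidesteps the need for a direct mapping between model dimensions and voxels.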

I will be presenting this ongoing work (poster) at SNL 2020 in October. Poster Session: A, Board #: 29, Wednesday, October 21, 12:00 pm PDT.

Left panel: Methodology of brain to ANN comparisons. Right panel: Brain predictivity correlates with computational accounts of predictive processing (next-word prediction).

Artificial Neural Networks Accurately Predict Language Processing in the Brain

July 2020 This work is a great collaboration with Martin Schrimpf (lead), Idan A. Blank, Carina Kauf, Eghbal A. Hosseini, supervised by Nancy Kanwisher, Josh Tenenbaum and Ev Fedorenko.
In recent years, great progress has been made in modeling sensory systems with artificial neural networks (ANNs) to provide mechanistic accounts of brain processing. In this work, we investigate whether ANNs can also inform us about higher-level cognitive functions in the human brain – specifically, language processing. Here, we ask which language models best capture human neural (fMRI/ECoG) and behavioral responses. Moreover, we investigate how this links to computational accounts of predictive processing. Lastly, we examine the contribution of intrinsic model architecture to brain predictivity. We tested 43 diverse state-of-the-art language models spanning embedding, recurrent, and transformer architectures. In brief, certain transformer families (GPT2) demonstrate consistently high predictivity across all neural datasets investigated. These models' performance on neural data correlates with language-modeling performance (next-word prediction) – but not with performance on other benchmarks from the General Language Understanding Evaluation (GLUE) suite – suggesting that a drive to predict future inputs may shape human language processing. Thus, both the human language system and successful ANNs appear optimized for prediction in order to efficiently extract meaning. Lastly, model architecture alone (random weights, no training) can reliably predict brain activity, possibly suggesting that these untrained representational spaces already provide enough structure to constrain and predict a given input, analogous to evolution-based optimization.
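A standard way to measure "brain predictivity" is to fit a regularized linear mapping from model activations to voxel responses and score predictions on held-out items. The sketch below illustrates that recipe with closed-form ridge regression on simulated data – the dimensions, noise level, single train/test split, and the fixed ridge penalty are simplifying assumptions for illustration, not the paper's full cross-validated pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

n_sent, n_units, n_voxels = 100, 30, 5
# Hypothetical ANN activations for 100 sentences, and noisy "voxel"
# responses generated as a linear readout of those activations.
acts = rng.normal(size=(n_sent, n_units))
W_true = rng.normal(size=(n_units, n_voxels))
brain = acts @ W_true + 2.0 * rng.normal(size=(n_sent, n_voxels))

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression weights: (X'X + aI)^-1 X'Y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

# Hold out the last 20 sentences, fit on the rest, predict the held-out set.
train, test = slice(0, 80), slice(80, 100)
B = ridge_fit(acts[train], brain[train])
pred = acts[test] @ B

# Brain predictivity: mean Pearson r between predicted and actual responses.
rs = [np.corrcoef(pred[:, v], brain[test, v])[0, 1] for v in range(n_voxels)]
predictivity = float(np.mean(rs))
print(f"mean held-out predictivity (Pearson r): {predictivity:.3f}")
```

Repeating this for each candidate model yields the per-model predictivity scores that can then be related to next-word-prediction performance.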

The pre-print can be found here: Schrimpf, M., Blank, I., Tuckute, G., Kauf, C., Hosseini, E. A., Kanwisher, N., Tenenbaum, J., Fedorenko, E (2020): Artificial Neural Networks Accurately Predict Language Processing in the Brain, bioRxiv 2020.06.26.174482; doi:

Martin Schrimpf will also be presenting this work (slide) at SNL 2020 in October (SNL 2020 Merit Award Honorable Mention).