Abstract
Decoding language from brain activity is a long-awaited goal in both healthcare and neuroscience. Major milestones have recently been reached thanks to intracranial devices: subject-specific pipelines trained on invasive brain responses to basic language tasks are now beginning to decode interpretable features (e.g., letters, words, spectrograms) efficiently. However, scaling this approach to natural speech and non-invasive brain recordings remains a major challenge. Here, we propose a single end-to-end architecture trained with contrastive learning across a large cohort of individuals to predict self-supervised representations of natural speech. We evaluate our model on four public datasets, encompassing 169 volunteers recorded with magneto- or electro-encephalography (M/EEG) while they listened to natural speech. The results show that, from 3 s of MEG signals, our model can identify the corresponding speech segment with up to 72.5% top-10 accuracy out of 1,594 distinct segments (and 44% top-1 accuracy), and up to 19.1% out of 2,604 segments for EEG recordings, thereby allowing the decoding of sentences absent from the training set.
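For readers curious about the mechanics, below is a minimal PyTorch sketch of the kind of contrastive objective and retrieval-style evaluation the abstract describes: a brain encoder maps M/EEG segments into the embedding space of a self-supervised speech model, a symmetric contrastive loss aligns matching (brain, speech) pairs within a batch, and decoding at test time is cast as ranking candidate speech segments by similarity. All function names, shapes, and details here are illustrative assumptions, not the authors' actual implementation.

```python
# A hedged sketch, not the paper's code: a CLIP-style contrastive loss between
# brain-encoder outputs and (frozen) self-supervised speech embeddings, plus a
# top-k retrieval metric matching the "top-10 accuracy out of N segments" setup.
import torch
import torch.nn.functional as F

def contrastive_loss(brain_emb: torch.Tensor, speech_emb: torch.Tensor) -> torch.Tensor:
    """brain_emb, speech_emb: (batch, dim) embeddings of paired 3 s segments."""
    brain_emb = F.normalize(brain_emb, dim=-1)
    speech_emb = F.normalize(speech_emb, dim=-1)
    logits = brain_emb @ speech_emb.t()                    # (batch, batch) similarities
    targets = torch.arange(len(logits), device=logits.device)  # i-th brain row matches i-th speech column
    # Symmetric InfoNCE: classify the correct speech segment for each brain
    # segment, and the correct brain segment for each speech segment.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def topk_accuracy(brain_emb: torch.Tensor, speech_emb: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Decoding as retrieval: rank all candidate speech segments per brain segment."""
    sims = F.normalize(brain_emb, dim=-1) @ F.normalize(speech_emb, dim=-1).t()
    topk = sims.topk(k, dim=-1).indices                    # (batch, k) best candidates
    correct = torch.arange(len(sims), device=sims.device).unsqueeze(1)
    return (topk == correct).any(dim=-1).float().mean()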
Our speaker
Alexandre is a research scientist at Meta AI in Paris, working on deep-learning-based signal processing, primarily for audio applications (source separation, compression, generation), but also for brain signal analysis. He completed his PhD at INRIA Paris while a resident at Meta AI, under the supervision of Léon Bottou, Francis Bach, and Nicolas Usunier.
To become a member of the Rough Path Interest Group, register here for free.