I am a CNRS researcher at the École Normale Supérieure, currently on secondment at Meta AI, where I lead the Brain & AI group. We work on identifying the brain and computational bases of human intelligence, with a focus on language. We develop deep learning techniques to decode and model intracranial recordings, magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI).
Défossez, Caucheteux, Kabeli, Rapin & King, arXiv 2022
“Decoding speech from non-invasive brain recordings” — Jean-Rémi King (@JeanRemiKing) August 31, 2022
Our latest study (on 169 participants!), by @honualx and our wonderful team @MetaAI
- paper: https://t.co/QiB7Io8af8
- blog: https://t.co/H2W4prbbuD
- illustrated summary: below👇 pic.twitter.com/39eMnJ4IDv
Millet*, Caucheteux*, Orhan, Boubenec, Gramfort, Dunbar, Pallier & King, arXiv 2022
🔥Preprint out:— Jean-Rémi King (@JeanRemiKing) June 6, 2022
‘Toward a realistic model of speech processing in the brain with self-supervised learning’: https://t.co/rJH6t6H6sm
by J. Millet*, @c_caucheteux* and our wonderful team:
The 3 main results summarized below 👇 pic.twitter.com/mdrJpbrb3M
Caucheteux & King, Communications Biology 2022
🎉Paper out: ‘Brains and algorithms partially converge in natural language processing’— Jean-Rémi King (@JeanRemiKing) February 23, 2022
by @c_caucheteux, and now freely available at Nature @CommsBio:https://t.co/MpenOUaKwS
The summary thread below 👇 pic.twitter.com/gMruZgGIOv
Caucheteux, Gramfort & King, arXiv 2021
tl;dr: We track language predictions in the brain and show that, unlike those of modern algorithms, they are hierarchical and span a variety of temporal scopes.
‘Long-range and hierarchical language predictions in brains and algorithms’— Jean-Rémi King (@JeanRemiKing) November 30, 2021
Check out our latest paper https://t.co/rwfVCVLRWA by @c_caucheteux @agramfort @JeanRemiKing
tl;dr: Unlike deep language models, the brain makes long-range & hierarchical predictions
Thread below👇 pic.twitter.com/iP0BEYBjip
Caucheteux, Gramfort & King, EMNLP 2021
tl;dr: We show how deep language algorithms help reveal the hierarchical organization of language integration in the brain.
"Model-based analysis of brain activity reveals the hierarchy of language"— Jean-Rémi King (@JeanRemiKing) October 12, 2021
Our EMNLP paper by @c_caucheteux @agramfort & myself is out: https://t.co/BxvrbZNkPt
It shows (w/ emoji-based equations!) how deepnets can efficiently recover the language hierarchy in the
Caucheteux, Gramfort & King, bioRxiv 2021
tl;dr: The more we understand text, the more our brain responds like GPT-2.
Our latest paper is out:— Jean-Rémi King (@JeanRemiKing) June 9, 2021
GPT-2’s activations predict the degree of semantic comprehension in the human brain, by @c_caucheteux, @agramfort & @JeanRemiKinghttps://t.co/Xjc8IaXT64
The summary thread below 👇
Caucheteux, Gramfort & King, ICML 2021
tl;dr: The similarity between deep nets and the brain allows us to disentangle syntax and semantics in the brain.
"Disentangling Syntax and Semantics in the Brain with Deep Networks"— Jean-Rémi King (@JeanRemiKing) July 22, 2021
Go check out our latest @icmlconf paper: https://t.co/4YPK7vJRsJ
by @c_caucheteux, @agramfort & @JeanRemiKing
The summary thread below 👇 pic.twitter.com/v0kxjtBtVP
Millet & King, arXiv 2021
tl;dr: Do convolutional networks process speech sounds like our brain does? Short answer: yes, even without training; but training helps.
Do convolutional networks process speech sounds like our brains does?— Jean-Rémi King (@JeanRemiKing) March 9, 2021
Check out our latest study with Juliette Millet: https://t.co/dcupYxSxKA
Here is the summary thread 👇: 1/n pic.twitter.com/LI6kr8PY9j
Chehab*, Defossez*, Loiseau, Gramfort & King, arXiv 2021
tl;dr: We propose a new end-to-end architecture to encode MEG brain signals. It outperforms standard pipelines by a factor of three.
Deep learning improves the analysis of time-resolved brain signals by ... 3️⃣ folds!— Jean-Rémi King (@JeanRemiKing) April 7, 2021
Check out our latest paper by @lomarchehab*, @honualx*, @loiseau_jc, and @agramfort:
Below is the summary thread 👇 pic.twitter.com/h1WcoGm7UD
Sergent, Corazzol, Labouret, Stockart, Wexler, King, Meyniel & Pressnitzer, Nature Communications 2021
tl;dr: Using EEG, we show that conscious access follows all-or-none dynamics even without report.
Most work on the neural basis of consciousness relies on self-report, however @MmeJeanserre, @JeanRemiKing et al. suggest bifurcation in EEG brain dynamics may reflect an independent signature of conscious perception @Univ_Paris @Cognition_ENS @mne_python https://t.co/nHMPaSVxnU pic.twitter.com/n4TXgh2XNt— Nature Communications (@NatureComms) February 20, 2021
Caucheteux & King, bioRxiv 2020
tl;dr: Do deep nets become increasingly correlated with brain activity as they learn to process language? Short answer: only their middle layers do.
Back-to-back regression: Disentangling the influence of correlated factors from multivariate observations
King, Charton, Lopez-Paz & Oquab, Neuroimage 2020
tl;dr: We introduce a simple method to combine the advantages of decoding and encoding analyses.
Back-to-back regression: Disentangling the influence of correlated factors from multivariate observations.— Jean-Rémi King (@JeanRemiKing) July 9, 2020
Our latest paper with @f_charton, David Lopez Paz & Maxime Oquab at @facebookai is now freely available at Neuroimage: https://t.co/2hBgODEeAw
Here's the summary thread ⤵️ pic.twitter.com/i1ZLF2dZ5e
Peiffer-Smadja, Maatoug, Lescure, D’Ortenzio, Pineau & King, Nature Machine Intelligence 2020
tl;dr: We’re teaming up with the AP-HP hospital to review the promises and pitfalls of Machine Learning.
"#MachineLearning for #COVID-19 needs global collaboration and data-sharing"
#ArtificialIntelligence #SARSCoV2 pic.twitter.com/lZsZh8Hqvm — Nathan Peiffer-Smadja (@nathanpsmad) May 22, 2020
Gwilliams, King, Marantz & Poeppel, bioRxiv 2020
tl;dr: Decoding the neural dynamics underlying phonetic representations shows how the brain can keep track of multiple phonemes until the corresponding word is identified.
our new paper "Neural dynamics of phoneme sequencing" is now on bioRxiv!https://t.co/jeTipPTXuf— Laura Gwilliams (@GwilliamsL) April 6, 2020
conducted with dream-team @JeanRemiKing @AlecMarantz @davidpoeppel, we use MEG to study how phonemes are processed in continuous naturalistic speech
short summary in thread below:
Between #AI and neuroscience: a conversation with Jean-Rémi King @JeanRemiKing @Cognition_ENS, @CNRS researcher specializing in the workings of the human brain 🧠 https://t.co/6hxwyGHECP pic.twitter.com/cAnJwbbbCj — École normale supérieure | PSL (@ENS_ULM) March 10, 2020
The Human Brain Encodes a Chronicle of Visual Events at Each Instant of Time Through the Multiplexing of Traveling Waves
Wyart and King, Journal of Neuroscience 2021
tl;dr: We measure brain responses to image sequences and show how the brain recruits a hierarchy of neural processes to efficiently represent multiple snapshots of the past. Check out our tweet thread for the illustrated summary.
Gwilliams & King, bioRxiv 2019
tl;dr: When an image is ambiguous, the brain slowly recruits a hierarchy of recurrent processes to generate categorical percepts. Check out our tweet thread for the illustrated summary.
0/9: "Recurrent Processes Emulate a Cascade of Hierarchical Decisions", by @GwilliamsL and I, the tl;dr thread:— Jean-Rémi King (@JeanRemiKing) November 15, 2019
3/9 Their average brain response confirms a fast feedforward recruitment of their visual hierarchies pic.twitter.com/Y39WYwJ2Yx — Jean-Rémi King (@JeanRemiKing) November 15, 2019
Claassen et al, New England Journal of Medicine 2019
tl;dr: Patients with acute brain injury can sometimes be behaviorally unresponsive. Yet we show that 15% of them still exhibit brain responses to motor commands.