A Pre-trained Framework for Multilingual Brain Decoding Using Non-invasive Recordings
- Authors: Yi Guo, Yihang Dong, Michael Kwok-Po Ng, Shuqiang Wang
- Affiliations: Department of Computer Science, University of XYZ; Department of Electrical Engineering, University of ABC; Department of Mathematics, University of DEF; Department of Physics, University of GHI
Brain-computer interfaces (BCIs) that decode speech from brain recordings have broad application potential in fields such as clinical rehabilitation and cognitive neuroscience. However, current decoding methods remain limited to single-language, single-subject, and single-modality settings, restricting their clinical applicability and generalizability. Here we propose a joint multilingual, multi-subject, and multimodal decoding framework. It maps diverse brain recordings into a unified semantic space defined by a pre-trained multilingual model (PMM), enabling decoding across multiple languages, subjects, and neuroimaging modalities. The framework is validated on non-invasive brain recordings from 159 participants across four languages. Experimental results show strong generalization across multilingual, multi-subject, and multimodal settings. More importantly, the unified semantic space enables cross-lingual mapping enhancement, allowing the framework to boost decoding performance for underrepresented languages and thereby promote linguistic fairness, which is vital for their inclusion in BCI applications. Overall, the framework establishes a potential new paradigm for brain decoding and opens new paths for broader BCI applications.
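To make the core idea of mapping brain recordings into a shared PMM semantic space concrete, here is a minimal sketch in PyTorch. It is not the authors' implementation: the module names (`ModalityEncoder`, `alignment_loss`), the convolutional encoder, the 768-dimensional embedding size, and the contrastive (CLIP-style) alignment objective are all illustrative assumptions; the abstract only specifies that diverse recordings are projected into the embedding space of a pre-trained multilingual model.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Hypothetical encoder mapping one neuroimaging modality (e.g. EEG/MEG)
    into the embedding space of a frozen pre-trained multilingual model (PMM)."""
    def __init__(self, in_channels: int, pmm_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, hidden_dim, kernel_size=7, padding=3),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),          # pool over the time axis
            nn.Flatten(),
            nn.Linear(hidden_dim, pmm_dim),   # project into the PMM semantic space
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) brain recording
        return self.net(x)

def alignment_loss(brain_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    """Assumed contrastive objective: pull each brain embedding toward the PMM
    embedding of the sentence the participant heard or read."""
    brain_emb = nn.functional.normalize(brain_emb, dim=-1)
    text_emb = nn.functional.normalize(text_emb, dim=-1)
    logits = brain_emb @ text_emb.t() / 0.07
    targets = torch.arange(len(brain_emb))
    return nn.functional.cross_entropy(logits, targets)

# Toy usage: 8 EEG trials (64 channels, 1000 time points) aligned to 768-d PMM text embeddings.
encoder = ModalityEncoder(in_channels=64, pmm_dim=768)
eeg = torch.randn(8, 64, 1000)
pmm_text_emb = torch.randn(8, 768)  # would come from the frozen multilingual text model
loss = alignment_loss(encoder(eeg), pmm_text_emb)
loss.backward()
```

Under this reading, each modality (and, with suitable input layers, each subject) gets its own lightweight encoder, while the shared target space is fixed by the PMM; this is what would allow decoding to transfer across languages, subjects, and modalities, and it is one plausible mechanism for the cross-lingual mapping enhancement described above.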