Brain-Computer Interface (BCI) for Language Decoding

Language brain-computer interfaces (BCIs) have the potential to improve the quality of life of individuals with speech impairments caused by conditions such as amyotrophic lateral sclerosis (ALS) and locked-in syndrome. Deep learning-based language BCIs capture brain activity during linguistic tasks, analyze these signals, and translate them into text or synthesized speech. A BCI that can synthesize speech from thoughts in real time could empower individuals with impaired speech to communicate despite physical limitations. We are among the first groups to use magnetoencephalography (MEG) to decode full sentences from neural activity by coupling large language models (LLMs) with deep learning-based phoneme decoding. We demonstrated the feasibility of our MEG-based sentence decoding approach in a proof-of-concept study, achieving accuracies competitive with state-of-the-art MEG sentence decoding studies (Ru & Vélez et al., 2025).
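The idea of coupling a phoneme decoder with a language model can be illustrated with a minimal sketch. The phoneme probabilities, lexicon, and word prior below are all hypothetical stand-ins: in the actual pipeline, a deep network produces phoneme probabilities from MEG recordings and an LLM supplies context-dependent sentence scores.

```python
import math

# Hypothetical per-frame phoneme probabilities, standing in for the
# output of a deep learning decoder applied to MEG signals.
phoneme_probs = [
    {"HH": 0.7, "K": 0.2, "B": 0.1},      # frame 1
    {"AY": 0.8, "IY": 0.15, "EH": 0.05},  # frame 2
]

# Hypothetical lexicon mapping candidate words to phoneme sequences.
lexicon = {"hi": ["HH", "AY"], "key": ["K", "IY"], "bay": ["B", "EH"]}

# Toy word prior, standing in for the LLM's context-dependent score.
lm_prior = {"hi": 0.6, "key": 0.3, "bay": 0.1}

def score(word):
    """Combine neural-decoder evidence and the language-model prior in log space."""
    phones = lexicon[word]
    decoder_logp = sum(math.log(frame.get(ph, 1e-9))
                       for frame, ph in zip(phoneme_probs, phones))
    return decoder_logp + math.log(lm_prior[word])

best = max(lexicon, key=score)
print(best)  # the candidate maximizing decoder evidence + LM prior
```

Combining the two scores in log space lets the language model resolve ambiguity when the neural evidence alone is weak, which is the role the LLM plays when rescoring full decoded sentences.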

Language decoding brain-computer interface frameworks: The general decoding workflow (left) and our current MEG-based language decoding workflow (right). (Ru & Vélez et al., 2025; Vélez et al., 2025).

Abbreviations: MEG: Magnetoencephalography; EEG: Electroencephalography; fMRI: Functional Magnetic Resonance Imaging; LM: Language Model; LLM: Large Language Model

In an ongoing project, we are further developing, training, and testing our deep learning decoding model to build on our existing success in language decoding. Informed by simulation-based validation experiments that identified how our decoding model and neural data could be improved, we are also building an extensive data corpus for language decoding, spanning a variety of tasks in both healthy and patient populations, including individuals with epilepsy and mild ALS. By compiling a corpus better suited to language decoding, we aim to narrow the gap between invasive and non-invasive approaches and bring non-invasive decoding closer to clinical feasibility.

Examples of true and decoded sentences from the MEG dataset. (Ru & Vélez et al., 2025)

Related Publications

Reconstructing perceived speech using magnetoencephalography and deep learning

L. Ru, A. Vélez, B. Ahmadi, A. Babajani-Feremi, “Reconstructing perceived speech using magnetoencephalography and deep learning,” [Manuscript in Preparation]. University of Florida, 2025.

Language decoding with non-invasive brain-computer interfaces: A comprehensive review

A. Vélez, B. Ahmadi, L. Ru, A. Babajani-Feremi, “Language decoding with non-invasive brain-computer interfaces: A comprehensive review,” [Manuscript in Preparation]. University of Florida, 2025.