Keywords: brain-computer interface, multimodal signal fusion, deep learning, human-computer interaction, signal decoding, neuroscience, data processing techniques
Abstract
Over the past decades, the rapid development of brain-computer interface (BCI) technology has opened new perspectives on human-computer interaction. Although traditional single-modality BCI systems have shown promising results in many applications, limitations in signal acquisition still constrain their decoding accuracy, real-time performance, and user adaptability. To address these issues, multimodal brain-computer interfaces (MMBCIs) have emerged, aiming to improve overall system performance by integrating information from multiple physiological signal sources. This paper reviews the basic concepts, signal sources, and signal fusion techniques of multimodal BCIs, along with representative applications, current challenges, and future perspectives. By analyzing the existing research literature, we aim to provide theoretical guidance and a supporting framework for further work in this area.