Automated Feature Engineering for Single-Trial EEG and Eye-Tracking Classification in Predictive Text Interfaces

  • Ard Kastrati,
  • R. Michael Winters,
  • Nemanja Djuric,
  • Yu-Te Wang

13th International Winter Conference on Brain-Computer Interface (BCI) | Published by IEEE

Brain-Computer Interfaces (BCIs) offer a direct connection between the human brain and digital systems, enabling innovative applications. However, realizing the full potential of BCIs remains challenging due to issues such as noise, artifacts, and limited data availability. In this study, we develop a multimodal classifier that integrates electroencephalogram (EEG) and eye-tracking (ET) data to decode user responses to predictive text suggestions. Using an automated feature engineering approach, our pipeline efficiently generates and selects relevant features without extensive manual intervention or deep theoretical insight. Applied to a recent BCI case study involving predictive text input, our method achieved higher classification accuracies than traditional approaches. It also revealed novel insights, such as behavioral patterns in which participants did not fully read incorrect predictions, and the enhanced performance of multimodal classifiers when combining ICA-preprocessed EEG data with ET data. While automated feature engineering is standard in other domains, it is seldom applied in BCI research. Our findings demonstrate that this approach is a valuable tool for data-driven exploration and for developing competitive single-trial classifiers in novel multimodal BCI paradigms, particularly during the initial stages of research with limited data.
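The generate-then-select workflow the abstract describes can be illustrated with a minimal sketch. This is not the authors' pipeline; it is a hedged toy example on synthetic stand-in data (trial counts, channel counts, feature set, and the `SelectKBest` selector are all illustrative assumptions) showing the general pattern: exhaustively compute simple per-channel statistics from EEG and ET trials, then let a statistical filter pick the informative ones before classification.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in data: 120 trials, 8 EEG channels x 250 samples,
# plus 2 eye-tracking channels (e.g. gaze x/y) x 250 samples.
n_trials = 120
eeg = rng.standard_normal((n_trials, 8, 250))
et = rng.standard_normal((n_trials, 2, 250))
y = rng.integers(0, 2, n_trials)
# Inject a weak class-dependent shift so selection has something to find.
eeg[y == 1, 0, :] += 0.4

def generate_features(signals):
    """'Generate' step: compute many simple per-channel statistics."""
    feats = [signals.mean(-1), signals.std(-1), signals.min(-1),
             signals.max(-1), np.abs(np.diff(signals, axis=-1)).mean(-1)]
    return np.concatenate(feats, axis=1)

# Multimodal feature matrix: EEG and ET features side by side -> (120, 50).
X = np.hstack([generate_features(eeg), generate_features(et)])

# 'Select' step: univariate filter keeps the top-10 features,
# then a linear classifier is evaluated with cross-validation.
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10),
                    LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"features: {X.shape[1]}, cv accuracy: {acc:.2f}")
```

In a real pipeline the feature generator would be far richer (spectral band power, fixation and saccade statistics, etc.) and selection would be nested inside cross-validation exactly as the `Pipeline` above does, so that the filter never sees the held-out trials.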