Citation: Dong Y D, Ren F J, Li C B. EEG emotion recognition based on linear kernel PCA and XGBoost[J]. Opto-Electron Eng, 2021, 48(2): 200013. doi: 10.12086/oee.2021.200013

EEG emotion recognition based on linear kernel PCA and XGBoost

    Fund Project: National Natural Science Foundation of China-Shenzhen Joint Foundation (Key Project) (U1613217)
  • Linear-kernel principal component analysis (PCA) and the XGBoost model are introduced to design an electroencephalogram (EEG) classification algorithm for four emotional states under continuous audio-visual stimulation. To reflect universality, the traditional power spectral density (PSD) is used as the EEG feature, and a feature-importance measure under the weight index is obtained through XGBoost learning. Linear-kernel PCA is then applied to the threshold-selected features, which are fed into the XGBoost model for recognition. Experimental analysis shows that the gamma band plays a more important role in XGBoost recognition than the other bands; in the channel distribution, the central, parietal, and right occipital regions contribute more than other brain regions. The recognition accuracy of the algorithm is 78.4% and 92.6% under the subjects-all-participation (SAP) and subject-single-dependent (SSD) schemes, respectively, a clear improvement over results reported in other literature. The proposed scheme helps improve the recognition performance of brain-computer emotion systems under audio-visual stimulation.
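A minimal sketch of the PSD feature step is given below, assuming Welch's method, DEAP-style preprocessed EEG (32 channels sampled at 128 Hz), and conventional band boundaries; the paper's exact estimator settings and band limits may differ.

```python
# PSD band-power features per channel and band (a hedged sketch, not the
# paper's exact settings).
import numpy as np
from scipy.signal import welch

FS = 128  # sampling rate in Hz (the DEAP dataset's preprocessed rate)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def psd_features(trial):
    """trial: (n_channels, n_samples) -> flat band-power feature vector."""
    freqs, psd = welch(trial, fs=FS, nperseg=2 * FS, axis=-1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)  # length = n_channels * n_bands

# Example: one 60-second, 32-channel trial
x = np.random.randn(32, 60 * FS)
print(psd_features(x).shape)  # (128,) = 32 channels x 4 bands
```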

  • Overview: Affective computing aims to build a harmonious human-computer environment in which computers can recognize and understand emotions. Research in affective computing has penetrated face recognition, speech recognition, text representation, gesture expression, and physiological signals, providing more humanized and emotional interfaces at all levels of human life. As the most direct physiological expression of the central nervous system, EEG contains rich emotional information. Compared with the signals studied in other fields, the emotional information in EEG is more authentic and more useful as a reference, and EEG-based emotional expression is not easily misled by subjective consciousness.

    To accurately distinguish different emotional states from EEG signals, and to explore the emotion-related frequency bands and brain areas in time and space with corresponding models, this paper introduces linear-kernel principal component analysis and the XGBoost model to design an EEG classification algorithm for four emotional states under continuous audio-visual stimulation. XGBoost offers high speed, low computational complexity, easy parameter tuning, strong controllability, and high recognition performance; as a recognition and prediction model, it has performed excellently in industry, machine learning, and scientific research competitions. In addition, XGBoost can measure the importance of features during training according to a chosen feature-importance index, making the features more transparent in the recognition process.

    First, the traditional power spectral density (PSD) is used as the EEG feature to reflect universality, and the feature-importance measure under the weight index is obtained through XGBoost learning. Then linear-kernel principal component analysis is used to raise the dimension of the threshold-selected important features, making them more separable in the high-dimensional space. Finally, the processed features are fed into the XGBoost model for recognition. Experimental analysis shows that the gamma band plays a more important role in XGBoost recognition than the other bands; in the channel distribution, the central, parietal, and right occipital regions contribute more than other brain regions. The recognition accuracy of the algorithm is 78.4% and 92.6% under the subjects-all-participation (SAP) and subject-single-dependent (SSD) schemes, respectively, a clear improvement over results reported in other literature. The proposed scheme is therefore helpful for improving the recognition performance of brain-computer emotion systems under audio-visual stimulation; a sketch of the selection-and-recognition pipeline follows below.
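The following is a minimal sketch of the pipeline just described: weight-index feature importance from XGBoost, threshold selection, linear-kernel PCA, and final XGBoost recognition. It assumes the scikit-learn and xgboost Python packages; the mean-importance threshold, component count, and hyperparameters are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch, not the paper's exact implementation.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Placeholder data: PSD features (n_trials, n_features) and four emotion labels.
X = np.random.randn(400, 128)
y = np.random.randint(0, 4, size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Step 1: learn the weight-index importance (how often each feature is used
# to split a tree across the boosted ensemble).
probe = XGBClassifier(n_estimators=200, max_depth=4).fit(X_tr, y_tr)
scores = probe.get_booster().get_score(importance_type="weight")
importance = np.zeros(X.shape[1])
for name, score in scores.items():  # keys look like "f12"
    importance[int(name[1:])] = score

# Step 2: threshold selection (keep features above mean importance; the
# paper's actual threshold may differ).
keep = importance > importance.mean()

# Step 3: linear-kernel PCA on the selected features. n_components is an
# assumed value; with a linear kernel the number of informative components
# is bounded by the rank of the selected-feature matrix.
kpca = KernelPCA(n_components=200, kernel="linear").fit(X_tr[:, keep])
Z_tr, Z_te = kpca.transform(X_tr[:, keep]), kpca.transform(X_te[:, keep])

# Step 4: final XGBoost recognition of the four emotional states.
clf = XGBClassifier(n_estimators=300, max_depth=6).fit(Z_tr, y_tr)
print("accuracy:", (clf.predict(Z_te) == y_te).mean())
```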
