In this study, we classified emotions from EEG signals acquired from users, and conducted classification experiments with the SVM (Support Vector Machine) and K-means algorithms. Out of the 32 channels measured, we used the 15 channels (CP6, Cz, FC2, T7, PO4, AF3, CP1, CP2, C3, F3, FC6, C4, Oz, T8, and F8) in which emotion classification had been clearly observed in previous studies. Emotions were induced by DVD viewing and by the IAPS (International Affective Picture System), a picture-based stimulation method, and the users' emotional states were labeled using the SAM (Self-Assessment Manikin) method. The acquired EEG signals were preprocessed with an FIR filter, and artifacts (eye blinks) were removed using ICA (Independent Component Analysis). The preprocessed data were then transformed into the frequency domain via the FFT for feature extraction. Finally, the classification algorithms were applied: K-means achieved 70% accuracy, while SVM achieved 71.85%, showing better accuracy, and the results were compared with those of previous studies that used SVM.
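The following is a minimal sketch, in Python, of the kind of pipeline the abstract describes (FIR filtering, ICA-based eye-blink removal, FFT band-power features, and SVM versus K-means classification). The sampling rate, filter pass band, frequency bands, the choice of which ICA component to drop, and the synthetic data are all assumptions for illustration, not values reported in the paper.

```python
# Hypothetical sketch of the described EEG emotion-classification pipeline.
import numpy as np
from scipy.signal import firwin, filtfilt
from sklearn.decomposition import FastICA
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score

FS = 256            # assumed sampling rate (Hz), not stated in the abstract
N_CHANNELS = 15     # CP6, Cz, FC2, T7, PO4, AF3, CP1, CP2, C3, F3, FC6, C4, Oz, T8, F8

def preprocess(trial):
    """FIR band-pass filtering followed by ICA-based artifact reduction."""
    # Linear-phase FIR band-pass (assumed 1-50 Hz pass band).
    taps = firwin(numtaps=101, cutoff=[1.0, 50.0], pass_zero=False, fs=FS)
    filtered = filtfilt(taps, [1.0], trial, axis=1)
    # ICA decomposition; in practice the eye-blink component would be
    # identified (e.g. by correlation with frontal channels) and zeroed out.
    ica = FastICA(n_components=N_CHANNELS, random_state=0)
    sources = ica.fit_transform(filtered.T)          # (samples, components)
    sources[:, 0] = 0.0                              # placeholder for the assumed blink component
    return ica.inverse_transform(sources).T          # back to (channels, samples)

def band_power_features(trial):
    """FFT-based band powers per channel as the feature vector."""
    spectrum = np.abs(np.fft.rfft(trial, axis=1)) ** 2
    freqs = np.fft.rfftfreq(trial.shape[1], d=1.0 / FS)
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]    # theta, alpha, beta, gamma (assumed)
    return np.concatenate([
        spectrum[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in bands
    ])

# Synthetic stand-in data: 40 trials, 15 channels, 4 s each, binary emotion labels.
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, N_CHANNELS, 4 * FS))
labels = rng.integers(0, 2, size=40)

X = np.array([band_power_features(preprocess(t)) for t in trials])

svm_acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean()
kmeans_pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
kmeans_acc = max(np.mean(kmeans_pred == labels), np.mean(kmeans_pred != labels))
print(f"SVM CV accuracy: {svm_acc:.3f}, K-means cluster accuracy: {kmeans_acc:.3f}")
```

On real data, labeled trials (e.g. from SAM self-reports) would replace the synthetic arrays, and the supervised SVM would be expected to outperform the unsupervised K-means clustering, consistent with the 71.85% versus 70% result reported above.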