Cognitive State Classification using Multi-modal Features

Hazra, Sumit (2023) Cognitive State Classification using Multi-modal Features. PhD thesis.

PDF (Restricted until 18/06/2027)
Restricted to Repository staff only

15 MB

Abstract

Recently, the classification of cognitive states has received considerable attention from several disciplines, such as psychology, cognitive science and medical engineering. Multi-modal features extracted from gait and EEG data play an important role in analysing and understanding different types of cognitive states. Existing gait analysis systems are very expensive, relying on high-end gold-standard cameras such as Qualisys, OptiTrack and Vicon. This thesis proposes a low-cost experimental setup for gait analysis and presents a unique and innovative method for classifying cognitive states using multi-modal features. The dynamics of human gait are studied with anatomical knowledge of the human body to understand cognitive states, and different vision-based and sensor-based approaches are applied to classify those states from multi-modal features. Camera calibration is an important step for measuring an instrument's accuracy through its parameters. A multi-Kinect setup is created because the measurement range of a single Kinect cannot capture the complete movements of a person. We apply fusion techniques to acquire synchronized data captured from multiple calibrated Kinects. Two fusion methods, Kalman filtering and a modified Set-Membership filtering, are compared for estimating the states of discrete-time linear systems. Both fusion techniques are tested on overground and treadmill data, and the outcomes are validated against the gold-standard cameras. The proposed Set-Membership filtering approach is compared quantitatively with state-of-the-art techniques. A further study determines the accuracy and reliability of gait features. A novel approach for human detection and tracking is proposed, which involves gait feature learning principles from depth and RGB videos. We apply various machine learning and deep learning models to the depth-based features.
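The Kalman-filter side of the fusion step described above can be illustrated with a minimal sketch. This is not the thesis code: the joint coordinate, the two-Kinect scenario and the noise variances below are assumed purely for illustration of a one-dimensional Kalman measurement update.

```python
import numpy as np

def kalman_fuse(x_est, p_est, z, r):
    """One Kalman measurement update: fuse the current estimate
    (x_est, with variance p_est) with observation z of variance r."""
    k = p_est / (p_est + r)          # Kalman gain
    x_new = x_est + k * (z - x_est)  # corrected state estimate
    p_new = (1.0 - k) * p_est        # reduced posterior variance
    return x_new, p_new

# Hypothetical knee x-coordinate (metres) seen by two calibrated Kinects.
x, p = 1.00, 0.04                      # prior from Kinect A
x, p = kalman_fuse(x, p, 1.10, 0.04)   # fuse the reading from Kinect B

print(round(x, 3), round(p, 3))  # fused estimate lies between the two readings
```

With equal measurement variances the fused estimate is the midpoint of the two readings and the posterior variance is halved, which is the intuition behind combining several calibrated Kinects: each added view tightens the state estimate.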
The feasibility of gait signatures is also studied using various statistical methods and validated on a benchmark dataset. A novel event-driven environment is created for the analysis of cognitive states under external stimuli, capturing EEG data with a 14-channel Emotiv neuro-headset. We extract Gammatone Cepstral Coefficient (GTCC) features from ambulatory EEG signals; the GTCC features obtain high feature-importance scores, indicating their discriminative ability. Various classification models are employed to achieve promising accuracy on the proposed features, and the entire approach is also validated on the benchmark SEED-IV dataset. Understanding human psychology and classifying cognitive states via multi-modal features is a new area of research in cognitive science. A novel approach using multi-modal feature analysis is proposed for the classification of cognitive states; the advantage of a multi-modal system is that it provides adequate and diversified data, ensuring data reliability for classification. Initially, the association between cognitive states and types of gait is estimated using Pearson's correlation coefficient, analysis of variance (ANOVA) and a Support Vector Machine (SVM) classifier. We propose temporal and non-temporal Bayesian network-based probabilistic models for estimating cognitive states, using techniques such as Gaussian Mixture Modelling with Expectation Maximization (GMM-EM), k-Nearest Neighbours (k-NN) and Principal Component Analysis (PCA) to calculate the input probabilities for the Dynamic Bayesian Network (DBN) model. Furthermore, we use deep learning classification models such as Gated Recurrent Units (GRU) and Convolutional Neural Networks (CNN) to classify the cognitive states. Standard statistical tests and comparative analysis against state-of-the-art research are performed on the acquired dataset to validate the experimental results.
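The first stage of the association analysis mentioned above, estimating how strongly a gait feature relates to a cognitive-state measure with Pearson's correlation coefficient, can be sketched as follows. The stride-length values and state scores are synthetic numbers invented for the example, not thesis data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()          # centre both samples
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical data: one gait feature against a cognitive-state rating.
stride_len  = [1.10, 1.25, 1.05, 1.30, 0.95, 1.20]  # metres
state_score = [2, 4, 2, 5, 1, 4]                     # e.g. an arousal rating

r = pearson_r(stride_len, state_score)
print(round(r, 3))  # close to +1 here: a strong positive association
```

In practice such a coefficient would be computed per feature and per subject, with ANOVA and an SVM classifier then used, as the abstract describes, to confirm that the association carries over to separable cognitive-state classes.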

Item Type: Thesis (PhD)
Uncontrolled Keywords: Gait; Kinect v2.0; EEG; GTCC; Dynamic Bayesian Network (DBN); Deep Convolutional Generative Adversarial Network (DCGAN)
Subjects: Engineering and Technology > Biomedical Engineering
Engineering and Technology > Computer and Information Science > Networks
Engineering and Technology > Computer and Information Science > Image Processing
Divisions: Engineering and Technology > Department of Computer Science Engineering
ID Code: 10530
Deposited By: IR Staff BPCL
Deposited On: 17 Jun 2025 09:52
Last Modified: 17 Jun 2025 09:52
Supervisor(s): Nandy, Anup
