Performance analysis of different machine learning approaches for single modal facial expression detection
Abstract
Facial expression detection plays a pivotal role in the study of emotion, cognitive processes,
and social interaction, and has potential applications in many aspects of everyday life, for
example real-time face detection, sentiment analysis, and CCTV violence prediction. In this thesis,
we investigate and analyze the performance of different machine learning approaches for
single-modal facial expression detection. With the proposed model, it is observed that the feature
extraction techniques incorporated in this work perform better in recognizing disparate
expressions than feeding the unprocessed raw dataset to the networks. Moreover, this study used
the Japanese Female Facial Expression (JAFFE) dataset to demonstrate the comparative
performance of different classical classifiers and neural network-based approaches, and how
viable they are for detecting facial expressions from single-modal information. Models of this
kind advance facial expression recognition for future applications. The proposed model therefore
demonstrates the feasibility of computer vision-based facial expression recognition for practical
applications such as surveillance and Human-Computer Interaction (HCI). In this system, Principal
Component Analysis (PCA) and t-distributed Stochastic Neighbor Embedding (t-SNE) are used
for dimensionality reduction and for visualizing the feature components in a 2D feature space.
For the classification and recognition tasks, we used different
classification algorithms such as K-Nearest Neighbors (KNN), Support Vector Machines (SVM),
Gaussian Naïve Bayes, Random Forest, Extra Trees, ensemble methods, and vanilla neural
networks. The dataset was split into 80% for training and 20% for testing. Finally, the best
accuracy, 90.63%, was achieved by the Artificial Neural Network (ANN) in the proposed model.
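The pipeline described above can be sketched in scikit-learn. This is a minimal illustration only: the JAFFE dataset requires a separate download, so a synthetic stand-in dataset is generated here, and all hyperparameters are illustrative assumptions rather than the thesis configuration.

```python
# Sketch of the described pipeline: PCA reduction, 80/20 split,
# and a comparison of classical classifiers against a small ANN.
# The data below is a synthetic placeholder, NOT the JAFFE dataset.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in for extracted facial-expression features (7 expression classes,
# matching JAFFE's seven expressions).
X, y = make_classification(n_samples=600, n_features=100, n_informative=40,
                           n_classes=7, random_state=0)

# Dimensionality reduction with PCA (component count is an assumption).
X_reduced = PCA(n_components=30, random_state=0).fit_transform(X)

# 80% training / 20% testing split, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X_reduced, y, test_size=0.2,
                                          random_state=0)

classifiers = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "GaussianNB": GaussianNB(),
    "RandomForest": RandomForestClassifier(random_state=0),
    "ExtraTrees": ExtraTreesClassifier(random_state=0),
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
}

# Fit each classifier and report its test-set accuracy.
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in classifiers.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

On the real JAFFE features, the thesis reports the ANN as the strongest of these at 90.63%; the synthetic scores printed here carry no such meaning and serve only to exercise the pipeline end to end.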