Show simple item record

dc.contributor.advisor: Rahman, Tanvir
dc.contributor.advisor: Shakil, Arif
dc.contributor.author: Hossain, Arafat
dc.contributor.author: Chakraborty, Akash
dc.contributor.author: Syara, Syeda Rifa
dc.contributor.author: Rahman, Saadman
dc.contributor.author: Tanmoy, Fahad Muntasir
dc.date.accessioned: 2023-10-12T10:43:55Z
dc.date.available: 2023-10-12T10:43:55Z
dc.date.copyright: ©2022
dc.date.issued: 2022-06-05
dc.identifier.other: ID 18101023
dc.identifier.other: ID 18101019
dc.identifier.other: ID 18101162
dc.identifier.other: ID 18101605
dc.identifier.other: ID 18101325
dc.identifier.uri: http://hdl.handle.net/10361/21792
dc.description: This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2022.
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 30-31).
dc.description.abstract: There are several works on mood detection by machine learning from people's physical and neurophysical data, along with works on emotion recognition using eye tracking. We want to show that a person's mood can be detected using their eye images alone, since mood is reflected in one's eyes. The goal is to establish a connection between an individual's mood and their eye images. The machine learning algorithm we use is the Convolutional Neural Network (CNN) because it does not require external feature extraction; the network learns to extract features by itself. In this paper, we developed two CNN models and used FER-2013 as our base dataset, from which we took only the eye images for each of six emotions: happy, fear, sad, angry, neutral, and surprise, to create our own dataset. We trained and tested our models on both the FER-2013 dataset and our own dataset and compared the results. On the FER-2013 dataset, model 1 achieved a final accuracy of 83.78% with a validation accuracy of 65.35%. When trained and tested on our own dataset, model 1 showed a final accuracy of 69.19% with a validation accuracy of 72.08%, whereas model 2 reached a final accuracy of 66.55% with a validation accuracy of 72.36%. The lower accuracy on our dataset is due to the limited number of training and testing images available; the accuracy can be improved with a larger dataset for training our models.
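The abstract describes a CNN classifier over six emotion classes that learns its own features from eye images. As a minimal illustrative sketch only (the thesis's actual architectures are not given in this record), the forward pass of such a model — one convolution, ReLU, max pooling, then a dense softmax over the six labels — can be written from scratch with NumPy; the 48×48 input size matches FER-2013 images, while the kernel count and layer sizes here are assumptions:

```python
import numpy as np

# The six emotion classes named in the abstract.
EMOTIONS = ["happy", "fear", "sad", "angry", "neutral", "surprise"]

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(eye_image, kernels, weights, bias):
    """Conv -> ReLU -> max-pool -> flatten -> dense softmax over 6 classes."""
    features = []
    for k in kernels:
        fmap = np.maximum(conv2d(eye_image, k), 0.0)  # ReLU activation
        features.append(max_pool(fmap).ravel())
    x = np.concatenate(features)
    return softmax(weights @ x + bias)

# Untrained demo with random weights (shapes only; not the thesis's model):
rng = np.random.default_rng(0)
img = rng.random((48, 48))                              # one grayscale eye image
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
# 48x48 -> conv 46x46 -> pool 23x23; 4 kernels -> 4 * 23 * 23 = 2116 features
W = rng.standard_normal((6, 4 * 23 * 23)) * 0.01
b = np.zeros(6)
probs = predict(img, kernels, W, b)                     # one probability per emotion
```

In a real pipeline the kernels, weights, and bias would be learned by backpropagation rather than drawn at random; the point of the sketch is only the feature-extraction path the abstract refers to.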
dc.description.statementofresponsibility: Arafat Hossain
dc.description.statementofresponsibility: Akash Chakraborty
dc.description.statementofresponsibility: Syeda Rifa Syara
dc.description.statementofresponsibility: Saadman Rahman
dc.description.statementofresponsibility: Fahad Muntasir Tanmoy
dc.format.extent: 37 pages
dc.language.iso: en
dc.publisher: Brac University
dc.rights: Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject: Machine learning
dc.subject: CNN
dc.subject: Convolutional neural network
dc.subject: Eye tracking
dc.subject: FER
dc.subject: Facial expression recognition
dc.subject.lcsh: Human-computer interaction
dc.subject.lcsh: Pattern recognition systems
dc.title: Accurate analysis of mood detection using eye-images rather than facial expression recognition (FER)
dc.type: Thesis
dc.contributor.department: Department of Computer Science and Engineering, Brac University
dc.description.degree: B.Sc. in Computer Science and Engineering
