
dc.contributor.advisor: Akhond, Mostafijur Rahman
dc.contributor.author: Islam, Shadman
dc.contributor.author: Rahman, Moshiur
dc.contributor.author: Ali, Samina Tuz Zohura
dc.contributor.author: Shahrior, Tawhid
dc.date.accessioned: 2020-10-18T06:23:34Z
dc.date.available: 2020-10-18T06:23:34Z
dc.date.copyright: 2019
dc.date.issued: 2019-12
dc.identifier.other: ID: 16101304
dc.identifier.other: ID: 19341028
dc.identifier.other: ID: 19141018
dc.identifier.other: ID: 19341026
dc.identifier.uri: http://hdl.handle.net/10361/14062
dc.description: This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, 2019. [en_US]
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 50-52).
dc.description.abstract: Although modern sentiment analysis, the science of understanding human emotions, took shape only about a decade ago, its roots can be traced back to the middle of the nineteenth century. Its aim is to extract and predict human emotions from facial expressions, speech and, in some cases, text. Inspired by existing work, we propose a multimodal model that uses both facial cues and speech to forecast customers' sentiment and satisfaction towards a given product. Our model helps companies gain key insights into specific market regions and their customers, and thereby a competitive advantage. In this study, we estimate a demographic's perception of a product based on emotions extracted from customers' facial expressions and speech. Although much research has been done in the field, very few studies present multifaceted, integrated systems in which the different components rely on each other to produce a single result. We extract people's emotions by recording their facial cues and speech patterns as they interact with a specific product, a mobile phone in our case. We analyze their facial expressions using AWS Rekognition. For the textual part, we analyze sentiment using a recurrent neural network built with TensorFlow and Keras's Sequential model. Finally, we merge the emotions obtained from the video with the textual sentiment to form the features for our predictive model, which is trained using XGBoost. Using 3-fold cross-validation repeated over 5 iterations, we achieve an average accuracy of approximately 81 percent with a standard deviation of 0.065. [en_US]
dc.description.statementofresponsibility: Shadman Islam
dc.description.statementofresponsibility: Moshiur Rahman
dc.description.statementofresponsibility: Samina Tuz Zohura Ali
dc.description.statementofresponsibility: Tawhid Shahrior
dc.format.extent: 52 pages
dc.language.iso: en_US [en_US]
dc.publisher: Brac University [en_US]
dc.rights: Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject: Product market [en_US]
dc.subject: XGBoost [en_US]
dc.subject: Sentiment Analysis [en_US]
dc.title: A multimodal approach of sentiment analysis to predict customer emotions towards a product [en_US]
dc.type: Thesis [en_US]
dc.contributor.department: Department of Computer Science and Engineering, Brac University
dc.description.degree: B. Computer Science
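
The evaluation described in the abstract (an XGBoost classifier scored with 3-fold cross-validation over 5 iterations) can be illustrated with a minimal Python sketch. This is not the thesis's code: the feature matrix, labels, and hyperparameters below are placeholder assumptions standing in for the merged facial-emotion and text-sentiment features.

# Minimal sketch of the abstract's evaluation step: XGBoost with repeated
# 3-fold cross-validation (5 repeats). All data and hyperparameters are
# hypothetical placeholders, not the thesis's actual values.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import RepeatedKFold, cross_val_score

# Placeholder features: rows = customers, columns = merged facial-emotion
# and text-sentiment scores; labels = satisfied / not satisfied.
rng = np.random.default_rng(0)
X = rng.random((200, 16))
y = rng.integers(0, 2, size=200)

model = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
cv = RepeatedKFold(n_splits=3, n_repeats=5, random_state=42)  # 3 folds, 5 iterations
scores = cross_val_score(model, X, y, scoring="accuracy", cv=cv)

# The thesis reports roughly 0.81 mean accuracy with 0.065 standard deviation.
print(f"mean accuracy: {scores.mean():.3f}, std: {scores.std():.3f}")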

