Show simple item record

dc.contributor.advisor	Alam, Md. Golam Robiul
dc.contributor.author	Shruti, Abanti Chakraborty
dc.date.accessioned	2024-09-09T06:44:42Z
dc.date.available	2024-09-09T06:44:42Z
dc.date.copyright	©2024
dc.date.issued	2024-06
dc.identifier.other	ID 22366034
dc.identifier.uri	http://hdl.handle.net/10361/24034
dc.description	This thesis is submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science, 2024.	en_US
dc.description	Cataloged from the PDF version of the thesis.
dc.description	Includes bibliographical references (pages 76-81).
dc.description.abstract	Automatic sentiment recognition from speech data is crucial for many applications. As AI has grown in popularity, speech sentiment analysis has become increasingly important across industries, along with the volume of speech data. Bengali is the seventh most spoken language in the world, yet research on speech sentiment analysis in this language is lacking. This thesis investigates novel techniques to enhance speech sentiment recognition in under-resourced languages such as Bengali. We explore the efficacy of both unimodal (speech only) and multimodal (speech, image, and text) approaches with different fusion techniques. This research proposes a semi-supervised Random Forest model, which achieved consistent and robust performance across different modality combinations. The model demonstrated high accuracy with fewer features, showcasing the efficiency and effectiveness of SHAP-based semi-supervised learning in handling unlabeled data. Additionally, eight different feature extraction techniques were employed to extract acoustic features, while VGG19 and Bangla Word2Vec were used to extract image and text features, respectively. Moreover, this study experimented with different modality-based methods such as LSTM, CNN, and BanglaBERT, using the BanglaSER, SUBESCO, and KBES datasets. Among the various models tested, early fusion techniques proved the most effective, achieving an accuracy of up to 83% when combining speech and text modalities with LSTM classifiers, while the proposed semi-supervised model achieved its highest accuracy of 77% when combining audio, text, and image modalities. In contrast, late fusion techniques showed reduced performance, though including speech and image modalities improved accuracy to 62%. Detailed performance comparisons for unimodal systems indicate that traditional Random Forest models perform well with fully labeled datasets, but our semi-supervised model performs comparably well with only 20% labeled data.
Moreover, our proposed semi-supervised AdaBoost model, using only 20 features selected by SHAP-based feature importance, outperformed the traditional model trained with 50 features. Remarkably, the proposed Random Forest model trained with 20% labeled and 80% unlabeled data achieved over 70% accuracy across different feature selection methods, with the weighted feature selection technique achieving the highest accuracy of 72%. We believe this thesis will contribute significantly to Bangla speech sentiment recognition by providing a robust, efficient, and interpretable framework.	en_US
dc.description.statementofresponsibility	Abanti Chakraborty Shruti
dc.format.extent	97 pages
dc.language.iso	en	en_US
dc.publisher	Brac University	en_US
dc.rights	Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject	Sentiment analysis	en_US
dc.subject	Machine learning	en_US
dc.subject	Bengali speech	en_US
dc.subject	Decision tree	en_US
dc.subject	Random forest regressor	en_US
dc.subject	CNN	en_US
dc.subject	BanglaBERT	en_US
dc.subject	LSTM	en_US
dc.subject.lcsh	Automatic speech recognition--Bengali.
dc.subject.lcsh	Natural language processing (Computer science).
dc.subject.lcsh	Speech processing systems.
dc.subject.lcsh	Computational linguistics.
dc.subject.lcsh	Sentiment analysis--Data processing.
dc.title	Into the heart of Bangla speech: advancing speech sentiment recognition with semi-supervised multimodal machine learning model leveraging an iterative SHAP-based feature selection	en_US
dc.type	Thesis	en_US
dc.contributor.department	Department of Computer Science and Engineering, Brac University
dc.description.degree	M.Sc. in Computer Science

