
dc.contributor.advisor	Alam, Md. Golam Rabiul
dc.contributor.advisor	Reza, Md. Tanzim
dc.contributor.author	Tafannum, Faiza
dc.contributor.author	Shopnil, Mir Nafis Sharear
dc.contributor.author	Salsabil, Anika
dc.contributor.author	Ahmed, Navid
dc.date.accessioned	2021-11-04T07:06:55Z
dc.date.available	2021-11-04T07:06:55Z
dc.date.copyright	2021
dc.date.issued	2021-09
dc.identifier.other	ID: 17101063
dc.identifier.other	ID: 17101423
dc.identifier.other	ID: 17101498
dc.identifier.other	ID: 17101373
dc.identifier.uri	http://hdl.handle.net/10361/15604
dc.description	Cataloged from PDF version of thesis.
dc.description	Includes bibliographical references (pages 36-38).
dc.description	This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2021.	en_US
dc.description.abstract	Social media and its users are vulnerable to the spread of rumors; protecting users from such rumors is therefore extremely important. This research proposes a novel approach to rumor detection in social media built on multiple robust models: Support Vector Machine, XGBoost Classifier, Random Forest Classifier, Extra Trees Classifier, and Decision Tree Classifier. We further combine these five machine learning models into our own hybrid model. We then apply two deep learning models, Long Short-Term Memory (LSTM) and Bidirectional Encoder Representations from Transformers (BERT), both of which show promising results with high accuracy. For evaluation, we use two datasets: the COVID-19 Fake News Dataset, and a concatenation of the publicly available Twitter15 and Twitter16 datasets. The datasets contain posts from both Facebook and Twitter. We extract the textual content of the source posts, convert it into vector representations, fit the models on these representations, and evaluate their predictions. Such artificial intelligence algorithms are often referred to as “black boxes”: data goes into the box and predictions come out, but what happens inside frequently remains unclear. Although there have been many notable works on fake news detection, work on rumor detection still lags behind, and the models used in existing works do not explain their decision-making process. With explainable AI, however, the opaque process inside the black box can be explained. We use LIME to explain our models’ predictions: for the models with the highest accuracy, we illustrate which features of the data contribute most to a post being predicted as a rumor or a non-rumor, thus demystifying the black-box learning models. Our hybrid model achieves accuracies of 93.22% and 82.49%, LSTM achieves 99.81% and 98.41%, and BERT achieves 99.62% and 94.80% on the COVID-19 and concatenated Twitter15-Twitter16 datasets, respectively.	en_US
dc.description.statementofresponsibility	Faiza Tafannum
dc.description.statementofresponsibility	Mir Nafis Sharear Shopnil
dc.description.statementofresponsibility	Anika Salsabil
dc.description.statementofresponsibility	Navid Ahmed
dc.format.extent	37 Pages
dc.language.iso	en_US	en_US
dc.publisher	BRAC University	en_US
dc.rights	Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject	Social media	en_US
dc.subject	Rumor	en_US
dc.subject	Detection	en_US
dc.subject	Black box	en_US
dc.subject	Machine learning	en_US
dc.subject	Deep learning	en_US
dc.subject	Explainable	en_US
dc.subject	LIME	en_US
dc.subject	COVID-19	en_US
dc.subject	Classifier	en_US
dc.title	Demystifying black-box learning models of rumor detection from social media posts	en_US
dc.type	Thesis	en_US
dc.contributor.department	Department of Computer Science and Engineering, Brac University
dc.description.degree	B. Computer Science
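
The pipeline described in the abstract above (five classical classifiers combined into a hybrid model, posts converted to vector representations, and LIME used to explain individual predictions) can be sketched roughly as follows. This is a minimal illustrative sketch, not the thesis code: the TF-IDF vectorizer, the soft-voting ensemble, the toy posts and labels, and all hyperparameters are assumptions, since the record does not specify them.

# Minimal sketch of the kind of pipeline the abstract describes.
# Assumptions (not stated in the record): TF-IDF vectors, a soft-voting
# ensemble as the "hybrid model", and placeholder toy posts and labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import (VotingClassifier, RandomForestClassifier,
                              ExtraTreesClassifier)
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier
from lime.lime_text import LimeTextExplainer

# Placeholder data standing in for the real COVID-19 / Twitter15-Twitter16 splits.
train_posts = [
    "miracle cure stops the virus overnight, doctors shocked",
    "drinking hot water kills the virus instantly",
    "secret memo says lockdown will last ten years",
    "celebrity arrested for spreading the virus on purpose",
    "5g towers are the real cause of the outbreak",
    "garlic necklace guarantees full immunity",
    "health ministry opens three new testing centres",
    "who publishes updated guidance on mask use",
    "city extends vaccination hours at public clinics",
    "university moves exams online for the semester",
    "railway resumes limited service with distancing rules",
    "government releases weekly case statistics report",
]
train_labels = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # 1 = rumor, 0 = non-rumor

# The five base classifiers named in the abstract, combined into one hybrid model.
hybrid = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),  # probability=True enables soft voting
        ("xgb", XGBClassifier()),
        ("rf", RandomForestClassifier()),
        ("et", ExtraTreesClassifier()),
        ("dt", DecisionTreeClassifier()),
    ],
    voting="soft",
)

# Raw posts go in; TfidfVectorizer produces the vector representations the models fit on.
pipeline = make_pipeline(TfidfVectorizer(), hybrid)
pipeline.fit(train_posts, train_labels)

# LIME explains a single prediction: which tokens pushed the post toward
# "rumor" or "non-rumor", addressing the black-box concern raised in the abstract.
explainer = LimeTextExplainer(class_names=["non-rumor", "rumor"])
explanation = explainer.explain_instance(
    "miracle herb cures the virus, officials hide the truth",
    pipeline.predict_proba,
    num_features=5,
    num_samples=500,
)
print(explanation.as_list())  # [(token, weight), ...]; positive weights favour "rumor"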

