
dc.contributor.advisor  Hossain, Dr. Muhammad Iqbal
dc.contributor.author  Rahman, Tahsinur
dc.contributor.author  Ahmed, Nusaiba
dc.contributor.author  Monjur, Shama
dc.contributor.author  Haque, Fasbeer Mohammad
dc.contributor.author  Kabir, Naweed
dc.date.accessioned  2023-08-08T05:32:47Z
dc.date.available  2023-08-08T05:32:47Z
dc.date.copyright  2023
dc.date.issued  2023-01
dc.identifier.other  ID: 19101146
dc.identifier.other  ID: 19101236
dc.identifier.other  ID: 18201125
dc.identifier.other  ID: 19101269
dc.identifier.other  ID: 19101053
dc.identifier.uri  http://hdl.handle.net/10361/19354
dc.description  This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2023.  en_US
dc.description  Cataloged from PDF version of thesis.
dc.description  Includes bibliographical references (pages 47-49).
dc.description.abstract  As the world moves toward an increasingly digital era, a large share of data is exchanged in the widely used PDF format. One of its biggest obstacles remains the age-old problem of malware. Although many anti-malware and anti-virus tools exist, a number of them cannot detect PDF malware. Emails carrying harmful attachments have recently been used in targeted cyber attacks against businesses; because most email servers do not allow executable attachments, attackers prefer non-executable files such as PDFs. In various domains, machine learning algorithms and neural networks have been shown to detect both known and previously unseen malware. However, it can be difficult to understand how these models reach their decisions. This lack of transparency is a problem, since understanding how an AI system makes decisions is essential to ensuring that it acts ethically and responsibly; machine and deep learning models may otherwise make biased or discriminatory decisions or have unintended consequences. Hence, Explainable AI comes into play. To address this issue, this paper applies the machine learning algorithms SGD (Stochastic Gradient Descent) and the XGBoost classifier, along with the deep learning algorithms Single Layer Perceptron and ANN (Artificial Neural Network), to classify a PDF file as malicious or clean, and examines their interpretability with the SHAP framework of Explainable AI (XAI) to obtain both global and local understanding of the models.  en_US
dc.description.statementofresponsibility  Tahsinur Rahman
dc.description.statementofresponsibility  Nusaiba Ahmed
dc.description.statementofresponsibility  Shama Monjur
dc.description.statementofresponsibility  Fasbeer Mohammad Haque
dc.description.statementofresponsibility  Naweed Kabir
dc.format.extent  49 pages
dc.language.iso  en  en_US
dc.publisher  Brac University  en_US
dc.rights  Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject  Malware  en_US
dc.subject  PDF  en_US
dc.subject  PDF-analysis  en_US
dc.subject  Cybersecurity  en_US
dc.subject  SGD  en_US
dc.subject  Machine-learning  en_US
dc.subject  Detection  en_US
dc.subject  Deep learning  en_US
dc.subject  Artificial neural network  en_US
dc.subject  Algorithm  en_US
dc.subject  Single layer perceptron  en_US
dc.subject  Extreme gradient boosting  en_US
dc.subject  Explainable artificial intelligence  en_US
dc.subject  Shapley additive explanations  en_US
dc.subject  ANN  en_US
dc.subject  SHAP  en_US
dc.subject  XAI  en_US
dc.subject  XGBoost  en_US
dc.subject  Classifiers  en_US
dc.subject.lcsh  Artificial intelligence.
dc.subject.lcsh  Computer security.
dc.title  PDFGuardian: An innovative approach to interpretable PDF malware detection using XAI with SHAP framework  en_US
dc.type  Thesis  en_US
dc.contributor.department  Department of Computer Science and Engineering, Brac University
dc.description.degree  B. Computer Science and Engineering
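
Illustrative note (not part of the thesis record): the abstract above describes classifying PDF files as malicious or clean with models such as XGBoost and then interpreting them with SHAP. A minimal sketch of that kind of pipeline is shown below, assuming hypothetical PDF structural features (e.g. counts of /JS or /OpenAction objects) and synthetic labels; all feature names, data, and parameters are assumptions made for the example, not the authors' actual dataset or code.

```python
# Minimal sketch: XGBoost classifier on hypothetical PDF structural features,
# explained with SHAP for global and local interpretability.
import numpy as np
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: counts of suspicious PDF constructs per file
# (JavaScript objects, /OpenAction entries, /Launch actions, embedded files,
# object streams, pages). Real work would extract these from actual PDFs.
rng = np.random.default_rng(0)
X = rng.integers(0, 20, size=(1000, 6)).astype(float)
feature_names = ["js_count", "openaction_count", "launch_count",
                 "embedded_files", "objstm_count", "page_count"]
# Synthetic labels for the sketch: 1 = malicious, 0 = clean.
y = (X[:, 0] + X[:, 1] > 20).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Gradient-boosted tree classifier, one of the models named in the abstract.
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# SHAP attributions: per-file (local) explanations of each prediction,
# aggregated by summary_plot into a global ranking of feature influence.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=feature_names)
```

TreeExplainer is the usual SHAP choice for gradient-boosted trees; the same shap_values array supplies local, per-file attributions, while summary_plot aggregates them into the global view of feature importance that the abstract refers to.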

