Show simple item record

dc.contributor.advisor: Hossain, Muhammad Iqbal
dc.contributor.advisor: Reza, Md. Tanzim
dc.contributor.author: Anan, Fabiha
dc.contributor.author: Mamun, Kazi Shahed
dc.contributor.author: Kamal, Md Sifat
dc.contributor.author: Ahsan, Nizbath
dc.date.accessioned: 2024-05-08T05:27:43Z
dc.date.available: 2024-05-08T05:27:43Z
dc.date.copyright: ©2024
dc.date.issued: 2024-01
dc.identifier.other: ID: 20101085
dc.identifier.other: ID: 20301471
dc.identifier.other: ID: 20101231
dc.identifier.other: ID: 23341119
dc.identifier.uri: http://hdl.handle.net/10361/22774
dc.description: This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, 2024. [en_US]
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 39-41).
dc.description.abstract: Advances in distributed machine learning have the potential to transform future networking and communication systems. Federated Learning (FL) provides an effective framework for such training, but its decentralized nature exposes it to poisoning attacks. Model poisoning attacks are among the most damaging of these and significantly degrade FL's performance. In a model poisoning attack, an adversary replaces a functional model with a poisoned one by injecting malicious updates during training. Such an attack typically shifts the model's decision boundary, making its outputs unpredictable. At the same time, federated learning unlocks data for new AI applications by training models without anyone seeing or accessing others' confidential data. Many algorithms are currently used to defend against model poisoning in federated learning; some are efficient, but most have shortcomings that leave the system insufficiently secure. In this study, we highlight the main issues of these algorithms and propose a defense mechanism capable of defending against model poisoning in federated learning. [en_US]
dc.description.statementofresponsibility: Fabiha Anan
dc.description.statementofresponsibility: Kazi Shahed Mamun
dc.description.statementofresponsibility: Md Sifat Kamal
dc.description.statementofresponsibility: Nizbath Ahsan
dc.format.extent: 49 pages
dc.language.iso: en [en_US]
dc.publisher: Brac University [en_US]
dc.rights: Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject: Model poisoning [en_US]
dc.subject: Federated learning [en_US]
dc.subject: Machine learning [en_US]
dc.subject.lcsh: Deep learning
dc.subject.lcsh: Computer networks--Security measures
dc.title: Fortifying federated learning: security against model poisoning attacks [en_US]
dc.type: Thesis [en_US]
dc.contributor.department: Department of Computer Science and Engineering, Brac University
dc.description.degree: B.Sc. in Computer Science
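
The abstract above describes model poisoning as the replacement of a functional global model with a poisoned one during training. As a rough illustration only (this sketch is not taken from the thesis; the client counts, update values, and the coordinate-wise-median defense shown here are assumptions chosen for the example), the following Python snippet shows how a single scaled malicious update can dominate plain federated averaging, and how a robust aggregator limits that influence.

# Hypothetical minimal sketch (not from the thesis): one malicious client
# poisons plain FedAvg via a scaled update; a robust aggregator
# (coordinate-wise median) blunts the attack. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def fedavg(updates):
    """Plain federated averaging: mean of the client model updates."""
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    """Simple robust aggregation: coordinate-wise median of client updates."""
    return np.median(updates, axis=0)

# Honest clients each send a small update step of roughly +0.1 per coordinate.
dim = 5
honest_updates = [rng.normal(loc=0.1, scale=0.02, size=dim) for _ in range(9)]

# Model-replacement style attack: the attacker scales its update by the number
# of clients so that, after averaging, the aggregate is dragged toward the
# attacker's chosen direction.
n_clients = 10
attacker_target = np.full(dim, -1.0)            # direction the attacker wants
malicious_update = n_clients * attacker_target  # scaled to dominate the mean

updates = np.stack(honest_updates + [malicious_update])

print("FedAvg aggregate:           ", np.round(fedavg(updates), 3))
print("Coordinate-median aggregate:", np.round(coordinate_median(updates), 3))
# The mean is pulled strongly negative by the single attacker, while the
# median stays close to the honest clients' ~0.1 step.

Replacing the mean with a robust statistic such as the median is only one of several defense families discussed in the literature on poisoning-resilient federated learning.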

