dc.contributor.advisor | Hossain, Muhammad Iqbal | |
dc.contributor.advisor | Reza, Md. Tanzim | |
dc.contributor.author | Anan, Fabiha | |
dc.contributor.author | Mamun, Kazi Shahed | |
dc.contributor.author | Kamal, Md Sifat | |
dc.contributor.author | Ahsan, Nizbath | |
dc.date.accessioned | 2024-05-08T05:27:43Z | |
dc.date.available | 2024-05-08T05:27:43Z | |
dc.date.copyright | ©2024 | |
dc.date.issued | 2024-01 | |
dc.identifier.other | ID: 20101085 | |
dc.identifier.other | ID: 20301471 | |
dc.identifier.other | ID: 20101231 | |
dc.identifier.other | ID: 23341119 | |
dc.identifier.uri | http://hdl.handle.net/10361/22774 | |
dc.description | This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, 2024. | en_US |
dc.description | Cataloged from PDF version of thesis. | |
dc.description | Includes bibliographical references (pages 39-41). | |
dc.description.abstract | Advances in distributed machine learning have the potential to transform future networking and communication systems. Federated Learning (FL) has made an effective framework for such learning possible, but its decentralized nature exposes it to poisoning attacks. Model poisoning attacks are among the most significant of these and can severely degrade FL's performance. In a model poisoning attack, an adversary injects malicious updates during training, replacing a functional model with a poisoned one. Such an attack typically alters the model's decision boundary, making its outputs unpredictable. Federated learning enables new AI applications by training models without letting anyone see or access other participants' confidential data. Many algorithms are currently used to defend against model poisoning in federated learning; some are efficient, but most have shortcomings that leave the federated learning system insufficiently secure. In this study, we highlight the main issues with these algorithms and propose a defense mechanism capable of defending against model poisoning in federated learning. | en_US |
dc.description.statementofresponsibility | Fabiha Anan | |
dc.description.statementofresponsibility | Kazi Shahed Mamun | |
dc.description.statementofresponsibility | Md Sifat Kamal | |
dc.description.statementofresponsibility | Nizbath Ahsan | |
dc.format.extent | 49 pages | |
dc.language.iso | en | en_US |
dc.publisher | Brac University | en_US |
dc.rights | Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. | |
dc.subject | Model poisoning | en_US |
dc.subject | Federated learning | en_US |
dc.subject | Machine learning | en_US |
dc.subject.lcsh | Deep learning | |
dc.subject.lcsh | Computer networks--Security measures | |
dc.title | Fortifying federated learning: security against model poisoning attacks | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | Department of Computer Science and Engineering, Brac University | |
dc.description.degree | B.Sc. in Computer Science | |