Fortifying federated learning: security against model poisoning attacks
Abstract
Advances in distributed machine learning have the potential to transform future
networking and communication systems. Federated Learning (FL) provides an effective
framework for such learning, but its decentralized nature exposes it to poisoning
threats. Among these, model poisoning attacks significantly degrade FL performance:
an adversary replaces a functional model with a poisoned one by injecting malicious
updates during training. A poisoning attack typically shifts the model's decision
boundary, which makes the model's outputs unpredictable. Federated learning allows
data to fuel new AI applications by training models without exposing anyone's
confidential data to other parties. Many algorithms have been proposed to defend
against model poisoning in federated learning; some are effective, but most suffer
from shortcomings that leave the federated learning system insufficiently secured.
In this study, we highlight the main limitations of these algorithms and propose
a defense mechanism capable of defending against model poisoning in federated
learning.
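To make the threat concrete, the following is a minimal sketch (not the paper's proposed mechanism) of how a single poisoned client update can skew plain federated averaging, and how a common robust-aggregation baseline such as the coordinate-wise median blunts the effect. All names and parameters here (fed_avg, coordinate_median, NUM_CLIENTS, the scaling factor) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical setup: 10 clients each send a model update of dimension 5.
rng = np.random.default_rng(0)
NUM_CLIENTS = 10
DIM = 5

# Honest clients send updates clustered around the "true" direction.
true_update = np.ones(DIM)
updates = [true_update + 0.1 * rng.standard_normal(DIM) for _ in range(NUM_CLIENTS)]

# One malicious client injects poison during the training round by
# submitting a scaled, inverted update in place of its honest one.
updates[0] = -20.0 * true_update

def fed_avg(client_updates):
    """Plain federated averaging: a single large poisoned update drags the aggregate."""
    return np.mean(client_updates, axis=0)

def coordinate_median(client_updates):
    """Coordinate-wise median: a standard robust-aggregation baseline."""
    return np.median(client_updates, axis=0)

print("FedAvg aggregate:       ", fed_avg(updates))            # pulled toward the attacker
print("Coordinate-wise median: ", coordinate_median(updates))  # stays near the honest updates
```

Running this sketch shows the averaged aggregate pulled far from the honest clients' direction by the single poisoned update, while the median-based aggregate remains close to it, which is the kind of boundary shift and unpredictability the abstract refers to.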