Show simple item record

dc.contributor.advisor  Alam, Md. Golam Rabiul
dc.contributor.advisor  Hossain, Muhammad Iqbal
dc.contributor.author  Mohiuzzaman, Md.
dc.contributor.author  Abedin, Ahmad Abrar
dc.contributor.author  Rahman, Shadab Afnan
dc.contributor.author  Chowdhury, Shafaq Arefin
dc.contributor.author  Ahmed, Shahadat
dc.date.accessioned  2025-02-04T03:58:43Z
dc.date.available  2025-02-04T03:58:43Z
dc.date.copyright  ©2024
dc.date.issued  2024-10
dc.identifier.other  ID 20301361
dc.identifier.other  ID 20201080
dc.identifier.other  ID 21101076
dc.identifier.other  ID 21101064
dc.identifier.other  ID 20301481
dc.identifier.uri  http://hdl.handle.net/10361/25284
dc.description  This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, 2024.  en_US
dc.description  Cataloged from PDF version of thesis.
dc.description  Includes bibliographical references (pages 32-33).
dc.description.abstract  Federated Learning (FL) is a decentralized machine learning paradigm that enables training a global model across numerous edge devices while preserving data privacy. However, FL faces significant challenges, particularly in environments with heterogeneous hardware capabilities, communication burdens, and constrained resources. In this thesis, we introduce a novel framework, TRI-FED-RKD, which incorporates forward and reverse knowledge distillation (RKD) alongside FedAvg in a hybrid architecture of convolutional neural networks (CNNs) and spiking neural networks (SNNs). Our approach employs a tri-layer hierarchical aggregation architecture consisting of client devices, intermediate (middle) servers, and a global server. We compare two federated architectures: standard federated learning and federated learning with forward and reverse distillation in a hierarchical setting (TRI-FED-RKD). The same model is used across all datasets so that the evaluation compares the architectures rather than the performance of any particular model. Depending on the use case, the network administrator can choose the teacher and student models, and the teacher model can differ across clients; the architecture therefore accommodates heterogeneity in teacher models. We evaluate TRI-FED-RKD on neuromorphic datasets such as DVS Gesture and N-MNIST, as well as on non-neuromorphic datasets such as MNIST, EMNIST, and CIFAR-10. Furthermore, we show that adding forward and reverse knowledge distillation to federated learning yields substantially better performance than federated learning without knowledge distillation on the non-neuromorphic datasets.  en_US
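
The abstract describes client-side forward and reverse knowledge distillation combined with two stages of FedAvg aggregation (clients to middle servers, middle servers to a global server). The following is a minimal sketch of that pipeline, assuming PyTorch; every name and hyperparameter here (kd_loss, fed_avg, local_round, global_round, alpha, temperature) is a hypothetical illustration of the idea under stated assumptions, not the thesis's actual implementation, and generic classification models stand in for the CNN teachers and SNN students.

```python
# Hypothetical sketch of tri-layer hierarchical FedAvg with forward/reverse
# knowledge distillation. Not the authors' code; names and defaults are assumed.
import copy
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Hinton-style soft-label distillation loss (KL divergence)."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits / t, dim=1),
        reduction="batchmean",
    ) * (t * t)

def fed_avg(state_dicts):
    """Uniform FedAvg: element-wise mean of model parameters."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def local_round(student, teacher, loader, alpha=0.5, lr=1e-3):
    """One client round: forward KD (teacher -> student), then reverse KD
    (student -> teacher). The weighting `alpha` is an assumed hyperparameter."""
    s_opt = torch.optim.Adam(student.parameters(), lr=lr)
    t_opt = torch.optim.Adam(teacher.parameters(), lr=lr)
    for x, y in loader:
        # Forward distillation: student learns from labels and teacher soft targets.
        s_opt.zero_grad()
        s_out, t_out = student(x), teacher(x).detach()
        loss_s = alpha * F.cross_entropy(s_out, y) + (1 - alpha) * kd_loss(s_out, t_out)
        loss_s.backward()
        s_opt.step()
        # Reverse distillation: teacher is refined with the student's soft targets.
        t_opt.zero_grad()
        t_out2, s_out2 = teacher(x), student(x).detach()
        loss_t = alpha * F.cross_entropy(t_out2, y) + (1 - alpha) * kd_loss(t_out2, s_out2)
        loss_t.backward()
        t_opt.step()
    # Only student weights are aggregated, so teacher architectures may differ
    # per client, matching the teacher-model heterogeneity noted in the abstract.
    return student.state_dict()

def global_round(middle_groups):
    """Tri-layer aggregation. `middle_groups` is a list where each element holds
    the (student, teacher, loader) tuples of one middle server's clients: each
    middle server FedAvgs its clients, then the global server FedAvgs the
    middle servers and returns the state dict to broadcast next round."""
    middle_models = []
    for clients in middle_groups:
        client_states = [local_round(s, t, dl) for (s, t, dl) in clients]
        middle_models.append(fed_avg(client_states))
    return fed_avg(middle_models)
```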
dc.description.statementofresponsibility  Md. Mohiuzzaman
dc.description.statementofresponsibility  Ahmad Abrar Abedin
dc.description.statementofresponsibility  Shadab Afnan Rahman
dc.description.statementofresponsibility  Shafaq Arefin Chowdhury
dc.description.statementofresponsibility  Shahadat Ahmed
dc.format.extent  42 pages
dc.language.iso  en  en_US
dc.publisher  BRAC University  en_US
dc.rights  BRAC University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject  Federated learning  en_US
dc.subject  Tri-layer architecture  en_US
dc.subject  SNN  en_US
dc.subject  Spiking neural networks  en_US
dc.subject  Dynamic vision sensor  en_US
dc.subject  DVS  en_US
dc.subject  Neuromorphic datasets  en_US
dc.subject  Knowledge distillation  en_US
dc.subject  CNN
dc.subject  Convolutional neural network
dc.subject.lcsh  Neural networks (Computer science).
dc.subject.lcsh  Computer network architectures.
dc.subject.lcsh  Deep learning (Machine learning).
dc.title  TRI-FED-RKD: integrating forward-reverse distillation with SNN and CNN within federated learning using tri layer hierarchical aggregation based architecture  en_US
dc.type  Thesis  en_US
dc.contributor.department  Department of Computer Science and Engineering, BRAC University
dc.description.degree  B.Sc. in Computer Science

