ResInvolution: an involution-ResNet fused global spatial relation leveraging model for histopathological image analysis under federated learning environment
Date: 2024-05
Publisher: Brac University
Author: Dipto, Shakib Mahmud

Abstract
Accessing image data for medical image analysis is challenging owing to privacy
concerns. Federated Learning is an approach that addresses this challenge by
training models without sharing raw data. With millions of learnable parameters,
the Residual Network (ResNet) is one of the most advanced architectures for
classifying medical images, but its resource-hungry nature means that using it
within a federated learning framework burdens the entire system. This research introduces a novel
architecture called Residual Involution (ResInvolution), specifically developed for
analyzing histopathological images within a federated learning environment. The
architecture utilizes a cutting-edge model, the Involution-ResNet Fused Global Spatial
Relation Leveraging model, to enhance the analysis process. The model is
remarkably lightweight, with fewer than 190,000 parameters; its efficiency and
ease of deployment make it well suited to medical image analysis tasks. By incorporating
involution operations into the ResNet framework, it becomes possible to adjust the
spatial weighting of features dynamically. The proposed model enables a comprehensive
analysis of intricate structures that lie beyond the capabilities of traditional
convolutional networks. This model has been deployed within a federated learning
environment, where privacy is prioritized. It also utilizes decentralized data sources,
eliminating the need to centralize sensitive medical images. This
approach ensures strict adherence to medical data privacy regulations while simultaneously
leveraging collective insights from multiple institutions. The model has
undergone rigorous testing on three distinct datasets: GasHisSDB, NCT-CRC-HE-100K,
and LC25000. In non-federated (centralized) training, the model achieves accuracies
of 91%, 95%, and 99% on these datasets, respectively, whereas in the federated
learning setting the accuracies are 91%, 93%, and 97%, respectively.
The model’s effectiveness is evaluated through various performance metrics, including
the confusion matrix, accuracy, precision, recall, F1-score, the Receiver Operating
Characteristic (ROC) curve, and the Area Under the ROC Curve (AUC) score. The
results highlight the model’s ability to adapt to various challenges, such as limited
data and irregular data distribution, commonly encountered in federated learning
environments. ResInvolution sets a new benchmark in medical image analysis,
enhancing the ability to interpret intricate medical images and paving the way
for future advancements in scalable, privacy-preserving deep learning technologies.
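
To make the mechanism concrete, below is a minimal PyTorch sketch of how an involution operation can replace the spatial convolutions inside a residual block, so that the weighting of each neighbourhood is generated dynamically from the input at that location. The class names (Involution2d, ResInvolutionBlock) and hyperparameters (kernel size, groups, reduction ratio) are illustrative assumptions, not the thesis implementation.

import torch
import torch.nn as nn

class Involution2d(nn.Module):
    """Involution: a location-specific kernel is generated from the input at each
    pixel and shared across channel groups (the inverse of convolution's sharing)."""

    def __init__(self, channels, kernel_size=3, groups=4, reduction=4):
        super().__init__()
        self.kernel_size = kernel_size
        self.groups = groups
        # Kernel-generating branch: predicts one K*K kernel per group, per pixel.
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.BatchNorm2d(channels // reduction),
            nn.ReLU(inplace=True),
        )
        self.span = nn.Conv2d(channels // reduction,
                              kernel_size * kernel_size * groups, kernel_size=1)
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        k = self.kernel_size
        # Dynamically generated spatial kernels: (B, G, K*K, H, W).
        kernels = self.span(self.reduce(x)).view(b, self.groups, k * k, h, w)
        # Unfolded K*K neighbourhoods: (B, G, C/G, K*K, H, W).
        patches = self.unfold(x).view(b, self.groups, c // self.groups, k * k, h, w)
        # Weight each neighbourhood by its own kernel and sum over the K*K positions.
        out = (kernels.unsqueeze(2) * patches).sum(dim=3)
        return out.view(b, c, h, w)


class ResInvolutionBlock(nn.Module):
    """Residual block in which the spatial convolutions are replaced by involution."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            Involution2d(channels), nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            Involution2d(channels), nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)  # identity skip connection


# Example: a single 64x64 feature map with 16 channels.
block = ResInvolutionBlock(16)
y = block(torch.randn(1, 16, 64, 64))   # -> torch.Size([1, 16, 64, 64])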
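
Likewise, the following is a minimal sketch of the weight-aggregation step typically used in federated learning (FedAvg-style weighted averaging), assuming each participating institution trains the model locally and shares only parameters. The function federated_average and the commented usage are hypothetical, not drawn from the thesis.

import copy

def federated_average(client_states, client_sizes):
    """FedAvg-style aggregation: average client weights, weighted by how many
    local samples each client (e.g. each hospital) trained on."""
    total = float(sum(client_sizes))
    avg_state = copy.deepcopy(client_states[0])
    for key in avg_state:
        avg_state[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return avg_state

# One communication round (sketch): each site trains on its own private
# histopathology images; only model weights leave the site, never the data.
# local_states = [train_locally(copy.deepcopy(global_model), site_data) for ...]
# global_model.load_state_dict(federated_average(local_states, local_sizes))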