An interpretable diagnosis of retinal diseases using vision transformer and Grad-CAM
Date: 2024-01
Publisher: Brac University
Authors: Bhuiyan, Mahdi Hasan; Haldar, Sumit; Chowdhury, Maisha Shabnam; Bushra, Nazifa; Jilan, Tahsin Zaman
Abstract
Early detection of retinal diseases can help people avoid complete or partial blindness. In this research, we implement an interpretable diagnosis
of retinal diseases using a hybrid model that combines VGG-16 and a Swin Transformer,
and then visualize the predictions with Grad-CAM. Using Optical Coherence Tomography (OCT)
images gathered from various sources, this study develops a multi-label classification
approach for the diagnosis of several retinal diseases. The hybrid architecture
builds on the Vision Transformer (ViT), a recent and competitive development in
image-classification architectures that applies the original Transformer concept to images.
The architecture is applied over patches of images, often called visual tokens; it
can handle different data modalities, and a ViT employs several embedding and tokenization
techniques. To accurately highlight key regions in images, the
gradient-weighted class activation mapping (Grad-CAM) technique has
been used so that deep-model predictions can be interpreted in image classification,
image captioning, and several other tasks. Grad-CAM explains network decisions by using the
gradients from back-propagation as weights. Our model uses both VGG-16, a variant
of the Convolutional Neural Network (CNN), and the Swin Transformer, combined
into a hybrid model. In testing, the VGG-16
component achieved an accuracy of 0.8888, while the Vision Transformer component
achieved 0.9139. After some fine-tuning, the hybrid model
performed considerably better, with an accuracy of 0.988.
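As a concrete illustration of the Grad-CAM mechanism summarized above (back-propagated gradients used as weights over the last convolutional feature maps), the following is a minimal NumPy sketch. It uses a hypothetical toy network with global average pooling followed by a linear classifier, so the gradients have a closed form; the feature maps, class weights, and shapes are invented for illustration and are not the thesis model or its code.

```python
import numpy as np

# Toy setup (hypothetical): K feature maps A_k from the last conv layer,
# global average pooling, then a linear classifier score
#   y_c = sum_k w_k * mean(A_k).
rng = np.random.default_rng(0)
K, H, W = 4, 8, 8
A = rng.random((K, H, W))            # activations of the last conv layer
w = np.array([1.0, -0.5, 0.8, 0.1])  # class weights (invented for the demo)

# For this architecture, dy_c/dA_k[i, j] = w_k / (H*W). Grad-CAM's channel
# weights alpha_k are the global-average-pooled gradients:
Z = H * W
grads = np.stack([np.full((H, W), w_k / Z) for w_k in w])  # dy_c / dA
alpha = grads.mean(axis=(1, 2))      # alpha_k = (1/Z) * sum_ij dy_c/dA_k[ij]

# Weighted combination of the feature maps, followed by ReLU, is the heatmap.
heatmap = np.maximum(0.0, np.einsum("k,khw->hw", alpha, A))
heatmap /= heatmap.max()             # normalize to [0, 1] for visualization
print(heatmap.shape)  # (8, 8)
```

In a real network the gradients would come from automatic differentiation rather than a closed form, and the resulting low-resolution heatmap would be upsampled and overlaid on the input OCT image to show which regions drove the prediction.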