Show simple item record

dc.contributor.advisorAlam, Md. Golam Rabiul
dc.contributor.authorIslam, Md. Nazmul
dc.date.accessioned2023-10-09T04:21:55Z
dc.date.available2023-10-09T04:21:55Z
dc.date.copyright2021
dc.date.issued2021-12
dc.identifier.otherID 19166020
dc.identifier.urihttp://hdl.handle.net/10361/21716
dc.descriptionThis project report is submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science and Engineering, 2021.en_US
dc.descriptionCataloged from PDF version of the thesis.
dc.descriptionIncludes bibliographical references (pages 62-69).
dc.description.abstractRadiology imaging, such as magnetic resonance imaging (MRI), computed tomography (CT), X-ray imaging, and ultrasound imaging (US), is used to diagnose a variety of disorders, including brain disease, whole-body conditions, kidney disease, COVID-19, dental problems, and many more. Among these, the coronavirus epidemic, a lung disease, has spread to virtually every country on the globe, inflicting enormous health, financial, and emotional devastation, as well as the collapse of healthcare systems in some countries. Any automated system that allows fast detection of COVID-19 infection could be highly beneficial to healthcare systems and people around the world. Molecular or antigen testing, along with chest X-ray imaging and CT radiographs, is currently used in clinics to diagnose COVID-19. Nonetheless, owing to the surge in coronavirus cases and the overwhelming workload of doctors, developing a high-accuracy AI-based automatic COVID-19 detection system has become imperative. Because the visual markers of COVID-19, lung opacity, and viral pneumonia are subtle and the three diseases look quite similar on X-ray images, diagnosis can be challenging. Past studies on automatic lung disease detection were mostly performed on small datasets, and the majority did not reveal the black box of their models. Furthermore, despite the large number of infected people around the world, the COVID-19 datasets needed to build an AI system are limited and dispersed. Moreover, renal failure, a public health concern, and the scarcity of nephrologists around the globe have necessitated the development of an AI-based system to auto-diagnose kidney diseases. For this thesis, we chose two imaging modalities to study: X-ray and CT images. We built and assessed models for lung illnesses (COVID-19, lung opacity, and viral pneumonia) and, using CT images, constructed models to automate kidney disease diagnosis (kidney tumor, cyst, and stone). This research used artificial intelligence (AI) to deliver high-accuracy automated detection of COVID-19 versus normal chest X-ray images, and the study was then extended to differentiate COVID-19 from normal, lung opacity, and viral pneumonia X-ray images. We constructed six models and trained them with X-ray images: three classify normal versus COVID-19 (binary classification), and the other three classify normal, COVID-19, lung opacity, and viral pneumonia images (four-class classification). All models were trained and validated using a transfer learning approach and then tested on unseen X-ray images. Each binary-class model was trained at a different input image resolution, and it was found that higher input resolution contributed to better model performance and accuracy. For the two-class classification models, the best accuracy, precision, and recall were 97.5%, 99.5%, and 99.5%, respectively. Such high accuracy can significantly assist in reducing global suffering from COVID-19. Moreover, the three models' decisions are verified and compared by visualizing all internal layers, including the final layer's heatmap, using Grad-CAM. The best accuracy found for the multiclass models is 93% when testing on unseen data.
Each of the three multiclass models is explained using explainable AI to uncover the black box of the models. In this thesis, we also present a chest CT scan dataset of COVID-19 and healthy patients covering a varying range of COVID-19 severity, which we published on Kaggle and which can help other researchers contribute to healthcare AI. We also developed three deep learning approaches for detecting COVID-19 quickly and cheaply from chest CT radiographs. Our three transfer learning-based approaches, Inception v3, ResNet50, and VGG16, achieve accuracies of 99.8%, 91.3%, and 99.3%, respectively, on unseen data. We delve deeper into the black boxes of those models to demonstrate how each model reaches its conclusion, and we found that, despite the lower accuracy of the VGG16-based model, it localizes the COVID-19 spots in images well, which we believe may further assist doctors in visualizing which regions are affected. This research also addresses the three major renal disease categories, kidney stones, cysts, and tumors, for which we gathered and annotated a total of 12,446 whole-abdomen and urogram CT images in order to construct an AI-based kidney disease diagnostic system and broaden the AI community's research scope. The collected images were subjected to exploratory data analysis, which revealed that the images from all of the classes had a similar mean color distribution. Furthermore, six machine learning models were built: three based on state-of-the-art vision transformer variants (EANet, CCT, and Swin Transformer), and three based on the well-known deep learning models ResNet50, VGG16, and Inception v3, fine-tuned in their final layers. While the VGG16 and CCT models performed admirably, the Swin Transformer outperformed them all with an accuracy of 99.30 percent. A comparison of F1 score, precision, and recall shows that the Swin Transformer outperforms all other models and is also the quickest to train. The study also opened the black box of the VGG16, ResNet50, and Inception models, demonstrating that VGG16 is superior to ResNet50 and Inception v3 at highlighting the relevant anatomical abnormalities. We believe that the high accuracy of the Swin Transformer-based model and the localization ability of the VGG16-based model can both be useful in diagnosing kidney tumors, cysts, and stones.en_US
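The transfer-learning and Grad-CAM workflow summarized in the abstract can be illustrated with a minimal sketch. The snippet below is not the thesis code; it assumes a Keras VGG16 backbone with ImageNet weights, a four-class head for the X-ray task, an input size of 224x224, and the block5_conv3 layer as the Grad-CAM target, all of which are illustrative choices.

# Illustrative sketch only (not the thesis code): transfer learning with a
# frozen VGG16 backbone and a Grad-CAM heatmap from its last conv layer.
# Class count, image size, head layers, and layer name are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 4          # normal, COVID-19, lung opacity, viral pneumonia (assumed order)
IMG_SIZE = (224, 224)

# Transfer learning: freeze the pretrained convolutional base, train a new head.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(base.input, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown here

def grad_cam(img_batch, conv_layer_name="block5_conv3"):
    """Grad-CAM heatmap for the predicted class of a (1, H, W, 3) image batch."""
    # Map the input image to both the chosen conv activations and the class scores.
    grad_model = models.Model(
        model.input,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_batch)
        class_idx = int(tf.argmax(preds[0]))
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)          # d(score)/d(activations)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))       # per-channel importance
    heatmap = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    heatmap = tf.nn.relu(heatmap)                         # keep positive evidence only
    return (heatmap / (tf.reduce_max(heatmap) + 1e-8)).numpy()

In practice the returned heatmap is resized to the input image and overlaid on the X-ray or CT slice to show which regions drove the prediction; the same fine-tuning recipe applies when the backbone is swapped for ResNet50, Inception v3, or a vision transformer such as the Swin model mentioned in the abstract.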
dc.description.statementofresponsibilityMd. Nazmul Islam
dc.format.extent69 pages
dc.language.isoenen_US
dc.publisherBrac Universityen_US
dc.rightsBrac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subjectKidney diseaseen_US
dc.subjectVision transformeren_US
dc.subjectTransfer learningen_US
dc.subjectExplainable AIen_US
dc.subjectCT Imagingen_US
dc.subjectX-ray imagingen_US
dc.subjectLung opacityen_US
dc.subjectViral Pneumoniaen_US
dc.subjectCovid-19en_US
dc.subject.lcshArtificial intelligence
dc.subject.lcshImage processing--Digital techniques
dc.subject.lcshDiagnostic imaging--Digital techniques
dc.titleDemystify the blackbox model of automated detection of lung and kidney diseases from X-ray and CT radiographsen_US
dc.typeThesisen_US
dc.contributor.departmentDepartment of Computer Science and Engineering, Brac University
dc.description.degreeM. Computer Science and Engineering

