A transfer learning approach to identifying correctly worn face masks
Abstract
Climate change has made outbreaks of disease, especially airborne and contagious diseases, more frequent and severe. Wearing masks has therefore become essential for protecting ourselves not only from common airborne illnesses such as the cold and flu but also from dangerous diseases such as COVID-19. Wearing face masks properly would significantly reduce the spread of most of these diseases. In this research, we propose a method to determine from image data whether people are wearing face masks properly. The dataset we collected contains roughly 6,000 images of people wearing masks both correctly and incorrectly; roughly half were collected locally in Bangladesh and the rest from other countries around the globe. The goal is to detect properly masked faces in images using different deep neural network architectures. In our analysis, we implemented three models, ResNet50, Inception v3, and MobileNet v2, and compared the results obtained when an SVM is used as the classifier against those obtained when softmax is used in its place. ResNet50 with an SVM classifier produced the best results, reaching an accuracy of 97.41% and standing out as the most reliable of the models evaluated. This research aims to provide a better understanding of these models and their architectures, along with statistical results on our dataset when they are used to distinguish between images of properly masked and improperly masked persons.
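To illustrate the approach described above, the sketch below shows one common way to combine a pretrained backbone with an SVM classifier: the convolutional network is used as a frozen feature extractor and the SVM is trained on the pooled features. This is a minimal sketch assuming a Keras ResNet50 backbone and scikit-learn, not the authors' exact pipeline; the input shapes, labels, and kernel choice are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact pipeline): pretrained ResNet50 as a
# frozen feature extractor, SVM as the classifier on top of the features.
# Assumes images are float32 arrays of shape (N, 224, 224, 3) and labels y
# use 0 = properly masked, 1 = improperly masked (illustrative convention).
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(images):
    """Pass images through ResNet50 (ImageNet weights, no top) and return
    globally average-pooled feature vectors."""
    backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

# Placeholder data standing in for the real dataset of mask images.
X = np.random.rand(16, 224, 224, 3).astype("float32")
y = np.random.randint(0, 2, size=16)

features = extract_features(X)
X_train, X_test, y_train, y_test = train_test_split(
    features, y, test_size=0.2, random_state=42)

clf = SVC(kernel="rbf")  # SVM classifier in place of the softmax head
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Swapping the SVM for a softmax head would instead mean appending a dense classification layer to the backbone and fine-tuning end to end; the comparison in the paper contrasts these two choices of classifier.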