Show simple item record

dc.contributor.advisor	Uddin, Jia
dc.contributor.advisor	Reza, Md. Tanzim
dc.contributor.author	Bhowmik, Durjoy
dc.contributor.author	Abdullah, Mohd.Rahat Bin
dc.contributor.author	Islam, Mohammed Tanvirul
dc.date.accessioned	2022-03-03T03:53:57Z
dc.date.available	2022-03-03T03:53:57Z
dc.date.copyright	2022
dc.date.issued	2022-01
dc.identifier.other	ID 17301153
dc.identifier.other	ID 17301215
dc.identifier.other	ID 17301056
dc.identifier.uri	http://hdl.handle.net/10361/16380
dc.description	This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2022.	en_US
dc.description	Cataloged from PDF version of thesis.
dc.description	Includes bibliographical references (pages 39-41).
dc.description.abstract	The world stood still during the massive worldwide outbreak of COVID-19, a contagious disease that spreads through the air, as do many other infectious diseases such as the common cold and the flu. Face masks have therefore become an essential part of daily life: wearing a mask properly protects us from infection and helps stop these diseases from spreading. In this research we propose a way to detect whether a person is wearing a face mask properly. For this we use image data from a dataset we built ourselves, consisting of 145,537 images divided into three classes: with mask, without mask, and misplaced mask. Of these, 145,537 images are from the Asian region and the rest are from other countries. The main idea is to detect masked faces using deep learning architectures: we implemented DenseNet169 and VGG19, trained the models, and tested them on both images and videos. DenseNet169 achieved an accuracy of 91.47% on color images and 88.83% on grayscale images, while VGG19 achieved 88.52% on color images and 92.4% on grayscale images, making VGG19 the more reliable of the two on grayscale data. On video, DenseNet169 achieved an accuracy of 75.36%, while VGG19 achieved 92.30% on grayscale video. We provide a brief description of these architectures along with the statistical results obtained on our dataset, with a view to identifying whether a person is wearing a mask properly, wearing a mask improperly, or not wearing a mask at all.	en_US
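The abstract describes a transfer-learning setup in which a pretrained DenseNet169 (or VGG19) backbone is fitted with a three-class softmax head for with-mask / without-mask / misplaced-mask classification. The following is a minimal sketch of that kind of pipeline, assuming a TensorFlow/Keras environment; the directory name "dataset/", the image size, and the training hyperparameters are illustrative assumptions, not values taken from the thesis.

```python
# Illustrative transfer-learning sketch (not the thesis code):
# DenseNet169 backbone with a 3-class softmax head for
# with_mask / without_mask / misplaced_mask classification.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (224, 224)   # assumed input resolution
NUM_CLASSES = 3         # with mask, without mask, misplaced mask

# Assumed directory layout: dataset/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", validation_split=0.2, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", validation_split=0.2, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=32)

# Pretrained ImageNet backbone, frozen for feature extraction.
base = tf.keras.applications.DenseNet169(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.densenet.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Swapping the backbone for VGG19 (tf.keras.applications.VGG19 with tf.keras.applications.vgg19.preprocess_input) follows the same pattern; the grayscale experiments mentioned in the abstract would additionally convert frames to a single channel before feeding the network.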
dc.description.statementofresponsibility	Durjoy Bhowmik
dc.description.statementofresponsibility	Mohd.Rahat Bin Abdullah
dc.description.statementofresponsibility	Mohammed Tanvirul Islam
dc.format.extent	41 pages
dc.language.iso	en	en_US
dc.publisher	Brac University	en_US
dc.rights	Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject	Covid-19	en_US
dc.subject	Transfer learning	en_US
dc.subject	CNN	en_US
dc.subject	Densenet169	en_US
dc.subject	VGG19	en_US
dc.subject	Face mask	en_US
dc.subject	Video detection	en_US
dc.subject	Softmax	en_US
dc.subject.lcsh	Machine learning
dc.subject.lcsh	Image processing -- Digital techniques.
dc.title	A deep face-mask detection model using DenseNet169 and image processing techniques	en_US
dc.type	Thesis	en_US
dc.contributor.department	Department of Computer Science and Engineering, Brac University
dc.description.degree	B. Computer Science
