dc.contributor.advisor: Mostakim, Moin
dc.contributor.author: Haque, Samiha
dc.contributor.author: Rahman, Nazibur
dc.date.accessioned: 2021-10-18T08:47:58Z
dc.date.available: 2021-10-18T08:47:58Z
dc.date.copyright: 2021
dc.date.issued: 2021-01
dc.identifier.other: ID 17101218
dc.identifier.other: ID 17101317
dc.identifier.uri: http://hdl.handle.net/10361/15364
dc.description: This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2021. [en_US]
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 60-63).
dc.description.abstract: Forests and wild vegetation are vital natural resources and play a crucial role in keeping the climate and ecosystems in balance. Forest areas around the world have declined steadily due to hurricanes, landslides and, most notably, forest fires, which have been directly linked to global warming. Recent large fires in the Brazilian Amazon, Australia, Thailand and the USA have had devastating effects, destroying millions of acres of vegetation. As the frequency and severity of such disasters continue to rise, assessing the scale of damage to forest areas has become increasingly important. For this purpose, high-resolution multispectral remote sensing data from satellites such as Sentinel-2 can be used to extract spatial information before and after a disaster, identify the damaged areas and estimate the extent of the damage. In this research, we construct a dataset from the difference image of disaster-struck vegetation areas, and a pre-trained ResNet-50 is used to produce flattened feature maps from the difference-image patches. These feature maps are then labelled with the K-Means clustering algorithm, as sketched after the record below, yielding a fully labelled dataset that is used to train two CNN models: Model-1 classifies the areas into undamaged and damaged classes with 89.77% test accuracy, while Model-2 classifies them into undamaged, mildly damaged and severely damaged classes with 85.69% test accuracy. Model-1 achieved a micro F1-score of 89.77%, a kappa score of 76.99% and an overall error of 0.10; Model-2 achieved 85.69%, 72.87% and 0.14 respectively. [en_US]
dc.description.statementofresponsibility: Samiha Haque
dc.description.statementofresponsibility: Nazibur Rahman
dc.format.extent: 63 pages
dc.language.iso: en [en_US]
dc.publisher: Brac University [en_US]
dc.rights: Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject: Machine Learning [en_US]
dc.subject: CNN [en_US]
dc.subject: Convolutional Neural Network [en_US]
dc.subject: K-Means [en_US]
dc.subject: Change Detection [en_US]
dc.subject: Multispectral Image [en_US]
dc.subject: Transfer Learning [en_US]
dc.subject: ResNet-50 [en_US]
dc.subject: Sentinel-2 [en_US]
dc.subject.lcsh: Machine Learning
dc.title: Classification of damaged vegetation areas using convolutional neural network over satellite images [en_US]
dc.type: Thesis [en_US]
dc.contributor.department: Department of Computer Science and Engineering, Brac University
dc.description.degree: B. Computer Science
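
The following is a rough, illustrative sketch of the pseudo-labelling pipeline summarised in the abstract: ResNet-50 features are extracted from difference-image patches and grouped with K-Means. The abstract does not state which framework was used, so PyTorch/torchvision and scikit-learn are assumed here, the random patches stand in for real Sentinel-2 difference-image data, and all variable and function names are illustrative rather than taken from the thesis.

import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.cluster import KMeans

# Pre-trained ResNet-50 backbone with the classification head removed,
# so each forward pass yields a flattened 2048-dimensional feature vector.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToTensor(),                        # HxWxC uint8 -> CxHxW float in [0, 1]
    T.Resize((224, 224), antialias=True),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(patches):
    """Return one ResNet-50 feature vector per difference-image patch."""
    with torch.no_grad():
        batch = torch.stack([preprocess(p) for p in patches])
        return backbone(batch).numpy()   # shape: (num_patches, 2048)

# Random patches stand in for real Sentinel-2 difference-image patches.
patches = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(16)]
features = extract_features(patches)

# K-Means assigns each patch a pseudo-label: k=2 gives an undamaged/damaged
# split (Model-1); k=3 would add a "mildly damaged" class (Model-2).
pseudo_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(pseudo_labels)

In the thesis, cluster assignments obtained this way provide the labels for training the two CNN classifiers described in the abstract (Model-1 with two classes, Model-2 with three).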

