Show simple item record

dc.contributor.advisor	Arif, Hossain
dc.contributor.advisor	Islam, Md. Saiful
dc.contributor.author	Rumi, Roisul Islam
dc.contributor.author	Hossain, Syed Moazzim
dc.contributor.author	Shahriar, Ahmed
dc.contributor.author	Islam, Ekhwan
dc.date.accessioned	2019-07-01T06:43:01Z
dc.date.available	2019-07-01T06:43:01Z
dc.date.copyright	2019
dc.date.issued	2019-04
dc.identifier.other	ID 15301033
dc.identifier.other	ID 15301092
dc.identifier.other	ID 15301119
dc.identifier.other	ID 15301132
dc.identifier.uri	http://hdl.handle.net/10361/12282
dc.description	This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2019.	en_US
dc.description	Cataloged from PDF version of thesis.
dc.description	Includes bibliographical references (pages 27-29).
dc.description.abstract	The number of deaf and mute people worldwide is rising steadily. Bangladesh alone has around 2.6 million individuals who cannot communicate with society through spoken language. Countries such as Bangladesh tend to ostracize these individuals harshly, so building a system that lets them communicate with anyone, regardless of whether the other person knows sign language, is worth pursuing. Our system uses convolutional neural networks (CNNs) to learn from the images in our dataset and detect hand signs in input images. We used Inception v3 and VGG16 as image recognition models and trained our system both with and without ImageNet weights. Because the initial accuracy was poor, we set a checkpoint to save the best weights while running the model, which improved accuracy. Inputs are taken from a live video feed, and frames are extracted for recognition. The system then separates the hand sign from the image, and the model predicts the corresponding Bangla alphabet character. After training and testing the model on our dataset, we obtained an average accuracy of 99%. We hope to improve it further to make communication between deaf/mute individuals and the rest of society as effortless as possible.	en_US
dc.description.statementofresponsibility	Roisul Islam Rumi
dc.description.statementofresponsibility	Syed Moazzim Hossain
dc.description.statementofresponsibility	Ahmed Shahriar
dc.description.statementofresponsibility	Ekhwan Islam
dc.format.extent	29 pages
dc.language.iso	en	en_US
dc.publisher	BRAC University	en_US
dc.rights	Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject	Bangla Sign Language (BSL)	en_US
dc.subject	CNN	en_US
dc.subject	Deep learning	en_US
dc.subject	Artificial intelligence	en_US
dc.subject	Image processing	en_US
dc.subject.lcsh	Image processing.
dc.subject.lcsh	Artificial intelligence.
dc.title	Bengali hand sign language recognition using convolutional neural networks	en_US
dc.type	Thesis	en_US
dc.contributor.department	Department of Computer Science and Engineering, Brac University
dc.description.degree	B. Computer Science and Engineering
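The training setup described in the abstract (a pretrained backbone such as VGG16, optional ImageNet weights, and a checkpoint that keeps only the best weights) could be sketched in Keras roughly as follows. This is a minimal illustration, not the authors' code: the class count, head layers, and file names here are assumptions.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import ModelCheckpoint

NUM_CLASSES = 38  # assumed number of Bangla sign classes (not from the thesis)

# The thesis trained both with and without ImageNet weights; pass
# weights="imagenet" to download pretrained features, or None to train from scratch.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional feature extractor

# Small classification head on top of the backbone (sizes are assumptions).
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Checkpoint that saves only the best-performing weights, mirroring the
# strategy the abstract describes for recovering from poor initial accuracy.
checkpoint = ModelCheckpoint("best_weights.h5",
                             monitor="val_accuracy",
                             save_best_only=True)
# model.fit(train_ds, validation_data=val_ds, epochs=20, callbacks=[checkpoint])
```

With this callback, each epoch's validation accuracy is compared against the best seen so far, and the weights file is overwritten only on improvement.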

