dc.contributor.advisor | Arif, Hossain | |
dc.contributor.advisor | Islam, Md. Saiful | |
dc.contributor.author | Rumi, Roisul Islam | |
dc.contributor.author | Hossain, Syed Moazzim | |
dc.contributor.author | Shahriar, Ahmed | |
dc.contributor.author | Islam, Ekhwan | |
dc.date.accessioned | 2019-07-01T06:43:01Z | |
dc.date.available | 2019-07-01T06:43:01Z | |
dc.date.copyright | 2019 | |
dc.date.issued | 2019-04 | |
dc.identifier.other | ID 15301033 | |
dc.identifier.other | ID 15301092 | |
dc.identifier.other | ID 15301119 | |
dc.identifier.other | ID 15301132 | |
dc.identifier.uri | http://hdl.handle.net/10361/12282 | |
dc.description | This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2019. | en_US |
dc.description | Cataloged from PDF version of thesis. | |
dc.description | Includes bibliographical references (pages 27-29). | |
dc.description.abstract | The number of deaf and mute individuals worldwide is rising steadily. Bangladesh alone has around 2.6 million people who cannot communicate with society through spoken language, and such individuals are often harshly ostracized there. Building a system that lets them communicate with anyone, regardless of whether the other person knows sign language, is therefore worth pursuing. Our system uses convolutional neural networks (CNNs) to learn from the images in our dataset and detect hand signs in input images. We trained our system with the Inception-v3 and VGG16 image recognition models, both with and without ImageNet weights. Because initial accuracy was poor, we set a checkpoint to save the best weights obtained while running the model, which resulted in improved accuracy. Inputs are taken from a live video feed, and frames are extracted for recognition. The system then separates the hand sign from the image and passes it to the model, which predicts a Bangla alphabet character as the result. After running the model on our dataset and testing it, we achieved an average accuracy of 99%. We hope to improve the system further so that communication between deaf/mute individuals and the rest of society becomes as effortless as possible. | en_US |
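The abstract's checkpointing step (keep only the weights from the epoch with the best validation accuracy) can be sketched in plain Python; all names here are illustrative, not taken from the thesis.

```python
# Minimal sketch of "save best weights via a checkpoint": after each
# training epoch, retain a copy of the weights only when validation
# accuracy improves. Frameworks such as Keras offer this behavior via
# a ModelCheckpoint callback with save_best_only=True; this sketch
# just shows the underlying selection logic.
import copy

def keep_best_weights(val_accuracies, weights_per_epoch):
    """Return (best_accuracy, best_weights) over all epochs.

    val_accuracies: one validation accuracy per epoch.
    weights_per_epoch: one weight snapshot per epoch (any copyable object).
    """
    best_acc = float("-inf")
    best_weights = None
    for acc, weights in zip(val_accuracies, weights_per_epoch):
        if acc > best_acc:  # checkpoint only on improvement
            best_acc = acc
            best_weights = copy.deepcopy(weights)
    return best_acc, best_weights

# Accuracy dips after epoch 3, so the epoch-3 snapshot is the one kept.
acc, weights = keep_best_weights([0.90, 0.95, 0.99, 0.97],
                                 ["w1", "w2", "w3", "w4"])
```

This guards against returning the final-epoch weights when training accuracy has already started to degrade, which matches the improvement the abstract reports.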
dc.description.statementofresponsibility | Roisul Islam Rumi | |
dc.description.statementofresponsibility | Syed Moazzim Hossain | |
dc.description.statementofresponsibility | Ahmed Shahriar | |
dc.description.statementofresponsibility | Ekhwan Islam | |
dc.format.extent | 29 pages | |
dc.language.iso | en | en_US |
dc.publisher | BRAC University | en_US |
dc.rights | Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. | |
dc.subject | Bangla Sign Language (BSL) | en_US |
dc.subject | CNN | en_US |
dc.subject | Deep learning | en_US |
dc.subject | Artificial intelligence | en_US |
dc.subject | Image processing | en_US |
dc.subject.lcsh | Image processing. | |
dc.subject.lcsh | Artificial intelligence. | |
dc.title | Bengali hand sign language recognition using convolutional neural networks | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | Department of Computer Science and Engineering, Brac University | |
dc.description.degree | B. Computer Science and Engineering | |