Show simple item record

dc.contributor.advisor: Uddin, Dr. Jia
dc.contributor.author: Sayeed, M. M. Mahmud
dc.contributor.author: Hossain, Anisha Anjum
dc.contributor.author: Priya, Samrin
dc.date.accessioned: 2017-12-26T05:36:54Z
dc.date.available: 2017-12-26T05:36:54Z
dc.date.copyright: 2017
dc.date.issued: 2017-08-22
dc.identifier.other: ID 11201045
dc.identifier.other: ID 13201067
dc.identifier.other: ID 13301101
dc.identifier.uri: http://hdl.handle.net/10361/8698
dc.description: Cataloged from PDF version of thesis report.
dc.description: Includes bibliographical references (pages 29-30).
dc.description: This thesis report is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2017. (en_US)
dc.description.abstract: Sign language is a medium of communication for individuals with hearing and speech impairments, commonly referred to as deaf and mute. For better, more effective, and simpler communication between the vocal and non-vocal communities, it is important that their language can be understood without difficulty. Previous research has demonstrated various efficient methods of interaction between these two groups of people; on the down side, however, such methods tend to focus on one-sided conversation only. Our paper focuses on two important tasks: converting American Sign Language (ASL) to text word by word, and converting audio to gesture and text. We used the microphone and camera of the Microsoft Kinect sensor, along with common recognition algorithms, to detect and recognize sign language and interpret the interactive hand shapes as American Sign Language text. We aimed to build an efficient system that can serve as an interpreter between hearing-impaired people and hearing people. Beyond serving as an interpreter, this research may also open doors to numerous other applications, such as sign language tutorials, in the future. (en_US)
dc.description.statementofresponsibility: M. M. Mahmud Sayeed
dc.description.statementofresponsibility: Anisha Anjum Hossain
dc.description.statementofresponsibility: Samrin Priya
dc.format.extent: 31 pages
dc.language.iso: en (en_US)
dc.publisher: BRAC University (en_US)
dc.rights: BRAC University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject: American sign language (en_US)
dc.subject: Sign language (en_US)
dc.subject: Gesture and text conversion (en_US)
dc.subject: Gesture and image conversion (en_US)
dc.subject: Microsoft Kinect sensor (en_US)
dc.title: An efficient interpretation model for people with hearing and speaking disabilities (en_US)
dc.type: Thesis (en_US)
dc.contributor.department: Department of Computer Science and Engineering, BRAC University
dc.description.degree: B.Sc. in Computer Science and Engineering

