
dc.contributor.advisor: Uddin, Dr. Jia
dc.contributor.author: Sayeed, M. M. Mahmud
dc.contributor.author: Hossain, Anisha Anjum
dc.contributor.author: Priya, Samrin
dc.date.accessioned: 2018-01-10T09:26:45Z
dc.date.available: 2018-01-10T09:26:45Z
dc.date.copyright: 2017
dc.date.issued: 2017-08-22
dc.identifier.other: ID 11201045
dc.identifier.other: ID 13201067
dc.identifier.other: ID 13301101
dc.identifier.uri: http://hdl.handle.net/10361/9012
dc.description: Cataloged from PDF version of thesis report.
dc.description: Includes bibliographical references.
dc.description: This thesis report is submitted in partial fulfilment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2017.
dc.description.abstract: Sign language is a medium of communication for individuals with hearing and speaking impairments, who are commonly referred to as deaf and mute. For better, more effective, and simpler communication between the vocal and non-vocal communities, it is important that each side can understand the other's language without difficulty. Previous research has demonstrated several efficient methods of interaction between these two groups; on the down side, however, they tend to focus on one-sided conversation only. Our work focuses on two things: converting American Sign Language (ASL) to text word by word, and converting audio to gestures and text. We used the microphone of a Microsoft Kinect sensor and a camera, together with a common recognition algorithm, to detect and recognize sign language and interpret interactive hand shapes as ASL text. We set out to build an efficient system that can serve as an interpreter between hearing-impaired and hearing people. Beyond interpretation, this research may also open doors to numerous other applications, such as sign language tutorials, in the future.
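The abstract describes a two-way pipeline: hand shapes captured by the Kinect camera are recognized and rendered as ASL text, while speech captured by the Kinect microphone is converted to text and corresponding gestures. The following Python sketch is purely illustrative and is not the thesis implementation; every name in it (HandFrame, classify_sign, speech_to_gloss) is a hypothetical placeholder standing in for the trained recognizer and speech front end the abstract refers to.

```python
# Illustrative sketch of a two-way sign-language interpreter loop.
# Nothing here is the thesis code: the classifier and the speech step
# are hypothetical stand-ins for the Kinect-based models described above.

from dataclasses import dataclass
from typing import List


@dataclass
class HandFrame:
    """One hand-shape observation, e.g. joint features from a depth camera."""
    features: List[float]


def classify_sign(frame: HandFrame) -> str:
    """Map one hand-shape observation to an ASL word (placeholder rule)."""
    # A real system would run a trained recognizer here; this stub just
    # thresholds the first feature so the example runs end to end.
    return "HELLO" if frame.features and frame.features[0] > 0.5 else "THANKS"


def speech_to_gloss(utterance: str) -> List[str]:
    """Convert recognized speech to a sequence of ASL glosses to display."""
    # Placeholder: uppercase word by word; a real pipeline would combine a
    # speech recognizer (e.g. the Kinect microphone array) with a lookup
    # from English words to sign animations.
    return [word.upper() for word in utterance.split()]


if __name__ == "__main__":
    # Direction 1: sign -> text
    frames = [HandFrame([0.9]), HandFrame([0.1])]
    print("Signer says:", " ".join(classify_sign(f) for f in frames))

    # Direction 2: speech -> gesture/text
    print("Display glosses:", speech_to_gloss("how are you"))
```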
dc.description.statementofresponsibility: 11201045
dc.description.statementofresponsibility: 13201067
dc.description.statementofresponsibility: 13301101
dc.language.iso: en
dc.publisher: BRAC University
dc.rights: BRAC University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject: Hearing disabilities
dc.subject: Speaking disabilities
dc.subject: Interpretation model
dc.title: An efficient interpretation model for people with hearing and speaking disabilities
dc.type: Thesis
dc.contributor.department: Department of Computer Science and Engineering, BRAC University
dc.description.degree: B.Sc. in Computer Science and Engineering

