An efficient interpretation model for people with hearing and speech disabilities
Abstract
Sign language is a medium of communication for individuals with hearing and speech impairments, commonly referred to as deaf and mute. For communication between the vocal and non-vocal communities to be simpler and more effective, it is essential that this language can be understood without difficulty. Previous research has demonstrated various efficient methods of interaction between these two groups; however, such methods tend to support one-sided conversation only. This paper focuses on two important tasks: converting American Sign Language (ASL) to text word by word, and converting audio to gesture and text. We use the microphone of the Microsoft Kinect sensor and a camera, together with common recognition algorithms, to detect and recognize signing and to interpret the interactive hand shapes as ASL text. Our aim is an efficient system that can serve as an interpreter between hearing-impaired people and the hearing population. Beyond interpretation, this research may also open the door to numerous other applications, such as sign language tutorials, in the future.
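The sketch below illustrates, in Python, the two-way pipeline the abstract outlines: hand shapes recognized from camera frames are mapped to ASL words, and transcribed speech is mapped back to gesture assets plus text. The vocabulary tables, the recognize_sign stub, and the asset paths are illustrative assumptions, not the authors' implementation; a real system would feed Kinect depth/RGB frames and microphone audio into trained recognizers.

```python
# Minimal sketch of the two-way interpretation pipeline (assumed design).
# All labels, tables, and paths below are hypothetical placeholders.

# Hypothetical vocabulary mapping recognized hand-shape labels to ASL words.
SIGN_TO_WORD = {
    "open_palm_wave": "hello",
    "flat_hand_chin_out": "thank-you",
    "fist_thumb_up": "good",
}

# Reverse table: spoken words to a gesture animation asset (assumed paths).
WORD_TO_GESTURE = {word: f"gestures/{word}.gif" for word in SIGN_TO_WORD.values()}


def recognize_sign(frame: str) -> str:
    """Stub for the camera-based hand-shape recognizer (assumption).

    A real recognizer would classify Kinect frames; here each 'frame'
    is already a hand-shape label so the sketch stays runnable.
    """
    return frame


def sign_to_text(frames: list[str]) -> str:
    """Translate a sequence of recognized hand shapes to ASL text, word by word."""
    words = [SIGN_TO_WORD.get(recognize_sign(f), "<unknown>") for f in frames]
    return " ".join(words)


def speech_to_gesture_and_text(utterance: str) -> list[tuple[str, str]]:
    """Map a transcribed utterance to (word, gesture asset) pairs for display."""
    return [(w, WORD_TO_GESTURE.get(w, "gestures/fingerspell.gif"))
            for w in utterance.lower().split()]


if __name__ == "__main__":
    print(sign_to_text(["open_palm_wave", "fist_thumb_up"]))  # hello good
    print(speech_to_gesture_and_text("hello thank-you"))
```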