Silent voice: harnessing deep learning for lip-reading in Bangla
Abstract
Understanding speech solely from lip movements is known as lip-reading. It is a
crucial component of interpersonal communication. Most previous work has
addressed lip-reading in English. Our goal, however, is to
build a deep neural network for the Bangla language that can produce comprehensible
speech from silent video by capturing only the speaker’s lip movements.
Although this topic has been studied in various languages, Bangla
currently lacks both a dedicated study and a suitable corpus for such research. Hence, we
created a dataset of 4,000 videos covering 20 selected Bangla words, each
pronounced by 65 different speakers. We then implemented models based on a
CNN-RNN architecture: two models, LipNet and an autoencoder-decoder, drawn
from previous research, and two custom models developed as part of our
own experiments. LipNet exhibits a reasonable level of performance with
an accuracy of 62%, while the autoencoder-decoder performs poorly with an accuracy
of 49.65%. Custom Model-1 shows a substantial rise in accuracy with 70.86%,
and the custom Conv-LSTM exhibits the best overall performance with a maximum
accuracy of 76.24%.
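The exact layer configurations of the custom models are detailed later in the paper; as a rough illustration of the Conv-LSTM approach named above, the sketch below builds a word-level classifier over silent lip-movement video in Keras. The input shape (75 frames of 50x100 grayscale mouth crops), layer widths, and optimizer settings are illustrative assumptions, not the reported architecture.

```python
# Minimal sketch of a Conv-LSTM word classifier for lip-movement video.
# Frame count, crop size, and layer widths are assumed for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_WORDS = 20  # the dataset covers 20 Bangla words

model = models.Sequential([
    layers.Input(shape=(75, 50, 100, 1)),           # (frames, height, width, channels)
    layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                      return_sequences=True),        # spatiotemporal features per frame
    layers.BatchNormalization(),
    layers.ConvLSTM2D(64, kernel_size=3, padding="same",
                      return_sequences=False),       # collapse the temporal dimension
    layers.BatchNormalization(),
    layers.GlobalAveragePooling2D(),                  # pool remaining spatial map
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_WORDS, activation="softmax"),   # one class per Bangla word
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Such a model would be trained on fixed-length clips of cropped mouth regions, with each clip labeled by the spoken word.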