Sound classification using deep learning for hard-of-hearing and deaf people
Abstract
Our paper focuses on developing an audio classification system for people who
cannot hear properly, using a Convolutional Neural Network (CNN) and a Recurrent
Neural Network (RNN). One of the most prevalent complaints from hearing aid
users is excessive background noise. Hearing aids with background noise classification
algorithms can adapt their response to the noisy environment. Speech,
azan, and ambient noises are all examples of significant audio signals. When
a human hears a sound, they can easily identify it; computers, however, cannot, and an
algorithm must be trained on data sets before it can distinguish between different
sounds [1]. Hence, we propose a system for people with hearing impairments. After
training and testing, our CNN and RNN models achieved accuracies of 98.67% and
97.01%, respectively.
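To make the pipeline concrete, the sketch below shows the typical first step of such a system: converting a raw waveform into a 2-D time-frequency representation (a magnitude spectrogram) that a CNN can classify like an image. This is an illustrative sketch only, not the paper's implementation; the frame length, hop size, and 16 kHz sample rate are assumed parameters, and the input here is a synthetic tone standing in for a real recording.

```python
import numpy as np

def stft_magnitude(signal, frame_len=400, hop=160):
    """Magnitude spectrogram via a simple short-time Fourier transform.

    frame_len and hop are hypothetical values (25 ms / 10 ms at 16 kHz).
    Returns an array of shape (n_frames, frame_len // 2 + 1).
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    return np.abs(np.fft.rfft(frames, axis=1))

# Synthetic 1-second 440 Hz tone standing in for a recorded sound clip.
sr = 16000
t = np.arange(sr) / sr
waveform = np.sin(2 * np.pi * 440.0 * t)

spec = stft_magnitude(waveform)
print(spec.shape)  # a 2-D "image" suitable as CNN input
```

In a full system, such spectrograms (or mel/MFCC variants of them) would be batched and fed to the CNN, while the RNN would instead consume the frames as a time-ordered sequence.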