dc.contributor.advisor | Parvez, Mohammad Zavid | |
dc.contributor.advisor | Rahman, Rafeed | |
dc.contributor.author | Habib, Md.Adnan | |
dc.contributor.author | Arefeen, Zarif Raiyan | |
dc.contributor.author | Hussain, Arafat | |
dc.contributor.author | Shahriyer, S.M.Rownak | |
dc.contributor.author | Islam, Tanzid | |
dc.date.accessioned | 2022-04-25T04:33:40Z | |
dc.date.available | 2022-04-25T04:33:40Z | |
dc.date.copyright | 2022 | |
dc.date.issued | 2022-01 | |
dc.identifier.other | ID 18101551 | |
dc.identifier.other | ID 18101214 | |
dc.identifier.other | ID 18101093 | |
dc.identifier.other | ID 18101611 | |
dc.identifier.other | ID 18101673 | |
dc.identifier.uri | http://hdl.handle.net/10361/16565 | |
dc.description | This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2022. | en_US |
dc.description | Cataloged from PDF version of thesis. | |
dc.description | Includes bibliographical references (pages 35-37). | |
dc.description.abstract | Our paper focuses on developing an audio classification system for people who
cannot hear properly, using a Convolutional Neural Network (CNN) and a Recurrent
Neural Network (RNN). One of the most prevalent complaints from hearing aid
users is excessive background noise. Hearing aids with background noise classification
algorithms can modify their response based on the noisy environment. Speech,
azan, and ambient noises are all examples of significant audio signals. Whenever
humans hear a sound, they can easily identify it; however, the same is not true for
computers, which must be fed data sets so that the algorithm can learn to
distinguish between different sounds [1]. Hence, we came up with the idea of building
such a system for people who have difficulty hearing. After training and testing on
our data, we achieved accuracies of 98.67% with the CNN model and 97.01% with the
RNN model, respectively. | en_US |
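For readers unfamiliar with the pipeline the abstract outlines, the sketch below shows how log-scaled mel spectrograms (one of the subject keywords listed for this record) can be extracted from audio clips and passed to a small CNN classifier. It is a minimal illustration assuming librosa and TensorFlow/Keras; the layer sizes, clip length, and file names are placeholders and do not reflect the architecture or data set used in the thesis.

    # Minimal sketch: mel-spectrogram feature extraction + a small CNN classifier.
    # Assumes librosa and TensorFlow/Keras; illustrative only, not the authors' exact pipeline.
    import numpy as np
    import librosa
    import tensorflow as tf
    from tensorflow.keras import layers

    def extract_mel_spectrogram(path, sr=22050, n_mels=128, duration=4.0):
        # Load a fixed-length clip and convert it to a log-scaled mel spectrogram.
        y, sr = librosa.load(path, sr=sr, duration=duration)
        y = librosa.util.fix_length(y, size=int(sr * duration))  # pad/trim to a uniform length
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        return librosa.power_to_db(mel, ref=np.max)              # shape: (n_mels, time)

    def build_cnn(input_shape, n_classes):
        # Small CNN over the 2-D spectrogram "image"; the architecture is a placeholder.
        return tf.keras.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(16, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(n_classes, activation="softmax"),
        ])

    # Hypothetical usage (file names and class count are examples only):
    # X = np.stack([extract_mel_spectrogram(f) for f in ["speech.wav", "azan.wav"]])[..., None]
    # model = build_cnn(X.shape[1:], n_classes=3)
    # model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])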
dc.description.statementofresponsibility | Md.Adnan Habib | |
dc.description.statementofresponsibility | Zarif Raiyan Arefeen | |
dc.description.statementofresponsibility | Arafat Hussain | |
dc.description.statementofresponsibility | S.M.Rownak Shahriyer | |
dc.description.statementofresponsibility | Tanzid Islam | |
dc.format.extent | 37 pages | |
dc.language.iso | en | en_US |
dc.publisher | Brac University | en_US |
dc.rights | Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. | |
dc.subject | RNN | en_US |
dc.subject | CNN | en_US |
dc.subject | Mel spectrogram | en_US |
dc.subject | Audio feature extraction | en_US |
dc.subject.lcsh | Neural networks (Computer science) | |
dc.title | Sound classification using deep learning for hard of hearing and deaf people | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | Department of Computer Science and Engineering, Brac University | |
dc.description.degree | B. Computer Science | |