Show simple item record

dc.contributor.advisor  Parvez, Mohammad Zavid
dc.contributor.advisor  Rahman, Rafeed
dc.contributor.author  Habib, Md. Adnan
dc.contributor.author  Arefeen, Zarif Raiyan
dc.contributor.author  Hussain, Arafat
dc.contributor.author  Shahriyer, S. M. Rownak
dc.contributor.author  Islam, Tanzid
dc.date.accessioned  2022-04-25T04:33:40Z
dc.date.available  2022-04-25T04:33:40Z
dc.date.copyright  2022
dc.date.issued  2022-01
dc.identifier.other  ID 18101551
dc.identifier.other  ID 18101214
dc.identifier.other  ID 18101093
dc.identifier.other  ID 18101611
dc.identifier.other  ID 18101673
dc.identifier.uri  http://hdl.handle.net/10361/16565
dc.description  This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2022.  en_US
dc.description  Cataloged from PDF version of thesis.
dc.description  Includes bibliographical references (pages 35-37).
dc.description.abstract  This paper focuses on developing an audio classification system for people who cannot hear properly, using a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). One of the most prevalent complaints from hearing aid users is excessive background noise. Hearing aids equipped with background-noise classification algorithms can adapt their response to the noisy environment. Speech, azan, and ambient noises are all examples of significant audio signals. A human can easily identify a sound on hearing it, but a computer cannot; the algorithm must be trained on datasets before it can distinguish between different sounds [1]. This motivated us to build such a system for people with hearing difficulties. After training and testing, our CNN and RNN models achieved accuracies of 98.67% and 97.01%, respectively.  en_US
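
The abstract describes a pipeline of audio feature extraction followed by CNN/RNN classification, and the subject keywords below name Mel spectrograms. The following is only a minimal illustrative sketch of such a pipeline in Python, assuming the librosa and TensorFlow/Keras libraries and hypothetical helper names (extract_mel_spectrogram, build_cnn); it is not the implementation used in the thesis.

import numpy as np
import librosa
import tensorflow as tf

def extract_mel_spectrogram(path, sr=22050, n_mels=128, duration=4.0):
    # Load a fixed-length clip and convert it to a log-scaled Mel spectrogram.
    y, _ = librosa.load(path, sr=sr, duration=duration)
    y = librosa.util.fix_length(y, size=int(sr * duration))  # pad/trim to a uniform length
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

def build_cnn(input_shape, n_classes):
    # A small CNN that treats the (n_mels x frames x 1) spectrogram as an image.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Hypothetical usage with three classes (e.g. speech, azan, ambient noise):
# spec = extract_mel_spectrogram("clip.wav")
# model = build_cnn(input_shape=spec.shape + (1,), n_classes=3)
# model.compile(optimizer="adam",
#               loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])

The same log-Mel features could instead be fed frame by frame to an RNN (e.g. an LSTM layer) rather than the CNN shown here, which matches the two model families compared in the abstract.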
dc.description.statementofresponsibility  Md. Adnan Habib
dc.description.statementofresponsibility  Zarif Raiyan Arefeen
dc.description.statementofresponsibility  Arafat Hussain
dc.description.statementofresponsibility  S. M. Rownak Shahriyer
dc.description.statementofresponsibility  Tanzid Islam
dc.format.extent  37 pages
dc.language.iso  en  en_US
dc.publisher  Brac University  en_US
dc.rights  Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject  RNN  en_US
dc.subject  CNN  en_US
dc.subject  Mel spectrogram  en_US
dc.subject  Audio feature extraction  en_US
dc.subject.lcsh  Neural networks (Computer science)
dc.title  Sound classification using deep learning for hard of hearing and deaf people  en_US
dc.type  Thesis  en_US
dc.contributor.department  Department of Computer Science and Engineering, Brac University
dc.description.degree  B. Computer Science

