Show simple item record

dc.contributor.advisor: Chakraborty, Amitabha
dc.contributor.author: Das, Joy Krishan
dc.contributor.author: Ghosh, Arka
dc.contributor.author: Pal, Abhijit Kumar
dc.contributor.author: Dutta, Sumit
dc.date.accessioned: 2021-05-29T10:04:59Z
dc.date.available: 2021-05-29T10:04:59Z
dc.date.copyright: 2020
dc.date.issued: 2020-04
dc.identifier.other: ID 17301218
dc.identifier.other: ID 16201007
dc.identifier.other: ID 16301148
dc.identifier.other: ID 16301104
dc.identifier.uri: http://dspace.bracu.ac.bd/xmlui/handle/10361/14444
dc.description: This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2020.
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 43-46).
dc.description.abstract: There are many sounds all around us, and our brain can easily and clearly identify them. Furthermore, our brain processes the received sound signals continuously and provides us with relevant environmental knowledge. Although not up to the brain's level of accuracy, some smart devices can extract necessary information from an audio signal with the help of different algorithms, and as the days pass, more and more research is being conducted to increase the accuracy of this information extraction. Over the years, several models such as CNN, ANN, and RCNN, along with many machine learning techniques, have been adopted to classify sound accurately, and these have shown promising results in recent years in distinguishing spectro-temporal pictures. For our research purpose, we are using seven features: Chromagram, Mel-spectrogram, Spectral contrast, Tonnetz, MFCC, Chroma CENS, and Chroma CQT. We have employed two models for the classification of audio signals, LSTM and CNN, and the dataset used for the research is UrbanSound8K. The novelty of the research lies in showing that the LSTM achieves better classification accuracy than the CNN when the MFCC feature is used. Furthermore, we have augmented the UrbanSound8K dataset to ensure that the accuracy of the LSTM is higher than that of the CNN on both the original dataset and the augmented one. Moreover, we have tested the accuracy of the models based on the features used. This has been done by using each of the features separately on each of the models, in addition to the two forms of feature stacking that we have performed. The first form of feature stacking contains the features Chromagram, Mel-spectrogram, Spectral contrast, Tonnetz, and MFCC, while the second form contains MFCC, Mel-spectrogram, Chroma CQT, and Chroma STFT.
Likewise, we have stacked features using different combinations to expand our research. In this way it was possible, with our LSTM model, to reach an accuracy of 98.80%, which is state-of-the-art performance.
dc.description.statementofresponsibility: Joy Krishan Das
dc.description.statementofresponsibility: Arka Ghosh
dc.description.statementofresponsibility: Abhijit Kumar Pal
dc.description.statementofresponsibility: Sumit Dutta
dc.format.extent: 47 pages
dc.language.iso: en
dc.publisher: Brac University
dc.rights: Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject: Sound classification
dc.subject: Spectrograms
dc.subject: Urbansound8k
dc.subject: CNN
dc.subject: LSTM
dc.subject: LibROSA
dc.subject.lcsh: Neural networks (Computer science)
dc.title: Urban sound classification using convolutional neural network and long short term memory based on multiple features
dc.type: Thesis
dc.contributor.department: Department of Computer Science and Engineering, Brac University
dc.description.degree: B. Computer Science

