Show simple item record

dc.contributor.advisor	Alam, Md. Golam Rabiul
dc.contributor.advisor	Ahmed, Md. Faisal
dc.contributor.author	Haque, Tashfia
dc.contributor.author	Ahmed, Farhan Fuad
dc.contributor.author	Ahmed, S. M. Irfan
dc.contributor.author	Siam, Mohammad
dc.date.accessioned	2024-04-24T06:45:25Z
dc.date.available	2024-04-24T06:45:25Z
dc.date.copyright	2023
dc.date.issued	2023-09
dc.identifier.other	ID 18201140
dc.identifier.other	ID 19101549
dc.identifier.other	ID 19101390
dc.identifier.other	ID 23141065
dc.identifier.uri	http://hdl.handle.net/10361/22667
dc.description	This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2023.	en_US
dc.description	Cataloged from PDF version of thesis.
dc.description	Includes bibliographical references (pages 46-48).
dc.description.abstract	This thesis presents a novel approach for the automatic detection, categorization, and sub-categorization of violent and non-violent behaviors in video footage, addressing the growing need for enhanced security protocols in both the public and private sectors. Surveillance cameras are widely available and affordable, yet they are often used inefficiently because of the limitations of human real-time monitoring, which can delay responses to unanticipated events and underscores the need for more efficient monitoring measures. We automate violence detection using machine learning and deep learning techniques, integrating object and motion detection through optical flow analysis and a MobileNet-Bi-LSTM fusion architecture; this goes beyond conventional methods by incorporating both spatial features and temporal dynamics. Recognizing the importance of training data for an effective violence detection system, we invested considerable effort in enhancing our dataset: in addition to an existing dataset, we systematically compiled additional video footage covering a diverse range of environments, lighting conditions, and situations, which is crucial for enabling the model to generalize to real-world scenarios. A thorough annotation procedure, with meticulous labeling of ‘violent’ and ‘non-violent’ actions along with specific sub-categories of violence such as ‘Beating,’ ‘Use of Weapons,’ and ‘Burning,’ was carried out to uphold the quality and precision of the enhanced dataset. For an in-depth review, a comparative study of two distinct methodologies was undertaken. The first approach is a binary classification of actions into ‘Non-Violence’ and ‘Violence.’ The second approach categorizes the videos of our unique ‘Beating-Burning-Weapon (BBW) Violence’ Dataset into ‘Non-Violence’ and ‘Violence,’ with the latter further subdivided into three sub-categories of violence: ‘Beating,’ ‘Burning,’ and ‘Use of Weapons.’ In our comprehensive evaluation, we tested two violence detection methods on the two datasets. The ‘Frame Selection at Equal Intervals’ method achieved the higher accuracy, 90.16% on the ‘Real Life Violence Situations (RLVS)’ Dataset and 85.32% on the BBW Violence Dataset, making it the more precise choice. The ‘Merged Frame Stacking’ method, which offers greater computational efficiency, achieved respectable accuracies of 85% and 74% on the RLVS and BBW Violence Datasets, respectively. These results provide a baseline for violence detection and highlight method-specific advantages and trade-offs. Our research holds significant potential for proactive security management by enabling prompt detection of and response to possible threats.	en_US
dc.description.statementofresponsibility	Tashfia Haque
dc.description.statementofresponsibility	Farhan Fuad Ahmed
dc.description.statementofresponsibility	S. M. Irfan Ahmed
dc.description.statementofresponsibility	Mohammad Siam
dc.format.extent	48 pages
dc.language.iso	en	en_US
dc.publisher	Brac University	en_US
dc.rights	Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject	Surveillance camera	en_US
dc.subject	Violence detection	en_US
dc.subject	Deep learning	en_US
dc.subject	Motion detection	en_US
dc.subject	Beating	en_US
dc.subject	Use of weapons	en_US
dc.subject	Burning	en_US
dc.subject	Optical flow	en_US
dc.subject	Bidirectional Long Short-Term Memory (Bi-LSTM)	en_US
dc.subject	MobileNet V2	en_US
dc.subject	Crime detection	en_US
dc.subject	Real-time monitoring	en_US
dc.subject	Proactive security management	en_US
dc.subject.lcsh	Machine learning
dc.subject.lcsh	Cognitive learning theory
dc.title	Optical flow based violence detection from video footage using hybrid MobileNet and Bi-LSTM	en_US
dc.type	Thesis	en_US
dc.contributor.department	Department of Computer Science and Engineering, Brac University
dc.description.degree	B.Sc. in Computer Science
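
The abstract in this record names two components that lend themselves to a concrete illustration: the ‘Frame Selection at Equal Intervals’ sampling strategy and the MobileNet-Bi-LSTM fusion architecture. The sketch below shows how such a pipeline could be assembled in Python with OpenCV and TensorFlow/Keras. The 16-frame clip length, the 224x224 input resolution, the frozen ImageNet-pretrained MobileNetV2 backbone, and the 64-unit Bi-LSTM are assumptions for illustration only, and the optical-flow motion stream described in the abstract is omitted for brevity; the thesis's actual hyperparameters and training setup are not given in this record.

import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES = 16          # assumed clip length; not specified in this record
FRAME_SIZE = (224, 224)  # MobileNetV2 default input resolution

def sample_frames_at_equal_intervals(video_path, num_frames=NUM_FRAMES):
    # 'Frame Selection at Equal Intervals': pick num_frames evenly spaced frames.
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), num_frames, dtype=int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.cvtColor(cv2.resize(frame, FRAME_SIZE), cv2.COLOR_BGR2RGB)
        frames.append(frame)
    cap.release()
    # Scale pixel values to [-1, 1] as MobileNetV2 expects.
    return tf.keras.applications.mobilenet_v2.preprocess_input(
        np.array(frames, dtype=np.float32))

def build_mobilenet_bilstm(num_classes):
    # Per-frame MobileNetV2 features followed by a Bi-LSTM over the frame sequence.
    backbone = tf.keras.applications.MobileNetV2(
        include_top=False, pooling="avg", weights="imagenet",
        input_shape=(*FRAME_SIZE, 3))
    backbone.trainable = False  # assumed: frozen backbone, train only the temporal head
    clip = layers.Input(shape=(NUM_FRAMES, *FRAME_SIZE, 3))
    x = layers.TimeDistributed(backbone)(clip)      # (batch, frames, 1280) feature vectors
    x = layers.Bidirectional(layers.LSTM(64))(x)    # temporal dynamics in both directions
    x = layers.Dense(64, activation="relu")(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(clip, out)

# Binary task ('Non-Violence' vs 'Violence'); num_classes=4 would correspond to the
# BBW sub-category task ('Non-Violence', 'Beating', 'Burning', 'Use of Weapons').
model = build_mobilenet_bilstm(num_classes=2)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

A clip returned by sample_frames_at_equal_intervals can be batched to shape (1, 16, 224, 224, 3) and passed to model.predict to obtain class probabilities; the same skeleton could be retrained on either dataset described above.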

