Show simple item record

dc.contributor.advisor  Rahman, Md. Khalilur
dc.contributor.author  Sakiba, Cyrus
dc.contributor.author  Tarannum, Syeda Maisha
dc.contributor.author  Nur, Farzana
dc.contributor.author  Arpan, Fahad Faisal
dc.contributor.author  Anzum, Ahnaf Ahmed
dc.date.accessioned  2023-12-31T04:24:57Z
dc.date.available  2023-12-31T04:24:57Z
dc.date.copyright  2023
dc.date.issued  2023-05
dc.identifier.other  ID 19101512
dc.identifier.other  ID 19101178
dc.identifier.other  ID 19101480
dc.identifier.other  ID 22341070
dc.identifier.other  ID 22341086
dc.identifier.uri  http://hdl.handle.net/10361/22038
dc.description  This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, 2023.  en_US
dc.description  Cataloged from PDF version of thesis.
dc.description  Includes bibliographical references (pages 50-51).
dc.description.abstract  The principal goal of this study is to create a real-time crime detection system that can effectively process closed-circuit television (CCTV) video feeds and evaluate them for possible criminal occurrences. The system aims to improve public safety by combining ConvLSTM's strength in modeling temporal dynamics with YOLO v7's strength in object recognition. We propose a posture and weapon recognition system that can be applied to real-time videos. The first method uses ConvLSTM to detect violent postures: the convolutional part is derived from MobileNet v2, chosen for its accuracy and efficiency owing to its lightweight architecture, and is paired with a bidirectional LSTM. The model is trained to recognize illegal behavior on annotated datasets of surveillance videos depicting different types of crime, and its output distinguishes violent from non-violent postures in real-time video. The system identifies violent postures such as kicking, collar grabbing, choking, hair pulling, punching, and slapping, and non-violent postures such as hugging, handshaking, touching shoulders, and walking. We used the real-time violence and non-violence dataset from Kaggle. The second method uses YOLO v7 to detect weapons in three categories: sticks, guns, and sharp objects. YOLO v4 was also evaluated for this task, but YOLO v7 yielded superior results and was therefore chosen for implementation. We customized the weapons dataset so that our model can accurately detect local Asian weapons such as machetes and sticks. The system's intended use is to prevent illegal acts by running two distinct machine learning models in a seamless way.  en_US
dc.description.statementofresponsibility  Cyrus Sakiba
dc.description.statementofresponsibility  Syeda Maisha Tarannum
dc.description.statementofresponsibility  Farzana Nur
dc.description.statementofresponsibility  Fahad Faisal Arpan
dc.description.statementofresponsibility  Ahnaf Ahmed Anzum
dc.format.extent  51 pages
dc.language.iso  en  en_US
dc.publisher  Brac University  en_US
dc.rights  Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject  Deep learning  en_US
dc.subject  Bidirectional LSTM  en_US
dc.subject  YOLOv7  en_US
dc.subject  YOLOv4  en_US
dc.subject  MobilenetV2  en_US
dc.subject  Violence prediction  en_US
dc.subject  Realtime  en_US
dc.subject.lcsh  Machine learning
dc.subject.lcsh  Cognitive learning theory
dc.subject.lcsh  Real-time data processing
dc.title  Real-time crime detection using convolutional LSTM and YOLOv7  en_US
dc.type  Thesis  en_US
dc.contributor.department  Department of Computer Science and Engineering, Brac University
dc.description.degree  B.Sc. in Computer Science
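The abstract describes two models running side by side on a live feed: a per-frame YOLO v7 weapon detector and a clip-level MobileNet v2 + bidirectional LSTM posture classifier. A minimal sketch of that dual-model pipeline is shown below; this is not the thesis code. Both models are stubs standing in for the real networks, and the 16-frame clip length and 224x224 frame size are assumptions for illustration.

```python
from collections import deque
import numpy as np

CLIP_LEN = 16                  # assumed clip length for the temporal model
FRAME_SHAPE = (224, 224, 3)    # MobileNet v2's default input resolution


def classify_clip(clip: np.ndarray) -> str:
    """Stub for the MobileNet v2 + bidirectional-LSTM posture classifier.
    A real model would return 'violent' or 'non-violent' for the clip."""
    return "non-violent"


def detect_weapons(frame: np.ndarray) -> list:
    """Stub for the YOLO v7 detector (sticks, guns, sharp objects).
    A real model would return the detected weapon classes in the frame."""
    return []


def process_stream(frames):
    """Slide a fixed-length window over the stream, running the weapon
    detector on every frame and the posture classifier on each full clip."""
    buffer = deque(maxlen=CLIP_LEN)
    alerts = []
    for frame in frames:
        buffer.append(frame)
        weapons = detect_weapons(frame)                # per-frame detection
        if len(buffer) == CLIP_LEN:
            posture = classify_clip(np.stack(buffer))  # temporal classification
            if posture == "violent" or weapons:
                alerts.append((posture, weapons))
    return alerts
```

With the stubs above, a stream of blank frames produces no alerts; swapping in trained models is what would make the pipeline flag violent postures or visible weapons in real time.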

