Show simple item record

dc.contributor.advisor	Chakrabarty, Amitabha
dc.contributor.author	Ghosh, Dipon Kumar
dc.date.accessioned	2022-01-17T06:37:05Z
dc.date.available	2022-01-17T06:37:05Z
dc.date.copyright	2021
dc.date.issued	2021-11
dc.identifier.other	ID 19366007
dc.identifier.uri	http://hdl.handle.net/10361/15946
dc.description	This thesis is submitted in partial fulfilment of the requirements for the degree of Master of Engineering in Computer Science and Engineering, 2021.	en_US
dc.description	Cataloged from PDF version of thesis.
dc.description	Includes bibliographical references (pages 67-75).
dc.description.abstract	Human action recognition (HAR) has been performed with current deep learning (DL) algorithms using a variety of input formats, including video footage, optical flow, and even skeleton points, which may be acquired via depth sensors or pose-estimation technologies. However, recent techniques are computationally costly and have a high memory footprint, making them unsuitable for use in real-world environments. Furthermore, the design of existing techniques does not allow for the full extraction of the spatial and temporal characteristics of an action, and as a result, information is lost throughout the recognition process. Here, we present a novel framework for action recognition that extracts spatial and temporal characteristics separately while substantially reducing the amount of information lost. The multi-dimensional convolutional network (MDCN) and the redefined spatio-temporal graph convolutional network (RST-GCN) are two models developed in accordance with this framework. In both cases, spatial and temporal information are extracted irrespective of the precise spatio-temporal location. Our approach was evaluated on two particular aspects of human action recognition, namely violence detection and skeleton-based action recognition, in order to ensure that our models are accurate and reliable. In spite of being cost-effective and having fewer parameters, our proposed MDCN achieved 87.5% accuracy on the largest violence detection benchmark dataset, and RST-GCN obtained 92.2% accuracy on the skeleton dataset. We also analyze and compare the performance of our models on edge devices with limited resources, which makes them suitable for deployment in real-world environments such as surveillance systems and smart healthcare systems. The proposed MDCN model processes 80 frames per second on an edge device such as the Nvidia Jetson Nano, and RST-GCN performs at a speed of 993 frames per second. Our proposed methods offer a strong balance between accuracy, memory consumption, and processing time, which makes them suitable for deployment in real-world environments.	en_US
dc.description.statementofresponsibility	Dipon Kumar Ghosh
dc.format.extent	75 pages
dc.language.iso	en	en_US
dc.publisher	Brac University	en_US
dc.rights	Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject	Human action recognition (HAR)	en_US
dc.subject	Surveillance systems	en_US
dc.subject	Violence detection	en_US
dc.subject	Skeleton-based human action recognition	en_US
dc.subject	Convolutional neural network (CNN)	en_US
dc.subject	Graph convolutional networks (GCN)	en_US
dc.subject	Feature fusion	en_US
dc.subject.lcsh	Human activity recognition
dc.subject.lcsh	Neural networks (Computer science)
dc.title	Efficient Spatio-temporal feature extraction for human action recognition	en_US
dc.type	Thesis	en_US
dc.contributor.department	Department of Computer Science and Engineering, Brac University
dc.description.degree	M. Computer Science and Engineering
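
The abstract above centers on factorizing feature extraction so that the spatial and temporal characteristics of an action are learned separately rather than through one joint spatio-temporal operator. The actual MDCN and RST-GCN architectures are defined in the thesis itself; as a rough illustration of the general idea only, here is a minimal PyTorch sketch of a factorized (spatial-then-temporal) convolution block over a video tensor. All class and parameter names are hypothetical and not taken from the thesis.

import torch
import torch.nn as nn

class FactorizedSTBlock(nn.Module):
    """Illustrative block: spatial-only conv followed by temporal-only conv.

    Hypothetical sketch, not the thesis's MDCN/RST-GCN definition.
    Input shape: (batch, channels, time, height, width).
    """

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Spatial convolution: a 1 x 3 x 3 kernel touches only H and W.
        self.spatial = nn.Conv3d(in_channels, out_channels,
                                 kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # Temporal convolution: a 3 x 1 x 1 kernel touches only T.
        self.temporal = nn.Conv3d(out_channels, out_channels,
                                  kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.spatial(x))   # per-frame spatial features
        x = self.relu(self.temporal(x))  # per-location temporal features
        return x

if __name__ == "__main__":
    clip = torch.randn(1, 3, 16, 112, 112)  # one 16-frame RGB clip
    block = FactorizedSTBlock(3, 64)
    print(block(clip).shape)  # torch.Size([1, 64, 16, 112, 112])

Factorizing a full k x k x k kernel into a (1, k, k) spatial kernel plus a (k, 1, 1) temporal kernel is a standard way to cut parameter count and computation, which is consistent with the abstract's emphasis on cost-effectiveness and low memory footprint on edge devices such as the Nvidia Jetson Nano.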

