Efficient Spatio-temporal feature extraction for human action recognition
Date: 2021-11
Publisher: Brac University
Author: Ghosh, Dipon Kumar

Abstract
Human action recognition (HAR) has been performed with modern deep learning
(DL) algorithms on a variety of input formats, including video footage, optical
flow, and even skeleton points, which may be acquired via depth sensors or pose
estimation technologies. Recent techniques, however, are computationally
costly and have a high memory footprint, making them unsuitable for use in real-world
environments. Furthermore, the design of existing techniques does not allow
for the full extraction of spatial and temporal characteristics of an action, and as
a result, information is lost throughout the recognition process. Here, we present a
novel framework for action recognition that extracts spatial and temporal characteristics
separately while substantially reducing information loss. The multi-dimensional
convolutional network (MDCN) and the redefined spatio-temporal graph convolutional
network (RST-GCN) are two models developed in accordance with this framework. In
both cases, spatial and temporal information is extracted irrespective of the precise
spatio-temporal location. Our approach was evaluated on two particular tasks of human
action recognition, namely violence detection and skeleton-based action recognition,
in order to ensure that our models
were accurate and reliable. Despite being cost-effective and having fewer parameters,
our proposed MDCN achieved 87.5% accuracy on the largest violence detection
benchmark dataset, and RST-GCN obtained 92.2% accuracy on the skeleton dataset.
We also analyze and compare the performance of our models on edge devices with
limited resources, which are representative of real-world deployment environments
such as surveillance systems and smart healthcare systems. The proposed MDCN model
processes 80 frames per second on an edge device such as the Nvidia Jetson Nano,
and RST-GCN runs at 993 frames per second. Our proposed methods offer a strong
balance between accuracy, memory consumption, and processing time, which makes
them well suited to deployment in real-world environments.
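
To make the idea of extracting spatial and temporal features separately concrete, the following is a minimal PyTorch sketch of a factorized spatio-temporal block. It is an illustration under stated assumptions, not the thesis's actual MDCN or RST-GCN: the class name FactorizedSTBlock, the channel sizes, and the (1, 3, 3) / (3, 1, 1) kernel split are hypothetical choices made for exposition.

import torch
import torch.nn as nn

class FactorizedSTBlock(nn.Module):
    """Illustrative block: spatial features are extracted per frame first,
    then temporal features across frames, instead of one joint 3D kernel."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Spatial convolution: a 1 x 3 x 3 kernel sees each frame alone.
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1))
        # Temporal convolution: a 3 x 1 x 1 kernel mixes only across time.
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x has shape (batch, channels, time, height, width)
        return self.relu(self.temporal(self.relu(self.spatial(x))))

# Example: a clip of 16 RGB frames at 112 x 112 resolution.
clip = torch.randn(1, 3, 16, 112, 112)
out = FactorizedSTBlock(3, 64)(clip)
print(out.shape)  # torch.Size([1, 64, 16, 112, 112])

Splitting the kernel this way keeps spatial and temporal filtering independent, which is one common route to the lower parameter count and edge-device throughput reported above.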