A comparative performance analysis of accident anticipation with deep learning extractors
Abstract
Accident anticipation has become a major research focus aimed at averting accidents or minimizing
their impact. Over the years, several network systems have been developed and
applied in self-driving technology. Although the autonomous-driving industry is advancing rapidly, the network systems emerging from it still require substantial gains in efficiency. Recent research has proposed a novel end-to-end
dynamic spatial-temporal attention network (DSTA) that combines a Gated Recurrent
Unit (GRU) with a spatial-temporal attention learning module and anticipates an
accident 4.87 seconds before it occurs, with 99.6% accuracy when tested on the
Car Crash Dataset (CCD). However, DSTA has not achieved comparably strong
results on the Dashcam Accident Dataset (DAD). Moreover, the GRU integrated
into the DSTA network has weak information-processing capability and low update
efficiency across several hidden layers.
The decision-making process of the accident-anticipation network can be interpreted
through the high-quality saliency maps produced by the Grad-CAM and XGradCAM
methods.
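To make this concrete, the sketch below shows one standard way to compute a Grad-CAM saliency map in PyTorch. The choice of torchvision's wide_resnet50_2 backbone and its layer4 stage as the target layer is our assumption for illustration, not the paper's exact configuration.

```python
# Minimal Grad-CAM sketch (illustrative assumption, not the paper's pipeline):
# hooks capture the target layer's activations and gradients, and the
# activation maps are weighted by their spatially averaged gradients.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.wide_resnet50_2(weights="IMAGENET1K_V1")
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["maps"] = out            # keep graph-attached activations

def bwd_hook(module, grad_in, grad_out):
    gradients["maps"] = grad_out[0].detach()

# layer4 is the last residual stage of (Wide) ResNet in torchvision.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image):                     # image: (1, 3, H, W) tensor
    scores = model(image)                # class logits
    score = scores[0, scores.argmax()]   # explain the top prediction
    model.zero_grad()
    score.backward()
    # Grad-CAM: channel weights are gradients averaged over space.
    weights = gradients["maps"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["maps"]).sum(dim=1, keepdim=True))
    # Upsample the map to input resolution for overlaying on the frame.
    return F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                         align_corners=False)

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # dummy input for a shape check
```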
In this paper, we show that using a Wide ResNet network as the feature extractor
improves accident-anticipation precision; this change increases the network's
information-processing capacity and learning efficacy. In addition, we employ a
Gated Recurrent Unit (GRU) network, trained to recognize the sequential structure
of the data and to apply the learned patterns to forecast the next likely event.
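As an illustration of this sequential component, the following minimal PyTorch sketch runs per-frame features through torch.nn.GRU and maps each hidden state to an accident-risk score. The feature and hidden dimensions are assumed values, not the paper's settings.

```python
# Sketch of GRU-based sequential risk scoring over per-frame features
# (assumed dimensions; not the paper's exact network).
import torch
import torch.nn as nn

class RiskGRU(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=256):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)   # per-frame risk logit

    def forward(self, frame_feats):            # (batch, frames, feat_dim)
        hidden_states, _ = self.gru(frame_feats)
        return torch.sigmoid(self.head(hidden_states)).squeeze(-1)

model = RiskGRU()
feats = torch.randn(4, 50, 2048)   # e.g. 50 frames of 2048-d features per clip
risk = model(feats)                # (4, 50) accident probability per frame
```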
We therefore incorporate Wide ResNet50, a feature extractor that identifies
vehicles at risk by using wider residual blocks. These networks generate labels
that flag hazardous conditions in the driving environment in order to anticipate
accidents.
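A minimal sketch of such a feature extractor follows, under the assumption that torchvision's wide_resnet50_2 with its classification head removed stands in for the extractor described here.

```python
# Sketch of Wide ResNet50 as a frame-level feature extractor (our assumption):
# dropping the classification head leaves a 2048-d descriptor per frame,
# which can feed a downstream sequence model such as the GRU above.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.wide_resnet50_2(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()          # keep pooled features, drop the classifier
backbone.eval()

@torch.no_grad()
def extract_features(frames):        # frames: (num_frames, 3, 224, 224)
    return backbone(frames)          # (num_frames, 2048)

feats = extract_features(torch.randn(50, 3, 224, 224))
```

Wide ResNets trade depth for width by increasing the channel count inside each residual block, which yields richer per-frame descriptors at comparable depth; this is the property the wider-residual-block design exploits here.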