Show simple item record

dc.contributor.advisor  Hossain, Muhammad Iqbal
dc.contributor.advisor  Abrar, Mohammed Abid
dc.contributor.author  Mostak, Alfi Mashab
dc.contributor.author  Neha, Nayna Jahan
dc.contributor.author  Mohiuddin, Azwaad Labiba
dc.contributor.author  Tabassum, Adiba
dc.date.accessioned  2023-10-15T06:21:49Z
dc.date.available  2023-10-15T06:21:49Z
dc.date.copyright  ©2022
dc.date.issued  2022-09-29
dc.identifier.other  ID 22341078
dc.identifier.other  ID 19101223
dc.identifier.other  ID 19101032
dc.identifier.other  ID 19101211
dc.identifier.uri  http://hdl.handle.net/10361/21810
dc.description  This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2022.  en_US
dc.description  Cataloged from PDF version of thesis.
dc.description  Includes bibliographical references (pages 30-32).
dc.description.abstract  Accident anticipation has become a major focus for averting accidents or minimizing their impacts. Over the years, several network systems have been developed and applied in self-driving technology. Although advancement in the autonomous industry is fast-growing, the network systems gradually emerging from it still require major gains in efficiency. Recent research has proposed a novel end-to-end dynamic spatial-temporal attention network (DSTA) that combines a Gated Recurrent Unit (GRU) with a spatial-temporal attention learning network, identifying an accident 4.87 seconds before it occurs with 99.6% accuracy when tested on the Car Crash Dataset (CCD). However, DSTA has not been able to provide efficient results on the Dashcam Accident Dataset (DAD). Moreover, the GRU model integrated into the DSTA network has weak information-processing capability and low update efficiency across its several hidden layers. The decision-making process of the accident anticipation network can be understood using the high-quality saliency maps produced by the Grad-CAM and XGradCAM approaches. In this paper, we evaluate how using a Wide ResNet network enhances feature extraction to increase accident anticipation precision. This change improves the capacity to process information and the learning efficacy. In addition, we suggest employing a Gated Recurrent Unit (GRU) network to train the model to recognize the sequential properties of the data and apply learned patterns to forecast the next likely event. Hence, we plan to incorporate Wide ResNet50, a feature-extraction network that identifies vehicles at risk by using wider residual blocks. These neural networks generate labels that identify hazardous conditions in driving environments in order to anticipate accidents.  en_US
dc.description.statementofresponsibility  Alfi Mashab Mostak
dc.description.statementofresponsibility  Nayna Jahan Neha
dc.description.statementofresponsibility  Azwaad Labiba Mohiuddin
dc.description.statementofresponsibility  Adiba Tabassum
dc.format.extent  43 pages
dc.language.iso  en  en_US
dc.publisher  Brac University  en_US
dc.rights  Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject  Accident anticipation  en_US
dc.subject  Deep learning  en_US
dc.subject  Car Crash Dataset (CCD)  en_US
dc.subject  Dashcam Accident Dataset (DAD)  en_US
dc.subject.lcsh  Traffic accidents
dc.subject.lcsh  Machine learning
dc.title  A comparative performance analysis of accident anticipation with deep learning extractors  en_US
dc.type  Thesis  en_US
dc.contributor.department  Department of Computer Science and Engineering, Brac University
dc.description.degree  B.Sc. in Computer Science
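
The abstract above describes a pipeline in which a Wide ResNet50 backbone extracts per-frame features from dash-cam video and a GRU models their temporal sequence to score accident risk. The sketch below only illustrates that pairing under stated assumptions; it is not the thesis's code. The class name, hidden size, and per-frame sigmoid head are hypothetical, and PyTorch/torchvision is used purely as an example framework.

import torch
import torch.nn as nn
from torchvision.models import wide_resnet50_2

class AnticipationSketch(nn.Module):
    """Hypothetical Wide ResNet50 + GRU accident-anticipation sketch."""
    def __init__(self, hidden_size=256):
        super().__init__()
        backbone = wide_resnet50_2()        # untrained backbone with wider residual blocks
        backbone.fc = nn.Identity()         # keep the 2048-d pooled frame features
        self.backbone = backbone
        self.gru = nn.GRU(2048, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)   # per-frame accident-risk score

    def forward(self, frames):                  # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))        # (B*T, 2048)
        out, _ = self.gru(feats.view(b, t, -1))            # temporal modelling of the clip
        return torch.sigmoid(self.head(out)).squeeze(-1)   # (B, T) risk per frame

# Example: two clips of 16 frames at 224x224 resolution
scores = AnticipationSketch()(torch.randn(2, 16, 3, 224, 224))

Swapping in a different feature extractor changes only the feature dimension fed to the GRU, which is the comparison the title refers to.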

