Show simple item record

dc.contributor.advisor	Alam, Md. Golam Rabiul
dc.contributor.author	Sadat, Sami
dc.contributor.author	Talukder, Shaownak Md. Ibne Shahriar
dc.contributor.author	Logno, Shawmika Protichi Sattar
dc.date.accessioned	2025-01-15T04:54:23Z
dc.date.available	2025-01-15T04:54:23Z
dc.date.copyright	©2024
dc.date.issued	2024-11
dc.identifier.other	ID 20301095
dc.identifier.other	ID 20101504
dc.identifier.other	ID 18201113
dc.identifier.uri	http://hdl.handle.net/10361/25170
dc.description	This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, 2024.	en_US
dc.description	Cataloged from PDF version of thesis.
dc.description	Includes bibliographical references (pages 39-40).
dc.description.abstract	Autonomous navigation for unmanned ground vehicles (UGVs) faces significant challenges in detecting objects accurately in complex environments. Despite advances in 2D object detection, the absence of robust 3D object detection models leaves a critical gap in the accurate, real-time identification of objects in UGV applications. In this thesis, we propose a novel approach for 3D object detection in the context of autonomous navigation for UGVs. The proposed approach uses a two-stage pipeline. In the first stage, 3D proposals are generated from point cloud data, exploiting the additional depth information provided by a 3D remote sensor. These proposals act as potential foci for object detection. In the second stage, a fusion architecture combining GLENet-VR and SE-SSD is used to train on and detect objects inside the proposed bounding boxes. Because the two 3D networks capture the spatial relationships in the volumetric representations of the point clouds, they make it possible to distinguish objects from the background more accurately. Fusing the two models requires combining their feature representations, for example by concatenation or element-wise combination, into a joint feature representation used for object recognition. Through comprehensive testing and evaluation on benchmark datasets, we aim to demonstrate the effectiveness and efficiency of our proposed strategy in comparison with existing 2D object detection methodologies, which are limited by their reliance on visual information alone. By embracing the promise of point-cloud-based 3D object detection, our research paves the way for increased safety and dependability in autonomous navigation systems for UGVs. Our proposed model achieves high accuracy, surpassing both the SE-SSD and GLENet-VR baselines.	en_US
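The abstract mentions fusing the feature representations of the two backbones by concatenation or element-wise combination. A minimal sketch of both fusion strategies, using toy NumPy vectors; the function name and dimensions are illustrative assumptions, not the thesis code:

```python
import numpy as np

def fuse_features(feat_a, feat_b, mode="concat"):
    """Fuse two per-proposal feature vectors into one joint representation.

    feat_a, feat_b: 1-D feature vectors from the two detection backbones
    (e.g. SE-SSD and GLENet-VR); mode selects the fusion strategy.
    """
    if mode == "concat":
        # Concatenation keeps all information; the joint vector is larger.
        return np.concatenate([feat_a, feat_b])
    if mode == "elementwise":
        # Element-wise combination keeps the input size, but requires both
        # backbones to emit features of equal dimensionality.
        return feat_a + feat_b
    raise ValueError(f"unknown fusion mode: {mode}")

# Toy 4-D features standing in for real backbone outputs.
a = np.array([0.1, 0.2, 0.3, 0.4])
b = np.array([0.4, 0.3, 0.2, 0.1])
print(fuse_features(a, b, "concat").shape)   # (8,)
print(fuse_features(a, b, "elementwise"))    # [0.5 0.5 0.5 0.5]
```

Concatenation is the safer default when the backbones' feature scales differ; element-wise combination assumes the two feature spaces are already aligned.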
dc.description.statementofresponsibility	Sami Sadat
dc.description.statementofresponsibility	Shaownak Md. Ibne Shahriar Talukder
dc.description.statementofresponsibility	Shawmika Protichi Sattar Logno
dc.format.extent	50 pages
dc.language.iso	en	en_US
dc.publisher	Brac University	en_US
dc.rights	Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject	3D object	en_US
dc.subject	Object detection	en_US
dc.subject	Autonomous navigation	en_US
dc.subject	Unmanned ground vehicles	en_US
dc.subject	Convolutional neural network	en_US
dc.subject	CNN	en_US
dc.subject	Dual CNN	en_US
dc.subject	3D CNN	en_US
dc.subject.lcsh	Computer vision.
dc.subject.lcsh	Pattern recognition systems.
dc.subject.lcsh	Computational intelligence.
dc.subject.lcsh	Neural networks (Computer science).
dc.subject.lcsh	Intelligent control systems.
dc.title	Point-cloud-based 3D object detection for autonomous navigation in unmanned ground vehicles	en_US
dc.type	Thesis	en_US
dc.contributor.department	Department of Computer Science and Engineering, Brac University
dc.description.degree	B.Sc. in Computer Science


