Show simple item record

dc.contributor.advisor	Chakrabarty, Amitabha
dc.contributor.author	Ahmed, Mirza Raiyan
dc.contributor.author	Nokib, Shahed Pervez
dc.contributor.author	Nafee, Shadman Ahmad
dc.contributor.author	Khondaker, Jannatus Sakira
dc.date.accessioned	2024-10-21T06:02:31Z
dc.date.available	2024-10-21T06:02:31Z
dc.date.copyright	©2024
dc.date.issued	2024-05
dc.identifier.other	ID 20101188
dc.identifier.other	ID 20301123
dc.identifier.other	ID 20341033
dc.identifier.other	ID 20301468
dc.identifier.uri	http://hdl.handle.net/10361/24359
dc.description	This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science, 2024.	en_US
dc.description	Cataloged from PDF version of thesis.
dc.description	Includes bibliographical references (pages 54-57).
dc.description.abstract	TinyML, short for Tiny Machine Learning, focuses on small, low-power machine learning systems, with a significant emphasis on human identification. This capability is crucial in areas like access control, security, and law enforcement. Traditional methods like fingerprint and face recognition often require costly hardware and software, whereas TinyML offers a more economical and efficient alternative. TinyML models can be trained using various sensors, such as cameras, microphones, and accelerometers, making them suitable for devices like smartphones and smartwatches. Techniques such as gait and voice recognition are also viable with TinyML, with computer vision playing a crucial role in processing visual data for human identification. Despite the challenges in facial recognition, such as the need for extensive data and computational resources, TinyML models paired with computer vision hold promise for improving effectiveness, affordability, and security. Our analysis of CNN architectures (SqueezeNet, ResNet50, VGG16, MobileNetV2, and MobileFaceNet) for human identification in dynamic motion reveals significant performance improvements with data augmentation. ResNet50 and MobileNetV2 showed the most notable enhancements, with accuracy improving to 96%, demonstrating robust generalization with enriched data. MobileNetV2 achieved a precision of 97% and an F1 score of 94%, highlighting its effectiveness. While all models benefited from data augmentation, VGG16 and MobileFaceNet also exhibited significant enhancements. These findings underscore the critical role of data augmentation in bolstering model performance and suggest that deploying ResNet50 and MobileNetV2 on devices like the ESP32-CAM could yield highly effective human identification systems. This analysis highlights the interplay between model architecture, dataset characteristics, and data augmentation in shaping model efficacy for real-world applications.	en_US
dc.description.statementofresponsibility	Mirza Raiyan Ahmed
dc.description.statementofresponsibility	Shahed Pervez Nokib
dc.description.statementofresponsibility	Shadman Ahmad Nafee
dc.description.statementofresponsibility	Jannatus Sakira Khondaker
dc.format.extent	64 pages
dc.language.iso	en	en_US
dc.publisher	Brac University	en_US
dc.rights	Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject	TinyML	en_US
dc.subject	Tiny machine learning	en_US
dc.subject	Object detection	en_US
dc.subject	Dynamic motion	en_US
dc.subject	Person identification	en_US
dc.subject	Image analysis
dc.subject.lcsh	Pattern recognition.
dc.subject.lcsh	Signal processing--Digital techniques.
dc.subject.lcsh	Microcontrollers.
dc.subject.lcsh	Motion perception (Vision).
dc.subject.lcsh	Computer vision.
dc.title	Tiny-ML based person identification in dynamic motion	en_US
dc.type	Thesis	en_US
dc.contributor.department	Department of Computer Science and Engineering, Brac University
dc.description.degree	B.Sc. in Computer Science
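
The abstract above attributes the reported accuracy gains to data augmentation applied to compact CNN backbones such as MobileNetV2, with eventual deployment on an ESP32-CAM. As an illustration only (not code from the thesis), the following Keras sketch outlines that general workflow; the dataset directory "data/train", the number of identities, the augmentation parameters, and the output filename are all assumptions.

# Illustrative sketch only (not code from the thesis): fine-tuning a MobileNetV2
# backbone with on-the-fly data augmentation for a small person-identification
# dataset, then converting the result to TensorFlow Lite for a microcontroller target.
# The directory "data/train" and NUM_CLASSES are placeholder assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)   # MobileNetV2 default input resolution
NUM_CLASSES = 10        # hypothetical number of enrolled identities

# Assumed layout: data/train/<person_id>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)

# Data augmentation: random flips, rotations, and zooms enrich the training data,
# which the abstract credits for most of the accuracy gains.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# ImageNet-pretrained MobileNetV2 backbone with a small classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone; only the head is trained here

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = layers.Rescaling(1.0 / 127.5, offset=-1.0)(x)  # scale pixels to [-1, 1]
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)

# Convert and quantize for an ESP32-CAM-class device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("person_id_quant.tflite", "wb") as f:
    f.write(converter.convert())

Dynamic-range quantization via tf.lite.Optimize.DEFAULT shrinks the converted model so it is more likely to fit the flash and RAM budget of a microcontroller-class board; the exact deployment pipeline used in the thesis is not described in this record.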

