TinyML-based Person Identification in Dynamic Motion
Abstract
TinyML, short for Tiny Machine Learning, focuses on small, low-power machine
learning systems, and human identification is an increasingly important application.
This capability is crucial in areas such as access control, security, and law enforcement.
Traditional methods such as fingerprint and face recognition often require costly
hardware and software, whereas TinyML offers a more economical and efficient alternative.
TinyML models can be trained on data from various sensors, such as cameras, microphones,
and accelerometers, making them suitable for devices like smartphones and smartwatches.
Techniques such as gait and voice recognition are also viable with TinyML, with
computer vision playing a central role in processing visual data for human identification.
Despite the challenges of facial recognition, such as the need for extensive
data and computational resources, TinyML models paired with computer vision hold
promise for improving effectiveness, affordability, and security. Our analysis of CNN
architectures (SqueezeNet, ResNet50, VGG16, MobileNetV2, and MobileFaceNet)
for human identification in dynamic motion reveals significant performance improvements
with data augmentation. ResNet50 and MobileNetV2 showed the most notable
gains, with accuracy rising to 96%, demonstrating robust
generalization on enriched data. MobileNetV2 achieved a precision of 97% and an
F1 score of 94%, highlighting its effectiveness. VGG16 and MobileFaceNet
also exhibited substantial improvements.
These findings underscore the critical role of data augmentation in bolstering model
performance and suggest that deploying ResNet50 and MobileNetV2 on devices like
the ESP32-CAM could yield highly effective human identification systems. This
analysis highlights the interplay between model architecture, dataset characteristics,
and data augmentation in shaping model efficacy for real-world applications.