Analysis of deep learning models on low-light pest detection
It is undeniable that in recent years, exceptional progress has been made toward building accurate and efficient object detectors. However, existing low-light object detectors still require substantial resources to perform at their best. Our main goal in this research is to train and evaluate recently developed deep learning object detection models on low-light images and determine whether they can achieve decent performance without any additional enhancement networks. Furthermore, we aim to achieve those results with minimal computational cost. For this research, we created a custom dataset from a publicly available insect image dataset called ‘IP102’. The new dataset, named ‘IP013’, consists of 13 classes of insects and approximately 8k annotated images. We chose the recently developed YOLOv7 and DETR object detectors and compared their performance against the older state-of-the-art RetinaNet and EfficientDet deep learning models. YOLOv7, EfficientDet, and RetinaNet are purely CNN-based models, whereas DETR uses a Transformer as both encoder and decoder with a CNN backbone. Our research shows that YOLOv7 outperforms all of the other models with an mAP@0.5:0.95 of 45.9 while also requiring the lowest training time. The model that used the least computational resources was EfficientDet, which achieved an admittedly lackluster mAP@0.5:0.95 of 33.2 with only 3.9M parameters and 2.5 GFLOPs.
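The class-subsetting step described above (deriving a 13-class dataset from IP102) can be sketched as follows. This is a minimal illustrative sketch assuming COCO-style annotation dictionaries; the function name `subset_coco`, the toy class names, and the data layout are assumptions for illustration, not the authors' actual pipeline.

```python
# Hypothetical sketch: restrict a COCO-style annotation dict to a chosen
# set of classes, as one might do to derive a 13-class subset from IP102.
# Structure and class names here are illustrative assumptions.

def subset_coco(coco, keep_names):
    """Return a new COCO-style dict containing only the given classes."""
    # Resolve the category ids for the classes we want to keep.
    keep_ids = {c["id"] for c in coco["categories"] if c["name"] in keep_names}
    # Keep only annotations for those categories.
    anns = [a for a in coco["annotations"] if a["category_id"] in keep_ids]
    # Keep only images that still have at least one annotation.
    img_ids = {a["image_id"] for a in anns}
    return {
        "categories": [c for c in coco["categories"] if c["id"] in keep_ids],
        "images": [im for im in coco["images"] if im["id"] in img_ids],
        "annotations": anns,
    }

# Tiny synthetic example (two classes, two images).
coco = {
    "categories": [{"id": 1, "name": "aphid"}, {"id": 2, "name": "moth"}],
    "images": [{"id": 10}, {"id": 11}],
    "annotations": [
        {"id": 100, "image_id": 10, "category_id": 1},
        {"id": 101, "image_id": 11, "category_id": 2},
    ],
}
sub = subset_coco(coco, {"aphid"})
print(len(sub["categories"]), len(sub["images"]), len(sub["annotations"]))
```

In a real pipeline one would load the annotation JSON with `json.load`, pass the 13 chosen class names, and write the filtered dict back out before converting to the detector's expected format.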