Fruit and vegetable freshness detection using deep learning
Abstract
Bangladesh’s economy depends on agriculture, which contributes significantly to
GDP. It’s time to automate agriculture for increased productivity, efficiency, and
sustainability. Computer Vision can assist in ensuring agricultural product quality.
CNNs are more efficient than other machine learning (ML) algorithms for Computer Vision applications
since they automatically extract features and handle complex problems. We deployed
CNN architectures to identify fruit and vegetable freshness. Using Computer Vision
technology, we want to make food production, sorting, packaging, and delivery more
efficient, inexpensive, feasible, and safe at both the production and consumer levels. Manual quality testing is laborious, inaccurate, and time-consuming. In this study, we
have compared 7 pre-trained CNN models (VGG19, InceptionV3, EfficientNetV2L,
Xception, ResNet152V2, MobileNetV2, and DenseNet201) with our custom CNN-based image classification model, “FreshDNN”. This compact deep learning
model classifies fresh and rotten fruits and vegetables. Using this custom model,
users may snap food images to determine their freshness. Farmers may deploy it
on embedded systems and map out their agricultural areas based on the freshness
of their fruits or vegetables. We trained the models on our dataset to recognize
fresh and rotten produce using image data from 8 distinct fruits and vegetables. We
observed that FreshDNN achieved 99.32% training accuracy and 97.8% validation accuracy, and outperformed the pre-trained models, except VGG19, on performance measures such as Precision
(98%), Recall (98%), and F1 Score (98%). However, our custom
model surpassed every pre-trained model for our dataset in terms of the number
of parameters (394,448), training time (65.77 minutes), ROC-AUC score (99.98%),
computational cost, and space (4.6 MB). We have also implemented 5-fold cross-validation, where our model performed consistently well, with train, validation,
and test accuracies of 99.35%, 97.62%, and 97.66%, respectively. We therefore believe it will
generalize comparably well against the pre-trained models.