Finding location-specific aggregated air quality index with smartphone images using deep convolutional neural network
Abstract
Although smartphones have already become a de facto tool for environmental health research owing to their ubiquity and portability, using them to determine a location-specific aggregated air quality index (AQI) based on PM2.5 concentration remains little explored in the literature to date. In this paper, therefore, we rigorously analyze the difficulties of predicting location-specific PM2.5 concentration from photos captured by smartphone cameras. We particularly focus on Dhaka, the capital of Bangladesh, considering the very high level of air pollution to which a large number of its dwellers are exposed. In our research, we develop a Deep Convolutional Neural Network (DCNN) and train it on more than a thousand outdoor photos that we captured and labeled. We capture the photos at various locations in Dhaka, Bangladesh, and label them with PM2.5 concentration data obtained from the local US consulate, as computed by the NowCast algorithm. During training on this dataset, our model learns a correlation index through supervised learning, which improves its ability to act as a Picture-based Predictor of PM2.5 Concentration (PPPC), making it capable of estimating a comparable daily aggregated AQI from a photo captured by a smartphone. The computation required by our model is comparatively resource-efficient, as it contains far fewer parameters than most alternatives. Moreover, our experimental results show that our model exhibits greater robustness for location-specific PM2.5 prediction than existing state-of-the-art models such as ViT (Vision Transformer) and INN (Involutional Neural Network), as well as other popular CNN-based models such as VGG19, ResNet50, and MobileNetV2.