Normalizing images in various weather and lighting conditions using Pix2Pix GAN
Abstract
Autonomous vehicles are widely regarded as the future of transportation due to their
potential uses in a myriad of applications. In recent years, perception systems in
driverless cars have advanced considerably through various implementations of
deep-learning-based object detection. Noticeable progress
has been made in this field of study, as many isolated and multi-modal systems have
been developed or proposed to help overcome the shortcomings of the sensors
and detection algorithms. These include research on sensing objects under varying
environmental conditions (illumination, refractive indexes, weather conditions) as
well as detection and removal of noise, clutter, and camouflage from the collected
sensory inputs. However, in their current state, perception systems in autonomous
vehicles are still incapable of accurately detecting objects in real-life scenarios using
their visual/thermal cameras, LiDAR, radar, and other sensors. Additionally, most
systems lack the robustness to perform well under any given condition. Hence, this
paper proposes to use advanced color vision techniques and Generative Adversarial
Networks (GANs) to produce reconstructed images that can improve the accuracy of
object detection systems and yield more precise predictions.