Surrounding-aware screen-space global illumination using a generative adversarial network
Abstract
Global illumination is a strategy in computer graphics that adds a degree of realism to 3D scene lighting by emulating how light rays behave in real life. Several approaches exist to achieve this visual effect for computer-generated imagery. The most physically accurate approach is ray tracing. It can produce highly realistic results, but it is time- and computation-intensive, making it unsuitable for real-time use. For real-time scenarios, a family of faster algorithms exists that relies on rasterization rather than ray tracing. Despite being faster, these algorithms can still be resource-intensive or produce physically inaccurate results. Our Generative Adversarial Network based approach aims to deliver close-to-physically-accurate results from rasterization output that can be obtained from a conventional deferred rendering pipeline, while retaining speed. This rasterization output, essentially a set of screen-space feature buffers, serves as the input to our deep-learning network, which in turn produces per-frame lightmaps containing global illumination data; these lightmaps are then used to compose the presentable frame on screen. Screen-space information from a single viewpoint does not always guarantee lighting consistency, so our approach also takes into account the rasterization output of the surroundings of a given viewpoint, producing more accurate global illumination.
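As a rough illustration of the pipeline the abstract describes, the sketch below shows a minimal encoder-decoder generator that maps stacked screen-space feature buffers to a global-illumination lightmap. It is written in PyTorch purely as an assumption; the class name, the 15-channel input layout (center-view albedo, normals, and depth, plus two surrounding views' normals and depth), and all layer sizes are hypothetical and do not reflect the paper's actual architecture.

```python
# Minimal sketch, assuming a PyTorch encoder-decoder generator that maps
# stacked G-buffer channels to a per-frame GI lightmap. The 15-channel
# layout below (center view: albedo 3 + normal 3 + depth 1; two surrounding
# views: normal 3 + depth 1 each) and all layer sizes are illustrative
# assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class GBufferToLightmapGenerator(nn.Module):
    """Hypothetical generator: screen-space feature buffers -> GI lightmap."""
    def __init__(self, in_channels: int = 15, out_channels: int = 3):
        super().__init__()
        # Encoder: downsample the stacked buffers into a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Decoder: upsample back to full resolution as a lightmap image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_channels, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # lightmap values normalized to [-1, 1]
        )

    def forward(self, gbuffers: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(gbuffers))

# Usage: one 256x256 frame whose input stacks the center view's buffers
# with those of two surrounding viewpoints (random tensors stand in here).
gen = GBufferToLightmapGenerator(in_channels=15)
frame_buffers = torch.randn(1, 15, 256, 256)
lightmap = gen(frame_buffers)  # shape: (1, 3, 256, 256)
```

In a GAN setting, a generator of this shape would be trained against a discriminator that compares its lightmaps to ray-traced ground truth; that adversarial half is omitted here for brevity.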