
dc.contributor.advisor	Mukta, Jannatun Noor
dc.contributor.author	Mahmud, Abrar
dc.contributor.author	Sifar, Alimus
dc.contributor.author	Rahman, Moh. Absar
dc.contributor.author	Mostafa, Fateen Yusuf
dc.contributor.author	Tasnova, Lamia
dc.date.accessioned	2023-10-16T04:24:43Z
dc.date.available	2023-10-16T04:24:43Z
dc.date.copyright	©2022
dc.date.issued	2022-09-20
dc.identifier.other	ID 18201147
dc.identifier.other	ID 18201157
dc.identifier.other	ID 18201167
dc.identifier.other	ID 18201200
dc.identifier.other	ID 18301053
dc.identifier.uri	http://hdl.handle.net/10361/21832
dc.description	This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2022.	en_US
dc.description	Cataloged from PDF version of thesis.
dc.description	Includes bibliographical references (pages 57-59).
dc.description.abstract	Global illumination is a technique in computer graphics that adds a degree of realism to 3D scene lighting by emulating how light rays behave in real life. Several approaches exist to achieve this visual effect in computer-generated imagery. The most physically accurate approach is ray tracing, which can produce highly realistic results but is time- and compute-intensive, making it unsuitable for real-time use. For real-time scenarios, a family of faster algorithms exists that relies on rasterization rather than ray tracing; despite being faster, these can still be resource intensive or produce physically inaccurate results. Our Generative Adversarial Network based approach aims to deliver close-to-physically-accurate results from rasterization output data obtainable from a conventional deferred rendering pipeline, while retaining speed. These rasterization outputs, which are essentially screen-space feature buffers, act as the input to our deep-learning network, which in turn produces per-frame lightmaps containing global illumination data; these lightmaps are then used to compose the presentable frame on screen. Because screen-space information from a single viewpoint cannot always guarantee lighting consistency, our approach also takes into account the rasterization output data of the surroundings of a given viewpoint, producing more accurate global illumination.	en_US
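
A minimal sketch of the kind of pipeline the abstract describes: a convolutional generator that maps stacked screen-space feature buffers (a deferred-rendering G-buffer) to a per-frame lightmap. The record does not specify the network architecture, training objective, or buffer layout, so the channel layout, layer sizes, and all names here (LightmapGenerator, gbuffer) are illustrative assumptions, not the authors' model.

# Illustrative sketch only, assuming a PyTorch setup; not the thesis's
# actual network. Maps a stack of screen-space feature buffers to a
# per-frame global-illumination lightmap.
import torch
import torch.nn as nn

class LightmapGenerator(nn.Module):
    def __init__(self, in_channels: int = 10, out_channels: int = 3):
        super().__init__()
        # Assumed G-buffer layout: albedo (3) + normals (3) + depth (1)
        # + 3 channels summarizing buffers from surrounding viewpoints.
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, out_channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # lightmap values constrained to [0, 1]
        )

    def forward(self, gbuffer: torch.Tensor) -> torch.Tensor:
        return self.net(gbuffer)

if __name__ == "__main__":
    gen = LightmapGenerator()
    gbuffer = torch.randn(1, 10, 256, 256)  # dummy G-buffer stack
    lightmap = gen(gbuffer)                 # per-frame GI lightmap
    print(lightmap.shape)                   # torch.Size([1, 3, 256, 256])

In a conventional image-to-image GAN setup (e.g. pix2pix-style), the adversarial signal would come from a discriminator judging (G-buffer, lightmap) pairs against ray-traced references; the record gives no details on the loss, so that pairing is likewise an assumption.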
dc.description.statementofresponsibility	Abrar Mahmud
dc.description.statementofresponsibility	Alimus Sifar
dc.description.statementofresponsibility	Moh. Absar Rahman
dc.description.statementofresponsibility	Fateen Yusuf Mostafa
dc.description.statementofresponsibility	Lamia Tasnova
dc.format.extent	71 pages
dc.language.iso	en	en_US
dc.publisher	Brac University	en_US
dc.rights	Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject	Computer graphics	en_US
dc.subject	Global illumination	en_US
dc.subject	Neural networks	en_US
dc.subject	GAN	en_US
dc.subject.lcsh	Neuropsychology
dc.subject.lcsh	Rendering (Computer graphics)
dc.title	Surrounding-aware screen-space-global-illumination using generative adversarial network	en_US
dc.type	Thesis	en_US
dc.contributor.department	Department of Computer Science and Engineering, BRAC University
dc.description.degree	B.Sc. in Computer Science and Engineering

