Show simple item record

dc.contributor.advisor	Uddin, Dr. Jia
dc.contributor.author	Faria, Fairuz
dc.contributor.author	Obin, Tahmid Tahsan
dc.contributor.author	Rahat, Shah Md. Nasir
dc.contributor.author	Chowdhury, Tanzim Islam
dc.date.accessioned	2018-02-15T04:56:43Z
dc.date.available	2018-02-15T04:56:43Z
dc.date.copyright	2017
dc.date.issued	2017-12-26
dc.identifier.other	ID 13201050
dc.identifier.other	ID 13201057
dc.identifier.other	ID 13241006
dc.identifier.other	ID 14301074
dc.identifier.uri	http://hdl.handle.net/10361/9470
dc.description	Cataloged from PDF version of thesis report.
dc.description	Includes bibliographical references (pages 32-37).
dc.description	This thesis report is submitted in partial fulfilment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2017.	en_US
dc.description.abstract	For our thesis we study the conditions under which a parallel algorithm substantially improves a program's efficiency, and show that lossless data compression using the Run-Length Encoding (RLE) algorithm can be implemented on a parallel architecture. Lossless compression means that the original data is recovered exactly after decompression; without sacrificing any data, we therefore aim for a large reduction in execution time by parallelizing this algorithm. Compression algorithms are typically executed on CPU architectures, whereas our work focuses on exploiting the GPU for parallelism in data compression. We implement the Run-Length Encoding algorithm with the help of NVIDIA's Compute Unified Device Architecture (CUDA) framework. CUDA has popularized general-purpose GPU computing, and CUDA applications are now used in a wide range of systems; its programming model provides a simple interface for programming GPUs, making the GPU an affordable means of accelerating a slow process. The algorithm is better suited to large data sets than to small ones, since on small inputs the technique can greatly increase the file size. Furthermore, this report also presents the power-consumption efficiency of the GPU implementation compared to a CPU implementation. Lastly, we observed a notable reduction in both execution time and power consumption.	en_US
dc.description.statementofresponsibility	Fairuz Faria
dc.description.statementofresponsibility	Tahmid Tahsan Obin
dc.description.statementofresponsibility	Shah Md. Nasir Rahat
dc.description.statementofresponsibility	Tanzim Islam Chowdhury
dc.format.extent	37 pages
dc.language.iso	en	en_US
dc.publisher	BRAC University	en_US
dc.rights	BRAC University thesis reports are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject	Parallel algorithm	en_US
dc.subject	GPU	en_US
dc.subject	CUDA	en_US
dc.subject	NVIDIA GPUs	en_US
dc.title	Optimization techniques for speedup in a parallel algorithm	en_US
dc.type	Thesis	en_US
dc.contributor.department	Department of Computer Science and Engineering, BRAC University
dc.description.degree	B. Computer Science and Engineering
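
The abstract describes run-length encoding parallelized on an NVIDIA GPU with CUDA. As a minimal, hypothetical sketch (not the thesis authors' implementation), the run counting can be expressed with Thrust's reduce_by_key, which ships with the CUDA Toolkit and performs the reduction on the GPU; the sample input, variable names, and output format below are illustrative assumptions.

// rle_thrust.cu - sketch of GPU run-length encoding via Thrust (assumed example, not the thesis code)
#include <cstdio>
#include <vector>
#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/reduce.h>
#include <thrust/iterator/constant_iterator.h>

int main() {
    // Example input: runs of repeated bytes, the case where RLE pays off.
    std::vector<char> h_in = {'a','a','a','a','b','b','c','c','c','c','c','d'};
    thrust::device_vector<char> d_in(h_in.begin(), h_in.end());

    thrust::device_vector<char> d_symbols(d_in.size()); // distinct run symbols
    thrust::device_vector<int>  d_counts(d_in.size());  // length of each run

    // reduce_by_key collapses each run of equal keys into one (symbol, count) pair on the GPU.
    auto ends = thrust::reduce_by_key(d_in.begin(), d_in.end(),
                                      thrust::constant_iterator<int>(1),
                                      d_symbols.begin(), d_counts.begin());
    int num_runs = ends.first - d_symbols.begin();

    // Copy the compressed (symbol, count) pairs back to the host and print them.
    thrust::host_vector<char> h_symbols(d_symbols.begin(), d_symbols.begin() + num_runs);
    thrust::host_vector<int>  h_counts(d_counts.begin(), d_counts.begin() + num_runs);
    for (int i = 0; i < num_runs; ++i)
        printf("%c%d", h_symbols[i], h_counts[i]);   // prints a4b2c5d1
    printf("\n");
    return 0;
}

Compiled with, e.g., nvcc rle_thrust.cu -o rle_thrust (the file name is arbitrary). A complete RLE pipeline would also need a decompression stage, and a hand-written CUDA kernel version would require a prefix-sum/compaction step that Thrust handles internally here.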

