Show simple item record

dc.contributor.advisor: Khan, Mumit
dc.contributor.author: Momtaz, Anik
dc.contributor.author: Amreen, Sadika
dc.date.accessioned: 2013-04-30T17:29:45Z
dc.date.available: 2013-04-30T17:29:45Z
dc.date.copyright: 2012
dc.date.issued: 2012-12
dc.identifier.other: ID 08201002
dc.identifier.other: ID 09101003
dc.identifier.uri: http://hdl.handle.net/10361/2379
dc.description: This thesis report is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2012.
dc.description: Cataloged from PDF version of thesis report.
dc.description: Includes bibliographical references (page 46).
dc.description.abstract: The ever-present need to process data is becoming more and more challenging due to the exponential growth of the data itself. We are talking about exabytes, zettabytes and even yottabytes of data, generally referred to as Big Data. Conventional data-processing methods have therefore become obsolete when handling Big Data: it is simply not feasible to use a single machine to analyze data of such tremendous volume. This is where Hadoop comes in. Simply put, using the Hadoop Distributed File System (HDFS), an enormous chunk of data can be divided into smaller pieces and distributed among multiple machines, referred to as nodes, which process them in parallel using a technique called MapReduce. The potential of such a concept is vast. For this thesis, we used HDFS to identify similarities between multiple documents. The initial idea was to design an algorithm to detect full or partial plagiarism in documents, as countless materials of interest are readily available on the internet. However, after successfully implementing such an algorithm for the English language, we realized that there is no record of any work on document similarity detection carried out on the Bangla language. Therefore, with some modifications to our existing algorithm (since the Bangla language is completely different from English as far as construction is concerned), we were able to develop an algorithm that detects document similarities on a broad scale using the Ferret model.
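The Ferret model mentioned in the abstract scores a pair of documents by the resemblance of their word-trigram sets (shared trigrams divided by total distinct trigrams). A minimal single-machine sketch of that measure, assuming plain whitespace tokenization (this is an illustration of the general technique, not the authors' actual Hadoop implementation):

```python
def trigrams(text):
    # Break the text into overlapping word trigrams,
    # the unit of comparison in the Ferret model.
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def ferret_similarity(doc_a, doc_b):
    # Resemblance = |shared trigrams| / |union of all trigrams|,
    # ranging from 0.0 (no overlap) to 1.0 (identical trigram sets).
    a, b = trigrams(doc_a), trigrams(doc_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)
```

In the distributed setting described above, the map phase would emit trigrams per document and the reduce phase would aggregate the shared counts per document pair; the scoring formula itself stays the same.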
dc.description.statementofresponsibility: Anik Momtaz
dc.description.statementofresponsibility: Sadika Amreen
dc.format.extent: 54 pages
dc.language.iso: en
dc.publisher: BRAC University
dc.subject: Computer science and engineering
dc.title: Detecting document similarity in large document collections using MapReduce and the Hadoop framework
dc.type: Thesis
dc.contributor.department: Department of Computer Science and Engineering, BRAC University
dc.description.degree: B. Computer Science and Engineering

