
dc.contributor.advisor	Sadeque, Farig Yousuf
dc.contributor.author	Hossain, Ariyan
dc.contributor.author	Haque, Rakinul
dc.contributor.author	Hannan, Khondokar Mohammad Ahanaf
dc.contributor.author	Rafa, Nowreen Tarannum
dc.contributor.author	Musarrat, Humayra
dc.date.accessioned	2024-06-13T11:33:47Z
dc.date.available	2024-06-13T11:33:47Z
dc.date.copyright	2023
dc.date.issued	2023-09
dc.identifier.other	ID 20101099
dc.identifier.other	ID 20101290
dc.identifier.other	ID 20101079
dc.identifier.uri	http://hdl.handle.net/10361/23457
dc.description	This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2023.	en_US
dc.description	Cataloged from PDF version of thesis.
dc.description	Includes bibliographical references (pages 60-66).
dc.description.abstract	Machine learning, when implemented without proper restraint, can reproduce the data biases that result from human error. A prominent source of such bias is word embedding, a widely used technique for representing textual input as vectors in machine learning and natural language processing tasks. Word embeddings become biased because they are trained on text data that frequently carries prejudice and bias from society. These biases may become deeply established in the embeddings, producing unfair or biased results in AI applications. Efforts have been made to recognise and lessen certain prejudices, but comprehensive bias elimination remains a difficult task. In Natural Language Processing (NLP) systems, contextualized word embeddings have taken the place of traditional embeddings as the preferred source of representational knowledge. Since biases of various kinds have already been discovered in standard word embeddings, it is critical to evaluate the biases contained in their replacements as well. Our focus is on transformer-based language models, primarily BERT, which produce contextual word embeddings. To measure the extent of gender bias, we apply methods such as the cosine similarity test and the direct bias test, and ultimately detect bias through the probabilities the models assign when filling a [MASK] token (a minimal sketch of this probe follows the record below). Based on these probabilities, we develop a novel metric called MALoR to observe bias. Finally, to mitigate the bias, we continue pretraining these models on a gender-balanced dataset, created by applying Counterfactual Data Augmentation (CDA). To ensure consistency, we perform our experiments on different gender pronouns and nouns: “he-she”, “his-her”, and “male names-female names”. These debiased models can then be used across several applications.	en_US
dc.description.statementofresponsibility	Ariyan Hossain
dc.description.statementofresponsibility	Rakinul Haque
dc.description.statementofresponsibility	Khondokar Mohammad Ahanaf Hannan
dc.description.statementofresponsibility	Nowreen Tarannum Rafa
dc.description.statementofresponsibility	Humayra Musarrat
dc.format.extent	76 pages
dc.language.iso	en	en_US
dc.publisher	Brac University	en_US
dc.rights	Brac University theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
dc.subject	Natural language processing	en_US
dc.subject	Gender bias	en_US
dc.subject	Debiasing	en_US
dc.subject	Continued pretraining	en_US
dc.subject.lcsh	Natural language processing (Computer science)
dc.title	Exploration and mitigation of gender bias in word embeddings from transformer-based language models	en_US
dc.type	Thesis	en_US
dc.contributor.department	Department of Computer Science and Engineering, Brac University
dc.description.degree	B.Sc. in Computer Science and Engineering
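
A minimal sketch of the mask-probability probe described in the abstract, assuming the HuggingFace transformers library and the bert-base-uncased checkpoint. It reads the probabilities BERT assigns to "he" versus "she" when filling a [MASK] token (the quantity on which the thesis builds its MALoR metric, whose exact formula appears only in the full text) and applies a simplified Counterfactual Data Augmentation (CDA) swap of the kind used to build a gender-balanced pretraining corpus. The template sentence and the swap table are illustrative assumptions, not the thesis's actual evaluation set.

from transformers import pipeline

# Masked-language-model probe: BERT proposes fillers for the [MASK] token.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

def pronoun_probabilities(template: str) -> dict:
    """Probabilities the model assigns to 'he' vs. 'she' at [MASK]."""
    results = unmasker(template, targets=["he", "she"])
    return {r["token_str"]: r["score"] for r in results}

# Simplified CDA: pair each sentence with a gender-swapped copy so the
# continued-pretraining corpus is gender balanced. Mapping "her" is
# ambiguous ("his" or "him"); this illustrative table picks "his".
SWAPS = {"he": "she", "she": "he", "his": "her", "him": "her",
         "her": "his"}

def cda_counterfactual(sentence: str) -> str:
    # Case handling and name swapping are omitted for brevity.
    return " ".join(SWAPS.get(tok.lower(), tok) for tok in sentence.split())

if __name__ == "__main__":
    probs = pronoun_probabilities("[MASK] worked as a nurse at the clinic.")
    print(probs)  # a large gap between P('he') and P('she') signals bias
    print(cda_counterfactual("He praised his colleague."))  # gender-swapped copy

Continued pretraining for debiasing would then run the standard masked-language-modelling objective over the original sentences plus their counterfactuals; averaging the he/she probability gap across many templates gives a bias score in the spirit of, though not identical to, the MALoR metric defined in the thesis.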

