Exploration and mitigation of gender bias in word embeddings from transformer-based language models
Date
2023-09
Publisher
Brac University
Author
Hossain, Ariyan
Haque, Rakinul
Hannan, Khondokar Mohammad Ahanaf
Rafa, Nowreen Tarannum
Musarrat, Humayra
Abstract
Machine learning, when implemented without proper restraint, can uncover data biases
that result from human error. Much of this complexity arises from word embeddings,
a prominent technique for representing textual input as vectors in many machine
learning and natural language processing tasks. Word embeddings are biased because
they are trained on text data that frequently incorporates the prejudices of society.
These biases can become deeply embedded in the vectors, producing unfair or
discriminatory results in AI applications. Efforts have been made to recognise and
lessen certain prejudices, but comprehensive bias elimination remains a difficult
task. In Natural Language Processing (NLP) systems, contextualized word embeddings
have replaced traditional embeddings as the preferred source of representational
knowledge. Since biases of various kinds have already been discovered in standard
word embeddings, it is critical to evaluate their contextualized replacements as well.
Our focus is on transformer-based language models, primarily BERT, which produce
contextual word embeddings. To measure the extent to which gender bias exists, we
apply several methods, such as the cosine similarity test and the direct bias test,
and ultimately detect bias through the probability with which the models fill a
[MASK] token.
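As a concrete illustration of the mask-filling check, the sketch below (not the thesis code; the model name, template sentences, and pronoun pair are assumptions chosen for the example) uses the Hugging Face transformers library to compare the probabilities a BERT model assigns to "he" versus "she" at a [MASK] position.

```python
# Minimal sketch (illustrative only): compare the probability BERT assigns to
# "he" vs. "she" when filling a [MASK] in a template sentence. The model name,
# templates, and pronoun pair are assumptions for this example.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def mask_probabilities(template: str, candidates: list[str]) -> dict[str, float]:
    """Return P(candidate | template with one [MASK]) for each candidate token."""
    inputs = tokenizer(template, return_tensors="pt")
    # Position of the single [MASK] token in the input.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_pos].softmax(dim=-1)
    return {c: probs[tokenizer.convert_tokens_to_ids(c)].item() for c in candidates}

print(mask_probabilities("[MASK] is a doctor.", ["he", "she"]))
print(mask_probabilities("[MASK] is a nurse.", ["he", "she"]))
```

A large, systematic gap between the two probabilities across such templates can then be read as evidence of gender bias in the model.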
Based on this probability, we develop a novel metric, MALoR, to observe bias.
Finally, to mitigate the bias, we continue pretraining these models on a
gender-balanced dataset created by applying Counterfactual Data Augmentation (CDA).
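The counterfactual augmentation itself can be sketched as a word-pair swap applied to every training sentence, keeping both the original and the swapped copy. The pair list below is a small illustrative subset; a full CDA pass would use much larger pronoun, noun, and name-pair lists and would need part-of-speech information to resolve ambiguous cases such as "her" ("his" vs. "him").

```python
# Simplified CDA sketch (illustrative, not the thesis pipeline): emit every
# sentence together with a counterfactual copy in which gendered words are
# swapped, yielding a gender-balanced corpus.
import re

# Small illustrative pair list; a real run would cover many more pronouns,
# gendered nouns, and male/female name pairs.
PAIRS = [("he", "she"), ("his", "her"), ("himself", "herself"),
         ("man", "woman"), ("john", "mary")]
SWAP = {}
for a, b in PAIRS:
    SWAP[a], SWAP[b] = b, a

WORD_RE = re.compile(r"[A-Za-z]+")

def counterfactual(sentence: str) -> str:
    """Swap each gendered word for its counterpart, preserving capitalisation."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAP.get(word.lower())
        if swapped is None:
            return word
        return swapped.capitalize() if word[0].isupper() else swapped
    return WORD_RE.sub(repl, sentence)

def augment(corpus: list[str]) -> list[str]:
    """Gender-balanced corpus: every original sentence plus its counterfactual."""
    return [s for sent in corpus for s in (sent, counterfactual(sent))]

print(augment(["He is a doctor because he studied hard."]))
# ['He is a doctor because he studied hard.', 'She is a doctor because she studied hard.']
```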
To ensure consistency, we perform our experiments on different gender pronoun and
noun pairs: “he-she”, “his-her”, and “male names-female names”. These debiased
models can then be used across several applications.
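The continued-pretraining step could be set up roughly as follows with the Hugging Face Trainer; the hyperparameters, the bert-base-uncased checkpoint, and the cda_balanced.txt file name are illustrative assumptions, not the configuration used in the thesis.

```python
# Sketch of continued MLM pretraining on a CDA-balanced corpus (illustrative
# settings only; not the thesis's actual hyperparameters or data paths).
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# "cda_balanced.txt" is a placeholder for the gender-balanced corpus produced by CDA.
dataset = load_dataset("text", data_files={"train": "cda_balanced.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

# Standard masked-language-model objective: randomly mask 15% of tokens.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="bert-debiased",      # where the debiased checkpoint is saved
    num_train_epochs=1,              # illustrative; more epochs may be needed
    per_device_train_batch_size=16,
    learning_rate=5e-5,
)

Trainer(model=model, args=args, train_dataset=dataset, data_collator=collator).train()
model.save_pretrained("bert-debiased")
tokenizer.save_pretrained("bert-debiased")
```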