Mitigation of hallucination and interpretations of self attention of Mistral 7B AI to analyze and visualize context understanding ability of large language models
Date: 2024-01
Publisher: Brac University
Authors: Taki, S.M. Abrar Mustakim; Kar, Showmick; Niloy, Soumik Deb; Rakib, Mazharul Islam; Biswas, Abdullah Al Nahid
Abstract
In recent years, Large Language Models (LLMs) have shown excellent performance on a variety of Natural Language Processing tasks. However, they often produce hallucinated content: text that appears correct and is linguistically coherent but is factually incorrect. Because researchers have only recently begun studying LLM hallucination, the problems of mitigating hallucination and understanding which factors play a role in correcting hallucinated content are relatively new. In this paper, we modified a multi-step pipeline called 'Chain of Verification' that reduces hallucination in Large Language Models on its own, without feeding in external resources. This method is particularly useful for reasoning and reading-comprehension language tasks. In addition, we extracted the decoder layers of a large language model, Mistral 7B, to interpret and analyze how the correction was done under the hood. A custom attention-weight pruning method was used to prune the defective layers, and after pruning, the model passed 3 out of 4 test cases and produced correct output results.
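
As a rough illustration of the attention-extraction and pruning steps described above, the sketch below shows how per-layer self-attention maps could be pulled from Mistral 7B with the Hugging Face transformers library and how individual heads could be masked out. This is a minimal sketch under stated assumptions, not the authors' exact pipeline: the checkpoint name, the inspected prompt, and the layer/head indices chosen for pruning are all illustrative.

```python
# Minimal sketch (not the authors' exact code): extract per-layer
# self-attention weights from Mistral 7B and zero out selected heads.
# Assumes the Hugging Face "transformers" library and the public
# "mistralai/Mistral-7B-v0.1" checkpoint; indices are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    attn_implementation="eager",  # eager attention exposes attention weights
)
model.eval()

prompt = "The capital of Bangladesh is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple with one tensor per decoder layer,
# each shaped (batch, num_heads, seq_len, seq_len). Inspecting or plotting
# these maps is one way to spot layers/heads that behave abnormally.
for layer_idx, attn in enumerate(outputs.attentions):
    last_token_attn = attn[0, :, -1, :]          # (num_heads, seq_len)
    print(layer_idx, last_token_attn.mean(dim=-1))

# Illustrative pruning: silence chosen heads in a chosen layer by zeroing
# the columns of the output projection that carry that head's output.
layer_idx, head_indices = 10, [2, 5]             # hypothetical choices
attn_module = model.model.layers[layer_idx].self_attn
head_dim = attn_module.head_dim
with torch.no_grad():
    for h in head_indices:
        attn_module.o_proj.weight[:, h * head_dim:(h + 1) * head_dim] = 0.0
```

After such a masking pass, the same prompts can be rerun to check whether the edited model now answers the test cases correctly, which mirrors the evaluate-after-pruning step summarized in the abstract.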