Performance comparison of transformer-based models for multi-reasoning in machine reading comprehension
Abstract
Machine Reading Comprehension (MRC) is an artificial intelligence task in which a system reads a given passage or text and answers queries about it. The objective is to build an intelligent support system that understands the contextual information in the passage and gives correct answers to multi-reasoning questions, commonsense-based questions, multiple-choice questions, and similar query types. One of the main challenges MRC models face with commonsense-based and multi-reasoning questions is the need to understand and reason beyond the explicit textual information. To enhance the capabilities of MRC systems in these areas, this research presents a comparative analysis of state-of-the-art transformer-based models, including BERT, ALBERT, RoBERTa, DistilBERT, MobileBERT, and ELECTRA. Our investigation specifically targets the enhancement of commonsense reasoning within MRC frameworks. To this end, we incorporate a binary decision-making approach into our algorithm in order to obtain better outcomes from these transformer-based models. To evaluate performance, experiments were conducted on the CosmosQA dataset, which consists of narrative-driven questions that require commonsense understanding to resolve.