Automatic Bengali image captioning using EfficientNet-Transformer network and Vision Transformer
Abstract
Image captioning is the complex task of generating textual descriptions for images. The technology benefits a wide range of applications, including assisting people with visual impairments, monitoring surveillance systems, content generation, image indexing, and automatic annotation of images to produce training data for AI-based image generation models. Most research in this domain, especially with transformer models, has focused on the English language, and comparatively little work has addressed Bengali. This study addresses that gap and proposes a novel approach to automatic Bengali image captioning: a multi-modal, transformer-based, end-to-end model with an encoder-decoder architecture. Our approach utilizes a pre-trained EfficientNet-Transformer network. To evaluate its effectiveness, we compare our model with a Vision Transformer that uses a non-convolutional encoder pre-trained on ImageNet. The two models were tested on the BanglaLekhaImageCaptions dataset and evaluated using BLEU metrics.
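For illustration only, the minimal sketch below (not the authors' published code) shows how such an encoder-decoder pairing can be wired in Keras: a frozen, ImageNet-pre-trained EfficientNetB0 encodes the image into a grid of features that a small Transformer decoder attends over while predicting caption tokens. All hyperparameters (vocabulary size, caption length, embedding width, head count) are hypothetical placeholders, and positional embeddings are omitted for brevity.

```python
# Hypothetical sketch of a CNN-encoder / Transformer-decoder captioner.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, EMBED_DIM = 10_000, 40, 256  # illustrative settings

# Image encoder: EfficientNetB0 pre-trained on ImageNet, classifier head removed.
cnn = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
cnn.trainable = False  # keep the pre-trained backbone frozen

image_in = tf.keras.Input(shape=(224, 224, 3))
features = cnn(image_in)                                       # (7, 7, 1280) feature map
features = layers.Reshape((-1, features.shape[-1]))(features)  # flatten to 49 "patches"
enc_out = layers.Dense(EMBED_DIM)(features)                    # project to decoder width

# Caption decoder: token embeddings + one Transformer decoder block
# (causal self-attention, then cross-attention over the image features).
tokens_in = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(tokens_in)
self_attn = layers.MultiHeadAttention(num_heads=4, key_dim=64)
x = layers.LayerNormalization()(x + self_attn(x, x, use_causal_mask=True))
cross_attn = layers.MultiHeadAttention(num_heads=4, key_dim=64)
x = layers.LayerNormalization()(x + cross_attn(x, enc_out))
logits = layers.Dense(VOCAB_SIZE)(x)  # next-token scores over the Bengali vocabulary

model = tf.keras.Model([image_in, tokens_in], logits)
```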