Enhancing Bangla video comprehension through multimodal feature integration and attention-based encoder-decoder captioning models for single-action videos
Abstract
Video understanding and description play an important role in computer vision and
natural language processing. The ability to automatically generate natural language
descriptions for video content has many real-world applications, ranging from
accessibility tools to multimedia retrieval systems. Understanding and describing
video content in natural language is a challenging task in general, and it is even
more so in resource-constrained languages such as Bangla. This
study investigates the integration of multimodal feature fusion with an attention-based
encoder-decoder framework to improve video comprehension and to generate accurate
captions for single-action video clips in Bangla. We propose a novel model
that fuses visual features extracted from video frames with motion information
derived from optical flow. The fused multimodal representations
are then fed into an attention-based encoder-decoder architecture to
generate descriptive captions in Bangla. To facilitate our research, we
collected and annotated a new dataset of single-action videos sourced from
various online platforms. Extensive experiments are conducted on this newly created
Bangla single-action video dataset, and the models are evaluated using standard
metrics such as BLEU, METEOR, and CIDEr. Among the tested models and their
architectural variations, the GRU model with Gaussian attention achieves the best
performance, generating captions closest to the ground truth. As this is a new dataset
with no previous benchmarks, the proposed approach establishes a strong baseline
for Bangla video captioning, achieving a BLEU score of 0.53 and a CIDEr score of
0.492. Additionally, we analyze the attention mechanisms to interpret the learned
representations, providing insights into the model’s behavior and decision-making
process. By developing captioning solutions for an under-resourced language, this
work paves the way for enhanced video comprehension, with potential applications in
human-computer interaction, accessibility, and multimedia retrieval.
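
For concreteness, the listing below is a minimal PyTorch sketch of the kind of pipeline the abstract outlines: per-frame appearance features and optical-flow motion features are fused by concatenation, encoded with a GRU, and decoded by a GRU with an attention mechanism over the encoder states. All layer sizes, names, and the additive-attention variant shown are illustrative assumptions rather than the paper's exact configuration (the paper's best model uses Gaussian attention).

    # Minimal sketch of multimodal fusion + attention-based GRU captioning.
    # Dimensions, names, and the additive attention are assumptions, not the
    # paper's exact setup.
    import torch
    import torch.nn as nn

    class AttentionCaptioner(nn.Module):
        def __init__(self, vis_dim=2048, flow_dim=1024, hid=512, vocab=8000):
            super().__init__()
            self.fuse = nn.Linear(vis_dim + flow_dim, hid)  # multimodal fusion
            self.encoder = nn.GRU(hid, hid, batch_first=True)
            self.embed = nn.Embedding(vocab, hid)
            self.attn = nn.Linear(hid * 2, 1)               # attention scores
            self.decoder = nn.GRUCell(hid * 2, hid)
            self.out = nn.Linear(hid, vocab)

        def forward(self, vis, flow, captions):
            # vis: (B, T, vis_dim), flow: (B, T, flow_dim), captions: (B, L)
            enc_in = torch.relu(self.fuse(torch.cat([vis, flow], dim=-1)))
            enc_out, h = self.encoder(enc_in)               # (B, T, hid)
            h = h.squeeze(0)                                # decoder state (B, hid)
            logits = []
            for t in range(captions.size(1)):
                # score each encoder step against the current decoder state
                q = h.unsqueeze(1).expand_as(enc_out)
                w = torch.softmax(
                    self.attn(torch.cat([enc_out, q], -1)).squeeze(-1), -1)
                ctx = (w.unsqueeze(-1) * enc_out).sum(1)    # attended context
                h = self.decoder(
                    torch.cat([self.embed(captions[:, t]), ctx], -1), h)
                logits.append(self.out(h))
            return torch.stack(logits, dim=1)               # (B, L, vocab)

    # Example: a batch of 2 videos, 16 frames each, caption length 10
    model = AttentionCaptioner()
    vis = torch.randn(2, 16, 2048)
    flow = torch.randn(2, 16, 1024)
    caps = torch.randint(0, 8000, (2, 10))
    print(model(vis, flow, caps).shape)  # torch.Size([2, 10, 8000])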