A comparative analysis of different CNN-LSTM models for caption generation from medical images
Abstract
This paper aims to make the process of interpreting and understanding
information within ultrasound images simpler and quicker by addressing the lack
of techniques for automatically interpreting medical images. To this end, we
propose an AI-based method of ultrasound image caption generation that highlights
the potential of machine translation in converting medical images into textual
descriptions. The model is trained on an ultrasound image dataset of the
abdominal region, including the uterus, myometrium, endometrium, and cervix, an
area of medical imaging that remains inadequately addressed.
Two pre-trained CNN models, VGG16 and Inception v3, are used to extract features
from the ultrasound images. The encoder-decoder model then takes two types of
input, one for each of its branches: the text sequence and the image features.
Both a vanilla LSTM and a bi-directional LSTM are used to build the language
generation model. An embedding layer followed by the LSTM layer processes the
text input, and, finally, the outputs of the two branches are merged.
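The following is a minimal sketch of the merge-style encoder-decoder outlined above, assuming TensorFlow/Keras. The vocabulary size, maximum caption length, branch width, and the 4096-d VGG16 feature dimension are illustrative placeholders rather than values reported in the paper, and the image features are assumed to be pre-extracted from the CNN's penultimate layer.

```python
# Sketch of the two-branch (image + text) caption model; all sizes are
# illustrative assumptions, not the paper's reported hyperparameters.
from tensorflow.keras.layers import (Input, Dense, Dropout, Embedding,
                                     LSTM, Bidirectional, add)
from tensorflow.keras.models import Model

vocab_size = 5000    # hypothetical vocabulary size
max_length = 30      # hypothetical maximum caption length
feature_dim = 4096   # VGG16 fc2 feature size (2048 for Inception v3)

# Image branch: pre-extracted CNN features projected to the decoder width.
image_input = Input(shape=(feature_dim,))
img = Dropout(0.5)(image_input)
img = Dense(256, activation='relu')(img)

# Text branch: embedding followed by an LSTM. For the bi-directional
# variant, Bidirectional(LSTM(128)) keeps the same 256-d output.
text_input = Input(shape=(max_length,))
txt = Embedding(vocab_size, 256, mask_zero=True)(text_input)
txt = Dropout(0.5)(txt)
txt = LSTM(256)(txt)

# Merge the two branches and predict the next word of the caption.
merged = add([img, txt])
merged = Dense(256, activation='relu')(merged)
output = Dense(vocab_size, activation='softmax')(merged)

model = Model(inputs=[image_input, text_input], outputs=output)
model.compile(loss='categorical_crossentropy', optimizer='adam')
```

In models of this kind, inference typically proceeds word by word: the image features and the caption generated so far are fed back into the model to predict the next word, stopping at an end-of-sequence token or the maximum caption length.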