Content-Based Image Search in OpenStack Swift
Abstract
The OpenStack Object Store, also known as Swift, is cloud storage software. Swift
is optimized for durability, availability, and concurrency across the entire data set.
However, Swift lacks a proper mechanism for users and administrators to
search inside the object storage without relying on the entire OpenStack infrastructure. In
this paper, we propose a content-based image model for Swift, which enables us
to extract additional information from images and store it in an Elasticsearch
database, allowing us to search for the desired data based on its contents. This
novel approach works in two parallel stages. First, the image being uploaded
is sent to our trained model for object detection. Second, the extracted information is
sent to Elasticsearch, which in turn enables searching based on the
contents of the uploaded images. Since the accuracy of the search depends entirely on
the accuracy of the object detection model, we trained our model on the MS
COCO dataset. Lastly, we upload these images in various segments to evaluate
the efficacy of our model, both in real-life small and medium-sized Swift object
stores and as a user-centered content-based image retrieval system backed by a
text-based database.