UAV-assisted cooperative caching on the network edge using multi-agent actor-critic reinforcement learning
Abstract
In recent times, multi-access edge computing (MEC) has been introduced to assist
cloud servers by bringing computation closer to the edge. It is a well-established
approach for mitigating the high latency users face when retrieving content from
distant data centers. Coping with this latency while simultaneously improving users'
QoS remains a challenge, which can be addressed through caching at edge nodes.
However, deciding where to cache and what to cache so that a higher cache hit rate
is achieved poses another significant issue, which is addressed in this
research. In this paper, we approach the problem of dynamic caching together
with the selection of the edge node that yields a better cache hit rate. We also
propose the use of UAVs as aerial base stations (BSs) to assist during peak hours, when
a ground base station alone cannot support the surge in user requests. The paper
further elaborates on the optimal relocation of UAVs to effectively support user mobility,
with each UAV serving a cluster of users obtained through the K-means clustering
algorithm. In addition, to maximize the cache hit ratio, we propose a cooperative deep
reinforcement learning algorithm that increases the global cache hit ratio and also
allocates storage efficiently. Our simulations demonstrate UAV relocation based
on user mobility patterns and a higher global cache hit ratio achieved by the
proposed multi-agent actor-critic algorithm. In this paper, emphasis is placed on
how to cache and where to cache based on the cooperation of the UAVs and the GBS, which
opens doors for further research.
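As a brief illustration of the user-clustering step mentioned above, the following Python sketch groups user positions with K-means and places each UAV at a cluster centroid. It is a minimal sketch under assumptions not stated in the abstract (2-D user coordinates, a fixed number of UAVs, and scikit-learn's KMeans); the paper's actual relocation scheme may differ.

# Minimal sketch (not the paper's exact method): cluster mobile users with
# K-means and place one UAV at each cluster centroid. Assumes 2-D user
# coordinates, a fixed number of UAVs, and scikit-learn's KMeans.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical user positions (x, y) in metres within a 1 km x 1 km area.
user_positions = rng.uniform(0, 1000, size=(200, 2))

NUM_UAVS = 4  # assumed number of aerial base stations

# Cluster users; each cluster is served by one UAV.
kmeans = KMeans(n_clusters=NUM_UAVS, n_init=10, random_state=0)
labels = kmeans.fit_predict(user_positions)

# Candidate UAV hover positions: the cluster centroids.
uav_positions = kmeans.cluster_centers_

for k, pos in enumerate(uav_positions):
    n_users = int(np.sum(labels == k))
    print(f"UAV {k}: hover at ({pos[0]:.1f}, {pos[1]:.1f}) m, serving {n_users} users")

Re-running the clustering as user positions change over time gives updated centroids, which can serve as candidate relocation targets for the UAVs.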