SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Liu L, Yang J. Comput. Ind. Eng. 2023; 184: e109564.

Copyright

(Copyright © 2023, Elsevier Publishing)

DOI

10.1016/j.cie.2023.109564

PMID

unavailable

Abstract

Unmanned aerial vehicle (UAV)-aided continuous emergency communications have recently emerged as a key solution for providing data transmission in disaster areas, thanks to their flexible deployment and high mobility. In practice, owing to limited onboard energy and state deterioration, UAVs require recharging and maintenance. However, existing research focuses mainly on UAV deployment and rarely studies policies for their operations and maintenance. To ensure the continuous and reliable execution of communication tasks, this work proposes a dynamic operations and maintenance policy that assigns tasks to UAVs and determines their maintenance activities. First, a dynamic operations and maintenance policy composed of a task assignment policy and a maintenance policy is constructed. Next, the joint dynamic operations and maintenance optimization problem is formulated as a Markov decision process (MDP) to optimize the performance of the UAV swarm, including coverage, fairness, and operations and maintenance cost. Then, a deep reinforcement learning approach is tailored to solve the proposed MDP, in which repeated states are eliminated by state preprocessing and an action mask method is used to satisfy operational constraints. Finally, the proposed approach is evaluated through its application to the operations and maintenance of a UAV swarm for continuous emergency communication.
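
The action mask method mentioned in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical Python example, not the authors' implementation: infeasible actions (for instance, assigning a communication task to a UAV that is docked for maintenance) have their policy logits set to negative infinity before the softmax, so they receive exactly zero probability. All names, shapes, and the feasibility scenario here are illustrative assumptions.

import numpy as np

def masked_softmax(logits: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Return action probabilities with infeasible actions zeroed out.

    logits : raw policy-network outputs, shape (n_actions,)
    mask   : 1 for feasible actions, 0 for infeasible, shape (n_actions,)
    """
    masked = np.where(mask.astype(bool), logits, -np.inf)  # block invalid actions
    masked -= masked.max()                                  # numerical stability
    exp = np.exp(masked)
    return exp / exp.sum()

# Example: 4 candidate actions for one UAV; action 2 is infeasible
# (hypothetically, that UAV is docked for maintenance).
logits = np.array([1.2, 0.3, 2.5, -0.4])
mask = np.array([1, 1, 0, 1])
probs = masked_softmax(logits, mask)
print(probs)  # the probability of action 2 is exactly 0

Because the mask is applied to the logits rather than the sampled action, the constraint holds both during training and at deployment, which is the usual motivation for masking over penalty-based alternatives.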

Language: en

Keywords

Continuous emergency communication; Deep reinforcement learning; Dynamic operations and maintenance; Markov decision process; Unmanned aerial vehicle swarm
