SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Rosayyan P, Paul J, Subramaniam S, Ganesan SI. Int. J. Intell. Transp. Syst. Res. 2023; 21(1): 48-62.

Copyright

(Copyright © 2023, Holtzbrinck Springer Nature Publishing Group)

DOI

10.1007/s13177-022-00334-0

PMID

unavailable

Abstract

This paper proposes a reinforcement learning-based collaborative multi-agent actor-critic scheme (RL-CMAS) under an edge computing architecture for emergency vehicle preemption. RL-CMAS deploys a parallel training process on the cloud side to build knowledge and accelerate learning. A message-priority model and a message-offloading strategy were also developed. Simulation results show that the proposed RL-CMAS remains efficient even on complex data. Finally, the method was compared with three benchmark methods: a regular scheduling algorithm, Alameddine's DTOS algorithm, and an independent multi-agent actor-critic. The proposed method outperformed all three benchmarks, reducing message processing delay by 14.22%, reducing total delay by 18.21%, and increasing the message delivery success ratio by 8.86%.
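The abstract does not specify the RL-CMAS network architecture, reward design, or edge/cloud partitioning, so the following is only a minimal sketch of the underlying actor-critic update pattern on a hypothetical single-state, two-action problem (e.g. a toy "offload or not" decision). All names, rewards, and hyperparameters here are illustrative assumptions, not the paper's method.

```python
import math
import random

def softmax(prefs):
    # Convert action preferences into a probability distribution.
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def train_actor_critic(steps=2000, alpha=0.1, beta=0.1, seed=0):
    """Toy single-state actor-critic (illustrative only).

    The actor keeps softmax action preferences; the critic keeps a
    single state-value estimate. Each step: sample an action, observe
    a reward, update the critic from the TD error, then nudge the
    actor's preferences along the policy gradient scaled by that error.
    """
    rng = random.Random(seed)
    prefs = [0.0, 0.0]   # actor: preferences for actions 0 and 1
    value = 0.0          # critic: value of the single state
    for _ in range(steps):
        probs = softmax(prefs)
        action = 0 if rng.random() < probs[0] else 1
        # Hypothetical reward: action 1 (say, "offload to edge") pays off.
        reward = 1.0 if action == 1 else 0.0
        td_error = reward - value          # one state, so no bootstrap term
        value += beta * td_error           # critic update
        for i in range(2):                 # actor: policy-gradient step
            grad = (1.0 if i == action else 0.0) - probs[i]
            prefs[i] += alpha * td_error * grad
    return softmax(prefs)

probs = train_actor_critic()
```

After training, the policy concentrates on the higher-reward action; the "collaborative" and "independent" multi-agent variants compared in the paper differ in whether such agents share critic information, which this single-agent sketch does not model.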


Language: en

Keywords

Actor-critic network; Edge computing technique; Emergency vehicle flow control; Emergency vehicle preemption; Reinforcement learning
