SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Shen Y, Peng J, Kang C, Zhang S, Yi F, Wan J, Wang X. Int. J. Veh. Des. 2023; 92(2/3/4): 336-356.

Copyright

(Copyright © 2023, Inderscience Publishers)

DOI

10.1504/IJVD.2023.134751

PMID

unavailable

Abstract

In this paper, the lane-change process of a vehicle is divided into two stages: lane-change decision and lane-change movement, and a double-layer deep reinforcement learning architecture is put forward. The upper layer uses a deep Q-network (DQN) to control the lane-change decision and sends the lane-changing information to the lower layer, where a deep deterministic policy gradient (DDPG) network controls the vehicle trajectory. After the lane change, the collaborative optimisation of the DQN is completed through feedback of the vehicle position information before and after the lane change. The results show that the proposed two-layer deep reinforcement learning architecture can increase the average velocity of the agent vehicle by 2-5% and reduce the average lateral speed and lateral acceleration by 12.5% and 12.2%, respectively, during the lane-changing process. Compared with no collaborative optimisation, the effectively co-optimised two-layer architecture improves the optimal lane-change timing by 34.64%.


Language: en
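
The abstract describes a hierarchical control loop: an upper-layer DQN makes the discrete lane-change decision and passes it down to a lower-layer DDPG controller that produces the continuous trajectory commands. The following is a minimal sketch of that decision/control flow in PyTorch; the state features, network sizes, one-hot decision encoding, and the feedback step are illustrative assumptions, not the authors' implementation.

    # Sketch of the two-layer DQN/DDPG flow from the abstract.
    # All dimensions and network shapes below are assumptions.
    import torch
    import torch.nn as nn

    STATE_DIM = 8        # assumed: ego + surrounding-vehicle features
    N_DECISIONS = 3      # keep lane / change left / change right
    CTRL_DIM = 2         # continuous steering and acceleration

    class DQN(nn.Module):
        """Upper layer: Q-values over discrete lane-change decisions."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, N_DECISIONS))

        def forward(self, s):
            return self.net(s)

    class DDPGActor(nn.Module):
        """Lower layer: continuous trajectory control, conditioned on
        the lane-change decision passed down from the DQN."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM + N_DECISIONS, 64), nn.ReLU(),
                nn.Linear(64, CTRL_DIM), nn.Tanh())  # bounded controls

        def forward(self, s, decision_onehot):
            return self.net(torch.cat([s, decision_onehot], dim=-1))

    dqn, actor = DQN(), DDPGActor()

    def step(state):
        """One decision/control cycle of the two-layer architecture."""
        s = torch.as_tensor(state, dtype=torch.float32)
        # 1) Upper layer: lane-change decision (greedy here;
        #    epsilon-greedy exploration would be used in training).
        decision = dqn(s).argmax().item()
        onehot = torch.eye(N_DECISIONS)[decision]
        # 2) Lower layer: DDPG actor turns the decision into
        #    steering/acceleration commands.
        control = actor(s, onehot)
        return decision, control.detach().numpy()

    # 3) Co-optimisation feedback (sketch): after the manoeuvre, the
    #    vehicle position before vs. after the lane change would be
    #    folded back into the DQN's reward, closing the loop the
    #    abstract describes.
    decision, control = step([0.0] * STATE_DIM)
    print(decision, control)

Conditioning the DDPG actor on a one-hot copy of the upper layer's decision is just one simple way to pass the lane-changing information downward; the paper does not specify the interface, so this detail is an assumption.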
