SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Liu H, Wang T, Li W, Ye X, Yuan Q. Accid. Anal. Prev. 2024; 198: e107476.

Copyright

(Copyright © 2024, Elsevier Publishing)

DOI

10.1016/j.aap.2024.107476

PMID

38325183

Abstract

Lane-changing (LC) intention recognition models have seen limited real-world application, in part because two-lane two-way road environments are under-researched. This study constructs a high-fidelity simulated two-lane two-way road and develops a Transformer model that accurately recognizes LC intention. We propose a novel LC labelling algorithm that combines vehicle dynamics and eye-tracking (VEL) and compare it against traditional time-window labelling (TWL). We find that LC recognition accuracy improves further when oncoming-vehicle features are included in the LC dataset. Compared with GRU, LSTM, and CNN + LSTM models, the Transformer achieves state-of-the-art performance, recognizing LC intention 4.59 s in advance with 92.6% accuracy under the VEL labelling method. To interpret the Transformer's 'black box', we apply the LIME (Local Interpretable Model-agnostic Explanations) method, which reveals that during LC events the model focuses on eye-tracking features and on the LC vehicle's interactions with preceding and oncoming traffic. This research demonstrates that modelling additional road users and driver gaze in LC intention recognition yields significant improvements in model performance and in time-to-collision warning capability on two-lane two-way roads.
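
For readers who want a concrete picture of the modelling approach summarized above, the sketch below shows a minimal Transformer-encoder classifier over multivariate driving time-series windows (vehicle dynamics, eye-tracking, and oncoming-vehicle features). It is an illustrative assumption about the general architecture, not the authors' implementation; the feature count, window length, class labels, and hyperparameters are placeholders.

# Minimal sketch (not the authors' code): a Transformer encoder that classifies
# fixed-length multivariate driving windows into lane-change intention classes.
# Feature count, window length, class set, and hyperparameters are assumptions.
import torch
import torch.nn as nn


class LCIntentTransformer(nn.Module):
    def __init__(self, n_features=12, d_model=64, n_heads=4,
                 n_layers=2, n_classes=3, max_len=200):
        super().__init__()
        # Project per-timestep features (vehicle dynamics, eye-tracking,
        # preceding/oncoming-vehicle measures) into the model dimension.
        self.input_proj = nn.Linear(n_features, d_model)
        # Learned positional embedding so the encoder sees temporal order.
        self.pos_embed = nn.Parameter(torch.zeros(1, max_len, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Example label set: {lane keep, left LC, right LC}.
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, time, n_features), e.g. a few seconds of driving data.
        h = self.input_proj(x) + self.pos_embed[:, : x.size(1)]
        h = self.encoder(h)
        # Mean-pool over time, then classify the whole window.
        return self.head(h.mean(dim=1))


# Usage example: 8 windows of 50 timesteps with 12 features each.
model = LCIntentTransformer()
logits = model(torch.randn(8, 50, 12))
print(logits.shape)  # torch.Size([8, 3])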


Language: en

Keywords

Eye-tracking features; Lane-change intention recognition; LIME; Multi-head attention mechanism; Transformer
