SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Behera A, Wharton Z, Keidel A, Debnath B. IEEE Trans. Intell. Transp. Syst. 2022; 23(3): 2874-2881.

Copyright

(Copyright © 2022, IEEE (Institute of Electrical and Electronics Engineers))

DOI

10.1109/TITS.2020.3027240

PMID

unavailable

Abstract

Automatic recognition and prediction of in-vehicle human activities have a significant impact on the next generation of driver assistance and intelligent autonomous vehicles. In this article, we present a novel single-image driver action recognition algorithm inspired by human perception, which often focuses selectively on parts of an image to acquire information at specific places distinctive to a given task. Unlike existing approaches, we argue that human activity is a combination of pose and semantic contextual cues. Specifically, we model the configuration of body joints and their interaction with objects as pairwise relations to capture structural information. Our body-pose and body-object interaction representation is built to be semantically rich and meaningful, and it remains highly discriminative even when coupled with a basic linear SVM classifier. We also propose a Multi-stream Deep Fusion Network (MDFN) for combining high-level semantics with CNN features. Our experimental results demonstrate that the proposed approach significantly improves drivers' action recognition accuracy on two exacting datasets.
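The pairwise body-joint/object relations described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's exact descriptor: the function name, the 2-D keypoint format, and the choice of distance-plus-orientation encoding are all assumptions introduced here for clarity.

```python
import numpy as np
from itertools import combinations

def pairwise_relation_features(joints, obj):
    """Illustrative descriptor (not the paper's exact formulation):
    encode every joint-joint and joint-object pair as a Euclidean
    distance plus an orientation angle, capturing structural layout."""
    points = list(joints) + [obj]          # treat the object as one more keypoint
    feats = []
    for (xa, ya), (xb, yb) in combinations(points, 2):
        dx, dy = xb - xa, yb - ya
        feats.append(np.hypot(dx, dy))     # pairwise distance
        feats.append(np.arctan2(dy, dx))   # pairwise orientation
    return np.asarray(feats)

# 5 body joints plus 1 held object -> C(6, 2) = 15 pairs -> 30 features
joints = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 2.0)]
features = pairwise_relation_features(joints, obj=(3.0, 3.0))
print(features.shape)  # (30,)
```

A fixed-length vector of this kind could then be fed to an off-the-shelf linear classifier such as a linear SVM, consistent with the abstract's claim that the representation stays discriminative even with a basic linear SVM.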


Language: en

Keywords

Activity recognition; body pose and contextual descriptor; computational modeling; deep learning; feature extraction; image recognition; in-vehicle activity monitoring; intelligent vehicles; monitoring; neural network-based fusion; semantics; transfer learning; vehicles
