SAFETYLIT WEEKLY UPDATE

Journal Article

Citation

Praharsha CH, Poulose A. Comput. Biol. Med. 2024; 180: e108945.

Copyright

(Copyright © 2024, Elsevier Publishing)

DOI

10.1016/j.compbiomed.2024.108945

PMID

39094328

Abstract

Driver monitoring systems (DMS) are crucial to autonomous driving systems (ADS) wherever driver and vehicle safety are a concern. In a DMS, the factor with the greatest influence on driver/vehicle safety is the classification of driver distractions or activities. These distractions or activities convey meaningful information to the ADS, enhancing driver/vehicle safety during real-time driving. Classifying driver distraction or activity is challenging because of the unpredictable nature of human driving. This paper proposes a deep learning architecture that embeds a convolutional block attention module into the Visual Geometry Group network (CBAM VGG16) to improve the classification of driver distractions. The proposed CBAM VGG16 architecture is a hybrid network that combines CBAM layers with conventional VGG16 layers. Adding a CBAM layer to the traditional VGG16 architecture enhances the model's feature extraction capacity and improves driver distraction classification results. To validate the performance of the proposed CBAM VGG16 architecture, we tested the model on camera 1 and camera 2 images from the American University in Cairo (AUC) distracted driver dataset version 2 (AUCD2). Our experimental results show that the proposed CBAM VGG16 architecture achieved 98.65% classification accuracy on the AUCD2 camera 1 images and 97.85% on the camera 2 images. We also compared the driver distraction classification performance of CBAM VGG16 against the DenseNet121, Xception, MobileNetV2, InceptionV3, and VGG16 architectures in terms of accuracy, loss, precision, F1 score, recall, and confusion matrices. The results indicate that the proposed CBAM VGG16 improves classification accuracy by 3.7% on AUCD2 camera 1 images and by 5% on camera 2 images compared to the conventional VGG16 classification model.
We also tested the proposed architecture with different hyperparameter values and identified the optimal values for driver distraction classification. The contribution of data augmentation techniques to data diversity in the CBAM VGG16 model is also validated with respect to overfitting. Our study further includes Grad-CAM visualizations of the proposed CBAM VGG16 architecture; the results show that the VGG16 architecture without CBAM layers is less attentive to the essential regions of the driver distraction images. Furthermore, we evaluated the classification performance of the proposed CBAM VGG16 architecture with respect to the number of model parameters, model size, various input image resolutions, cross-validation, Bayesian search optimization, and different numbers of CBAM layers. The results indicate that the CBAM layers in our proposed architecture enhance the classification performance of the conventional VGG16 architecture and outperform state-of-the-art deep learning architectures.
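To make the channel-then-spatial refinement that a CBAM layer applies concrete, here is a minimal NumPy sketch of the attention computation described in the abstract. This is illustrative only: the dimensions, random weights, and the simple pooled average standing in for CBAM's learned 7x7 spatial convolution are all assumptions, not the paper's trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    # feat: (H, W, C). Global average- and max-pool over the spatial
    # dimensions, pass both vectors through a shared two-layer MLP
    # (weights w1, w2), sum, and squash to per-channel weights in (0, 1).
    avg = feat.mean(axis=(0, 1))                  # (C,)
    mx = feat.max(axis=(0, 1))                    # (C,)
    mlp = lambda v: np.maximum(v @ w1, 0.0) @ w2  # ReLU hidden layer
    return sigmoid(mlp(avg) + mlp(mx))            # (C,)

def spatial_attention(feat):
    # Average- and max-pool over the channel axis; a simple mean of the
    # two maps stands in for the learned 7x7 convolution (assumption).
    avg = feat.mean(axis=2, keepdims=True)        # (H, W, 1)
    mx = feat.max(axis=2, keepdims=True)          # (H, W, 1)
    return sigmoid((avg + mx) / 2.0)              # (H, W, 1)

def cbam(feat, w1, w2):
    # Sequential refinement: channel attention first, then spatial.
    feat = feat * channel_attention(feat, w1, w2)  # broadcast over (H, W)
    return feat * spatial_attention(feat)          # broadcast over C

# Hypothetical feature map and MLP weights (reduction ratio r = 2).
rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.standard_normal((4, 4, C))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
y = cbam(x, w1, w2)
print(y.shape)  # same shape as the input feature map: (4, 4, 8)
```

Because both attention maps are sigmoid-squashed, the refined feature map has the same shape as the input and every activation is scaled toward zero by a factor in (0, 1), which is what lets the module be dropped between existing VGG16 blocks without changing tensor shapes.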


Language: en

Keywords

Deep learning; Attention module; Autonomous driving system (ADS); Autonomous vehicles; Convolutional neural network (CNN); Driver distraction classification; Driver monitoring systems; Image classifications; Visual Geometry Group (VGG)
