
Journal Article


Hsu WY, Lin WY. IEEE Trans. Image Process. 2020; ePub(ePub): ePub.


(Copyright © 2020, IEEE (Institute of Electrical and Electronics Engineers))






Current deep learning methods seldom account for small pedestrian-to-image ratios or large differences in the aspect ratios of input images, which results in poor pedestrian detection performance. This study proposes ratio-and-scale-aware YOLO (RSA-YOLO) to address these problems. The method proceeds as follows. First, ratio-aware mechanisms dynamically adjust the input-layer width and height hyperparameters of YOLOv3, resolving the problem of large aspect-ratio differences. Second, intelligent splits automatically and appropriately divide each original image into two local images, and ratio-aware YOLO (RA-YOLO) is applied iteratively to both. Because RA-YOLO produces low- and high-resolution pedestrian detection information from the original and local images, respectively, this study also proposes new scale-aware mechanisms that use multiresolution fusion to reduce the misdetection of remarkably small pedestrians. The experimental results indicate that the proposed method performs favorably on images with extremely small objects and on those with large aspect-ratio differences. Compared with the original YOLOs (i.e., YOLOv2 and YOLOv3) and several state-of-the-art approaches, the proposed method demonstrates superior average precision, intersection over union, and the lowest log-average miss rate on the VOC 2012 comp4, INRIA, and ETH databases.
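The abstract's ratio-aware step adjusts the network's input width and height to match each image's aspect ratio. The paper's exact mechanism is not given here; the following is a minimal sketch of one plausible interpretation, assuming the goal is to pick input dimensions that preserve the image's aspect ratio while remaining multiples of YOLOv3's downsampling stride of 32 (the function name and the base size of 416 are illustrative assumptions, not the authors' implementation).

```python
def ratio_aware_input_size(img_w, img_h, base=416, stride=32):
    """Hypothetical ratio-aware resize: choose network input (width, height)
    that preserve the image's aspect ratio, with the longer side fixed at
    `base` and both sides rounded to multiples of the YOLOv3 stride (32)."""
    aspect = img_w / img_h
    if aspect >= 1:  # landscape or square: fix width, scale height
        w = base
        h = max(stride, int(round(base / aspect / stride)) * stride)
    else:            # portrait: fix height, scale width
        h = base
        w = max(stride, int(round(base * aspect / stride)) * stride)
    return w, h

# A wide 1920x1080 frame keeps its shape instead of being squashed square:
# ratio_aware_input_size(1920, 1080) -> (416, 224)
```

Rounding to stride multiples matters because YOLOv3 downsamples by a factor of 32, so any input dimension must be divisible by 32 for the feature-map grid to align.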

Language: en

