SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Zhang Y, Zhang H, Wang G, Yang J, Hwang JN. IEEE Trans. Vehicular Tech. 2020; 69(1): 151-162.

Copyright

(Copyright © 2020, IEEE (Institute of Electrical and Electronics Engineers))

DOI

10.1109/TVT.2019.2954876

PMID

unavailable

Abstract

Simultaneous localization and mapping (SLAM) has been well investigated with the rising interest in autonomous driving. Visual odometry (VO) is a variant of SLAM without global consistency that estimates the position and orientation of a moving object by analyzing the image sequences captured by its cameras. In real-world applications, however, the VO process inevitably suffers from drift error due to frame-by-frame pose estimation, and the drift can be more severe for monocular VO than for stereo matching. By jointly refining the camera poses of several local keyframes and the coordinates of 3D map points triangulated from extracted features, bundle adjustment (BA) can mitigate the drift error, but only to some extent. To further improve performance, we introduce a traffic sign feature-based joint BA module to relieve the incrementally accumulated pose errors. The continuously extracted traffic sign features, with their standard size and planar structure, provide powerful additional constraints that improve VO estimation accuracy through BA. Our framework collaborates well with existing VO systems, e.g., ORB-SLAM2, and the traffic sign features can be replaced with features extracted from other size-known planar objects. Experimental results show that applying our traffic sign feature-based BA module improves vehicular localization accuracy compared with a state-of-the-art baseline VO method.
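
To make the idea concrete, the following is a minimal, hypothetical sketch of how a size-known, planar landmark such as a traffic sign can contribute extra residuals to a local bundle adjustment alongside the usual reprojection terms. It is not the authors' implementation (which builds on ORB-SLAM2); the intrinsics, the 0.60 m sign edge length, the rotation-vector pose parametrisation, and the tiny synthetic two-keyframe scene are illustrative assumptions.

# Minimal sketch (assumed setup, not the paper's code): local BA with extra
# size and planarity residuals for a detected traffic sign.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])          # assumed pinhole intrinsics
SIGN_SIZE_M = 0.60                       # assumed standard edge length of the sign

def project(pose6, pts3d):
    """Project Nx3 world points with a camera pose given as (rotation vector, translation)."""
    rot = R.from_rotvec(pose6[:3]).as_matrix()
    cam = pts3d @ rot.T + pose6[3:]      # X_cam = R X_world + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def residuals(params, n_cams, n_pts, obs, sign_ids):
    """Reprojection residuals plus size/planarity residuals for the sign corners."""
    poses = params[:n_cams * 6].reshape(n_cams, 6)
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    res = []
    for cam_id, pt_id, uv in obs:        # standard BA reprojection term
        res.extend(project(poses[cam_id], pts[pt_id][None, :])[0] - uv)
    corners = pts[sign_ids]              # 4 ordered corners of the traffic sign
    for i in range(4):                   # each edge must match the known physical size
        res.append(np.linalg.norm(corners[(i + 1) % 4] - corners[i]) - SIGN_SIZE_M)
    normal = np.cross(corners[1] - corners[0], corners[2] - corners[0])
    normal /= np.linalg.norm(normal)
    res.append(np.dot(corners[3] - corners[0], normal))  # planarity of the 4th corner
    return np.asarray(res)

# Tiny synthetic scene: a square sign 5 m ahead of two keyframes plus background points.
sign_gt = np.array([[0.0, 0.0, 5.0], [SIGN_SIZE_M, 0.0, 5.0],
                    [SIGN_SIZE_M, SIGN_SIZE_M, 5.0], [0.0, SIGN_SIZE_M, 5.0]])
extra_gt = np.array([[-1.0, 0.5, 6.0], [1.5, -0.5, 7.0], [0.8, 1.2, 6.5]])
pts_gt = np.vstack([sign_gt, extra_gt])
poses_gt = np.array([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],     # keyframe 0
                     [0.0, 0.0, 0.0, -0.4, 0.0, 0.0]])   # keyframe 1 (translated)
obs = [(c, p, project(poses_gt[c], pts_gt[p][None, :])[0])
       for c in range(2) for p in range(len(pts_gt))]

rng = np.random.default_rng(0)
x0 = np.concatenate([poses_gt.ravel(), pts_gt.ravel()])
x0 += rng.normal(scale=0.02, size=x0.shape)              # simulate accumulated drift

sol = least_squares(residuals, x0, args=(2, len(pts_gt), obs, [0, 1, 2, 3]))
refined = sol.x[12:].reshape(len(pts_gt), 3)[:4]
print("refined sign edge lengths (m):",
      [round(float(np.linalg.norm(refined[(i + 1) % 4] - refined[i])), 3) for i in range(4)])

The edge-length and planarity residuals pin the metric scale that monocular reprojection terms alone cannot observe, which is one way a landmark of known standard size can help limit accumulated drift.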


Language: en

Keywords

3D map points; associated cameras; autonomous driving; autonomous vehicles; baseline VO method; bundle adjustment (BA); camera pose refinement; cameras; continuous traffic sign feature extraction; distance measurement; drift error problem; feature extraction; frame-by-frame pose estimation; image matching; improved vehicular localization accuracy; incrementally accumulated pose errors; local keyframes; monocular visual odometry (VO); motion estimation; moving object; object detection; ORB-SLAM2; orientation estimation; planar information; pose estimation; position estimation; road traffic control; roads; robot vision; simultaneous localization and mapping; size-known planar objects; SLAM (robots); standard size; stereo image processing; stereo matching; traffic sign; traffic sign detection; traffic sign feature-based joint BA module; visualization; VO estimation accuracy; VO process
