SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Zhang Z, Qin J, Wang S, Kang Y, Liu Q. J. Intell. Robotic Syst. 2022; 105(1): e4.

Copyright

(Copyright © 2022, Holtzbrinck Springer Nature Publishing Group)

DOI

10.1007/s10846-022-01606-3

PMID

unavailable

Abstract

Drivable area understanding is an essential problem in the field of autonomous robot navigation. Mobile robots and other autonomous vehicles need to perceive elements of their surrounding environment, such as obstacles, lanes, and free space, to ensure safety. Many recent works have made great progress by building on breakthroughs in deep learning. However, those methods address the detection tasks separately, which in some cases leads to redundant use of computational resources. We therefore present a unified lane and obstacle detection network, ULODNet, which detects lanes and obstacles jointly and then delineates the drivable area for mobile robots and other autonomous vehicles. To better support the training of ULODNet, we also create a new dataset, the CULane-ULOD Dataset, based on the widely used CULane Dataset; the new dataset contains both lane labels and obstacle labels, which the original dataset lacks. Finally, to construct an integrated autonomous driving scheme, an area-intersection paradigm is introduced that generates driving commands by calculating the proportion of obstacle area within the drivable region. Comparison experiments verify the efficiency and effectiveness of the new algorithm.
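
A minimal sketch of the area-intersection idea, assuming the network outputs binary masks for the drivable region and for obstacles; the function names, the 0.2 threshold, and the stop/go command mapping are illustrative assumptions, not the paper's implementation:

    import numpy as np

    def obstacle_area_proportion(drivable_mask, obstacle_mask):
        # Fraction of the drivable region occupied by obstacles.
        # Both inputs are boolean H x W masks, e.g. produced by a
        # segmentation network such as ULODNet.
        drivable_pixels = drivable_mask.sum()
        if drivable_pixels == 0:
            return 1.0  # no drivable area visible: treat as fully blocked
        overlap = np.logical_and(drivable_mask, obstacle_mask).sum()
        return overlap / drivable_pixels

    def driving_command(proportion, threshold=0.2):
        # Map the obstacle proportion to a coarse command; the threshold
        # is a placeholder value, not one taken from the paper.
        return "stop" if proportion >= threshold else "go"

    # Toy 4x4 example: obstacles cover half of the drivable area.
    drivable = np.array([[1, 1, 0, 0]] * 4, dtype=bool)
    obstacle = np.array([[1, 0, 0, 0]] * 4, dtype=bool)
    p = obstacle_area_proportion(drivable, obstacle)
    print(p, driving_command(p))  # 0.5 stop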


Language: en

Keywords

Autonomous navigation; Computer vision; Environment perception; Mobile robot
