SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Minoura H, Yonetani R, Nishimura M, Ushiku Y. IEEE Robot. Autom. Lett. 2021; 6(2): 287-294.

Copyright

(Copyright © 2021, Institute of Electrical and Electronics Engineers)

DOI

10.1109/LRA.2020.3043169

PMID

unavailable

Abstract

Forecasting human activities observed in videos is a long-standing challenge in computer vision and robotics, and it benefits real-world applications such as mobile robot navigation and drone landing. In this work, we present a new forecasting task called crowd density forecasting. Given a video of a crowd captured by a surveillance camera, our goal is to predict how the density of the crowd will change in unseen future frames. To address this task, we developed patch-based density forecasting networks (PDFNs), which directly forecast crowd density maps of future frames instead of the trajectory of each moving person in the crowd. The PDFNs represent crowd density maps as spatially or spatiotemporally overlapping patches and learn the simpler density dynamics of the few people within each patch. Doing so allows us to deal efficiently with the diverse and complex density dynamics that arise when input videos contain a variable number of crowds moving independently. Experimental results on several public surveillance-video datasets demonstrate the effectiveness of our approach compared with state-of-the-art forecasting methods.
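The core idea above (decompose a density map into overlapping patches, forecast each patch, then recombine) can be sketched in a few lines. This is an illustrative toy in Python/NumPy, not the authors' PDFN implementation: the helper names `extract_patches` and `merge_patches`, the patch/stride values, and the "carry the last frame forward" forecaster are all assumptions for demonstration; the real PDFN replaces that step with a learned network.

```python
import numpy as np

def extract_patches(density_map, patch, stride):
    """Split a 2-D crowd density map into spatially overlapping patches."""
    h, w = density_map.shape
    patches, coords = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(density_map[y:y + patch, x:x + patch])
            coords.append((y, x))
    return np.stack(patches), coords

def merge_patches(patches, coords, shape, patch):
    """Recombine overlapping patch predictions by averaging the overlaps."""
    out = np.zeros(shape)
    count = np.zeros(shape)
    for p, (y, x) in zip(patches, coords):
        out[y:y + patch, x:x + patch] += p
        count[y:y + patch, x:x + patch] += 1
    return out / np.maximum(count, 1)

# Toy "forecast": a real PDFN would feed each patch sequence to a learned
# model; here we simply carry the last observed density patch forward.
density = np.random.rand(32, 32)          # last observed density map
patches, coords = extract_patches(density, patch=8, stride=4)
forecast = merge_patches(patches, coords, density.shape, patch=8)
```

Because each patch covers only a few people, the per-patch dynamics are far simpler than the whole scene's, which is what makes the decomposition attractive.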


Language: en

Keywords

Computer vision for transportation; deep learning for visual perception; estimation; forecasting; spatiotemporal phenomena; surveillance; surveillance robotic systems; trajectory; vehicle dynamics; videos
