
Journal Article


Li D, Zhang Z, Chen X, Huang K. IEEE Trans. Image Process. 2019; 28(4): 1575-1590.


(Copyright © 2019, IEEE (Institute of Electrical and Electronics Engineers))






Retrieving specific persons with various types of queries, e.g., a set of attributes or a portrait photo, has great application potential in large-scale intelligent surveillance systems. In this paper, we propose the Richly Annotated Pedestrian (RAP) dataset, which serves as a unified benchmark for both attribute-based and image-based person retrieval in real surveillance scenarios. Previous datasets typically suffer from three limitations: limited data scale and annotation types, heterogeneous data sources, and controlled scenarios. In contrast, RAP is a large-scale dataset containing 84,928 images annotated with 72 types of attributes, additional tags for viewpoint, occlusion, and body parts, and 2,589 person identities. It is collected in a real, uncontrolled scene and exhibits complex visual variations in the pedestrian samples due to changes in viewpoint, pedestrian posture, and clothing appearance. Toward a high-quality person retrieval benchmark, a number of state-of-the-art algorithms for pedestrian attribute recognition and person re-identification (ReID) are evaluated quantitatively on three tasks, i.e., attribute recognition, attribute-based person retrieval, and image-based person retrieval, and a new instance-based metric is proposed to measure the dependency among the predictions of multiple attributes. Finally, some open problems, e.g., joint feature learning for attribute recognition and ReID, and cross-day person ReID, are explored to show the challenges and future directions in person retrieval.
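The abstract contrasts the proposed instance-based metric with the usual practice of scoring each attribute independently: an instance-based metric evaluates the whole predicted attribute vector of each image at once, so correlated errors across attributes on the same person are reflected in the score. The paper's exact formulation is not reproduced here; the sketch below shows one common instance-based scheme (per-image precision/recall/F1 over sets of positive attributes, averaged over images), offered only as an illustration of the idea, with function and variable names chosen for this example.

```python
def instance_based_f1(gt, pred):
    """Instance-based precision, recall, and F1 for multi-attribute prediction.

    gt, pred: lists (one entry per image) of sets of positive attribute
    indices. This is an illustrative formulation, not necessarily the exact
    metric defined in the RAP paper.
    """
    precisions, recalls = [], []
    for g, p in zip(gt, pred):
        if not g and not p:
            # No positive attributes in either: count as a perfect match.
            precisions.append(1.0)
            recalls.append(1.0)
            continue
        inter = len(g & p)                       # correctly predicted positives
        precisions.append(inter / len(p) if p else 0.0)
        recalls.append(inter / len(g) if g else 0.0)
    prec = sum(precisions) / len(precisions)     # average over instances,
    rec = sum(recalls) / len(recalls)            # not over attributes
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

For example, with ground truth `[{0, 1}, {2}]` and predictions `[{0}, {2, 3}]`, the first image scores precision 1.0 and recall 0.5, the second precision 0.5 and recall 1.0, giving instance-averaged precision, recall, and F1 of 0.75 each.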

Language: en
