SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Xu G, Zhang Y, Zhang Q, Lin G, Wang J. Fire Safety J. 2017; 93: 53-59.

Copyright

(Copyright © 2017, Elsevier Publishing)

DOI

10.1016/j.firesaf.2017.08.004

PMID

unavailable

Abstract

In this paper, a deep domain adaptation based method for video smoke detection is proposed to extract a powerful feature representation of smoke. Because the available smoke image samples are limited in scale and diversity for deep CNN training, we systematically produced adequate synthetic smoke images with wide variation in smoke shape, background, and lighting conditions. Considering that the appearance gap (dataset bias) between synthetic and real smoke images significantly degrades the performance of the trained model on a test set composed entirely of real images, we build deep architectures based on domain adaptation to confuse the distributions of features extracted from synthetic and real smoke images. This approach expands the domain-invariant feature space for smoke image samples. With the feature distributions of synthetic and real smoke brought close together and separated from those of non-smoke images, the recognition rate of the trained model improves significantly compared with a model trained directly on a mixed dataset of synthetic and real images. Experimentally, several deep architectures with different design choices are applied to the smoke detector. The final framework achieves a satisfactory result on the test set. We believe that our method is robust and may offer a new approach to video smoke detection.


Language: en

Keywords

Deep architecture; Domain adaptation; Feature distribution; Synthetic smoke image; Video smoke detection
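
The abstract does not specify the exact network, so the following is only an illustrative sketch, assuming a DANN-style setup: a shared CNN feature extractor, a smoke/non-smoke classifier, and a domain discriminator trained through a gradient-reversal layer so that features from synthetic and real images become indistinguishable. The names (SmokeDANN, GradientReversal, train_step), the small backbone, and the hyperparameters are hypothetical and not taken from the paper.

# Illustrative sketch only (assumed DANN-style domain confusion,
# not the authors' exact architecture). PyTorch.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    # Identity in the forward pass; reverses (and scales) gradients on
    # backward, so the feature extractor learns to fool the domain head.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class SmokeDANN(nn.Module):
    def __init__(self, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        # Shared feature extractor (deliberately small for illustration).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Label predictor: smoke vs. non-smoke.
        self.classifier = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))
        # Domain discriminator: synthetic vs. real.
        self.domain_head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x):
        f = self.features(x)
        class_logits = self.classifier(f)
        # Gradient reversal "confuses" the two domains in feature space.
        domain_logits = self.domain_head(GradientReversal.apply(f, self.lambd))
        return class_logits, domain_logits


# One training step: class loss on labeled (largely synthetic) smoke images,
# domain loss on synthetic (domain 0) and unlabeled real (domain 1) images.
model = SmokeDANN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
cross_entropy = nn.CrossEntropyLoss()


def train_step(syn_imgs, syn_labels, real_imgs):
    optimizer.zero_grad()
    syn_cls, syn_dom = model(syn_imgs)
    _, real_dom = model(real_imgs)
    cls_loss = cross_entropy(syn_cls, syn_labels)
    dom_loss = (cross_entropy(syn_dom, torch.zeros(len(syn_imgs), dtype=torch.long))
                + cross_entropy(real_dom, torch.ones(len(real_imgs), dtype=torch.long)))
    (cls_loss + dom_loss).backward()
    optimizer.step()
    return cls_loss.item(), dom_loss.item()

In DANN-style training the reversal weight lambd is typically ramped up over the course of training so that the domain loss does not dominate early feature learning.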
