SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Cheng J, Liu L, Liu B, Zhou K, Da Q, Yang Y. Int. J. Intell. Syst. 2022; 37(11): 8968-8987.

Copyright

(Copyright © 2022, Hindawi / Wiley Periodicals)

DOI

10.1002/int.22976

PMID

unavailable

Abstract

Unsupervised domain adaptation aims to train a classification model on a labeled source domain for use on an unlabeled target domain. Because the data distributions of the two domains differ, the model often performs poorly on the target domain. Existing methods align the global features of the source and target domains and learn domain-invariant features to improve performance, but they ignore the difference between foreground and background features and do not consider the structural information of the foreground object in the image. We therefore propose a method called foreground object structure transfer (FOST). It avoids conflating the structural information of foreground and background features, enhances foreground features during source-to-target transfer, and uses a structural contrast loss to drive the domain alignment process. FOST relies on prior knowledge to distinguish foreground from background features and takes the structural information of the object into account, which makes the intra-class spatial distribution more compact and the inter-class spatial distribution more separated, improving both transferability and classification performance. Extensive experiments on various benchmarks under different domain adaptation settings show that FOST compares favorably against state-of-the-art domain adaptation methods, achieving accuracies of 95.3%, 91.3%, 76.6%, and 87.55% on the ImageCLEF-DA, Office-31, Office-Home, and VisDA-2017 data sets, respectively.
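
The structural contrast loss is only described at a high level in the abstract. As a rough illustration (not the authors' implementation), the sketch below shows a class-conditional contrastive term of the general kind described: same-class features are pulled together and different-class features pushed apart, which encourages compact intra-class and separated inter-class distributions. The function name, the temperature value, and the use of source labels or target pseudo-labels are assumptions.

    import torch
    import torch.nn.functional as F

    def class_contrastive_loss(features, labels, temperature=0.1):
        # features: (N, D) embeddings; labels: (N,) class ids (source labels or
        # target pseudo-labels). Illustrative sketch only, not the FOST loss.
        feats = F.normalize(features, dim=1)              # compare in cosine space
        sim = feats @ feats.t() / temperature             # (N, N) similarity matrix
        n = feats.size(0)
        self_mask = torch.eye(n, dtype=torch.bool, device=feats.device)
        sim = sim.masked_fill(self_mask, float("-inf"))   # exclude self-pairs
        pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
        # log-probability of each pair under a softmax over the anchor's row
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
        pos_count = pos_mask.sum(dim=1).clamp(min=1)
        per_anchor = -pos_log_prob / pos_count
        # average only over anchors that have at least one same-class positive
        return per_anchor[pos_mask.any(dim=1)].mean()

In a domain adaptation setting, such a term would typically be computed on foreground-enhanced features from a mixed source/target batch, with target samples assigned pseudo-labels; those details are assumptions here, since the abstract does not specify them.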


Language: en

Keywords

contrastive learning; object structure; unsupervised domain adaptation
