SAFETYLIT WEEKLY UPDATE

Journal Article

Citation

Li S, Yao T, Li S, Yan L. Int. J. Intell. Syst. 2022; 37(12): 12235-12251.

Copyright

(Copyright © 2022, Hindawi / Wiley Periodicals)

DOI

10.1002/int.23084

PMID

unavailable

Abstract

The increasing popularity of social media facilitates the propagation of fake news, posing a major threat to government and journalism and making the detection of fake news on social media an urgent task. In general, multimodal methods achieve better performance because the different modalities complement one another. However, most of them simply concatenate the features from different modalities and therefore fail to preserve the mutual information carried by the common features. To address this issue, a novel framework named the semantic-enhanced multimodal fusion network is proposed for fake news detection; it better captures the features shared among events and thus benefits the detection of fake news. The model consists of three subnetworks: a multimodal fusion network, an event domain adaptation network, and a fake news detector. Specifically, the multimodal fusion network extracts deep features from texts and images and fuses them into a common semantic feature known as a snapshot. From this snapshot, the fake news detector learns a representation of each post. Finally, the event domain adaptation network singles out and removes the features peculiar to each event while retaining the features shared among events. Experimental results show that the proposed model outperforms several state-of-the-art approaches on two real-world multimedia data sets.
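The abstract outlines the architecture but gives no implementation details. Below is a minimal PyTorch sketch of the three-subnetwork layout as described: a fusion network that projects text and image features into a shared "snapshot," a detector head, and an event discriminator trained through gradient reversal so that event-specific features are removed. All module names, layer sizes, and the use of gradient reversal for the domain adaptation step are illustrative assumptions, not the authors' code.

# Hypothetical sketch of the three-subnetwork design described in the abstract.
# Feature dimensions, layer sizes, and the gradient-reversal trick are assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass,
    so the event discriminator's loss pushes the fusion network toward
    event-invariant snapshot features."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class SemanticFusionNet(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, snapshot_dim=256, num_events=10):
        super().__init__()
        # Multimodal fusion network: project each modality, then fuse the
        # projections into a single shared semantic feature (the "snapshot").
        self.text_proj = nn.Sequential(nn.Linear(text_dim, snapshot_dim), nn.ReLU())
        self.image_proj = nn.Sequential(nn.Linear(image_dim, snapshot_dim), nn.ReLU())
        self.fuse = nn.Sequential(nn.Linear(2 * snapshot_dim, snapshot_dim), nn.ReLU())
        # Fake news detector: binary (real/fake) classifier over the snapshot.
        self.detector = nn.Linear(snapshot_dim, 2)
        # Event domain adaptation network: predicts the event label from
        # gradient-reversed snapshots, stripping event-specific features.
        self.event_discriminator = nn.Linear(snapshot_dim, num_events)

    def forward(self, text_feat, image_feat, lambd=1.0):
        snapshot = self.fuse(torch.cat(
            [self.text_proj(text_feat), self.image_proj(image_feat)], dim=-1))
        news_logits = self.detector(snapshot)
        event_logits = self.event_discriminator(GradReverse.apply(snapshot, lambd))
        return news_logits, event_logits

Under this sketch, training would minimize a cross-entropy loss on news_logits plus one on event_logits; because of the gradient reversal, the second term acts adversarially, driving the snapshot toward features shared among events rather than features peculiar to any one event.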


Language: English

Keywords

deep learning; domain adaptation; fake news detection; multimedia; natural language processing
