SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Coscia M, Rossi L. J. R. Soc. Interface 2020; 17(167): e20200020.

Copyright

(Copyright © 2020, Royal Society)

DOI

10.1098/rsif.2020.0020

PMID

32517634

Abstract

Many people view news on social media, yet the production of news items online has come under fire because of the common spreading of misinformation. Social media platforms police their content in various ways. Primarily they rely on crowdsourced 'flags': users signal to the platform that a specific news item might be misleading and, if they raise enough of them, the item will be fact-checked. However, real-world data show that the most flagged news sources are also the most popular and, supposedly, reliable ones. In this paper, we show that this phenomenon can be explained by the unreasonable assumptions that current content policing strategies make about how the online social media environment is shaped. The most realistic assumption is that confirmation bias will prevent a user from flagging a news item if they share the same political bias as the news source producing it. We show, via agent-based simulations, that a model reproducing our current understanding of the social media environment will necessarily result in the most neutral and accurate sources receiving most flags.
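
To make the flagging dynamic described in the abstract concrete, the following is a minimal agent-based sketch in Python. It is not the authors' model: the bias ranges, audience rule, reach formula, sample sizes and flagging threshold are all illustrative assumptions. Users flag an item only when the source's political bias is far from their own, so politically neutral sources, whose audience is the most diverse, end up collecting the most flags.

import random
from collections import Counter

random.seed(42)

N_USERS = 1000
N_SOURCES = 20
ITEMS_PER_SOURCE = 50
FLAG_THRESHOLD = 0.5   # assumed disagreement needed before a user flags

# Hypothetical sources: political bias in [-1, 1]; more neutral sources
# are assumed to be more popular (wider, more diverse audience).
sources = [{"bias": random.uniform(-1, 1)} for _ in range(N_SOURCES)]
for s in sources:
    s["reach"] = 1.0 - abs(s["bias"])  # neutral -> wide reach (assumption)

# Hypothetical users, each with their own political bias in [-1, 1].
users = [random.uniform(-1, 1) for _ in range(N_USERS)]

flags = Counter()
for idx, s in enumerate(sources):
    # Audience of this source: users whose bias is close to the source's,
    # plus a share of everyone else proportional to the source's reach.
    audience = [u for u in users
                if abs(u - s["bias"]) < 0.4 or random.random() < s["reach"] * 0.3]
    for _ in range(ITEMS_PER_SOURCE):
        for u in random.sample(audience, min(50, len(audience))):
            # Confirmation bias: a user only flags an item when the source's
            # political bias is far from their own.
            if abs(u - s["bias"]) > FLAG_THRESHOLD:
                flags[idx] += 1

# The most-flagged sources tend to be the most neutral (bias near 0),
# because their audience is the most politically diverse.
for idx, n in flags.most_common(5):
    print(f"source {idx:2d}  bias={sources[idx]['bias']:+.2f}  flags={n}")

Running the sketch prints the five most-flagged sources; under these assumptions their biases cluster near zero, mirroring the paper's observation that neutral, popular sources attract the most flags.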


Language: en

Keywords

social networks; social media; content policing; echo chambers; fake news; flagging
