SAFETYLIT WEEKLY UPDATE

Journal Article

Citation

Newman A, Bavik YL, Mount M, Shao B. Data collection via online platforms: challenges and recommendations for future research. Appl. Psychol. 2021; 70(3): 1380-1402.

Copyright

(Copyright © 2021, International Association of Applied Psychology, Publisher John Wiley and Sons)

DOI

10.1111/apps.12302

PMID

unavailable

Abstract

Online platforms such as Amazon's Mechanical Turk (MTurk) are increasingly used by researchers to collect survey and experimental data. Yet, such platforms often represent a tumultuous terrain for both researchers and reviewers. Researchers have to navigate the complexities of obtaining representative samples from online participant cohorts, ensuring data quality, ethically incentivizing participant engagement, and maintaining transparency. Reviewers, on the other hand, have to navigate the complexities of evaluating the efficacy of such data collection and execution efforts in answering important research questions. To provide clarity on these issues, this article offers researchers and reviewers a series of recommendations for effectively executing and evaluating, respectively, data collection via online platforms.


Language: en
