Journal Article

Citation: Nugent WR. Educ. Psychol. Meas. 2009; 69(1): 62-78.
Copyright: © 2009, SAGE Publishing
DOI: 10.1177/0013164408318762
PMID: unavailable

Abstract

Critical to meta-analysis is the presumption that effect sizes based on different measures are directly comparable. Recent theoretical work has shown that an invariance condition—universe score, or construct, validity invariance—must hold for either observed score or reliability-corrected effect sizes based on different measures to be directly comparable. To date, however, no research has been done investigating how crucial violations of this invariance condition are for either effect size discrepancy across different measures or the outcomes of a meta-analysis. This article reports the results of a simulation study of the possible effects that violations of construct validity invariance have on (a) the differences in population construct level standardized mean difference effect sizes based on different measurement procedures; (b) the variability in population construct level correlation effect sizes based on different measurement procedures; and (c) the results of a meta-analysis. Results suggest that considerable variability in effect sizes can exist across measurement procedures that fail to meet universe score validity invariance and that this variability has the potential to negatively affect meta-analytic results.
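The abstract contrasts observed-score and reliability-corrected effect sizes. As a rough illustration only (not taken from the article; the values and function names below are hypothetical), a standardized mean difference and its classical correction for unreliability in the outcome measure can be sketched as follows:

```python
import math

def standardized_mean_difference(mean_treatment, mean_control, sd_pooled):
    """Observed-score effect size (Cohen's d): mean difference scaled by the pooled SD."""
    return (mean_treatment - mean_control) / sd_pooled

def reliability_corrected_d(d_observed, reliability):
    """Classical attenuation correction: divide the observed d by the square root
    of the outcome measure's score reliability."""
    return d_observed / math.sqrt(reliability)

# Illustrative values only: the same underlying construct measured with two
# instruments that differ in reliability.
d_obs = standardized_mean_difference(105.0, 100.0, 12.0)
print(reliability_corrected_d(d_obs, 0.90))  # mild correction
print(reliability_corrected_d(d_obs, 0.60))  # larger correction
```

Under this correction, less reliable measures receive larger upward adjustments, which is one route by which effect sizes computed from different instruments can diverge even before the validity-invariance issues the article examines.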
