A Test-Retest Reliability Generalization Meta-Analysis of Judgments Via the Policy-Capturing Technique (Academic Article)

abstract

  • Policy capturing is a widely used technique, but the temporal stability of policy-capturing judgments has long been a cause for concern. This article emphasizes the importance of reporting reliability, and in particular test-retest reliability, estimates in policy-capturing studies. We found that only 164 of 955 policy-capturing studies (i.e., 17.17%) reported a test-retest reliability estimate. We then conducted a reliability generalization meta-analysis on policy-capturing studies that did report test-retest reliability estimates—and we obtained an average reliability estimate of .78. We additionally examined 16 potential methodological and substantive antecedents to test-retest reliability (equivalent to moderators in validity generalization studies). We found that test-retest reliability was robust to variation in 14 of the 16 factors examined but that reliability was higher in paper-and-pencil studies than in web-based studies and was higher for behavioral intention judgments than for other (e.g., attitudinal and perceptual) judgments. We provide an agenda for future research. Finally, we provide several best-practice recommendations for researchers (and journal reviewers) with regard to (a) reporting test-retest reliability, (b) designing policy-capturing studies for appropriate reportage, and (c) properly interpreting test-retest reliability in policy-capturing studies.

published proceedings

  • Organizational Research Methods

author list (cited authors)

  • Zhu, Z., Tomassetti, A. J., Dalal, R. S., Schrader, S. W., Loo, K., Sabat, I. E., ... Fyffe, S.

citation count

  • 0

complete list of authors

  • Zhu, Ze; Tomassetti, Alan J.; Dalal, Reeshad S.; Schrader, Shannon W.; Loo, Kevin; Sabat, Isaac E.; Alaybek, Balca; Zhou, You; Jones, Chelsea; Fyffe, Shea

publication date

  • May 2021