Interrater reliability of Violence Risk Appraisal Guide scores provided in Canadian criminal proceedings. Academic Article

abstract

  • Published research suggests that most violence risk assessment tools have relatively high levels of interrater reliability, but recent evidence of inconsistent scores among forensic examiners in adversarial settings raises concerns about the "field reliability" of such measures. This study examined the reliability of Violence Risk Appraisal Guide (VRAG) scores in Canadian criminal cases identified in the legal database LexisNexis. Over 250 reported cases were located that mentioned the VRAG, 42 of which contained 2 or more scores that could be submitted to interrater reliability analyses. Overall, scores were skewed toward higher risk categories. The intraclass correlation (ICC[A,1]) was .66, with pairs of forensic examiners placing defendants into the same VRAG risk "bin" in 68% of cases. For categorical risk statements (i.e., low, moderate, high), examiners provided converging assessment results in most instances (86%). In terms of potential predictors of rater disagreement, there was no evidence of adversarial allegiance in our sample. Rater disagreement in the scoring of 1 VRAG item (the Psychopathy Checklist-Revised; Hare, 2003), however, strongly predicted rater disagreement in the scoring of the VRAG (r = .58).
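
    As context for the ICC(A,1) statistic reported in the abstract, the sketch below shows one way an absolute-agreement, single-rater intraclass correlation (McGraw & Wong's ICC[A,1]) could be computed for a fully crossed two-rater design. This is an illustrative Python sketch only; the icc_a1 function and the paired scores are hypothetical and are not the study's data or analysis code.

        import numpy as np

        def icc_a1(ratings):
            """ICC(A,1): two-way random effects, absolute agreement, single rater,
            computed from a fully crossed n x k matrix (n cases, k raters)."""
            ratings = np.asarray(ratings, dtype=float)
            n, k = ratings.shape

            grand_mean = ratings.mean()
            row_means = ratings.mean(axis=1)   # per-case means
            col_means = ratings.mean(axis=0)   # per-rater means

            # Two-way ANOVA sums of squares
            ss_rows = k * np.sum((row_means - grand_mean) ** 2)
            ss_cols = n * np.sum((col_means - grand_mean) ** 2)
            ss_total = np.sum((ratings - grand_mean) ** 2)
            ss_error = ss_total - ss_rows - ss_cols

            ms_rows = ss_rows / (n - 1)
            ms_cols = ss_cols / (k - 1)
            ms_error = ss_error / ((n - 1) * (k - 1))

            # McGraw & Wong (1996) absolute-agreement, single-measure ICC
            return (ms_rows - ms_error) / (
                ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
            )

        # Hypothetical paired VRAG totals from two examiners across six cases
        pairs = [[14, 16], [7, 7], [21, 25], [3, 6], [18, 15], [10, 12]]
        print(round(icc_a1(pairs), 2))

    A companion percent-agreement figure (such as the 68% same-bin rate reported above) would simply count the proportion of case pairs whose scores fall into the same VRAG risk category.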

published proceedings

  • Psychol Assess

altmetric score

  • 1.1

author list (cited authors)

  • Edens, J. F., Penson, B. N., Ruchensky, J. R., Cox, J., & Smith, S. T.

citation count

  • 24

complete list of authors

  • Edens, John F.; Penson, Brittany N.; Ruchensky, Jared R.; Cox, Jennifer; Smith, Shannon Toney

publication date

  • December 2016