Drug and violence prevention: Rediscovering the critical rational dimension of evaluation research
Following criticism of government-funded drug prevention activities of the early 1990s, a spate of "best practice" or "science-based" lists of alcohol, drug and violence prevention programs has been produced by federal agencies in recent years. The writings of Donald T. Campbell on validity have had a profound influence on the development of the methodological quality scales used in the review processes that generate these lists. Implicit in this approach to identifying science-based prevention programs is the idea that science is equivalent to research methodology and study design. Following Karl Popper and Campbell, I contend that, while certain designs are clearly better than others at dealing with threats to internal validity and allow for better generalization of results beyond the study population, use of these designs is not in itself sufficient to designate an evaluation study as "scientific." Nor can the accumulation of data from such studies be used to proclaim an entire area of research a "science," as has occurred with the field of so-called "prevention science." Rather, the fundamental criterion by which to judge the scientific status of a theory is falsifiability. If the field of drug and violence prevention is truly a science, then it should be subjecting its predictions about the effects of intervention programs to genuinely critical tests, not attempting to verify these hypotheses. It is argued that the field has failed to do this, and two specific examples of prevention programs that appear on a number of science-based lists are discussed. © Springer 2005.