Evidence Continuum

CrimeSolutions uses rigorous research to inform you about what works in criminal justice, juvenile justice, and crime victim services. We set a high bar for the scientific quality of the evidence used to assign ratings — the more rigorous a study’s research design (e.g., randomized controlled trials, quasi-experimental designs), the more compelling the research evidence.[1]

In the CrimeSolutions rating process, we address program and practice evaluations that sit along an evidence continuum with two axes: (1) Effectiveness and (2) Strength of Evidence. See Exhibit 1 below.

Where a program or practice sits on the Effectiveness axis tells us how well it works in achieving criminal justice, juvenile justice, and victim services outcomes. Where it sits on the Strength of Evidence axis tells us how confident we can be of that determination. Effectiveness is determined by the outcomes of an evaluation in relation to the goals of the program or practice. Strength of evidence for programs is determined by the rigor and design of the outcome evaluation, and by the number of evaluations. Strength of evidence for practices is determined by the rigor and design of the studies included in the meta-analysis.

Exhibit 1: Continuum of Evidence (programs fall along two continuums: Effectiveness and Strength of Evidence)

On CrimeSolutions, programs and practices[2] fall into one of four categories, listed below in order of effectiveness from the continuum:

  • Rated as Effective: Programs and practices have strong evidence to indicate they achieve criminal justice, juvenile justice, and victim services outcomes when implemented with fidelity.
  • Rated as Promising: Programs and practices have some evidence to indicate they achieve criminal justice, juvenile justice, and victim services outcomes. Included within the promising category are new, or emerging, programs for which there is some evidence of effectiveness.
  • Inconclusive Evidence: Programs and practices that made it past the initial review but, during the full review process, were determined to have evidence too inconclusive for a rating to be assigned. See Reasons for Rejecting Evaluation Studies. Interventions are not categorized as inconclusive because of identified or specific weaknesses in the interventions themselves; rather, our reviewers determined that the available evidence did not support assigning a rating. Download Programs reviewed but not assigned a rating or Practices reviewed but not assigned a rating.
  • Rated as No Effects: Programs have strong evidence indicating that they did not have the intended effects, or had harmful effects, when trying to achieve justice-related outcomes. While programs rated No Effects may have had some positive effects, the overall rating is based on the preponderance of evidence.

In addition to the four categories above, we also maintain a list of programs and practices for which the available evaluation evidence was not rigorous enough for our review process or that fell outside the scope of CrimeSolutions. Download Screened-out program evaluations or screened-out meta-analyses for practices.

The majority of programs that have an evidence rating are Promising (59 percent), with 13 percent rated Effective and 27 percent rated No Effects. Table 1 shows, by percentage, the distribution of programs that have had their evidence reviewed against the CrimeSolutions criteria and of programs that received an evidence rating.

Table 1: Percentage of Programs by Rating and Category (as of September 2023)

Evidence Rating | Programs With Evidence Reviewed | Programs That Have Evidence Rating
Effective       | 9%                              | 13%
Promising       | 36%                             | 59%
Inconclusive    | 38%                             | NA
No Effects      | 17%                             | 27%

For more detail on how a program or practice is assigned to one of these five categories, see:

Date Published: November 29, 2019