National Institute of Justice | Office of Justice Programs — Research. Development. Evaluation.
Evidence Continuum

We use rigorous research to inform you about what works in criminal justice, juvenile justice, and crime victim services. We set a high bar for the scientific quality of the evidence used to assign ratings — the more rigorous a study’s research design (e.g., randomized controlled trials, quasi-experimental designs), the more compelling the research evidence.[1]

In the rating process, we address program and practice evaluations that sit along an evidence continuum with two axes: (1) Effectiveness and (2) Strength of Evidence. See Exhibit 1.

Where a program or practice sits on the Effectiveness axis tells us how well it works in achieving criminal justice, juvenile justice, and victim services outcomes. Where it sits on the Strength of Evidence axis tells us how confident we can be of that determination. Effectiveness is determined by the outcomes of an evaluation in relation to the goals of the program or practice. Strength of evidence for programs is determined by the rigor and design of the outcome evaluation, and by the number of evaluations. Strength of evidence for practices is determined by the rigor and design of the studies included in the meta-analysis.

Exhibit 1: Continuum of Evidence

Programs fall along two continuums: Effectiveness and Strength of Evidence.

Programs and practices[2] fall into one of four categories, listed below in order of effectiveness along the continuum:

  • Rated as Effective: Programs and practices have strong evidence to indicate they achieve criminal justice, juvenile justice, and victim services outcomes when implemented with fidelity.
  • Rated as Promising: Programs and practices have some evidence to indicate they achieve criminal justice, juvenile justice, and victim services outcomes. Included within the promising category are new, or emerging, programs for which there is some evidence of effectiveness.
  • Inconclusive Evidence: Programs and practices that passed the initial review but, during the full review process, were found to have evidence too inconclusive for a rating to be assigned. See Reasons for Rejecting Evaluation Studies. Interventions are not categorized as inconclusive because of identified weaknesses in the interventions themselves; rather, our reviewers determined that the available evidence was insufficient to assign a rating. Download Programs reviewed but not assigned a rating (xlsx) or Practices reviewed but not assigned a rating (xlsx).
  • Rated as No Effects: Programs have strong evidence indicating that they had no effects or had harmful effects when implemented with fidelity.
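The four categories above can be read as a classification over the two continuum axes. The sketch below is a hypothetical illustration only — the actual review process relies on expert reviewers and detailed scoring instruments, and the function name, parameters, and threshold labels here are all assumptions, not part of the published methodology:

```python
# Hypothetical sketch: how Effectiveness and Strength of Evidence might
# jointly determine a rating category, per the list above. Not the actual
# review logic used by the reviewers.

def rate(effective, strength):
    """Assign a rating category.

    effective: True (achieves outcomes), False (no or harmful effects),
               None (findings unclear)
    strength:  "strong", "some", or "insufficient"
    """
    if strength == "insufficient" or effective is None:
        return "Inconclusive Evidence"      # evidence too weak to rate
    if effective and strength == "strong":
        return "Effective"                  # strong evidence it works
    if effective:
        return "Promising"                  # some evidence it works
    return "No Effects"                     # evidence of no/harmful effects
```

For example, a program with some (but not strong) evidence of achieving its outcomes would land in the Promising category, consistent with the description above.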

In addition to the four categories above, we also maintain a list of programs and practices for which the available evaluation evidence was not rigorous enough for our review process or that fell outside the scope of our review. Download Screened-out program evaluations (xlsx) or Screened-out meta-analyses for practices (xlsx).

The majority of programs that have an evidence rating are Promising (61 percent). Nearly equal percentages of programs are rated Effective (19 percent) and No Effects (20 percent). Table 1 shows the percentage distribution of programs whose evidence was reviewed against the criteria and of programs that received an evidence rating.

Table 1: Percentage of Programs by Rating and Category

Evidence Rating   Programs With Evidence Reviewed   Programs With an Evidence Rating
Effective         13%                               19%
Promising         41%                               61%
Inconclusive      44%                               NA
No Effects        13%                               20%
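As a consistency check on Table 1, the second column can be reproduced (up to rounding) by renormalizing the reviewed percentages over only the programs that received a rating, i.e., excluding the Inconclusive category. The small discrepancy for No Effects (19 vs. the table's 20 percent) is presumably due to rounding in the underlying program counts, which are not published on this page:

```python
# Reviewed-program percentages from Table 1.
reviewed = {"Effective": 13, "Promising": 41, "Inconclusive": 44, "No Effects": 13}

# Total share of reviewed programs that received a rating (excludes Inconclusive).
rated_total = sum(v for k, v in reviewed.items() if k != "Inconclusive")  # 67

# Renormalize each rated category over that subset.
rated = {k: round(100 * v / rated_total)
         for k, v in reviewed.items() if k != "Inconclusive"}
# rated -> {"Effective": 19, "Promising": 61, "No Effects": 19}
```

This matches the Effective (19 percent) and Promising (61 percent) figures exactly, confirming that the two columns describe the same underlying distribution.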

For greater detail on how a program or practice is slotted into one of these five categories, see:


[note 1] The discussion of the evidence continuum on this page is based largely on work done by the Centers for Disease Control and Prevention. See “Understanding Evidence Part 1: Best Available Research Evidence. A Guide to the Continuum of Evidence of Effectiveness” (pdf, 24 pages).

[note 2] For Practices, each outcome receives a separate rating of Effective, Promising, or No Effects; the practice as a whole does not receive a rating. See How We Review and Rate a Practice From Start to Finish.