| | Literature Coverage Dates | Number of Studies | Number of Study Participants |
|---|---|---|---|
| Meta-Analysis 1 | 1976–1997 | 4 | Not reported |
| Meta-Analysis 2 | 1988–2005 | 4 | 7,178 |

Meta-Analysis 1
Wilson, Gallagher, and MacKenzie (2000) examined the effectiveness of corrections-based education, vocational, and work programs for adult offenders through a meta-analysis of 33 experimental and quasi-experimental evaluations. Studies were included in the meta-analysis if they 1) evaluated an education, vocational, or work program for convicted adults or persons identified by the criminal justice system, 2) provided a postprogram measure of recidivism (including arrest, conviction, self-report, technical violation, or incarceration), 3) included a nonprogram comparison group (a comparison group that did not receive an educational, vocational, or work program), and 4) were published in English after 1975.
A thorough search of the literature led to the inclusion of 33 eligible studies. The program comparison–contrast was the unit of analysis, allowing for multiple comparison–contrasts per study; the 33 studies yielded 53 program comparison–contrasts, which were identified and coded for the analysis. More than 40 percent of the studies (14 out of 33) were journal articles or book chapters; the rest were government documents (10 out of 33) or unpublished manuscripts (9 out of 33). The studies generally had large sample sizes: the median number of participants across the program groups was 129, and the median across the comparison groups was 320 (a total number of participants was not provided). Slightly fewer than half of the studies included only male participants. Female participants were included in 19 studies; however, they generally represented fewer than 21 percent of the study sample, so it is difficult to generalize findings from the analysis to women. In the remainder of the studies, it was unclear whether participants included both men and women. Information on the age and racial/ethnic breakdown of the study samples was not provided.
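Because recidivism in these studies was recorded as a dichotomy, Meta-Analysis 1 expressed each program comparison–contrast as an odds ratio. A minimal sketch of that computation (the counts below are hypothetical illustrations, not figures from any of the 33 studies):

```python
def odds_ratio(prog_recid, prog_total, comp_recid, comp_total):
    """Odds ratio comparing the odds of recidivism in the program group
    to the odds in the comparison group. A value below 1 indicates the
    program group recidivated at lower odds than the comparison group."""
    prog_odds = prog_recid / (prog_total - prog_recid)
    comp_odds = comp_recid / (comp_total - comp_recid)
    return prog_odds / comp_odds

# Hypothetical counts: 40 of 129 program participants recidivated,
# versus 150 of 320 comparison participants.
print(round(odds_ratio(40, 129, 150, 320), 3))  # → 0.509
```

An odds ratio of roughly 0.51 in this hypothetical would mean the program group's odds of recidivating were about half those of the comparison group.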
There were only 4 studies (out of 33) that examined the relative effects of correctional industries/work programs. The odds ratio was selected as the form of effect size. Recidivism was the primary outcome of interest and was measured as a dichotomy (i.e., the percentage or proportion of program and comparison participants who recidivated).

Meta-Analysis 2
The 2006 meta-analysis by Aos, Miller, and Drake updated and extended an earlier 2001 review by Aos and colleagues. The overall goal of the review was to provide policymakers in Washington State with a comprehensive assessment of adult corrections programs and policies that have the ability to affect crime rates. This meta-analysis concentrated exclusively on adult corrections programs.
A comprehensive search procedure was used to identify eligible studies. Studies were eligible for inclusion if they 1) were published in English between 1970 and 2005, 2) were published in any format (i.e., peer-reviewed journals, government reports, or other unpublished results), 3) had a randomly assigned or well-matched comparison group, 4) used intent-to-treat groups that included both program completers and dropouts, or provided sufficient information for the combined effects to be tallied, 5) provided sufficient information to code effect sizes, and 6) had at least a 6-month follow-up period and included a measure of criminal recidivism as an outcome.
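The six criteria above amount to a screening predicate applied to each candidate study. A sketch of that screen follows; the field names and structure are illustrative assumptions, not the review's actual coding sheet:

```python
from dataclasses import dataclass

@dataclass
class Study:
    year: int                # publication year
    in_english: bool
    comparison_group: str    # "random", "well_matched", or "none"
    intent_to_treat: bool    # completers and dropouts combined (or combinable)
    effect_size_codable: bool
    followup_months: int
    measures_recidivism: bool

def eligible(s: Study) -> bool:
    """Apply the six inclusion criteria described in the 2006 review.
    Criterion 2 (any publication format) imposes no check."""
    return (
        1970 <= s.year <= 2005          # criterion 1: date range
        and s.in_english                # criterion 1: English language
        and s.comparison_group in ("random", "well_matched")  # criterion 3
        and s.intent_to_treat           # criterion 4
        and s.effect_size_codable       # criterion 5
        and s.followup_months >= 6      # criterion 6: follow-up
        and s.measures_recidivism       # criterion 6: outcome
    )
```

For example, a well-matched 1999 evaluation with a 12-month recidivism follow-up passes the screen, while an otherwise identical study with no comparison group does not.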
The search resulted in the inclusion of four studies of correctional industries programs in prison. The four studies included more than 7,000 adult participants. One study was published in a journal. The other studies were government reports or unpublished evaluations. No information was provided on the age, gender, or racial/ethnic breakdown of the studies’ samples, nor on the location of the programs.
The mean difference effect size was calculated for each program. Adjustments were made to effect sizes for small sample sizes, for evaluations of “non–real world” programs, and for the quality of the research design. The quality of each study was rated using the University of Maryland’s five-point scale; only studies rated 3 or higher were included in the analysis (a rating of 3 means the study used a quasi-experimental design with somewhat dissimilar treatment and comparison groups but reasonable controls for differences). Once an effect size was calculated for each program effect, the individual measures were combined into a weighted average effect size for a program or practice area: an inverse variance weight was calculated for each program effect, and those weights were used to compute the average. A fixed-effects model was used for the analysis.
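The pooling step described above, inverse-variance weighting under a fixed-effects model, can be sketched as follows. The effect sizes and variances here are hypothetical placeholders, not the review's actual adjusted values:

```python
import math

def fixed_effects_mean(effects, variances):
    """Inverse-variance weighted average effect size (fixed-effects model).
    Each effect is weighted by 1/variance, so more precise estimates
    count for more. Returns (weighted mean, standard error of the mean)."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, se

# Hypothetical adjusted effect sizes for four program evaluations
# (negative values indicate reduced recidivism) and their variances.
effects = [-0.10, -0.05, -0.20, -0.08]
variances = [0.004, 0.010, 0.020, 0.006]
mean, se = fixed_effects_mean(effects, variances)
```

The fixed-effects assumption is that the four evaluations estimate a single common effect, so variability between studies is attributed to sampling error alone; a random-effects model would instead allow the true effect to vary across studies.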