A standardized, quantitative index representing the magnitude and direction of an empirical relationship. More specifically, an effect size is a value that reflects the magnitude of the treatment effect. In an outcome evaluation, an effect size represents the change in an outcome measure from before a program is implemented to the follow-up period. The effect size for the treatment group can be compared to the effect size for the control group to determine whether there are any differences and, if so, whether those differences are statistically significant (which allows for greater confidence that the difference was due to the program). See Statistical Significance for more information. The most common types of effect sizes in the criminal justice and delinquency literature are the standardized mean difference, the odds ratio and risk ratio, and the correlation coefficient.
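A minimal sketch of two of these effect sizes, using hypothetical follow-up data (the group scores and the 2x2 counts below are illustrative, not from any real evaluation). The standardized mean difference divides the difference in group means by the pooled standard deviation; the odds ratio compares the odds of an outcome (e.g., reoffending) between groups.

```python
import statistics

# Hypothetical follow-up outcome scores for two groups.
treatment = [2, 3, 1, 4, 2, 3, 2, 1]
control = [4, 5, 3, 6, 4, 5, 4, 3]

def cohens_d(a, b):
    """Standardized mean difference: (mean_a - mean_b) / pooled SD."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

def odds_ratio(a_yes, a_no, b_yes, b_no):
    """Odds ratio from a 2x2 table: (odds in group a) / (odds in group b)."""
    return (a_yes / a_no) / (b_yes / b_no)

print(round(cohens_d(treatment, control), 2))   # negative: fewer events in treatment
print(round(odds_ratio(12, 38, 24, 26), 2))     # < 1: lower odds in treatment
```

A negative d or an odds ratio below 1 here indicates the treatment group fared better on the outcome than the control group; the direction of the index matters as much as its magnitude.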
In program evaluation, the effect size is typically hypothesized a priori to guide decisions about the needed sample size and the likelihood of Type I and Type II errors (see Type I Error and Type II Error for more information). In a meta-analysis, the effect sizes from the various evaluation studies are standardized to be in the same form; representing the findings of each included study in the same form permits a synthesis of those findings across studies. After evaluation data are analyzed, the actual effect size can usually be estimated from the data, and this value is often used as a basis for comparative effectiveness research on alternative interventions.
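The a priori use of a hypothesized effect size to set sample size can be sketched with the standard normal approximation for a two-sided, two-group comparison; the formula and defaults below (alpha = 0.05, power = 0.80) are a common convention, not a prescription from this entry.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Approximate n per group needed to detect a standardized mean
    difference d with a two-sided test (normal approximation):
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(sample_size_per_group(0.5))   # medium hypothesized effect -> 63 per group
print(sample_size_per_group(0.2))   # small hypothesized effect needs far more cases
```

Note how the required sample size grows rapidly as the hypothesized effect shrinks, which is why the a priori choice of effect size is consequential for evaluation design.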
The magnitude of an effect size is often judged using “rules of thumb” from social science research. For example, standardized mean difference effect sizes (Cohen’s d or Hedges’ g) are judged using the following rules: small = 0.20; medium = 0.50; large = 0.80. These are not hard cut-off points but rather approximations. There are different standards for each type of effect size.
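The rules of thumb for a standardized mean difference can be expressed as a simple classifier; the "negligible" label for values below 0.20 is an assumption added here for completeness, and, as the entry notes, the cut points are approximations rather than strict boundaries.

```python
def label_d(d):
    """Classify a standardized mean difference (Cohen's d or Hedges' g)
    using the conventional rules of thumb: small=0.20, medium=0.50, large=0.80.
    Uses the absolute value, since direction is judged separately."""
    size = abs(d)
    if size >= 0.80:
        return "large"
    if size >= 0.50:
        return "medium"
    if size >= 0.20:
        return "small"
    return "negligible"  # assumed label; not part of Cohen's original scheme

print(label_d(-0.65))   # direction ignored for magnitude -> "medium"
```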