| | Literature Coverage Dates | Number of Studies | Number of Study Participants |
|---|---|---|---|
| Meta-Analysis 1 | 2001 - 2008 | 7 | 209 |
| Meta-Analysis 2 | 2001 - 2010 | 5 | 306 |
Meta-Analysis 1
Rodenburg and colleagues (2009) reviewed studies that examined the effect of Eye Movement Desensitization and Reprocessing (EMDR) on posttraumatic stress disorder (PTSD) symptoms in children. They performed a keyword search of electronic databases and then applied the ancestry method to locate studies referenced in the articles from the initial search. Authors, investigators, and clinicians were contacted as needed to clarify statistical and design questions. Eligibility criteria included 1) the presence of a comparison group, 2) child participants who had been treated for posttraumatic stress reactions, 3) random assignment of children to the treatment and comparison groups, 4) participants who were 18 years or younger, and 5) the availability of posttreatment trauma scores.
Seven studies were included in the review, all published between 2001 and 2008. In total, the studies reported on 109 children treated with EMDR and 100 children in comparison groups, with an age range of 4 to 18 years. All studies measured posttraumatic stress reactions in some way.
The scales most frequently used to measure PTSD symptoms were the Children’s Reaction Inventory (CRI), the Child Report of Post-Traumatic Symptoms (CROPS), the Impact of Events Scale (IES), and the Parent Report of Posttraumatic Symptoms (PROPS). One study (Rubin et al. 2001) used the Child Behavior Checklist (CBCL) to measure child internalizing problems (depression and anxiety, withdrawal, and somatic complaints). Although this measure was not specifically developed to measure traumatic stress reactions, it was included in the analysis. Scores on the Subjective Unit of Disturbance (SUD) and the Validity of Cognition (VOC) Scale were excluded as trauma measures because those measurements are highly vulnerable to demand characteristics (Acierno et al. 1994).
The effect size metric was calculated using Cohen’s d, and the overall mean effect size was calculated using a fixed effects model. Across the seven studies, children in the control groups 1) were on the wait list for treatment, 2) received treatment as usual, or 3) received cognitive–behavioral treatment.
Meta-Analysis 2
The Washington State Institute for Public Policy (2016) performed a multivariate meta-regression analysis of studies that evaluated cognitive–behavioral therapy for depression and anxiety. The authors used four primary search procedures to locate studies: 1) an examination of bibliographies of systematic and narrative reviews; 2) a review of citations in the individual studies; 3) independent literature searches of research databases using search engines and platforms such as Google, ProQuest, EBSCO, ERIC, PubMed, and SAGE; and 4) contacting authors of primary research to learn about ongoing or unpublished evaluation work. The most important criterion for inclusion was the presence of a control or comparison group, or the use of advanced statistical methods to control for unobserved variables or reverse causality.
Five studies were included in the meta-analysis. All studies were published between 2001 and 2010. A total of 306 children were included in the analysis across all studies, with ages ranging from 6 to 15 years and an average age of 10 years. The studies took place in Australia, the United Kingdom, and the United States. Each study measured at least one of the following three outcomes: posttraumatic stress, major depressive disorder, or anxiety disorder. No information is reported about the specific scales used.
An effect size was calculated for each program effect. For continuously measured outcomes, the effect size was calculated as Cohen’s d, with a Base variable measured as the standard deviation of the outcome measurement. For dichotomously measured outcomes, the effect size was calculated as a D-cox effect size, with a Base variable measured as a percentage. Once effect sizes were calculated, the individual measures were combined into a weighted average effect size: an inverse variance weight was calculated for each program effect, and these weights were used to compute the average.
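The calculations described above can be sketched in a few lines. This is a minimal illustration using the standard formulas (Cohen's d as a standardized mean difference, the Cox transformation ln(OR)/1.65 for dichotomous outcomes, and inverse-variance weighting for the pooled average); the report's exact implementation may differ, and the function names here are illustrative.

```python
import math

def cohens_d(mean_t, mean_c, sd_pooled):
    """Standardized mean difference for continuously measured outcomes."""
    return (mean_t - mean_c) / sd_pooled

def d_cox(p_t, p_c):
    """D-cox effect size for dichotomously measured outcomes:
    the log odds ratio divided by 1.65 approximates a
    standardized mean difference (assumed standard formula)."""
    odds_ratio = (p_t / (1 - p_t)) / (p_c / (1 - p_c))
    return math.log(odds_ratio) / 1.65

def fixed_effects_mean(effects, variances):
    """Inverse-variance weighted average of study effect sizes:
    each study is weighted by 1 / variance, so more precise
    studies contribute more to the pooled estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)
```

For example, pooling effects of 0.5 (variance 0.04) and 0.3 (variance 0.09) yields a weighted average closer to 0.5, because the first study's smaller variance gives it the larger weight.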
Some studies included in the analysis had small sample sizes, which have been shown to upwardly bias effect sizes, especially when samples are smaller than 20. The Hedges correction factor was used to adjust all mean-difference effect sizes (where N is the total sample size of the combined treatment and comparison groups).
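The Hedges adjustment can be sketched as follows, assuming the standard small-sample correction factor J = 1 − 3/(4N − 9), where N is the combined sample size; the source does not print the formula, so this specific variant is an assumption.

```python
def hedges_correction(d, n_total):
    """Adjust a mean-difference effect size for small-sample bias
    (Hedges' g). Assumes the common correction factor
    J = 1 - 3 / (4*N - 9), where N is the total sample size of
    the combined treatment and comparison groups."""
    j = 1.0 - 3.0 / (4.0 * n_total - 9.0)
    return j * d
```

Because J is always less than 1, the correction shrinks the effect size, and the shrinkage is largest for the smallest samples.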