Panel Paper: Scaling-up an Early Childhood Professional Development Program: Exploring Variation in Treatment Effects By Cities and Centers

Friday, November 9, 2018
Taft - Mezz Level (Marriott Wardman Park)

*Names in bold indicate Presenter

Terri J. Sabol1, Dana Charles McCoy2, Kathryn E. Gonzalez2, Sarah Guminski1, Luke Miratrix2 and Larry Hedges1, (1)Northwestern University, (2)Harvard University

Improving early educational quality is central to building children’s school readiness and reducing achievement gaps (Pianta, Barnett, Burchinal, & Thornburg, 2009). Teacher professional development initiatives have demonstrated positive impacts on classroom quality and children’s outcomes (e.g., Hamre et al., 2012). At the same time, average effect sizes for these programs remain both modest and variable, ranging across evaluations from 0.20 to 1.00 SDs for classroom quality and from 0.10 to 0.50 SDs for child outcomes (e.g., Dickinson & Caswell, 2007; Landry et al., 2009).

Despite these average effects, less is known about how effectiveness varies within programs. Quantifying this variation is important for understanding where, when, and for whom professional development programs may be most effective, as well as for optimizing resource allocation and scalability. The aim of this study is to quantify the variation in effectiveness of a large-scale, two-phase professional development program – the National Center for Research on Early Childhood Education Professional Development Study (NCRECE PDS) – across participating cities and centers.

In particular, we estimate the variability in the NCRECE PDS’s effects on classroom quality, measured by the Classroom Assessment Scoring System™ (Pianta et al., 2008), which captures emotional support, classroom organization, and instructional support, and on children’s language, literacy, and executive function outcomes (measured predominantly by the Woodcock-Johnson). Participants for this study included 1,375 racially/ethnically diverse, low-income three- and four-year-old children and 365 teachers from 219 early childhood education centers across 9 U.S. cities. In phase 1 of the NCRECE PDS, teachers were randomly assigned to a 14-week professional development course or a control group. In phase 2, the teachers were re-randomized to a year-long coaching program or a control condition.

Prior research indicated that the NCRECE PDS intervention had positive average effects on classroom instructional quality and children’s executive functioning, but not on other outcomes (Hamre et al., 2012). Using site fixed effects (either city or center) and treatment-by-site interactions, we found that the variation in treatment impacts on children and teachers was substantial. For example, at the end of phase 2, the cross-site standard deviation of center-specific impacts was 1.10 SD for instructional support (ranging from -1.76 to 2.47 SD), 1.41 SD for classroom organization (ranging from -1.98 to 2.62 SD), and 1.14 SD for emotional support (ranging from -2.05 to 2.80 SD). However, when employing new methodology to test for significant treatment impact variation, which uses random slopes for the treatment assignment and tests for significance using a Q statistic (e.g., Bloom & Weiland, 2014; Raudenbush & Bloom, 2015), we did not detect statistically significant impact variation across sites (either cities or centers) in the outcomes tested.
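To illustrate the kind of heterogeneity test described above, the sketch below computes a standard Q statistic comparing site-specific impact estimates to their precision-weighted mean, in the spirit of Bloom & Weiland (2014) and Raudenbush & Bloom (2015). This is a minimal illustration, not the study's actual estimation code; the site-level estimates and standard errors are hypothetical values, not NCRECE PDS data.

```python
import numpy as np
from scipy import stats

def q_test(impacts, std_errors):
    """Test whether site-specific impact estimates vary more than
    expected from sampling error alone.

    impacts: per-site treatment effect estimates (e.g., in SD units)
    std_errors: their estimated standard errors
    Returns (Q, degrees of freedom, p-value).
    """
    impacts = np.asarray(impacts, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    # Precision-weighted mean impact across sites
    pooled = np.sum(weights * impacts) / np.sum(weights)
    # Q sums each site's squared deviation from the pooled mean,
    # weighted by that site's precision
    q = np.sum(weights * (impacts - pooled) ** 2)
    df = len(impacts) - 1
    # Under the null of no true cross-site impact variation,
    # Q follows a chi-square distribution with (J - 1) df
    p_value = stats.chi2.sf(q, df)
    return q, df, p_value

# Hypothetical site-level impact estimates and standard errors
site_impacts = [0.45, -0.10, 0.80, 0.20, 0.05, 0.60]
site_ses = [0.30, 0.35, 0.40, 0.25, 0.30, 0.45]
q, df, p = q_test(site_impacts, site_ses)
print(f"Q = {q:.2f}, df = {df}, p = {p:.3f}")
```

A large Q relative to the chi-square(df) reference distribution signals more cross-site variation than sampling error alone would produce; a non-significant Q, as in the study's results, means such variation could not be distinguished from noise.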

These results suggest that the NCRECE PDS may have had differential benefits for particular classrooms and children, but that we were unable to estimate these differences precisely due to limited statistical power. Future directions will explore how to maximize power in studies of treatment impact variation and will identify the characteristics of classrooms, centers, and communities that benefit most or least from the intervention.