Panel Paper: Methods for Disentangling the Effects of Individual Components of Coaching in Head Start

Saturday, November 9, 2013 : 1:45 PM
3015 Madison (Washington Marriott)


Eboni Howard (1), Marie-Andree Somers (2) and James Taylor (1); (1) American Institutes for Research, (2) MDRC
Social interventions consist of multiple components that are bundled together with the objective of improving the outcomes of individuals or groups. Decisions about which components to bundle are based primarily on theory and professional experience, rather than on empirical evidence about the effects of the individual components. Evaluations then focus on studying the intervention as a whole. This approach makes it difficult to determine which specific components of an intervention drive its effects on outcomes, which in turn makes it challenging for practitioners to adapt or scale up interventions to meet their local needs. For this reason, evaluation science needs to move toward policy experiments that open the “black box” of social interventions by testing the effects of individual intervention components.

As an example of this approach, AIR and MDRC have developed a study design to evaluate the effects of individual coaching components used in early childhood education settings, particularly Head Start programs. A coaching component is defined here as any element or feature of coaching that can be separated out in order to study its individual effect on the outcomes of interest. The study design is intended to provide Head Start programs with reliable evidence on the effects of different levels of coaching components, so that programs can then implement stronger coaching interventions.

Our presentation will cover two aspects of the study design. First, we will review coaching components that could be systematically varied (for example, from “low” to “high” intensity) to estimate their effects on teacher and classroom outcomes. Second, we will discuss the recommended experimental design for estimating the effects of these coaching components: a factorial design. Factorial designs are the preferred approach for two reasons. First, they provide findings that are useful for policymakers and practitioners who are creating or adapting interventions in the field, because they account for, and provide information on, interaction effects between components. Second, although factorial designs are often disregarded by evaluators because they require more experimental conditions than other designs, they require a smaller sample to detect a component effect of a given magnitude; every participant contributes to the estimate of each component's main effect, and this efficiency can outweigh the complications and cost of implementing a larger number of conditions.
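To make the sample-size argument concrete, the sketch below simulates a 2^3 factorial experiment in Python. The component names, effect sizes, and cell size are hypothetical assumptions for illustration, not values from the study; the point is that every teacher's outcome enters the high-versus-low contrast for every component.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2^3 factorial: three coaching components, each set to
# "low" (0) or "high" (1). Names, effect sizes, and cell size are
# illustrative assumptions, not values from the study design.
components = ["dosage", "feedback", "modeling"]
true_effects = np.array([0.30, 0.15, 0.05])  # assumed standardized effects
n_per_cell = 25                              # assumed teachers per condition

# Full crossing of the three components: 2 x 2 x 2 = 8 conditions.
cells = np.array([[d, f, m] for d in (0, 1) for f in (0, 1) for m in (0, 1)])
X = np.repeat(cells, n_per_cell, axis=0)     # 200 teachers in total

# Simulated teacher outcome: additive component effects plus noise.
y = X @ true_effects + rng.normal(0.0, 1.0, size=len(X))

# Main effect of each component: mean outcome at "high" minus mean at "low".
# Every teacher falls into one of the two groups for *every* component, so
# the full sample of 200 is reused for each of the three contrasts.
for j, name in enumerate(components):
    estimate = y[X[:, j] == 1].mean() - y[X[:, j] == 0].mean()
    print(f"{name}: estimated main effect = {estimate:+.2f}")
```

In a design that instead tested each component against a separate control group, each contrast would draw on only a fraction of the sample, which is the source of the factorial design's efficiency advantage.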

The findings from a factorial design are also well suited to feed into rigorous intervention design and evaluation. Specifically, evaluators could use the results from the factorial experiment to design an “optimal” coaching model whose components meet some minimum threshold for effect size or cost effectiveness. The impact of this optimal model could then be evaluated against a control or “business as usual” condition. This staged approach to developing social interventions is known as the multiphase optimization strategy (MOST) (Collins, Dziak, & Li, 2009; Collins, Murphy, Nair, & Strecher, 2005). This approach can be an excellent way to move intervention science forward, to better understand the implementation of complex interventions, and to build a coherent body of knowledge about which specific components do and do not work in a particular intervention area.
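As a rough illustration of the MOST screening step, the sketch below selects the components that would enter the “optimal” coaching model. The effect estimates and threshold are hypothetical placeholders, not findings from the study.

```python
# MOST-style screening step: carry forward the components whose estimated
# effects from the factorial experiment clear a pre-specified threshold.
# The estimates and threshold below are hypothetical placeholders.
estimated_effects = {"dosage": 0.28, "feedback": 0.17, "modeling": 0.04}
min_effect = 0.10  # assumed minimum effect size for inclusion

optimal_model = [c for c, e in estimated_effects.items() if e >= min_effect]
print("Components retained for the optimized coaching model:", optimal_model)

# In the MOST framework, this optimized package would then be evaluated
# head-to-head against a business-as-usual control condition.
```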