
Poster Paper: Experiments in Schools: Methodological Considerations for Conducting Random Assignment in Educational Settings

Friday, November 13, 2015
Riverfront South/Central (Hyatt Regency Miami)

*Names in bold indicate Presenter

Ann-Marie Faria, Nicholas Sorensen, Jessica Heppen, Mindee O'Cummings, Suzanne Stachel, Ryan Eisner, Dionisio Garcia-Piriz and Lily Heine, American Institutes for Research
During the last decade, there has been an explosion of experiments conducted in schools. The establishment of the U.S. Department of Education's Institute of Education Sciences (IES) in 2002 promoted an increase in the rigor of education research, and many federal programs have mandated that only “evidence-based programs” may be funded. Randomized controlled trials (RCTs) provide the highest level of evidence for program impact because their design incorporates a systematic process—randomization—for distributing other potential causal factors evenly across groups. RCTs have been used to evaluate the impact of a variety of educational practices and policies, including school vouchers, Head Start, and Reading First. In the field of education, the U.S. Department of Education’s What Works Clearinghouse maintains the evidentiary standards for program effectiveness, and RCTs are the only research designs that allow claims of effectiveness to be made “without reservations” (IES, 2014).

This paper will summarize the methodological pros and cons of using matched-pair cluster randomization (MPCR) in a school-based trial. Some literature suggests that MPCR is more successful than simple or blocked random assignment at creating treatment and control groups that are equivalent at baseline on both measured and unmeasured characteristics. The use of MPCR, however, can have implications for attrition among matched school pairs and for data collection. Researchers need to weigh the benefits of MPCR against these potential costs.

This paper will provide an in-depth example of using MPCR to study the impact of the Early Warning Intervention and Monitoring System (EWIMS). The EWIMS Impact Study is examining the effect of EWIMS on both school and student outcomes. In total, 73 schools were randomly assigned across three states. Schools were matched using Mahalanobis metric matching on key covariates, such as school size and graduation rate, that also determined schools’ eligibility to participate in the study. Within each matched pair, schools were then randomly assigned to either the treatment group or a delayed-treatment control group.
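To illustrate the matching-then-randomizing procedure described above, the sketch below pairs schools by Mahalanobis distance on school-level covariates and then randomly assigns one school in each pair to treatment. This is a simplified illustration, not the actual EWIMS study code: the covariate values are fabricated, and the greedy closest-pair matching stands in for whatever pairing algorithm the study team used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical school-level covariates (rows = schools; columns stand in
# for eligibility-related characteristics such as size and graduation rate).
X = rng.normal(size=(8, 2))

# Mahalanobis distance between every pair of schools.
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X[:, None, :] - X[None, :, :]
D = np.sqrt(np.einsum("ijk,kl,ijl->ij", diff, cov_inv, diff))

# Greedy pairing: repeatedly match the two closest unpaired schools.
unpaired = set(range(len(X)))
pairs = []
while len(unpaired) > 1:
    idx = sorted(unpaired)
    sub = D[np.ix_(idx, idx)].copy()
    np.fill_diagonal(sub, np.inf)  # a school cannot pair with itself
    i, j = np.unravel_index(np.argmin(sub), sub.shape)
    a, b = idx[i], idx[j]
    pairs.append((a, b))
    unpaired -= {a, b}

# Within each matched pair, randomly assign one school to treatment and
# the other to the delayed-treatment control condition.
assignment = {}
for a, b in pairs:
    t, c = (a, b) if rng.random() < 0.5 else (b, a)
    assignment[t] = "treatment"
    assignment[c] = "control"
```

Because randomization occurs within pairs, every treatment school has a control counterpart that is similar on the matching covariates, which is the source of the baseline-equivalence benefit (and of the pairwise-attrition risk) discussed in this paper.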

Using the example of the EWIMS study, the proposed paper will examine the pros and cons of using MPCR in evaluating educational programs and contrast them with traditional blocked random assignment at the school level. We also will discuss the realities of implementing MPCR in the field with schools in terms of design, school recruitment, data collection, attrition, and data analysis.