Poster Paper: Real-World Challenges to Randomization and Their Solutions

Saturday, November 4, 2017
Regency Ballroom (Hyatt Regency Chicago)


Kenya Heard, Elisabeth O'Toole, Rohit Naimpally, and Lindsey Bressler (J-PAL North America; Massachusetts Institute of Technology)


Researchers, policymakers, and practitioners are increasingly turning to randomized evaluations to estimate the causal effects of policies and programs. Compared to other research designs, randomized evaluations are appealing for their methodological rigor and their ability to produce results that are accessible to a variety of audiences (Gueron 2016). Evidence from randomized evaluations has helped policymakers at the city, state, and federal levels make informed decisions about which interventions best address social issues.

However, as Glennerster (2017), Gueron (2016), and Karlan and Appel (2016) note, randomized evaluations can be challenging to design and implement in practice. Many challenges arise from program characteristics that, at first glance, may lead stakeholders to conclude that a randomized evaluation is infeasible in their local context.

Many of the challenges evaluators face can be classified as either program design challenges or implementation challenges. While planning an evaluation, constructing a pure comparison group can be difficult if a program functions as an entitlement, has strict eligibility criteria, or is resourced well enough that implementers can extend it to every eligible individual in the study area. During the implementation phase, problems can arise if the control group learns of the experiment and reacts unfavorably, which may lead to attrition, or if the control group benefits from or is harmed by the intervention rather than being unaffected by it. Either scenario can bias the results of the experiment. These challenges help explain why, despite their appeal, randomized evaluations are not as common as they could be in evaluating social programs (Buck and McGee 2015).
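As a stylized illustration of the last point (not drawn from the poster itself), suppose every unit has baseline outcome $\mu$, the true program effect is $\tau$, and spillovers shift control-group outcomes by $s$ (positive if the control group benefits, negative if it is harmed). The simple difference in mean outcomes then recovers

$$\hat{\tau} = \bar{Y}_{\text{treatment}} - \bar{Y}_{\text{control}} = (\mu + \tau) - (\mu + s) = \tau - s,$$

so benefits that spill over to the control group understate the true effect, while harm to the control group overstates it.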

Using the context of ongoing and completed randomized evaluations of programs to reduce poverty, this poster highlights a selection of challenges that researchers and implementing partners face and proposes research design solutions. We detail several research designs that accommodate existing programs and mitigate foreseeable implementation challenges, demonstrating the flexibility of randomized evaluations across contexts. With an understanding of how to adjust randomized evaluations to work with diverse programs, researchers, policymakers, and practitioners will be better positioned to design evaluations and produce rigorous evidence that informs policy.

References

Buck, Stuart, and Josh McGee. 2015. “Why Government Needs More Randomized Controlled Trials: Refuting the Myths.” Houston: Laura and John Arnold Foundation.

Glennerster, Rachel. 2017. “The Practicalities of Running Randomized Evaluations: Partnerships, Measurement, Ethics, and Transparency.” In Handbook of Field Experiments, edited by Esther Duflo and Abhijit Banerjee. Amsterdam: Elsevier.

Gueron, Judith M. 2016. “The Politics and Practice of Social Experiments: Seeds of a Revolution.” Working Paper. MDRC, May. http://www.mdrc.org/sites/default/files/2016_Gueron_MDRC_Working_Paper.pdf.

Karlan, Dean, and Jacob Appel. 2016. Failing in the Field: What We Can Learn When Field Research Goes Wrong. Princeton: Princeton University Press.