Panel Paper: Implications of Attrition in Randomized Control Trials: Evidence from Early Education Interventions in Sub-Saharan Africa

Friday, July 24, 2020
Webinar Room 2 (Online Zoom Webinar)

*Names in bold indicate Presenter

Alejandro Ome1, Alicia Menendez1,2 and Cally Ardington3, (1)NORC at the University of Chicago, (2)University of Chicago, (3)University of Cape Town


In this study, we analyze the determinants and consequences of survey attrition in the context of three large-scale early reading impact evaluations recently conducted in Ethiopia, South Africa, and Zambia. We apply different econometric techniques to address attrition and analyze how sensitive the program evaluation results are to these corrections.

We find that attrition rates vary widely across the three studies. While attrition in the South Africa and Zambia evaluations is between 10 and 20 percent, in the Ethiopia evaluation it is about 40 percent. We also find that although attrition is not correlated with treatment status, it is correlated with other student-level factors, in particular reading performance at baseline. This has implications for the external validity of the results, as the endline samples are no longer comparable to the baseline samples.
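To make this kind of diagnostic concrete, a common check is to regress an attrition indicator on treatment assignment and baseline covariates: a significant treatment coefficient would indicate differential attrition, while a significant baseline-score coefficient indicates the selective attrition described above. The sketch below uses simulated data and hypothetical variable names, not the study data or the authors' specification.

```python
# Illustrative attrition diagnostic on simulated data (hypothetical variable
# names; not the authors' specification or data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),          # random treatment assignment
    "baseline_score": rng.normal(0, 1, n),   # baseline reading performance
})
# Simulated attrition depends on baseline performance but not on treatment,
# mirroring the pattern described in the text.
p_attrit = 1 / (1 + np.exp(1.5 + 0.5 * df["baseline_score"]))
df["attrit"] = rng.binomial(1, p_attrit)

# Logit of attrition on treatment and baseline score: the `treat` coefficient
# checks differential attrition; `baseline_score` checks selective attrition.
diag = smf.logit("attrit ~ treat + baseline_score", data=df).fit(disp=False)
print(diag.summary())
```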

We correct for attrition using the most popular methods in the empirical literature, namely inverse probability weighting (IPW), Lee bounds, and Manski bounds. We find that the IPW estimates are similar to the intent-to-treat (ITT) results, suggesting that attrition has a minor impact on the validity of the findings. However, given the underlying assumption of selection on observables, it is hard to say whether this is enough to rule out attrition as a concern. Manski bounds are very wide and uninformative. Lee bounds are generally informative and relatively narrow, which is not surprising given that, within each study, attrition rates in the treatment and control groups are similar.
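For concreteness, a minimal sketch of the IPW correction and of Lee bounds follows, again on simulated data with hypothetical variable names. It assumes selection on observables for the IPW step and, for brevity, that the treatment group's retention rate is at least as high as the control group's; it is illustrative only, not the authors' implementation.

```python
# Minimal IPW and Lee-bounds sketch on simulated data (hypothetical variable
# names; illustrative only, not the authors' implementation).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "baseline_score": rng.normal(0, 1, n),
})
p_attrit = 1 / (1 + np.exp(1.5 + 0.5 * df["baseline_score"]))
df["attrit"] = rng.binomial(1, p_attrit)
df["endline_score"] = (0.3 * df["treat"] + 0.8 * df["baseline_score"]
                       + rng.normal(0, 1, n))       # true ITT effect = 0.3
observed = df[df["attrit"] == 0].copy()             # endline (non-attrited) sample

# --- Inverse probability weighting (selection on observables) ---
df["observed"] = 1 - df["attrit"]
resp = smf.logit("observed ~ treat + baseline_score", data=df).fit(disp=False)
observed["w"] = 1 / resp.predict(observed)          # weight = 1 / P(observed)
ipw = smf.wls("endline_score ~ treat", data=observed,
              weights=observed["w"]).fit()          # SEs would need adjustment
print("IPW ITT estimate:", ipw.params["treat"])

# --- Lee (2009) bounds: trim the group with the higher retention rate ---
retain_t = df.loc[df.treat == 1, "observed"].mean()
retain_c = df.loc[df.treat == 0, "observed"].mean()
trim = max((retain_t - retain_c) / retain_t, 0.0)   # share of treated to trim
y_t = observed.loc[observed.treat == 1, "endline_score"].sort_values()
y_c = observed.loc[observed.treat == 0, "endline_score"]
k = int(np.floor(trim * len(y_t)))
lower = y_t.iloc[: len(y_t) - k].mean() - y_c.mean()  # drop top k outcomes
upper = y_t.iloc[k:].mean() - y_c.mean()              # drop bottom k outcomes
print("Lee bounds:", (lower, upper))  # collapse to a point when trim == 0
```

When retention rates are nearly equal across arms, the trimming share is close to zero and the Lee bounds collapse toward the simple difference in means, which is consistent with the narrow bounds reported above.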