Panel Paper: The Comparative Regression Discontinuity Design: A Simulation Study of Its Sensitivity to Violations of Its Key Assumptions about Causal Generalization Away from the Treatment Cutoff

Thursday, November 7, 2019
Plaza Building: Concourse Level, Governor's Square 10 (Sheraton Denver Downtown)


Kathryn A. Hendren, Mansi Wadhwa and Thomas Cook, George Washington University


The comparative regression discontinuity design (CRD) seeks to improve on the basic regression discontinuity design (RD), most importantly by identifying causal effects away from the treatment cutoff and not just at it. CRD requires measuring an untreated outcome along the full length of the assignment variable and then adding this measure to the usual RD design, which is characterized by posttest assessments within both the treated and untreated segments of the assignment variable. When a pretest measure of the study outcome is used as the untreated comparison function, empirical evidence from design experiments (LaLonde, 1986) indicates that CRD can sometimes provide valid causal claims away from the cutoff (Wing & Cook, 2013; Tang, Cook, Kisbu-Sakarya, Hock, & Chiang, 2017). The present paper explicates the assumptions on which such a conclusion depends and then uses a simulation study to test CRD's sensitivity to violations of these assumptions. The paper tests the simulation estimates against each other and against results for the same estimand from the randomized experiment that generated the data on which the simulation was based. Using literacy data from the National Head Start Impact Study, we simulated ten scenarios involving different degrees of deviation from parallel slopes for the three observed untreated regression segments (the pretest observed in the untreated and treated segments and the posttest observed in the untreated segment) and also for the unobserved potential outcome slope that indexes what would have happened to the treated group absent treatment. We also varied the amount of deviation around these slopes. To establish the degree of final bias, all estimates were compared against average treatment effects on the treated from the experiment. The findings indicate that CRD is highly sensitive to the unobserved untreated outcome slope deviating from the observed slopes, but it is more forgiving of non-parallel pretest scores either above or below the cutoff. When multiple slopes are non-parallel, the pattern of deviation can either exacerbate or reduce the bias, depending on the specific scenario. Increasing the variance around a slope exacerbates whatever bias is introduced by deviations from parallel slopes. Users of CRD who seek to make general causal inferences will need to be cautious, restricting themselves to large-sample studies with clear trends across the assignment variable and no evidence of a discontinuity at the cutoff in the comparison regression function. Under these conditions, we plot how tolerable specific levels of deviation are in the pretest observations in the treated regression segment and in the posttest observations in the untreated segment.
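To make the parallel-slopes logic concrete, the sketch below implements one simple CRD-style estimator on simulated data. It is an illustration only: the linear data-generating process, the variable names, and the constant treatment effect are assumptions for this sketch, not the paper's actual Head Start simulation code. The estimator fits the pretest comparison function over the full assignment range, fits the posttest regression in the untreated segment, and, assuming the two untreated functions are parallel, shifts the pretest line to project the treated group's counterfactual away from the cutoff.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n, cutoff = 5000, 0.0

# Hypothetical data-generating process (illustration only):
# linear, parallel untreated slopes, constant treatment effect.
A = rng.uniform(-1.0, 1.0, n)           # assignment variable
treated = A >= cutoff                   # sharp RD treatment rule
tau = 0.50                              # true treatment effect
y_pre = 1.0 + 0.8 * A + rng.normal(0.0, 0.3, n)                # pretest, full range
y_post = 1.4 + 0.8 * A + tau * treated + rng.normal(0.0, 0.3, n)

# 1) Fit the pretest comparison function over the FULL assignment range.
slope_pre, int_pre = np.polyfit(A, y_pre, 1)

# 2) Fit the posttest regression in the untreated segment only.
u = ~treated
slope_u, int_u = np.polyfit(A[u], y_post[u], 1)

# 3) If the pre and post untreated functions are parallel, they differ by
#    a constant shift; use it to project the treated group's untreated
#    counterfactual across the treated segment.
shift = int_u - int_pre
counterfactual = (int_pre + shift) + slope_pre * A[treated]

# Average treatment effect on the treated, away from the cutoff.
att = float(np.mean(y_post[treated] - counterfactual))
print(f"CRD ATT estimate: {att:.3f} (true effect: {tau})")
```

The sketch also shows where the design is fragile: if the unobserved untreated posttest slope in the treated segment is not parallel to the observed slopes, the projected counterfactual line is wrong everywhere except near the cutoff, which is exactly the sensitivity the simulation scenarios probe.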

LaLonde, R. J. (1986). Evaluating the econometric evaluations of training programs with experimental data. The American Economic Review, 76(4), 604-620.

Wing, C., & Cook, T. D. (2013). Strengthening the regression discontinuity design using additional design elements: A within-study comparison. Journal of Policy Analysis and Management, 32(4), 853-877.

Tang, Y., Cook, T. D., Kisbu-Sakarya, Y., Hock, H., & Chiang, H. (2017). The comparative regression discontinuity (CRD) design: An overview and demonstration of its performance relative to basic RD and the randomized experiment. In Regression discontinuity designs: Theory and applications (pp. 237-279). Emerald Publishing Limited.
