Panel Paper:
A Meta-Analysis of Within-Study Comparisons Comparing Regression Discontinuity to Random Assignment
*Names in bold indicate Presenter
The methodology blends two research traditions. The first is the within-study comparison (WSC) literature, which seeks to establish whether randomized controlled trials (RCTs) and quasi-experiments (QEs) produce equivalent or different causal estimates when each design shares the same treatment group. Such studies, also called design experiments, use the RCT as the unbiased causal benchmark and assess the degree of bias by comparing the RCT estimate to the adjusted posttest difference in the QE. What varies in a WSC, therefore, is how the comparison group is formed: at random in the RCT, and systematically in the QE. In the regression discontinuity (RD) case, the systematic assignment procedure is a cutoff allocation mechanism that, in the sharp case, completely determines treatment status. When the two final impact estimates are identical, no bias is evident; when they differ, bias is evident, and the QE design and the adjustments used to control for selection bias are held to be inadequate.
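The logic of a single WSC contrast can be sketched in a small simulation. This is an illustrative toy, not the panel's actual estimation procedure: the data-generating process, sample size, bandwidth, and local-linear estimator below are all assumptions made for the example. The same true effect is recovered once by random assignment (a difference in means) and once by a sharp cutoff rule (a difference in local-linear intercepts at the cutoff); the WSC contrast is the gap between the two estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = 2.0   # assumed treatment effect for this toy example
CUTOFF = 0.0        # assumed cutoff on the running variable
n = 10_000

def outcome(running, treated):
    # Outcome depends smoothly on the running variable plus the effect.
    return 1.0 + 0.5 * running + TRUE_EFFECT * treated + rng.normal(0, 1, running.size)

# RCT arm: treatment assigned at random, so a difference in means is unbiased.
x_rct = rng.normal(0, 1, n)
t_rct = rng.integers(0, 2, n)
y_rct = outcome(x_rct, t_rct)
rct_estimate = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()

# Sharp RD arm: the cutoff completely determines treatment status.
x_rd = rng.normal(0, 1, n)
t_rd = (x_rd >= CUTOFF).astype(int)
y_rd = outcome(x_rd, t_rd)

# Local linear regression on each side of the cutoff within a bandwidth;
# the RD estimate is the difference in intercepts at the cutoff.
h = 0.5
def intercept_at_cutoff(mask):
    X = np.column_stack([np.ones(mask.sum()), x_rd[mask] - CUTOFF])
    beta, *_ = np.linalg.lstsq(X, y_rd[mask], rcond=None)
    return beta[0]

above = (t_rd == 1) & (x_rd < CUTOFF + h)
below = (t_rd == 0) & (x_rd > CUTOFF - h)
rd_estimate = intercept_at_cutoff(above) - intercept_at_cutoff(below)

# The WSC contrast: how far the RD estimate falls from the RCT benchmark.
bias = rd_estimate - rct_estimate
print(f"RCT: {rct_estimate:.2f}, RD: {rd_estimate:.2f}, contrast: {bias:.2f}")
```

With a correctly specified sharp design, both estimates sit near the true effect and the contrast hovers near zero, which is the "no bias evident" case described above.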
Our analysis accounts for the clustering of contrasts within WSCs, and we use both frequentist and Bayesian methods to test the robustness of our results and to facilitate interpretation for a diverse audience.
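One simple way to respect clustering when pooling is to aggregate contrasts within each WSC before combining across WSCs, so that studies contributing many contrasts do not receive extra weight. The sketch below uses a fixed-effect inverse-variance average at both levels; the bias estimates and standard errors are fabricated for illustration, and this is only one of several possible clustering strategies, not necessarily the one used in the paper.

```python
import numpy as np

# Hypothetical (made-up) WSC contrasts: (wsc_id, bias_estimate, standard_error),
# where each bias estimate is an RD-minus-RCT difference.
contrasts = [
    (1, 0.05, 0.10),
    (1, -0.02, 0.12),
    (2, 0.01, 0.08),
    (3, 0.10, 0.15),
    (3, 0.03, 0.09),
    (3, -0.01, 0.11),
]

# Step 1: inverse-variance average within each WSC (the cluster).
by_wsc = {}
for wsc, est, se in contrasts:
    by_wsc.setdefault(wsc, []).append((est, se))

cluster_means, cluster_vars = [], []
for pairs in by_wsc.values():
    w = np.array([1 / se**2 for _, se in pairs])
    e = np.array([est for est, _ in pairs])
    cluster_means.append(np.sum(w * e) / np.sum(w))
    cluster_vars.append(1 / np.sum(w))

# Step 2: pool the cluster-level means across WSCs.
w = 1 / np.asarray(cluster_vars)
pooled = np.sum(w * np.asarray(cluster_means)) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
print(f"pooled bias: {pooled:.3f} (SE {pooled_se:.3f})")
```

A random-effects version would add a between-WSC variance component to each cluster's weight, and a Bayesian version would place priors on the pooled bias and that variance; the two-level aggregation structure stays the same.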