Panel Paper: A Review of Methods for Assessing Sensitivity of Quasi-Experimental Effect Estimates to Unobserved Confounders

Thursday, November 7, 2019
Plaza Building: Concourse Level, Governor's Square 10 (Sheraton Denver Downtown)


Fatih Unlu, RAND Corporation; Douglas Lauen, University of North Carolina, Chapel Hill; and Elizabeth Stuart, Johns Hopkins University


Despite their widespread use, all quasi-experimental (QE) estimators of program effects rest on the assumption that there are no unobservable confounders, i.e., variables that are common causes of both the outcome and the treatment. While this assumption cannot be verified, a number of approaches have been proposed to assess the sensitivity or robustness of results from QE designs to potential unobserved confounders. The first approach entails calculating the threshold associations of a potential omitted variable with a given outcome of interest and the observed covariates that would either a) turn a significant effect estimate into an insignificant one (e.g., Frank & Xu, 2017; Rosenbaum, 2010) or b) “fully explain away” a specific treatment effect estimate (e.g., VanderWeele & Ding, 2017). A second approach approximates the magnitude of the bias due to omitted confounders and calculates bias-adjusted impact estimates and confidence intervals under specific assumptions regarding the strength of the associations of the potential omitted variable with a given outcome of interest and the observed covariates (e.g., VanderWeele & Arah, 2011; Liu, Kuramoto, & Stuart, 2013; Oster, 2017). Finally, a third set of approaches estimates bounds for the treatment effect (e.g., Lee, 2009; Lee, 2011; Oster, 2017).
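To illustrate the logic of the first approach, consider the E-value of VanderWeele and Ding (2017). For an observed risk ratio RR > 1, the E-value is

E-value = RR + sqrt(RR × (RR − 1)).

For example, an observed RR of 2 yields an E-value of 2 + sqrt(2) ≈ 3.4: an unmeasured confounder would need to be associated with both the treatment and the outcome at risk ratios of at least 3.4, above and beyond the measured covariates, to fully explain away the estimate.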

The proposed paper makes two contributions to the sizeable but fragmented research base on sensitivity analysis approaches for QE estimators. First, it conducts a comprehensive review of the various approaches and methods, highlighting their common features and differences and identifying specific situations or conditions under which a given approach may be preferable to the others. Second, it demonstrates the implementation of these approaches using a large, longitudinal dataset (administrative data for over 700,000 students across 10 school years) from the QE evaluation of early college high schools in North Carolina. This empirical exercise uses multiple outcome measures with potentially different confounders (e.g., high school graduation, postsecondary degree attainment, criminal convictions, and voting) and complements the theoretical comparison of the various sensitivity and robustness analysis approaches.

References

Frank, K. A., & Xu, R. (2017). KONFOUND: Stata module to quantify robustness of causal inferences. Statistical Software Components.

Lee, D. S. (2009). Training, wages, and sample selection: Estimating sharp bounds on treatment effects. Review of Economic Studies, 76(3), 1071–1102.

Lee, W. C. (2011). Bounding the bias of unmeasured factors with confounding and effect modifying potentials. Statistics in Medicine, 30(9), 1007–1017.

Liu, W., Kuramoto, S. J., & Stuart, E. A. (2013). An introduction to sensitivity analysis for unobserved confounding in non-experimental prevention research. Prevention Science, 14(6), 570–580.

Oster, E. (2017). Unobservable selection and coefficient stability: Theory and evidence. Journal of Business & Economic Statistics, 1–18.

Rosenbaum, P. R. (2010). Design sensitivity and efficiency in observational studies. Journal of the American Statistical Association, 105(490), 692–702.

VanderWeele, T. J., & Arah, O. A. (2011). Bias formulas for sensitivity analysis of unmeasured confounding for general outcomes, treatments, and confounders. Epidemiology, 22(1), 42–52.

VanderWeele, T. J., & Ding, P. (2017). Sensitivity analysis in observational research: Introducing the E-value. Annals of Internal Medicine, 167(4), 268–274.