
Panel Paper: Causal Inference Vs. Prediction: When Selection on the Observables Is Almost Always Violated

Saturday, November 14, 2015 : 10:55 AM
Pearson I (Hyatt Regency Miami)


Hye-Sung Kim, The University of Georgia
Causal inference, identifying and estimating causal effects, is the key interest of public administration researchers whose work crucially influences evidence-based policy making. Identifying and estimating the impact or causal effect of an intervention, however, depends fundamentally on the assumption of no omitted variables or selection on the observables. This assumption, unfortunately, is almost always violated when researchers are working with observational data.

This paper first raises a concern about over-optimism among applied researchers who use quasi-experimental approaches, treat them as equivalent to "as if" randomization, and make causal claims on that basis. Examples from leading journals in public administration include matching estimators, regression with a large set of control variables, and propensity score methods. All of these methods assume selection on the observables, that is, no omitted variable bias; improving balance or conditioning on observables, however, is not equivalent to randomization. In other words, improving balance or conditioning on the observables does not remove selection bias when omitted variables are in fact present: unless treatment assignment is randomized, there is no "solution" that makes treatment assignment independent of the errors in the outcome, as many hope.

This paper suggests that, instead of hoping that selection on the observables is the correct assumption, researchers should provide more transparent information about the magnitude of potential bias in the estimate and about what information the estimate actually conveys. The paper discusses three practical ways to avoid misleading interpretations of causal effects in observational studies and to provide more useful information to practitioners.

First, researchers who use the above approaches under the assumption of selection on the observables should conduct sensitivity analysis to show how the estimate would change in the presence of omitted variables or unmeasured confounders, as the correlation between the omitted variables and the errors in the outcome variable increases (Imbens 2003; Rosenbaum 2002; Blackwell 2014).
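
The sketch below is a hypothetical illustration of the kind of sensitivity check described above, not the specific procedure of Imbens (2003), Rosenbaum (2002), or Blackwell (2014): it simulates an unmeasured confounder of increasing strength and tracks how the naive "selection on the observables" estimate drifts away from the true treatment effect. All variable names and parameter values are illustrative.

```python
# Minimal sensitivity-analysis sketch (assumed setup, not the authors' code):
# vary the strength rho of an unmeasured confounder U and watch the estimated
# treatment coefficient move away from the true effect of 1.0.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=n)                       # observed covariate

for rho in (0.0, 0.2, 0.4, 0.6):             # strength of the omitted confounder
    u = rng.normal(size=n)                   # unmeasured confounder
    treat = (0.5 * x + rho * u + rng.normal(size=n) > 0).astype(float)
    y = 1.0 * treat + 0.8 * x + rho * u + rng.normal(size=n)   # true effect = 1.0

    # "Selection on the observables" regression: y on treat and x, with U omitted
    X = np.column_stack([np.ones(n), treat, x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    print(f"rho = {rho:.1f}  estimated treatment effect = {beta[1]:.3f}")
```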

Second, researchers may consider shifting the focus of research from estimating exact causal effects to obtaining precise predictions of marginal effects, that is, measuring the extent to which the conditional expectation of the outcome variable changes as the independent variable of interest changes. While this estimate includes not only the causal effect of the treatment variable but also selection bias, researchers can still provide clear information about the marginal effect given the model specification, as in the sketch below.
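
As an illustration of this shift in focus, the following sketch reports a predictive average marginal effect, computed as the finite-difference change in a fitted model's conditional expectation as the variable of interest changes, rather than a causal effect. The data-generating process and specification here are hypothetical.

```python
# Hedged sketch: report the average marginal effect of x1 implied by a chosen
# specification, interpreted as a change in the conditional expectation of y,
# not as a causal effect. Illustrative data and variable names.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 2_000
x1 = rng.normal(size=n)                      # variable of interest
x2 = rng.normal(size=n)                      # control variable
y = 2.0 * x1 + 0.5 * x1 * x2 + rng.normal(size=n)

X = np.column_stack([x1, x2, x1 * x2])       # the specification we condition on
model = LinearRegression().fit(X, y)

# Average marginal effect of x1: E[y | x1 + delta, x2] - E[y | x1, x2], scaled
delta = 1e-3
X_hi = np.column_stack([x1 + delta, x2, (x1 + delta) * x2])
ame = (model.predict(X_hi) - model.predict(X)).mean() / delta
print(f"Average marginal effect of x1, given this specification: {ame:.3f}")
```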

Third, when prediction is the main objective of a study, researchers may implement machine learning methods such as Kernel-Based Regularized Least Squares (Hainmueller and Hazlett 2013). This method reduces the misspecification bias present in linear regression and other approaches that impose assumptions on the functional form. In addition, the interpretation of marginal effects in Kernel-Based Regularized Least Squares is as simple and straightforward as in linear regression models.
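
The sketch below approximates the idea behind Kernel-Based Regularized Least Squares using scikit-learn's KernelRidge estimator rather than the authors' KRLS implementation, which is an assumption on my part. Pointwise marginal effects are recovered by finite differences and can be read much like regression coefficients, while being allowed to vary across observations.

```python
# Kernel-based regularized regression sketch (KernelRidge stands in for KRLS):
# fit a flexible conditional expectation, then read off pointwise marginal
# effects of x0 by finite differences. Data and tuning values are illustrative.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)
n = 1_000
X = rng.normal(size=(n, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)   # nonlinear in x0

model = KernelRidge(alpha=1.0, kernel="rbf", gamma=0.5).fit(X, y)

# Pointwise marginal effect of x0 via finite differences
delta = 1e-3
X_hi = X.copy()
X_hi[:, 0] += delta
pointwise = (model.predict(X_hi) - model.predict(X)) / delta
print(f"Average marginal effect of x0: {pointwise.mean():.3f}")
print(f"Range across observations: [{pointwise.min():.3f}, {pointwise.max():.3f}]")
```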

I present data applications of all three suggested approaches to demonstrate how they improve the information available when interpreting estimation results.