Panel Paper: Bounding Approaches for Generalization

Thursday, November 2, 2017
Field (Hyatt Regency Chicago)


Wendy Chan, University of Pennsylvania


Policymakers have grown increasingly interested in understanding for whom interventions work. This question concerns the extent to which results from experimental studies generalize to target populations of individuals, and answering it formally requires probability sampling. However, probability sampling is rare in educational interventions, so generalizations from non-random samples typically rely on model-based methods that invoke several strong assumptions. Because these assumptions may not hold in practice, this study considers bounding approaches that make fewer or different assumptions about the data. Bounding methods yield interval estimates of the parameters of interest rather than single point estimates. We derive bounds under three frameworks: bounds using the data alone, bounds under a monotonicity assumption, and bounds under propensity score stratification with monotonicity. Under monotonicity, we illustrate how the bounds may be tightened when the outcomes in the population are at most as large as those in the sample. Although monotonicity is itself a strong, untestable assumption, we argue that its plausibility may be suggested by the intervention’s theory of change and prior theoretical evidence, which strengthens the credibility of the bounds. Under propensity score stratification, we illustrate how stratifying individuals in the sample and population yields tighter bounds because the resulting subgroups are more homogeneous. We compare the bounds under the three frameworks in a simulation study and assess the extent to which they are informative about the population parameter. We apply the bounding approaches to a completed cluster randomized trial in education and provide empirical bounds on the population average treatment effect.
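
The abstract states the three frameworks without formulas. As a purely illustrative sketch, the Python snippet below shows one common way such worst-case (Manski-style) bounds are computed when outcomes are known to lie in a bounded range. The sampling fraction `pi`, the outcome range `[y_min, y_max]`, and the particular formalization of monotonicity used here are assumptions made for illustration; they need not match the paper's derivations.

```python
# Illustrative sketch only: hypothetical names, not the paper's notation.

def manski_bounds(sate, pi, y_min, y_max):
    """Bounds on the population average treatment effect (PATE)
    using the data alone, assuming outcomes lie in [y_min, y_max].

    sate: average treatment effect in the (non-random) sample.
    pi:   fraction of the target population covered by the sample.

    PATE = pi * SATE + (1 - pi) * (effect among unsampled units),
    and the unsampled effect can only be bounded by
    [y_min - y_max, y_max - y_min].
    """
    lower = pi * sate + (1 - pi) * (y_min - y_max)
    upper = pi * sate + (1 - pi) * (y_max - y_min)
    return lower, upper


def monotone_bounds(sate, pi, y_min, y_max):
    """One possible formalization of monotonicity: the effect among
    unsampled units is no larger than the sample effect, which
    tightens the upper bound from manski_bounds to sate itself."""
    lower, _ = manski_bounds(sate, pi, y_min, y_max)
    upper = pi * sate + (1 - pi) * sate  # equals sate
    return lower, upper


def stratified_bounds(strata):
    """Propensity score stratification: apply manski_bounds within
    each stratum and average with population stratum weights; more
    homogeneous strata have narrower within-stratum outcome ranges,
    so the aggregated interval is tighter.

    strata: iterable of dicts with keys
            'weight', 'sate', 'pi', 'y_min', 'y_max'.
    """
    lo = sum(s["weight"] * manski_bounds(s["sate"], s["pi"],
                                         s["y_min"], s["y_max"])[0]
             for s in strata)
    hi = sum(s["weight"] * manski_bounds(s["sate"], s["pi"],
                                         s["y_min"], s["y_max"])[1]
             for s in strata)
    return lo, hi
```

For instance, with outcomes scaled to [0, 1], a sample effect of 0.3, and a sampling fraction of 0.5, manski_bounds gives the interval (-0.35, 0.65), while the monotone version tightens it to (-0.35, 0.3), illustrating how an added assumption narrows the bounds.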