Panel Paper: Do HLM, RCSE, and Design-Based Estimators Differ in Practice?

Friday, November 3, 2017
Dusable (Hyatt Regency Chicago)


Charles Tilley, Mathematica Policy Research


Design-based methods use the potential outcomes framework to connect statistical methods to the building blocks of causal inference. They differ from model-based methods, including hierarchical linear modeling (HLM) and robust cluster standard error (RCSE) methods for clustered designs. Design-based methods tend to make fewer assumptions about the nature of the data and more explicitly account for known information about the experimental and sampling designs. While these theoretical features suggest advantages to design-based methods, it is unclear how much practical difference using them makes relative to the more conventional model-based methods.
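
To make the distinction concrete, the sketch below contrasts a design-based difference-in-means estimator with the Neyman variance against a model-based OLS regression for a non-clustered RCT. The data and variable names are purely illustrative assumptions, not the study's code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Illustrative non-clustered RCT: individuals randomized to treatment or control.
n = 500
treat = rng.binomial(1, 0.5, n)
y = 0.2 * treat + rng.normal(0, 1, n)  # hypothetical true impact of 0.2

# Design-based estimator: difference in means with the Neyman variance,
# which relies on the randomization design rather than an outcome model.
y1, y0 = y[treat == 1], y[treat == 0]
impact_db = y1.mean() - y0.mean()
se_db = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))

# Model-based estimator: OLS regression of the outcome on a treatment indicator.
X = sm.add_constant(treat)
fit_mb = sm.OLS(y, X).fit()

print(impact_db, se_db)                 # design-based estimate and SE
print(fit_mb.params[1], fit_mb.bse[1])  # model-based estimate and SE
```

In this simple non-clustered, non-blocked case the two point estimates coincide; the approaches diverge mainly in how they handle clustering, blocking, and weighting.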

This presentation will cover results from a study that addresses this question by re-analyzing data from nine RCTs in education using both design- and model-based methods. To investigate the full scope of differences between the methods, the study uses data from the types of RCT designs commonly used in social policy research: (1) non-clustered designs, in which individuals are randomized; (2) clustered designs, in which groups are randomized; (3) non-blocked designs, in which randomization is conducted for a single population; and (4) blocked designs, in which randomization is conducted separately within partitions of the sample.

The study focuses on two analyses that compare model- and design-based methods, both of which suggest there is little substantive difference between the results of the two approaches. For both analyses, the study uses a reference model-based method similar to the one used in the original evaluation. In the first analysis, the study compares the reference model-based method to a design-based method whose underlying assumptions most closely align with those of the reference method. In the second analysis, the study presents a sensitivity check that compares the reference method to an alternative design-based method. The alternative method is based on the default settings in the RCT-YES software, which were developed by a panel of expert methodologists and correspond to an alternative set of plausible assumptions. The findings from both analyses suggest that model- and design-based methods yield very similar results in terms of the magnitude of impact estimates, their statistical significance, and their implications for policy.

To contextualize the differences in impact estimates between design- and model-based methods, the study also presents a third analysis that compares estimates from the HLM and RCSE methods. Importantly, this analysis suggests that the differences between the design- and model-based methods are similar in size to the differences between the two model-based methods themselves.
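
As a rough illustration of what such a comparison involves (simulated data and assumptions of ours, not the study's analysis), the sketch below fits a random-intercept HLM and an OLS model with cluster-robust (RCSE) standard errors to the same clustered data using statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Illustrative clustered RCT: 40 schools randomized, 25 students per school.
schools = np.repeat(np.arange(40), 25)
treat = np.repeat(rng.binomial(1, 0.5, 40), 25)
school_effect = np.repeat(rng.normal(0, 0.3, 40), 25)
y = 0.2 * treat + school_effect + rng.normal(0, 1, len(treat))
df = pd.DataFrame({"y": y, "treat": treat, "school": schools})

# HLM: mixed-effects model with a random intercept for each school.
hlm = smf.mixedlm("y ~ treat", df, groups=df["school"]).fit()

# RCSE: ordinary least squares with standard errors clustered at the school level.
rcse = smf.ols("y ~ treat", df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school"]}
)

print(hlm.params["treat"], hlm.bse["treat"])    # HLM impact estimate and SE
print(rcse.params["treat"], rcse.bse["treat"])  # RCSE impact estimate and SE
```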

Based on the studies we consider, the impact findings from the various design- and model-based methods are similar. The magnitude of the differences, however, sometimes depends on the specific underlying assumptions, for example, how clusters and blocks are weighted in the analysis. Our study suggests that researchers should select estimators whose assumptions best suit the goals of their study, regardless of whether they use a design- or model-based approach.
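
For example, a clustered-design estimator can weight each cluster equally or each individual equally, and the two choices define different estimands. The hypothetical sketch below (our illustrative data, not the study's) shows how the resulting estimates can diverge when impacts vary with cluster size:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Illustrative clustered data with unequal cluster sizes (hypothetical values).
sizes = rng.integers(10, 60, 30)
treat = rng.binomial(1, 0.5, 30)
df = pd.DataFrame({
    "school": np.repeat(np.arange(30), sizes),
    "treat": np.repeat(treat, sizes),
})
# Impact grows with cluster size, so the weighting choice matters.
df["y"] = (0.1 * df["treat"]
           + 0.004 * np.repeat(sizes, sizes) * df["treat"]
           + rng.normal(0, 1, len(df)))

# Equal weight to each cluster: average the school means within each arm.
school_means = df.groupby(["school", "treat"])["y"].mean().reset_index()
impact_cluster_wt = (school_means.loc[school_means.treat == 1, "y"].mean()
                     - school_means.loc[school_means.treat == 0, "y"].mean())

# Equal weight to each individual: pooled difference in means.
impact_person_wt = (df.loc[df.treat == 1, "y"].mean()
                    - df.loc[df.treat == 0, "y"].mean())

print(impact_cluster_wt, impact_person_wt)
```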