Panel Paper:
A Comparative Assessment of the Bias and Precision of Difference-in-Differences and ANCOVA Estimators
Researchers primarily use two estimators that differ in how they use the baseline measures. The first is the difference-in-differences (DID) estimator, in which the pre- to post-treatment change in the outcome for the treatment group is compared with the corresponding change for the comparison group, and the difference between the two is attributed to the treatment. When pre-treatment data are available for multiple baseline time points, various Comparative Short Interrupted Time Series (C-SITS) methods can be implemented within the DID framework. The second estimator entails controlling for the single or multiple baseline measures as covariates in a regression model, an approach commonly referred to as analysis of covariance (ANCOVA).
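To fix ideas, in a setting with a single pre-treatment and a single post-treatment observation, the two specifications can be sketched as follows (the notation here is ours and purely illustrative, not the paper's):

\[
\text{DID:}\quad Y_{it} = \alpha + \beta\, T_i + \gamma\, \mathit{Post}_t + \delta\, (T_i \times \mathit{Post}_t) + \varepsilon_{it},
\]
\[
\text{ANCOVA:}\quad Y_{i,\mathit{post}} = \alpha + \delta\, T_i + \theta\, Y_{i,\mathit{pre}} + \varepsilon_i,
\]

where \(T_i\) indicates treatment status and \(\hat{\delta}\) is the impact estimate in each case.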
These two estimators have important analytic differences, with potential implications for the bias and precision of the resulting impact estimates. The identifying assumption of the DID/C-SITS approach is that treatment assignment is primarily related to time-invariant attributes of program participants, which are typically controlled for via fixed effects. The ANCOVA approach instead relies on the ignorability of treatment assignment conditional on the baseline observations of the outcome measure. Angrist and Pischke (2009) argue that DID (ANCOVA) overestimates (underestimates) positive treatment effects when the identifying assumption of the other approach in fact holds. Views on the precision of the two estimators also differ: Fitzmaurice, Laird, and Ware (2004) claim that the ANCOVA estimator tends to have more statistical power, while Oakes and Feldman (2001) suggest conditions under which the DID estimator may be more precise.
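Stated in potential-outcomes notation (our shorthand, not drawn from the cited sources), the contrast between the two identifying assumptions is roughly:

\[
\text{DID (parallel trends):}\quad E[Y_{\mathit{post}}(0) - Y_{\mathit{pre}}(0) \mid T=1] = E[Y_{\mathit{post}}(0) - Y_{\mathit{pre}}(0) \mid T=0],
\]
\[
\text{ANCOVA (conditional ignorability):}\quad Y_{\mathit{post}}(0) \perp T \mid Y_{\mathit{pre}}.
\]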
To our knowledge, no existing resource conducts a comparative examination of all aspects of the two estimators and provides recommendations for applied researchers. The proposed paper aims to fill this gap. Specifically, we will synthesize the existing knowledge base on the bias and precision of the two estimators and conduct original theoretical and simulation analyses to provide a complete examination of the conditions under which the two estimators are expected to yield similar results, as well as the specific instances under which one estimator would be preferable to the other. Our analyses are underway, and we expect to complete a draft of our paper by the end of the summer.
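A minimal Monte Carlo sketch illustrates the kind of comparison we have in mind. It is illustrative only: the sample size, effect size, and the compensatory-selection data-generating process (units with lower values are more likely to be treated) are our assumptions, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.5

def simulate_once(n=500, selection_on="level"):
    """Generate two-period data and return (DID, ANCOVA) estimates of the effect.

    selection_on="level": treatment depends on a time-invariant unit effect
        (the DID assumption holds); selection_on="baseline": treatment depends
        on the observed pre-period outcome (the ANCOVA assumption holds).
    Units with lower values are more likely to be treated (compensatory selection).
    """
    unit_effect = rng.normal(0, 1, n)          # time-invariant heterogeneity
    y_pre = unit_effect + rng.normal(0, 1, n)  # baseline outcome

    if selection_on == "level":
        treat = (unit_effect + rng.normal(0, 1, n) < 0).astype(float)
    else:  # "baseline"
        treat = (y_pre + rng.normal(0, 1, n) < 0).astype(float)

    y_post = unit_effect + TRUE_EFFECT * treat + rng.normal(0, 1, n)

    # DID: difference in mean gains between treated and comparison units
    gain = y_post - y_pre
    did = gain[treat == 1].mean() - gain[treat == 0].mean()

    # ANCOVA: OLS of the post-treatment outcome on treatment and the baseline outcome
    X = np.column_stack([np.ones(n), treat, y_pre])
    coef, *_ = np.linalg.lstsq(X, y_post, rcond=None)
    ancova = coef[1]

    return did, ancova

for design in ("level", "baseline"):
    estimates = np.array([simulate_once(selection_on=design) for _ in range(2000)])
    print(f"selection on {design}: "
          f"DID bias = {estimates[:, 0].mean() - TRUE_EFFECT:+.3f}, "
          f"ANCOVA bias = {estimates[:, 1].mean() - TRUE_EFFECT:+.3f}")
```

In this sketch each estimator is approximately unbiased when its own assumption generates the data and biased when the other assumption does, which is the pattern the proposed paper will characterize more fully.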