Saturday, November 9, 2013: 4:10 PM
3017 Monroe (Washington Marriott)
Fatih Unlu and Cristofer Price, Abt Associates, Inc.
The Comparative Short Interrupted Time Series (C-SITS) design is a frequently employed quasi-experimental method in which the pre- to post-intervention change in the outcome levels of a treatment group is compared with that of a comparison group, and the difference between the two is attributed to the treatment. The increase in the availability and quality of extant data (e.g., state test scores, graduation rates, and college application rates in primary and secondary education, and cognitive, language, and socio-emotional assessments in pre-school settings) has made C-SITS designs a more viable option for assessing the impacts of interventions.

Despite the recent growth in its use, resources on how to assess statistical power and calculate sample size requirements for this design remain very limited. One such resource is Schochet (2008), which shows that the variance of the difference-in-differences estimator (which can be considered a special application of C-SITS) depends critically on sample sizes and on the cluster-level (if applicable) and individual-level correlations between the pre- and post-test outcome measures (a simplified version of this variance result is sketched after the list below). Extending Bloom (1999, 2003), Dong and Maynard (2013) consider a particular C-SITS specification that includes separate linear time trends for the treatment and comparison groups and estimates the treatment effect separately for each follow-up year. They show that the variance of this C-SITS estimator depends on (i) sample sizes, (ii) the number of baseline years, (iii) the follow-up year of interest, (iv) the proportion of outcome variance that lies across successive cohorts of treatment and comparison units (i.e., the cohort-level intra-class correlation), and (v) how much of this variance is explained by covariates included in the model. It is important to note that these studies model the treatment effect as fixed (i.e., it is not assumed to vary across treatment units). Two limitations of the existing research on this topic are the unavailability of:
- Plausible values one can use for these critical parameters in the design stage of a study; and
- Variance formulae for alternative C-SITS specifications, such as models (i) with year fixed effects in lieu of group-specific time trends, (ii) that estimate an average impact across all follow-up years, (iii) with cluster-level data only (as opposed to models with individual-level data nested in clusters), (iv) with various forms of baseline projections, and (v) that assume random treatment effects (specification (i) is contrasted with the group-specific-trend model in the simulation sketch after this list).
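To give intuition for the Schochet (2008) result mentioned above, here is a minimal sketch of the simplest case: a two-period difference-in-differences with the same n individuals per group observed pre and post, outcome variance σ², pre–post correlation ρ, and no clustering. The simplification is ours; the full clustered formulas appear in the cited paper.

$$
\hat{\delta}_{\text{DiD}} = \left(\bar{Y}^{T}_{\text{post}} - \bar{Y}^{T}_{\text{pre}}\right) - \left(\bar{Y}^{C}_{\text{post}} - \bar{Y}^{C}_{\text{pre}}\right),
\qquad
\operatorname{Var}\!\left(\hat{\delta}_{\text{DiD}}\right) = \frac{4\sigma^{2}(1-\rho)}{n},
$$

which makes explicit how precision improves with larger samples and stronger pre–post correlations, exactly the dependence noted above.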
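The contrast between specification (i) and the group-specific-trend model can also be illustrated by simulation. The following Python sketch is our construction, not the authors' code; all parameter values (cluster counts, ICC, effect size, trend slope) are assumptions for illustration, and it pools the treatment effect across follow-up years rather than estimating it year by year as Dong and Maynard (2013) do.

```python
# Illustrative simulation: precision of two C-SITS specifications,
# (a) group-specific linear trends and (b) year fixed effects.
# All parameter values below are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def simulate(n_clusters=40, pre_years=4, post_years=2, icc=0.15, effect=0.20):
    """Generate cluster-by-year outcome data for a C-SITS design."""
    rows = []
    for j in range(n_clusters):
        treat = int(j < n_clusters // 2)           # first half of clusters treated
        u = rng.normal(0, np.sqrt(icc))            # cluster-level effect (ICC share)
        for t in range(pre_years + post_years):
            post = int(t >= pre_years)
            y = (0.02 * t                          # common secular trend
                 + effect * treat * post           # treatment effect in post years
                 + u
                 + rng.normal(0, np.sqrt(1 - icc)))
            rows.append((j, treat, t, post, y))
    return pd.DataFrame(rows, columns=["cluster", "treat", "year", "post", "y"])

df = simulate()
cluster_se = dict(cov_type="cluster", cov_kwds={"groups": df["cluster"]})

# (a) Separate linear time trends for treatment and comparison groups.
m_trend = smf.ols("y ~ treat * year + post + treat:post", data=df).fit(**cluster_se)

# (b) Year fixed effects in lieu of group-specific trends.
m_fe = smf.ols("y ~ C(year) + treat + treat:post", data=df).fit(**cluster_se)

print("SE of treatment effect, trend spec:  ", m_trend.bse["treat:post"])
print("SE of treatment effect, year-FE spec:", m_fe.bse["treat:post"])
```

Repeating such draws many times and counting rejections of the null at a given effect size would turn the same sketch into a simulation-based power calculation.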
In the proposed paper, we aim to address these limitations by (i) deriving expressions for the variance of the alternative C-SITS estimators mentioned above and (ii) providing plausible values for the critical variance parameters, calculated using real outcome data commonly used in two fields: state test scores in education (these data were collected and analyzed for other purposes) and earnings in labor markets (these data will be obtained from the Current Population Survey). In addition, we compare the variances of all the C-SITS estimators considered across the range of plausible design parameters.
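For reference, such variance comparisons are commonly placed on a single scale via the standard minimum detectable effect size formula from the power-analysis literature; this is a general identity, not a result of the proposed paper:

$$
\text{MDES} = M_{\nu} \cdot \frac{\sqrt{\operatorname{Var}(\hat{\delta})}}{\sigma_{Y}},
\qquad
M_{\nu} = t_{\alpha/2,\,\nu} + t_{1-\kappa,\,\nu} \approx 2.8
$$

for a two-tailed test at α = 0.05 with power κ = 0.80 and large degrees of freedom ν.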