*Names in bold indicate Presenter
A growing literature uses within-study comparisons to evaluate the performance of quasi-experimental methods against benchmarks provided by a randomized experiment. There have been several within-study comparisons of matching methods and of regression discontinuity designs. In contrast, there are very few within-study comparisons of the interrupted time series (ITS) and difference-in-differences (DID) approaches that are so common in the applied literature. This paper helps address that gap by conducting a within-study comparison of ITS and DID methods using experimental data from the Cash and Counseling Demonstration Project, which studied the effects of a “consumer-directed” care program on the health and expenditure patterns of disabled Medicaid enrollees. The original study was conducted in three states (Arkansas, New Jersey, and Florida), randomly assigned people to treatment and control groups, and followed each participant for 12 months before the intervention and 12 months after the intervention.
We used the experimental data to conduct several within-study comparisons. First, we created a simple ITS design within each of the three states by deleting control group information and estimating treatment effects in a regression framework that allowed for intercept and slope changes after the introduction of the treatment. Next, we studied standard and flexible versions of the DID approach by combining the treatment group data from one state with the control group data from the other states, simulating a setting in which a policy change occurred in one state but not in the others. Finally, we studied a more elaborate approach in which matching methods were used to construct an out-of-state control group before DID methods were applied. In each case, we evaluated how well the method reproduced benchmark estimates from the randomized experiment. We also studied the effectiveness of several tests and strategies for ruling out known threats to validity.
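The two estimators described above can be sketched in a few lines of regression code. The following is a minimal illustration on simulated data, not the paper's actual specification: the ITS piece fits a segmented regression with a level and slope change at the interruption, and the DID piece estimates the treatment effect as the coefficient on the treatment-by-post interaction. All variable names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Interrupted time series (ITS): one group, 12 pre / 12 post months ---
# Hypothetical monthly outcome with a true level shift of 2.0 at month 12.
months = np.arange(24)
post = (months >= 12).astype(float)
y_its = 1.0 + 0.1 * months + 2.0 * post + rng.normal(0, 0.1, 24)

# Segmented-regression design matrix: intercept, time trend,
# post-period level change, and post-period slope change.
X_its = np.column_stack([np.ones(24), months, post, post * (months - 12)])
beta_its, *_ = np.linalg.lstsq(X_its, y_its, rcond=None)
# beta_its[2] estimates the level change at the interruption.

# --- Difference-in-differences (DID): two groups, pre/post periods ---
n = 500
treat = rng.integers(0, 2, n).astype(float)   # 1 = "policy" state
post_d = rng.integers(0, 2, n).astype(float)  # 1 = after the policy change
# True treatment effect of 1.5 applies only to treated units post-policy.
y_did = (0.5 + 0.8 * treat + 0.3 * post_d + 1.5 * treat * post_d
         + rng.normal(0, 0.2, n))

X_did = np.column_stack([np.ones(n), treat, post_d, treat * post_d])
beta_did, *_ = np.linalg.lstsq(X_did, y_did, rcond=None)
# beta_did[3] is the DID estimate of the treatment effect.
```

Both estimates should land near the simulated true values (2.0 and 1.5), which is the sense in which the paper asks whether ITS and DID reproduce the experimental benchmark.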