Friday, November 7, 2014
Apache (Convention Center)
Design-replication studies assess the comparability of treatment effect estimates from experimental and observational designs that share the same treatment group but use different comparison groups. Our design-replication study uses, as a benchmark, a large-scale randomized field experiment that tested the effectiveness of norm-based messages designed to induce voluntary reductions in water consumption during a drought. To form non-experimental comparison groups, we use data on approximately 150,000 households from two neighboring counties. We explore the ability of fixed-effects, panel data designs to replicate the experimental benchmark's estimated treatment effect. Such designs are well suited for evaluating the impacts of local government programs in which participating households do not self-select into the program but may have sorted themselves across administrative units based on fixed characteristics that also affect the outcomes. We explore a variety of designs, from simple linear, fixed-effects panel data estimators to more advanced non-parametric panel data estimators. Design-replication studies, however, have often been criticized because they typically use a single treatment group and a single non-experimental comparison group. Without additional comparison groups, one cannot examine the robustness of the design or method: does it perform equally well when applied to another comparison group for which, based on theory and field knowledge, it should? With two potential pools of comparison households, we can explore the robustness of successful designs or methods to changes in the comparison group.
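As a rough illustration of the simplest design mentioned above, the sketch below implements a linear two-way fixed-effects estimator on simulated panel data (not the study's actual dataset; all parameter values, including the hypothetical -0.5 treatment effect on water use, are made up for the example). Household and period fixed effects are removed by a within transformation, which absorbs exactly the kind of fixed household characteristics that may drive sorting across administrative units.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hh, n_t = 1000, 4

# Simulated panel: fixed household heterogeneity plus common period shocks
hh_effect = rng.normal(0.0, 1.0, n_hh)               # fixed household characteristics
period_effect = np.array([0.0, -0.1, -0.3, -0.2])    # common shocks (e.g., drought severity)
treated = np.arange(n_hh) < 500                      # first half received the message
post = np.arange(n_t) >= 2                           # messages sent before period 2
true_effect = -0.5                                   # hypothetical reduction in water use

d = (treated[:, None] & post[None, :]).astype(float) # treatment indicator, shape (n_hh, n_t)
y = (hh_effect[:, None] + period_effect[None, :]
     + true_effect * d
     + rng.normal(0.0, 0.5, (n_hh, n_t)))            # observed water use

def demean(x):
    """Two-way within transformation for a balanced panel:
    subtract household means and period means, add back the grand mean."""
    return (x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True)
              + x.mean())

y_w, d_w = demean(y), demean(d)
beta = (d_w * y_w).sum() / (d_w ** 2).sum()          # fixed-effects estimate
print(f"estimated treatment effect: {beta:.3f}")
```

With a balanced panel and a single regressor, this within estimator is numerically identical to OLS on household and period dummies; in practice a panel package would also supply cluster-robust standard errors at the household level.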