Panel Paper: Do Estimated Impacts on Earnings Depend on the Source of Data Used to Measure Them? Evidence from Previous Social Experiments

Thursday, November 6, 2014 : 8:30 AM
Laguna (Convention Center)


Burt S. Barnow, George Washington University and David H. Greenberg, University of Maryland, Baltimore County
Both data obtained from surveys and data maintained by government agencies to administer programs have been used to track the outcomes of social programs (e.g., earnings, educational achievement, and government benefit receipt) and, hence, to estimate program impacts; sometimes both sources have been used in the same evaluation. As discussed in this paper, which focuses mainly on estimates of program effects on earnings, evidence from randomized field tests that used data from both sources to estimate the same impacts shows that the choice of data can seriously affect the findings and may lead to differing policy recommendations. This paper examines the reasons impact estimates from survey and administrative data often differ.

The paper first briefly compares administrative and survey data in terms of their limitations. It then develops a simple model of the mechanisms that can cause impacts estimated with administrative data to differ from those estimated with survey data. Following that, it examines the relatively few experiments that have used both administrative and survey data to estimate the same impacts, the extent to which these estimates differ, and how the differences have been treated in evaluations of the programs tested by social experiments. In seven of the eight experiments investigated, the differences were substantial and important, with the survey-based impacts always larger than those estimated with administrative data. In general, biases resulting from reporting errors, especially unbalanced reporting errors, appear to be a more important source of the differences than response bias. The bias from reporting errors seems to arise both because administrative data understate earnings impacts and because survey data overstate them; in the latter case, the overstatement is often worse for the treatment group than for the control group. In those experimental evaluations that have analyzed data from both surveys and administrative sources, differences in earnings impacts have been treated in diverse ways, but they have proven very difficult to reconcile. The choice of how to treat these differences appears to have especially important implications for cost-benefit analyses of the tested programs. Both sources of data are subject to weaknesses, in considerable part because some or all of the earnings of sample members are inevitably missed by each, albeit for different reasons. The implications of these differences, which are troubling, and some suggestions for addressing them are discussed at the end of the paper, along with the more common situation in which only survey data or only administrative data are available.
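As a rough illustration of the reporting-error mechanism described above, the following sketch (not taken from the paper; all parameter values are hypothetical assumptions) simulates an experiment in which administrative records miss a portion of earnings while survey respondents overreport, with worse overreporting in the treatment group, and shows how the two estimated impacts can bracket the true effect.

```python
import numpy as np

# Hypothetical illustration of how reporting errors can make survey-based and
# administrative-data-based impact estimates diverge. All parameters are
# illustrative assumptions, not values from the paper.
rng = np.random.default_rng(0)
n = 50_000                                   # sample size per experimental arm

# True annual earnings; the program raises earnings by $1,000 on average.
true_impact = 1_000
control_true = np.clip(rng.normal(12_000, 4_000, n), 0, None)
treat_true = np.clip(rng.normal(12_000 + true_impact, 4_000, n), 0, None)

# Administrative records miss some (e.g., informal or out-of-coverage) earnings,
# so measured earnings understate true earnings in both arms.
admin_control = control_true * rng.uniform(0.80, 1.00, n)
admin_treat = treat_true * rng.uniform(0.80, 1.00, n)

# Survey reports overstate earnings, and the overstatement is assumed to be
# worse for the treatment group (an "unbalanced" reporting error).
survey_control = control_true * rng.uniform(1.00, 1.10, n)
survey_treat = treat_true * rng.uniform(1.00, 1.20, n)

def impact(treat, control):
    """Experimental impact estimate: simple difference in means."""
    return treat.mean() - control.mean()

print(f"true impact:   {true_impact:8.0f}")
print(f"admin impact:  {impact(admin_treat, admin_control):8.0f}")    # understated
print(f"survey impact: {impact(survey_treat, survey_control):8.0f}")  # overstated
```

Under these assumed error structures, the administrative estimate falls below the true $1,000 impact while the survey estimate exceeds it, mirroring the pattern in which survey-based impacts are larger than administrative-data impacts.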