The paper first briefly compares administrative and survey data in terms of their limitations. It then develops a simple model to examine the mechanisms that can cause impacts estimated with administrative data to differ from those estimated with survey data. Following that, it examines the relatively few experiments that have used both administrative and survey data to estimate the same impacts, assessing the extent to which these estimates differ and how the differences have been treated in evaluations of the programs tested by social experiments. In all but one of the eight experiments investigated, the differences were substantial and important, with the survey-based impacts always larger than those estimated with administrative data. In general, biases resulting from reporting errors, especially unbalanced reporting errors, appear to be a more important source of the differences than response bias. The bias from reporting errors seems to arise both because administrative data understate earnings impacts and because survey data overstate them; in the latter case, the overstatement is often worse for the treatment group than for the control group. In the experimental evaluations that have analyzed data from both surveys and administrative sources, differences in earnings impacts have been treated in diverse ways, but they have proven very difficult to reconcile. The choice of how to treat these differences appears to have especially important implications for cost-benefit analyses of the tested programs. Both sources of data are subject to weaknesses, in considerable part because some or all of the earnings of sample members are inevitably missed by both, albeit for different reasons. The implications of these differences, which are troubling, are discussed at the end of the paper, along with some suggestions for addressing them and the more common situation in which only survey data or only administrative data are available.
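To see why unbalanced reporting errors can inflate survey-based estimates, a minimal sketch (using illustrative notation that is not taken from the paper) treats survey-reported earnings as true earnings plus a reporting error that may differ between treatment and control groups:

$$
\hat{\Delta}_{\text{survey}} = (\bar{Y}^{T} + \bar{e}^{T}) - (\bar{Y}^{C} + \bar{e}^{C}) = \Delta_{\text{true}} + (\bar{e}^{T} - \bar{e}^{C}),
$$

where $\bar{Y}^{T}$ and $\bar{Y}^{C}$ are mean true earnings and $\bar{e}^{T}$ and $\bar{e}^{C}$ are mean reporting errors in the treatment and control groups. If the treatment group over-reports earnings by more than the control group ($\bar{e}^{T} > \bar{e}^{C}$), the survey-based impact estimate exceeds the true impact, even when both groups respond to the survey at the same rate.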