The paper will be organized around key study design features. We will first discuss how the study obtained several representative samples of TAA-eligible workers who received UI benefits in 26 randomly selected states. Some of these workers ultimately received TAA services (the main impact analysis sample) and some did not (these workers were used to help understand program participation decisions). We will then discuss the innovative staged approach the study used to obtain defensible comparison samples. In the first stage, matched comparison samples were obtained for each treatment sample using UI claims records. These comparison workers came from the same local labor market areas and held similar jobs as the treatment samples, but were not TAA eligible. Within each state, two comparison workers were selected for each treatment worker using nearest-neighbor propensity score matching with replacement.
In the second stage, the comparison groups were re-matched to the treatment groups using detailed baseline data collected in the first survey, which captured much richer pre-UI-claim information than the UI claims data used for the initial matching. This re-matching proved critical: the treatment workers and the initially matched comparisons differed on important baseline survey variables, so matching on the richer set of survey variables produced higher quality matches. Kernel matching methods were used for the re-matching and yielded equivalent distributions of characteristics for the treatment and comparison samples across a large number of demographic and local area variables, including prior work and earnings histories (industries and occupations among them). We obtained separate matched comparison samples, one based on workers who received a first UI payment and one based on workers who exhausted UI, to generate impact estimates that would likely bound the true impacts.
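The kernel matching used in this second stage differs from nearest-neighbor matching in that each treatment worker is compared with a weighted average of many comparison workers, with weights declining in propensity-score distance. The sketch below is a simplified illustration, assuming an Epanechnikov kernel and a hypothetical bandwidth; the study's specific kernel and bandwidth choices are not stated here.

```python
# Simplified kernel-matching weights (Epanechnikov kernel). The scores
# and the bandwidth of 0.1 are illustrative assumptions, not the study's.

def kernel_weights(t_score, comp_scores, bandwidth=0.1):
    """Return one normalized weight per comparison worker for a single
    treatment worker. Comparisons farther than one bandwidth away get
    zero weight; the rest get weight declining with distance."""
    raw = []
    for c in comp_scores:
        u = (c - t_score) / bandwidth
        raw.append(0.75 * (1.0 - u * u) if abs(u) < 1 else 0.0)
    total = sum(raw)
    return [w / total for w in raw] if total > 0 else raw

w = kernel_weights(0.50, [0.45, 0.52, 0.80])
print([round(x, 3) for x in w])  # distant comparison (0.80) gets weight 0
```

The weighted comparison-group mean built from these weights is then contrasted with the treatment-group mean to form the impact estimate, which is what makes balance on the survey covariates the key diagnostic.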
The final part of the paper will discuss the data collection and impact estimation methods. Outcome data were obtained from two rounds of telephone surveys covering the four years after job loss; the interview response rate was 63 percent. We also collected administrative UI wage records, which are not subject to potential survey nonresponse bias and cover much larger samples than the surveys, to estimate impacts on employment and earnings. The UI wage records were thus critical for assessing the robustness of the main survey-based impact findings.