Panel Paper: Adjusting Experimental Evaluation Designs to Encourage Site Participation: Evidence From the WIA Gold Standard Evaluation

Friday, November 8, 2013 : 9:45 AM
DuPont Ballroom H (Washington Marriott)


Sheena McConnell, Peter Schochet, Linda Rosenberg and Andrew Clarkwest, Mathematica Policy Research
Recruiting sites for experiments is challenging. Asking program staff not to serve a group of randomly selected people in a control group runs counter to much of the staff’s training and to what they have been instructed to do throughout their careers. Some staff view experiments as unethical. Others are concerned about researchers interfering with the day-to-day running of their program. Many experiments rely on sites that volunteer for the study, but because these volunteer sites may be fundamentally different from other sites, researchers lose the ability to generalize the findings.

This paper discusses how we addressed these challenges in a large, experimental evaluation of the Workforce Investment Act (WIA) Adult and Dislocated Worker Programs funded by the U.S. Department of Labor. In designing the experiment, we explicitly considered the effect of the design on site recruitment. We were cognizant that researchers had failed to recruit a nationally representative sample of sites for the national experimental study of JTPA, the predecessor to WIA. Recruitment was further complicated because the sites were not mandated to participate in the study and we did not have the funds to make large payments to them.

Despite these challenges, we successfully recruited a nationally representative sample of 26 local workforce investment areas from among the 30 randomly selected for the study—a success rate of 87 percent. No site pulled out of the evaluation. We credit our success to an experiment that was designed to be more acceptable to the sites without compromising its validity. For example, we minimized the size of the control or restricted-service groups; in some sites, fewer than 5 percent of program applicants were assigned to the control group. We were able to do this by randomly assigning a large number of people—about 35,000 in total. We also allowed control group members to receive basic, light-touch employment services (such as access to labor market information). We shortened the period during which sample members remained in the control groups. We also allowed program staff to make exceptions to random assignment with our permission, although few exceptions were in fact requested.
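A simple back-of-the-envelope calculation (illustrative only, not the study's actual power analysis) shows why a large total sample makes a small control-group share workable. With total sample size $N$ and control fraction $p$, the standard error of a simple treatment-control difference in means is approximately
$$ \mathrm{SE} \approx \sigma \sqrt{\frac{1}{(1-p)N} + \frac{1}{pN}}. $$
With $N = 35{,}000$ and $p = 0.05$, this is roughly $0.025\sigma$, versus about $0.011\sigma$ for an even split of the same sample: the unequal allocation costs some precision, but the standard error remains small in absolute terms because $N$ is large.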

We also learned that some approaches to “selling random assignment” were more effective than others. For example, program staff were not receptive to arguments that random assignment is “fair” or that “you do not currently serve everyone who is eligible, so random assignment will not change that.” They were, however, receptive to the message that the evaluation was necessary for the long-run health of the program and that it would not reduce the total number of people they could serve. It was also advantageous to recruit, early in the recruitment period, a large site that staff in other sites respected. Support from the U.S. Department of Labor—in making initial calls to the sites and sometimes accompanying us on visits—was also critical.