Panel Paper: External Validity in U.S. Education Research: Evidence from the What Works Clearinghouse

Thursday, November 2, 2017
Field (Hyatt Regency Chicago)


Patrick Sean Tanner, Learning Policy Institute


As methods for ensuring internal validity improve, methodological concerns have shifted toward assessing how well the research community can extrapolate from individual studies. Under recent federal granting initiatives, over $1 billion has been awarded to education programs validated by a single randomized or natural experiment. If these experiments have weak external validity, scientific advancement is delayed and federal education funding may be misallocated. Analyzing 2,603 effects from 534 trials clustered within 309 interventions that meet federal standards for evidence, this research describes how well a single study's results are predicted by additional studies of the same intervention and how well study samples match the target populations of those interventions. I find that U.S. education trials are conducted on samples of students who are systematically less white and more socioeconomically disadvantaged than the overall target population of students. Moreover, effect sizes tend to decay in the second and third trials of interventions: effects from the second trials are typically half the size of effects from the first trials of the same interventions. I analyze how much of this decay can be explained by study characteristics, sample demographics, and selective reporting of results within studies.