Roundtable: Low-Cost Randomized Controlled Trials (RCTs): Putting the Pricetag On Rigorous Policy Research

Thursday, November 8, 2012: 10:15 AM-11:45 AM
Hanover A (Radisson Plaza Lord Baltimore Hotel)

*Names in bold indicate Presenter

Organizer:  Steven Glazerman, Mathematica Policy Research
Speakers:  Paul Decker, Mathematica Policy Research, Inc.; Ricky Takai; Jon Baron, Coalition for Evidence-Based Policy; and Roland Fryer, Harvard University
Moderator:  Robert Granger, William T. Grant Foundation

Policymakers are increasingly squeezed from two sides. On one side, the standards of evidence for establishing the effectiveness of policies and programs are very high, with randomized experiments serving as the gold standard for high-stakes decisions. On the other side, budget pressures at every agency and level of government are forcing policymakers to scale back programs and capacity in every area, including data collection and analysis. As a result, advocates for rigorous policy research have sought ways to generate more evidence at lower cost, to avoid returning to the dark ages of evidence-free decision-making. A recent report by the Coalition for Evidence-Based Policy offers hope to policymakers by highlighting RCTs that have generated highly credible findings on seemingly small budgets. The paper argues that the increasing availability of administrative data, routinely collected for non-research purposes, can cut down on the major cost driver of program evaluations: data collection. Another perspective, however, is that highlighting rigorous evaluations with small price tags could be misleading. The price to an agency is not the same as the cost, borne by various parties, of conducting the research. The use of program administrative data for research purposes may pose a greater challenge than meets the eye. And studies that forgo more skilled or experienced staff, peer review, and quality control, or that skimp on monitoring compliance and on designing and testing data collection instruments, may have to compromise on quality, scope, or rigor. The proposed roundtable will feature a discussion and debate of this topic among researchers experienced in program evaluation, spanning both small-scale, nimble efforts and large-scale, comprehensive studies, along with the perspective of evaluation funders and consumers.
Participants will use the paper by the Coalition for Evidence-Based Policy as a launching point and will have the opportunity to introduce new information on where the money goes when funders of social programs invest in evaluating the impacts of their interventions. This topic is central to the theme of informing strategies for managing budget cuts. We will discuss lessons learned from low-cost evaluations (documented in the CEBP paper) and from larger-scale efforts, as well as the costs and benefits of investing in evaluation research in times of tight public resources. We expect this roundtable will appeal to a very broad cross-section of APPAM members. The panelists represent the key perspectives (producers and consumers of evaluation research) and will foster a deeply engaging discussion.
