
Panel: Regression Discontinuity Designs (RDDs): Extensions to Reduce Bias and Increase Precision
(Tools of Analysis: Methods, Data, Informatics and Research Design)

Friday, November 13, 2015: 1:30 PM-3:00 PM
Brickell South (Hyatt Regency Miami)


Panel Organizer:  Lisa Dragoset, Mathematica Policy Research
Panel Chair:  Thomas Wei, U.S. Department of Education
Discussants:  Jeffrey Smith, University of Michigan and Elizabeth Stuart, Johns Hopkins University


Reducing Bias and Increasing Precision in RDDs by Adding a Pretest or Nonequivalent Comparison Group
Yang Tang1, Thomas Cook1, Yasemin Kisbu-Sakarya2 and Lisa Dragoset3, (1)Northwestern University, (2)Koc University, (3)Mathematica Policy Research



Estimating Causal Effects of Education Interventions Using a Two-Rating RDD: Lessons from Simulations
Kristin E. Porter1, Sean F. Reardon2, Fatih Unlu3, Howard Bloom1 and Joseph Robinson-Cimpian4, (1)MDRC, (2)Stanford University, (3)Abt Associates, (4)University of Illinois at Urbana-Champaign


Regression discontinuity designs (RDDs) are increasingly used in public policy analysis to evaluate the effects of programs and interventions. For example, the U.S. Department of Education (ED) commissioned two large-scale evaluations of education interventions that use RDDs to estimate impacts on student outcomes: the Impact Evaluation of Title I Supplemental Educational Services and the Impact Evaluation of School Improvement Grants (SIG). This panel discusses three extensions of RDDs: (1) calculating standard errors using a new bootstrapping method, (2) adding a pretest or nonequivalent comparison group, and (3) analyzing multiple assignment variables in the context of optimal bandwidth selection. This panel directly relates to the conference theme of evidence-based policy because the first paper was written as part of ED's large-scale evaluation of SIG, and all three papers explore methods that can reduce bias and increase precision, thereby increasing the usefulness of RDDs for estimating the impacts of programs and interventions.
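
For readers unfamiliar with the basic design, the sketch below illustrates how a sharp RDD impact is typically estimated: a local linear regression fit on observations within a bandwidth around the assignment cutoff. It uses simulated data, a cutoff of zero, and a hand-picked bandwidth, so all variable names and values are illustrative assumptions; it does not implement the bootstrapping, pretest, or multiple-rating extensions discussed by the panel.

```python
# Minimal sketch of a sharp RDD impact estimate (illustrative only; not the
# panel's method). Simulated data, cutoff at 0, and a fixed bandwidth rather
# than an optimal-bandwidth procedure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
rating = rng.uniform(-1, 1, n)             # assignment (rating) variable
treated = (rating >= 0).astype(float)      # sharp assignment at the cutoff
outcome = 0.5 * rating + 0.3 * treated + rng.normal(0, 0.2, n)

bandwidth = 0.25                           # hypothetical fixed bandwidth
in_window = np.abs(rating) <= bandwidth

# Local linear regression with separate slopes on each side of the cutoff;
# the coefficient on `treated` is the estimated impact at the cutoff.
X = np.column_stack([treated, rating, treated * rating])[in_window]
X = sm.add_constant(X)
fit = sm.OLS(outcome[in_window], X).fit()
print("Estimated impact at the cutoff:", fit.params[1])
```

In this stylized setup the printed coefficient should be close to the simulated effect of 0.3; the panel's papers address the harder questions of how to obtain valid standard errors, how auxiliary data such as a pretest or comparison group can tighten the estimate, and how to choose bandwidths when there is more than one assignment variable.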