Behavioral Science and Evaluation: Collaboration to Enhance Policymaking
(Tools of Analysis: Methods, Data, Informatics and Research Design)
Saturday, November 5, 2016: 10:15 AM-11:45 AM
Morgan (Washington Hilton)
*Names in bold indicate Presenter
Roundtable Organizers: Anne Marie Chamberlain, IMPAQ International, LLC
Moderators: Anne Marie Chamberlain, IMPAQ International, LLC
Speakers: Matthew Darling, ideas42, Benjamin L. Castleman, University of Virginia, Megan Lizik, U.S. Department of Labor and Neha Nanda, IMPAQ International, LLC
Increasingly, policymakers are asking behavioral scientists to collaborate with program evaluators (e.g., recent RFPs from the U.S. Department of Labor and the World Bank). The purpose of this roundtable is to discuss the dynamics of this collaboration and strategies for fostering synergy in the policymaking arena.
Behavioral science ultimately aims to produce social change, and it has succeeded in numerous areas, including agriculture, workforce development, and health. Evaluation, too, aims to produce social change. Given this common purpose, clients’ requests for collaboration, and behavioral scientists’ enviable position at the policy table (e.g., the White House Social and Behavioral Sciences Team and the 2015 Executive Order on Using Behavioral Insights), now is a critical moment for examining the behavioral science and evaluation partnership. This roundtable will do so by facilitating a discussion based on points and perspectives raised by experts who are working in such partnerships.
Roundtable experts will encourage discussion of commonalities between behavioral science and evaluation, as well as how to leverage the differences between the disciplines. For example, behavioral science and evaluation share a conceptual goal of improving programs. They also share several tools and methods, such as quick-turnaround studies, the “Plan-Do-Study-Act” inquiry cycle, design-based implementation research (DBIR), and design thinking in general. The disciplines differ, for example, in how they typically interact with a program: behavioral science tends to focus on more proximal outcomes, such as increasing the rate of program registration or reducing program attrition, whereas program evaluation usually focuses on more distal outcomes, such as whether participants earn a degree or secure employment. Panelists will lead a roundtable discussion to examine the intersection of these disciplines and the various ways in which the behavioral science and evaluation partnership can create mutual benefit and synergy.
Anne Chamberlain, a senior research associate at IMPAQ International, LLC, has been leading program evaluations in multiple policy areas for almost twenty years. Matthew Darling, vice president of ideas42, will provide the perspective of an applied behavioral economist. Ben Castleman is a professor of education and public policy at the University of Virginia whose research applies insights from behavioral economics to improve college access and success for low-income and non-traditional students. Megan Lizik is a senior evaluation specialist at the U.S. Department of Labor, where part of her work involves overseeing contracts that combine behavioral science and evaluation.