Panel Paper:
The Criticality of Context in Evidence-Based Evaluations of Targeted Programs
*Names in bold indicate Presenter
Cheryl B. Leggon, Associate Professor
School of Public Policy, Georgia Institute of Technology
Over the past fifty years, concerns over the adequacy of the science and engineering (S&E) workforces in the United States have led to both policies and programs designed to increase participation in S&E fields. These concerns, coupled with demographic changes in the US population, have resulted in a variety of programs focused on increasing the participation of groups that historically have been underrepresented in the US S&E workforces in terms of gender (females) and/or race and ethnicity—African Americans, Mexican Americans, Native Americans, and Puerto Ricans. Some of these programs were funded by the federal government (for example, the National Science Foundation Graduate Fellowships), and others by the private sector. Whether public or private, funders want to know the return on their investment; they want clear indicators of the extent to which the programs they have funded have achieved their stated goals and objectives.
This paper discusses the criticality of context—institutional, socio-historical, and political—in interpreting data in evidence-based program evaluation, and in assessing the extent to which a program can be sustained and scaled in other contexts. Data derive from the relevant research literatures (to which the author has contributed) as well as from the author's 25 years of experience as an evaluator.