
Panel Paper: Developing an Evidence Life Cycle for Competitive Grantmaking: The AmeriCorps Program at the Corporation for National and Community Service

Friday, November 13, 2015 : 1:50 PM
Johnson II (Hyatt Regency Miami)


Diana Epstein (1), Adrienne DiTommaso (1), and Robin A. Ghertner (2); (1) Corporation for National and Community Service, (2) US Department of Health and Human Services
In an increasingly constrained budget environment, the Corporation for National and Community Service (CNCS), a federal agency, has recognized the need to assess the effectiveness of its programs. Memoranda issued by the Executive Office of the President communicate the Administration’s expectations for embedding evidence and evaluation in grant programs and for strengthening agencies’ capacity to build and use evidence. This paper describes how CNCS developed an “Evidence Life Cycle” for AmeriCorps, its largest competitive grant program, and how the field of grantees is responding.

     For an AmeriCorps program, the evidence life cycle begins with defining a formal logic model, continues with evaluation planning, moves to evaluation execution and reporting, and ends with formally classifying the strength of evidence supporting the program model. Beginning in 2013, evidence and evaluative thinking have been included as selection criteria: applicants are required to articulate a theory of change in the application narrative and to submit a logic model. Grantees must also submit both an evaluation plan and an evaluation report at designated points in their multi-year grant cycle.

     A 2014 assessment of grantees’ evaluation plans revealed eight distinct capacity needs, including guidance on budgeting adequately for evaluation, selecting and managing an external evaluator, writing a comprehensive evaluation plan, clarifying the relationship between performance measurement and evaluation, writing strong research questions, selecting an evaluation design, and using evaluation results for program improvement. This needs assessment has informed the development of a new technical assistance initiative designed to build grantees’ evaluation capacity.

     When recompeting for funding, grantees are required to submit formal reports resulting from their evaluation activities. These reports serve both to meet compliance requirements and to document a grantee’s evidence base, providing CNCS with the best available assessment of the type and quality of evaluations that grantees have undertaken. Results from an analysis of these evaluation reports will be presented in the paper.

     The final stage of the evidence life cycle is classifying a program’s evidence base. The Obama Administration’s focus on evidence-based policymaking includes a number of new “tiered evidence initiatives,” including the Social Innovation Fund (SIF) at CNCS. Building on SIF’s experience, the 2014 AmeriCorps competition included evidence tiers for the first time. As expected, applicants clustered largely in the lower evidence tiers: 41% pre-preliminary, 28% preliminary, 22% moderate, and 7% strong. Preliminary results from the 2015 grant competition indicate a similar distribution: most applicants again fell in the pre-preliminary (44%) and preliminary (29%) categories, with relatively few exhibiting moderate (6%) or strong (4%) evidence. The paper will present more detailed results on the evidence breakdown and on the grantee characteristics correlated with evidence levels. It will also include an analysis of the alignment between grantees’ self-reported level of evidence and the evidence rating assigned by an independent contractor.