Roundtable: Are Federally Funded Teen Pregnancy Prevention Programs Effective? Current Rigorous Evaluation of Four Program Types
(CrossCutting)

Friday, November 9, 2012: 9:45 AM-11:15 AM
International A (Sheraton Baltimore City Center Hotel)


Organizer:  Seth Chamberlain, US DHHS/Administration for Children and Families
Speakers:  Dirk Butler, US DHHS/Administration for Children and Families; Amy Farb, U.S. Department of Health & Human Services; Heather Tevendale, US DHHS/Centers for Disease Control and Prevention; and Matthew Stagner, University of Chicago
Moderator:  Lisa Trivits, U.S. Department of Health & Human Services

In Federal Fiscal Year 2010, Congress for the first time appropriated funds for comprehensive teen pregnancy prevention programs: the Teen Pregnancy Prevention (TPP) program ($105M per annum) and the Personal Responsibility Education Program (PREP) ($75M per annum). In a time of scarce resources, Congress dedicated three-quarters of the funds to the replication of evidence-based programs and set aside the remaining quarter for the development and testing of new, innovative programs. Research in other program areas has demonstrated that attempts to scale up or otherwise replicate evidence-based programs can result in diminished effects (e.g., Welsh, Sullivan, and Olds, 2010). To determine whether the replicated programs are effective, and to build the evidence base, these new programs are accompanied by a strong emphasis on rigorous random-assignment evaluation.

This roundtable will bring together three speakers and a moderator, all federal staff responsible for four large-scale rigorous evaluations spanning four types of teen pregnancy prevention programming:

• replication and scale-up of evidence-based programs via competitive grants;
• replication and scale-up of evidence-based programs via state formula grants;
• innovative, new, or untested programs; and
• blending of multiple evidence-based strategies in community-wide initiatives.

A final speaker, familiar with teen pregnancy prevention program meta-analyses, will review the quality of the evaluations underway and discuss how these evaluations will fill gaps in the evidence base and provide a nuanced understanding of what is effective in teen pregnancy prevention programming.

The moderator and speakers can speak to a range of issues confronted in past and current teen pregnancy prevention program evaluation, including:

• With regard to rigorous evaluation:
  o Standards: What standards (e.g., timing of randomization, strength of program implementation) will random-assignment evaluations meet? What standards will quasi-experimental evaluations of community-wide initiatives meet?
  o Recruitment of sites: What challenges have been encountered in recruiting school- and community-based programs for evaluation, and how have they been resolved?
  o Measurement of outcomes: Which outcomes will be measured, and will they be measured in common across evaluations? What will the timing of follow-ups be?
• With regard to replication of evidence-based programs:
  o Design and implementation: How have grantees and states designed and implemented their programs?
  o Replication and fidelity: How have grantees and states interpreted and enforced replication requirements, such as fidelity?
  o Comparative effectiveness: How will results be compared with those of other evaluations of the same program?
• With regard to innovative, new, or untested programs:
  o Adaptations: Which types of adaptations differentiate these programs from the federally funded replications?
  o Documenting new programs: How will implementation evaluation document innovations, including grantee fidelity to the new program models?
• With regard to the community-wide initiatives:
  o Community context: How are the multi-component initiatives designed in response to the specific community context? How have communities selected the components of their programs?
  o Design of quasi-experimental studies: Which challenges unique to community-level studies has the evaluation confronted, and how have they been resolved?

