Panel Paper: Evaluating the Head Start Designation Renewal System

Friday, November 4, 2016: 8:30 AM
Columbia 2 (Washington Hilton)


Margaret Burchinal, University of North Carolina at Chapel Hill and Teresa Derrick-Mills, Urban Institute


This paper will describe the evaluation of an accountability and performance assessment system, the Head Start Designation Renewal System (DRS), developed in response to mandates in the 2007 reauthorization of Head Start that required monitoring to identify grantees not delivering high-quality, comprehensive services and required that those grantees recompete for their grants. In 2011, the Office of Head Start (OHS) began implementing the DRS, which added monitoring of teacher-child interactions using the Classroom Assessment Scoring System (CLASS) to the existing criteria, resulting in seven conditions for determining grantee quality. To date, nearly all designations (99%) have resulted from two of the conditions: (1) receiving a deficiency (i.e., a systemic or substantial failure) in meeting the Head Start Program Performance Standards during an OHS monitoring review, and (2) CLASS scores below a minimum threshold or in the lowest 10% of scores in any one of the CLASS domains (i.e., Emotional Support, Classroom Organization, or Instructional Support).

This paper addresses two research questions: (1) Did grantees designated for competition differ on selected quality measures from grantees that were not designated? (2) Did the new CLASS criterion provide reliable and valid monitoring data on grantees? First, we compared grantees that were and were not designated. Second, we examined the specific criterion that resulted in designation: in separate analyses, we compared grantees not designated with those designated due to deficiencies in meeting the Head Start Program Performance Standards and with those designated due to low CLASS scores. The evaluation tested whether grantees that were and were not designated for competition differed on the quality of their Head Start classrooms, child health and safety practices, parent involvement, or center and grantee administration. We also compared CLASS domain scores as collected by the DRS monitoring team and by the evaluation team to assess agreement and potential bias.
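To illustrate what such an agreement and bias check might look like, the minimal sketch below correlates the two teams' scores on one CLASS domain and tests whether their mean difference departs from zero. This is a hypothetical example only: the score arrays are invented placeholders and the code is not the evaluation's actual analysis.

```python
# Hypothetical sketch: agreement and bias between two sets of CLASS domain scores.
# The scores below are invented placeholders, not data from the evaluation.
import numpy as np
from scipy import stats

# CLASS Instructional Support scores for the same grantees,
# one score per grantee from each observation team (1-7 scale).
drs_team = np.array([2.1, 2.8, 3.0, 2.5, 3.4, 2.9, 2.2, 3.1])
eval_team = np.array([2.4, 2.6, 3.2, 2.7, 3.3, 3.0, 2.5, 3.0])

# Agreement: how strongly the two teams' scores co-vary across grantees.
r, r_p = stats.pearsonr(drs_team, eval_team)

# Bias: whether one team scores systematically higher than the other
# (paired t-test on the within-grantee differences).
t, t_p = stats.ttest_rel(drs_team, eval_team)
mean_diff = np.mean(drs_team - eval_team)

print(f"Pearson r = {r:.2f} (p = {r_p:.3f})")
print(f"Mean difference (DRS - evaluation) = {mean_diff:.2f}, t = {t:.2f}, p = {t_p:.3f}")
```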

Seventy-one randomly selected grantees (35 designated for competition, 36 not designated) were assessed at three levels. Randomly selected classrooms were observed using the CLASS and three aligned measures: the Early Childhood Environment Rating Scale-Revised (ECERS-R), the ECERS-Extension, and an adapted Teacher Style Rating Scale. Centers were assessed using the Program Administration Scale (PAS) to examine parent involvement, staff qualifications, governance, and fiscal administration, and using a combination of two child health and safety questionnaires, from the National Association for the Education of Young Children and the state of California, to measure health and safety practices. Grantee directors were assessed with the PAS and a measure of technical assistance and professional development. In addition, tax data were analyzed. Analyses tested for differences between designated and not-designated grantees on these quality measures. Although we cannot share the results at this time because they will not be publicly available until summer 2016, we believe they will have implications for future implementation of the DRS and for state quality rating and improvement system efforts.
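For a concrete picture of the difference tests described above, the following sketch compares designated and not-designated grantees on a single quality measure with an independent-samples t-test. The data, group sizes, and variable names are hypothetical placeholders, not the evaluation's data or code, and the evaluation's actual models may differ.

```python
# Hypothetical sketch: comparing designated vs. not-designated grantees on one quality measure.
# The ECERS-R scores below are invented placeholders, not evaluation data.
import numpy as np
from scipy import stats

designated = np.array([3.8, 4.1, 3.5, 4.0, 3.6, 3.9])      # grantee-level mean ECERS-R
not_designated = np.array([4.3, 4.6, 4.1, 4.4, 4.0, 4.5])

# Welch's t-test (does not assume equal variances across the two groups).
t, p = stats.ttest_ind(designated, not_designated, equal_var=False)

# Standardized difference (Cohen's d with a pooled standard deviation).
pooled_sd = np.sqrt((designated.var(ddof=1) + not_designated.var(ddof=1)) / 2)
d = (designated.mean() - not_designated.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
```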