Panel Paper:
Evaluating the Head Start Designation Renewal System
This paper addresses two research questions: (1) Did grantees designated for competition differ from grantees not designated on selected quality measures? (2) Did the new criterion based on the Classroom Assessment Scoring System (CLASS) provide reliable and valid monitoring data on grantees? First, we compared grantees that were and were not designated. Second, we examined the specific criterion that resulted in designation: in separate analyses, we compared grantees not designated with those designated due to deficiencies in meeting the Head Start Program Performance Standards and with those designated due to low CLASS scores. The evaluation tested whether grantees that were and were not designated for competition differed on the quality of their Head Start classrooms, child health and safety practices, parent involvement, or center and grantee administration. We also compared CLASS domain scores as collected by the Designation Renewal System (DRS) monitoring team and by the evaluation team to assess agreement and potential bias.
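The abstract does not spell out the statistical procedures used to assess agreement and bias between the two teams' CLASS ratings. A minimal sketch of one common approach, pairing scores at the grantee level and computing a correlation alongside a paired test of mean differences, is shown below; the domain chosen, the variable names, and the simulated scores are illustrative assumptions, not study data or the authors' actual method.

```python
# Illustrative sketch: agreement and bias between two teams' CLASS domain scores.
# The paired scores below are simulated placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_grantees = 71

# Hypothetical CLASS Emotional Support scores (1-7 scale) for the same grantees,
# one set from the DRS monitoring team and one from the evaluation team.
drs_scores = np.clip(rng.normal(5.8, 0.5, n_grantees), 1, 7)
eval_scores = np.clip(drs_scores + rng.normal(0.0, 0.3, n_grantees), 1, 7)

# Agreement: how strongly the two sets of ratings co-vary across grantees.
r, r_p = stats.pearsonr(drs_scores, eval_scores)

# Bias: whether one team systematically rates higher than the other.
t, t_p = stats.ttest_rel(drs_scores, eval_scores)
mean_diff = np.mean(drs_scores - eval_scores)

print(f"Pearson r = {r:.2f} (p = {r_p:.3f})")
print(f"Mean difference (DRS - evaluation) = {mean_diff:.2f}, "
      f"paired t = {t:.2f} (p = {t_p:.3f})")
```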
Seventy-one randomly selected grantees (35 designated for competition, 36 not designated) were assessed at three levels. In randomly selected classrooms, we used the CLASS and three aligned measures: the Early Childhood Environment Rating Scale-Revised (ECERS-R), the ECERS-Extension, and the adapted Teacher Style Rating Scale. In their centers, we used the Program Administration Scale (PAS) to examine parent involvement, staff qualifications, governance, and fiscal administration, along with a combination of two child health and safety questionnaires from the National Association for the Education of Young Children and the state of California to measure health and safety practices. With their grantee directors, we used the PAS and a measure of technical assistance and professional development. In addition, tax data were analyzed. Analyses tested for differences between designated and not-designated grantees on these quality measures. Although we cannot share the results at this time because they will not be publicly available until Summer 2016, we believe they will have implications for future implementation of the DRS and for state quality rating and improvement system efforts.
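The abstract likewise does not specify how the designated and not-designated groups were compared. As one hedged illustration, a simple two-group comparison on a single quality measure might look like the sketch below; the measure, the simulated scores, and the choice of Welch's t-test with Cohen's d are assumptions for exposition only.

```python
# Illustrative sketch: comparing designated vs. not-designated grantees on one
# quality measure (e.g., an ECERS-R total score). Scores are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
designated = rng.normal(4.6, 0.6, 35)      # 35 grantees designated for competition
not_designated = rng.normal(4.9, 0.6, 36)  # 36 grantees not designated

# Welch's t-test: does not assume equal variances across the two groups.
t, p = stats.ttest_ind(designated, not_designated, equal_var=False)

# Cohen's d as a standardized effect size for the group difference.
pooled_sd = np.sqrt((np.var(designated, ddof=1) + np.var(not_designated, ddof=1)) / 2)
d = (np.mean(designated) - np.mean(not_designated)) / pooled_sd

print(f"Welch t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
```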