Panel Paper: Early Care and Education Center Quality and Child Outcomes: A Meta-Analysis of Six Datasets

Friday, November 4, 2016 : 11:15 AM
Columbia 2 (Washington Hilton)


Sandra Soliday Hong1, Terri Sabol2 and Margaret Burchinal1, (1)University of North Carolina at Chapel Hill, (2)Northwestern University


To support the early learning and development of young children, many early childhood education (ECE) programs and policies regulate and monitor the quality of early care and education programs. Yet the empirical basis for these monitoring systems, and for the related rapid rollout of preschool rating systems (e.g., QRIS), is somewhat weak, resting on studies that explored classroom-level quality indicators and/or did not use empirically rigorous methods. That research has relied in large part on the premise that process quality (e.g., teacher responsiveness) and structural indicators of quality (e.g., group size, teacher qualifications) collectively promote early learning and development. However, questions have emerged about the extent to which many of the process and structural quality ratings, and the combined program-level ratings used in QRIS, directly relate to children's early learning and development. This study examined the extent to which (1) selected quality indicators widely used in these monitoring systems and (2) a simulated center quality rating built on psychometric principles of scale development predict child outcomes among preschoolers in center-based programs.

In the present study, we used secondary data from six large studies of child care quality and children's school readiness that collected information on both structural and process quality in early care and education settings serving 3- and 4-year-old children (n = 2,078 programs; e.g., ECLS-B, Head Start FACES 2006 and 2009, and the North Carolina and Georgia pre-k evaluations). We categorized individual quality indicators (e.g., group size) based on professional standards (e.g., high quality: ≤ 14 four-year-old children; low quality: > 20 children). We then applied basic psychometric principles of scale development, including attention to the dimensionality of ECE quality variables and the scoring of selected items, to generate overall program ratings. Two- and three-level models were used to account for the nesting of children within centers, and all indicators were aggregated to the center level. Coefficients from each set of analyses in each study were combined in a meta-analysis, weighting mean coefficients by their standard errors and sample sizes across studies.
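
The abstract does not specify the exact pooling scheme, so the following is an illustrative sketch only: a standard fixed-effect, inverse-variance-weighted mean of study-level coefficients, where the symbols beta_k and SE_k (the coefficient and standard error from study k) are introduced here for illustration and are not taken from the study.

% Illustrative fixed-effect pooling (assumed inverse-variance weights)
\[
  w_k = \frac{1}{SE_k^{2}}, \qquad
  \bar{\beta} = \frac{\sum_{k=1}^{K} w_k \, \beta_k}{\sum_{k=1}^{K} w_k}, \qquad
  SE(\bar{\beta}) = \frac{1}{\sqrt{\sum_{k=1}^{K} w_k}}
\]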

In terms of individual quality indicators, the meta-analyses indicated that teacher education was related to pre-literacy skills and that director education was related to language, pre-literacy, and math skills. Child-teacher ratio, group size, and global program quality (ECERS) were not reliably related to any of the child outcomes. Curriculum use and training were associated with improved social skills. Observed teacher-child interactions (CLASS) were a small but significant predictor of language and pre-literacy skills. Effect sizes were small to moderate (.03 < d < .13).

The simulated QRIS rating significantly predicted language, pre-literacy, and math when process quality was not included in the rating, and language and pre-literacy when process quality was added. Overall, results indicated that gains in child outcomes could be predicted from a center-level rating when that scale was designed with psychometric principles in mind and indicators were selected and scored based on the research literature. Our findings suggest that research on center-level quality has the potential to inform how ECE policies regulate and monitor quality.