Panel Paper: The Effects of Accountability Incentives in Early Childhood Education: Evidence from Tennessee

Saturday, November 10, 2018
Wilson A - Mezz Level (Marriott Wardman Park)


Daphna Bassok1, Thomas Dee2 and Scott Latham2, (1)University of Virginia, (2)Stanford University

Nearly all U.S. states have recently adopted Quality Rating and Improvement Systems (QRIS) in an effort to enhance the quality of early childhood education (ECE) at scale. QRIS are accountability systems that aim to create both reputational and financial incentives for ECE providers to improve. QRIS measure ECE program quality along multiple dimensions and make this information available to both providers and consumers. Despite large federal investments in these systems, their rapid roll-out, and the costs of collecting the underlying data, we know little about whether these accountability reforms operate as theorized and lead to quality improvements.

This study aims to fill this gap, drawing on unusually rich data from Tennessee’s statewide QRIS, one of the oldest and most well-established in the country. We use a 16-year panel of annual provider-level data that covers the full universe of licensed child care providers since the QRIS was implemented, including both center-based programs and family child care homes. The panel provides detailed measures of ECE quality, such as director qualifications, professional development, and ratios and group sizes. Importantly, it also includes multiple classroom observations, including the item-level quality measures collected in each observed classroom. Finally, the data include administrative information about providers, including opening and closing dates, subsidized enrollment, capacity, auspice, and street addresses. This uniquely rich panel allows for the most in-depth look to date at longitudinal, statewide quality improvement in ECE markets.

We combine descriptive and causal analyses to characterize the dynamic evolution of child care quality in the decade and a half since the system was implemented. First, we document large improvements across all dimensions of quality measured and incentivized by Tennessee’s QRIS. For example, we find that from 2002 to 2015, center-based providers improved their observational rating scores by about 0.5 standard deviations. Second, we provide compelling descriptive evidence that providers responded strategically to the incentives embedded in the accountability system: a disproportionate number of providers earned the minimum score necessary to qualify for a particular QRIS rating.

Finally, we examine the causal effect of a specific incentive contrast embedded within Tennessee’s QRIS using a regression discontinuity (RD) design. Providers were quasi-randomly assigned a higher or lower quality rating based on an arbitrary cut point along a continuous measure of observational quality. We use this discontinuity to estimate the causal effect of receiving a higher versus a lower rating on a host of subsequent measures of program quality, as well as on closure rates and rates of subsidized enrollment. Our data allow us to examine not only whether the QRIS led to improvements in program quality, but also how programs responded, distinguishing between more and less superficial responses.
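The sharp RD logic described above can be sketched in a few lines. The following is a hypothetical simulation, not the paper's data: the cutoff location, the size of the simulated jump, the bandwidth, and all variable names are illustrative assumptions.

```python
import numpy as np

# Hypothetical sharp RD sketch. The simulated running variable, cutoff,
# and jump size (0.25) are illustrative assumptions, not estimates from
# the Tennessee QRIS data.
rng = np.random.default_rng(0)

n = 2000
score = rng.uniform(-1, 1, n)          # running variable: observational quality, centered at the cut point
treated = (score >= 0).astype(float)   # providers at or above the cut point receive the higher rating
outcome = 0.3 * score + 0.25 * treated + rng.normal(0, 0.1, n)  # true discontinuity of 0.25

def rd_estimate(score, outcome, bandwidth=0.5):
    """Local linear RD: fit separate intercepts and slopes on each side
    of the cutoff within the bandwidth; return the estimated jump at zero."""
    mask = np.abs(score) <= bandwidth
    s, y = score[mask], outcome[mask]
    t = (s >= 0).astype(float)
    # Design matrix: intercept, treatment indicator, slope, slope x treatment
    X = np.column_stack([np.ones_like(s), t, s, s * t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # coefficient on the treatment indicator = jump at the cutoff

est = rd_estimate(score, outcome)
```

In this simulated setup, `est` recovers the discontinuity (about 0.25) that quasi-random assignment around the cut point identifies; the paper's actual estimates use the observed rating outcomes rather than simulated ones.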