Panel Paper: Identifying Naturally-Occurring Direct Assessments of Social-Emotional Competencies: The Promise of Student Assessment Metadata

Thursday, November 8, 2018
Lincoln 3 - Exhibit Level (Marriott Wardman Park)

*Names in bold indicate Presenter

Albert Cheng, Harvard University; Collin Hitt, Southern Illinois University School of Medicine; James Soland, NWEA; and Gema Zamarro, University of Arkansas


Social-emotional learning (SEL) is a familiar concept in educational psychology that is gaining new traction in education research, practice, and policy. One reason for the renewed interest in SEL is the growing research evidence of the importance of social-emotional competencies (soft skills) for educational attainment, employment, and other long-run student outcomes. Moreover, the recent federal Every Student Succeeds Act provides states and educators with new opportunities to focus on SEL by requiring them to include non-academic indicators of student success in their accountability plans.

Measuring SEL is integral to fostering and studying it, and most existing measures are student self-report surveys. Yet ample research suggests that these measures can suffer from self-report biases, contextual variability, and a general lack of reliability. These shortcomings of self-report measures can undermine the SEL-related inferences about students, teachers, and schools that researchers, educators, and policymakers wish to make.

In this paper, we review new and growing research and present new results suggesting a parallel measurement approach that safeguards against biased scores from common self-report measures. We identify ordinary tasks, such as taking achievement tests and completing surveys, that students perform during regular schooling activities. Although these assessments are not meant to measure SEL at all, completing them theoretically requires not only knowledge of the assessed content but also a willingness to engage with the questions and the ability to remain focused. We empirically show that measures of engagement with these tasks (how long students spend on items, whether they skip items, and whether they provide inconsistent answers) (a) provide useful data related to constructs like academic self-management, self-regulation, motivation, and conscientiousness and (b) explain variation in longer-run educational outcomes. These task-based measures are thus akin to natural experiments in economics: they are not designed as direct assessments (just as natural experiments are not designed to randomize study participants), but variation in outcomes from these naturally occurring phenomena can provide meaningful and previously unobserved information as if they were intended for those purposes.
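To make the idea concrete, the engagement measures described above can be computed from an item-level response log. The sketch below is purely illustrative: the field names (`seconds`, `response`), the rapid-guess threshold, and the specific aggregates are our assumptions for exposition, not the operationalization used in the paper.

```python
# Illustrative sketch only: deriving task-based engagement measures from
# item-level assessment metadata. Field names and the rapid-guess
# threshold are assumptions for this example, not the paper's method.

RAPID_GUESS_SECONDS = 5  # assumed cutoff for flagging a rapid guess

def engagement_measures(item_log):
    """item_log: one student's items, each a dict with 'seconds'
    (time spent on the item) and 'response' (None if skipped)."""
    n = len(item_log)
    skipped = sum(1 for it in item_log if it["response"] is None)
    # Count answered items completed faster than the rapid-guess cutoff.
    rapid = sum(
        1 for it in item_log
        if it["response"] is not None and it["seconds"] < RAPID_GUESS_SECONDS
    )
    answered = n - skipped
    return {
        "skip_rate": skipped / n,
        "rapid_guess_rate": rapid / answered if answered else 0.0,
        "mean_seconds": sum(it["seconds"] for it in item_log) / n,
    }

# Hypothetical log for one student on a four-item assessment.
log = [
    {"seconds": 42.0, "response": "B"},
    {"seconds": 3.1, "response": "A"},    # plausibly a rapid guess
    {"seconds": 18.5, "response": None},  # skipped item
    {"seconds": 27.9, "response": "D"},
]
print(engagement_measures(log))
```

Aggregated across items (and, in practice, across test sessions), such indices serve as the raw material for the task-based measures of self-regulation and conscientiousness discussed above.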

Crucially, these task-based measures are constructed from metadata in student surveys and tests. We underscore how new data-collection technologies, such as computerized surveys and tests, have facilitated the collection of these new forms of metadata.

After reviewing what is known about the emerging task-based approaches to assessment, we provide concrete examples of how metadata from this new approach can be used by researchers and education practitioners to complement traditional self-reported measures or to provide previously unobserved information about student SEL. This information, in turn, can be used to take subsequent steps to improve student outcomes and evaluate policy or practice. After discussing some challenges and limitations to these task-based measures, we conclude with our recommendations for using metadata from this approach to improve the measurement of SEL.
