Poster Paper: Great Evidence, Lousy Findings: Why Methods-Based Evaluation Approaches Might Hinder Effective Program Development

Friday, November 3, 2017
Regency Ballroom (Hyatt Regency Chicago)


Daniela Schroeter and Brad Watts, Western Michigan University


Randomized controlled trials are crucial for shaping evidence-based policy and practice. To that end, the federal government has funded rigorous studies to evaluate the effectiveness of interventions in higher education. Yet while randomized controlled trials yield the highest level of evidence, they may also hinder innovation and development. This paper introduces a multi-year study of a growth mindset intervention in a university setting. Growth mindset has received significant attention in recent years; the research suggests that low-cost, low-dose, rapid student-level interventions can yield significant student learning outcomes that improve retention and completion, ultimately reducing the cost of education. However, detailed descriptions of interventions that have been shown to work are limited.

Mindset research focuses on how individuals’ beliefs and ways of thinking impact their behavior and attitudes, and it serves as the theoretical basis for the interventions. Within the context of a First In The World grant, mindset theory was tested with a focus on addressing the anxiety individuals experience about their perceived ability to succeed, a near-universal experience that is heightened at key transition moments (e.g., beginning college or other post-secondary education) (Yeager & Walton, 2011). Mindset research also shows that students’ perception of whether they can succeed is influenced by whether they think intelligence is a fixed or malleable trait (Dweck, 2007). Some research has shown that, with hard work, the brain can strengthen in much the same way that muscles do, effectively ‘growing’ intelligence (Dweck, Walton, & Cohen, 2014). Other research suggests that when students are confronted with adversity and believe their capacity for learning is fixed, they are more likely to fall prey to self-fulfilling prophecies and conclude they are not cut out for a particular discipline or for college. The view of intelligence as fixed is frequently applied to beliefs about math: students either ‘get it’ or they do not (Ashcraft, 2002).

Designed as a randomized controlled trial that closely follows What Works Clearinghouse standards, the evaluation of the student-level intervention uses an objectives-based evaluation approach to test the intervention’s effectiveness. Findings from several iterations of the experiment yielded no significant results. This raises the question of whether we are measuring what matters at a time when the treatment is, at best, still developmental. Additional information is needed to strengthen the intervention so that the intended outcomes become attainable. Such information can only be obtained through alternative evaluation approaches, which may compromise the core study design. As a result, innovation and intervention improvement are hindered. This paper presents findings from the randomized controlled trial and introduces pragmatic, constructivist, and transformative evaluation approaches that may facilitate innovation and improvement of the intervention so that it can ultimately succeed in higher education settings.