Poster Paper:
What Works for College Student Success: A Meta-Analytic Study of Motivation Interventions
This meta-analytic study reviews the existing literature on motivational interventions that used the gold-standard randomized controlled design and were published after 1980. We provide not only an overall average effect size (ES) for motivation interventions but also a better understanding of which types of interventions help college students, focusing on the competencies targeted, the populations served, and the activities carried out in these interventions.
Our search procedure identified 32 published studies that used random assignment and directly or indirectly manipulated an intrapersonal competency identified by the National Research Council. These competencies include growth mindset, sense of belonging, utility value, academic self-efficacy, conscientiousness, intrinsic motivation, and positive future self. In addition, the studies in our analysis measured an academic performance outcome (e.g., a competency test, course grade, course exam, or GPA). The 32 studies yielded 72 unique ES estimates.
Overall, we found that the interventions were significantly effective, with an average ES of g = 0.42 (95% confidence interval [0.30, 0.54]). Further, unadjusted average ES estimates for the descriptive variables were uniformly positive and significantly different from zero. ES estimates varied by type of competency, outcome, and target population. For example, growth mindset/attribution retraining interventions contributed the largest number of ES estimates, with an average ES of 0.35; utility value interventions, by contrast, had an average ES of 0.09. Among outcome measures, average ES estimates were largest for those categorized as competency tests and smallest for those categorized as course grades. Lastly, average ES estimates were larger for interventions targeting an underrepresented minority (URM) group than for those that did not.
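For readers unfamiliar with how such a pooled estimate is obtained, the sketch below computes a random-effects average ES and its 95% confidence interval using the DerSimonian-Laird estimator. The effect sizes and variances shown are hypothetical placeholders, not the 72 estimates analyzed here.

```python
import numpy as np

# Hypothetical Hedges' g estimates and their sampling variances
# (placeholders only; not the actual estimates from this review).
g = np.array([0.55, 0.20, 0.48, 0.10, 0.62, 0.35])
v = np.array([0.04, 0.03, 0.05, 0.02, 0.06, 0.03])

# Fixed-effect weights and the Q statistic for heterogeneity.
w = 1.0 / v
g_fe = np.sum(w * g) / np.sum(w)
q = np.sum(w * (g - g_fe) ** 2)
df = len(g) - 1

# DerSimonian-Laird estimate of between-study variance (tau^2).
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled ES, and 95% confidence interval.
w_re = 1.0 / (v + tau2)
g_bar = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
ci = (g_bar - 1.96 * se, g_bar + 1.96 * se)

print(f"pooled g = {g_bar:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}], tau^2 = {tau2:.3f}")
```

The same inverse-variance weighting logic underlies most random-effects meta-analysis software, which would additionally report heterogeneity statistics such as I-squared.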
Preliminary regression results from a multi-level modeling approach suggest that differences in ES estimates were significant only for outcome measure and target population, not for other moderators such as intrapersonal competency.
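As a rough sketch of how moderators can be examined when multiple ES estimates are nested within studies, the example below fits a mixed model with a random intercept for study and hypothetical moderator codes (outcome type and URM targeting). It is a simplified proxy for the multi-level approach described here: it does not weight estimates by their known sampling variances, which a dedicated meta-analytic multilevel model (e.g., metafor's rma.mv in R) would.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per ES estimate, nested within studies.
# Column names and codings are illustrative, not the coding scheme used in the review.
data = pd.DataFrame({
    "es":       [0.55, 0.20, 0.48, 0.10, 0.62, 0.35, 0.05, 0.41, 0.30, 0.15, 0.50, 0.25],
    "study_id": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "outcome":  ["test", "grade", "test", "grade", "test", "gpa",
                 "grade", "gpa", "test", "grade", "gpa", "test"],
    "urm":      [1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0],
})

# A random intercept for study accounts for dependence among ES estimates drawn
# from the same study; outcome type and URM targeting enter as fixed-effect moderators.
model = smf.mixedlm("es ~ C(outcome) + urm", data, groups=data["study_id"])
result = model.fit()
print(result.summary())
```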
Future analyses will test potential interaction effects, as our descriptive results suggest that average ES estimates vary by outcome measure within competency groups. We also plan to investigate publication bias.
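One common way to probe publication bias is Egger's regression test, which regresses each standardized effect on its precision and checks whether the intercept differs from zero (a sign of funnel-plot asymmetry). The sketch below illustrates the test on hypothetical effect sizes and standard errors; it is one possible approach, not necessarily the one the authors will adopt.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical Hedges' g estimates and standard errors (placeholders only).
g = np.array([0.55, 0.20, 0.48, 0.10, 0.62, 0.35, 0.05, 0.41])
se = np.array([0.20, 0.17, 0.22, 0.14, 0.25, 0.17, 0.12, 0.19])

# Egger's test: regress the standardized effect (g / se) on precision (1 / se).
# A significant intercept indicates funnel-plot asymmetry consistent with
# small-study effects such as publication bias.
precision = 1.0 / se
standardized = g / se
X = sm.add_constant(precision)
fit = sm.OLS(standardized, X).fit()

intercept, intercept_p = fit.params[0], fit.pvalues[0]
print(f"Egger intercept = {intercept:.2f} (p = {intercept_p:.3f})")
```

Complementary approaches such as funnel plots or trim-and-fill could also be used to assess the sensitivity of the pooled estimate.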