
Panel Paper: Evaluating the "Bang for the Buck" of Colleges: An Experimental Information Intervention Based on College Scorecard

Thursday, November 12, 2015 : 3:30 PM
Tuttle North (Hyatt Regency Miami)


Helen Kilber, University of Washington
The U.S. Department of Education's College Scorecard aims to enable users to compare schools based on “where you can get the most bang for your educational buck” (Obama, State of the Union, 2013). I test the effect of receiving “college scorecard” information in a small (N = 322) experiment involving high school seniors in two Portland, Oregon high schools, one a lower-income school (66% free and reduced-price lunch) and the other a higher-income school (21% free and reduced-price lunch).

Students were asked to rank two short lists of colleges, including public and private four-years and community college/public four-year combinations, from “best” to “worst” investment from the perspective of two high-achieving, low-income students.  Students were randomly assigned to receive either “basic” information (college location and sector) or “basic” information plus “college scorecard” information (net price for a low-income student, average graduation rate, average mid-career salary).  I compare students’ rankings to an “ideal” ranking, constructed from a simple human capital model, using Kendall's tau.
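To make the comparison concrete, the agreement between a student's ranking and the ideal ranking can be sketched in a few lines of Python. The college lists and rankings below are hypothetical, not the study's data; the function computes Kendall's tau-a (appropriate here because rankings are permutations with no ties).

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's tau-a for two tie-free rankings of the same items.

    Counts concordant minus discordant pairs, normalized by the
    total number of pairs, n*(n-1)/2.  Ranges from -1 (reversed)
    to +1 (identical).
    """
    n = len(rank_a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        sign = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical example: rank 1 = "best" investment across four colleges.
ideal   = [2, 1, 4, 3]   # ranking implied by a simple human capital model
student = [1, 2, 4, 3]   # one student's submitted ranking

print(kendall_tau(ideal, ideal))    # 1.0: perfect agreement
print(kendall_tau(ideal, student))  # 0.67: one discordant pair out of six
```

A tau near zero indicates no relationship between the two rankings, which is how "no relationship to the ideal ranking" is operationalized below.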

In both tasks, students with college scorecard information were significantly more accurate than students with basic information (Task 1: F(1, 316) = 84.909, p < 0.001, ω² = 0.2066; Task 2: F(1, 316) = 56.235, p < 0.001, ω² = 0.1473).  Without scorecard information, students’ rankings showed either no relationship to the ideal ranking (Task 1) or considerable disagreement with it (Task 2).  This provides causal evidence that college scorecard information improves students’ ability to rank colleges based on their “bang for the buck.” I find no significant effect of school attended (lower or higher income) on ranking accuracy.  This suggests that the effect of scorecard information is not moderated by school environment or, given the large differences in ACT scores between the schools, by students’ academic ability.

I also investigate the effect of task difficulty on the impact of scorecard information.  Hoxby and Avery (2012) identified the puzzle of high-achieving, low-income students failing to apply to highly selective private colleges, where tuition subsidies can mean very low costs of attendance.  Not only are these colleges often cheaper; their average graduation rates are usually higher than those of the public four-years to which high-achieving, low-income students frequently apply.  Thus they offer “more for less” and are an obviously better investment.  It was obvious to students in this study: 76 percent of students with scorecard information ranked a “more for less” private college #1, while only 15 percent of students without that information did so.

But this is an unusual choice situation.  Students will generally need to trade off net price, graduation rates, and average salary to rank colleges correctly.  This is a harder task, and harder still if community college transfer rates must also be considered.  I find evidence that, when such tradeoffs are required, college scorecard information has less impact on ranking accuracy.  This evidence is not causal, but it nonetheless raises questions about how much the average student will benefit from college scorecard information.
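The tradeoff problem can be illustrated with a toy scoring rule. The abstract does not specify the human capital model used, so the score below (graduation-weighted salary minus net price) and all figures are illustrative assumptions, not the paper's actual model or data.

```python
# Hypothetical scorecard data: (net price for a low-income student,
# graduation rate, average mid-career salary).  Values are invented.
colleges = {
    "Selective private":             (8_000, 0.90, 95_000),
    "Public four-year":              (15_000, 0.60, 75_000),
    "Community college -> public":   (10_000, 0.35, 70_000),
}

def score(net_price, grad_rate, salary):
    # Assumed toy model: expected payoff is mid-career salary weighted
    # by the chance of graduating, minus the net price of attendance.
    return grad_rate * salary - net_price

# Rank colleges from best to worst investment under the toy model.
ranked = sorted(colleges, key=lambda c: score(*colleges[c]), reverse=True)
print(ranked)
# ['Selective private', 'Public four-year', 'Community college -> public']
```

When one option dominates on every attribute, as the selective private college does here, any reasonable weighting yields the same #1; the harder rankings arise when no option dominates and the weights actually matter.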