Panel Paper:
Evaluating the "Bang for the Buck" of Colleges: An Experimental Information Intervention Based on College Scorecard
Students were asked to rank two short lists of colleges, including public and private four-years and community college/public four-year combinations, from “best” to “worst” investment from the perspective of two high-achieving, low-income students. Students were randomly assigned to receive either “basic” information (college location and sector) or “basic” information plus “College Scorecard” information (net price for a low-income student, average graduation rate, average mid-career salary). I compare students’ rankings to an “ideal” ranking, constructed from a simple human capital model, using Kendall’s tau.
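As a minimal sketch of this accuracy measure (assuming scipy is available), the comparison between a student’s submitted ranking and the ideal ranking can be computed with Kendall’s tau; the colleges and rankings below are hypothetical, and the paper’s actual ideal ranking comes from its human capital model:

from scipy.stats import kendalltau

# Ideal ranking (best = 1) over four hypothetical colleges, and one
# student's submitted ranking of the same four colleges.
ideal_rank   = [1, 2, 3, 4]   # e.g., Private A, Public B, Public C, CC-to-four-year D
student_rank = [2, 1, 3, 4]   # this student swaps the top two colleges

tau, p_value = kendalltau(ideal_rank, student_rank)
print(f"Kendall's tau = {tau:.2f}")  # 1.0 = perfect agreement, -1.0 = fully reversed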
In both tasks, students with College Scorecard information were significantly more accurate than students with basic information (Task 1: F(1,316) = 84.909, p < 0.001, ω² = 0.2066; Task 2: F(1,316) = 56.235, p < 0.001, ω² = 0.1473). Without Scorecard information, students’ rankings showed either no relationship to the ideal ranking (Task 1) or considerable disagreement with it (Task 2). This provides causal evidence that College Scorecard information improves students’ ability to rank colleges by their “bang for the buck.” I find no significant effect of school attended (lower- or higher-income) on ranking accuracy, which suggests that the effect of Scorecard information is not moderated by school environment or, given the large differences in ACT scores between the schools, by students’ academic ability.
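The group comparison reported here can be sketched as a one-way ANOVA on ranking accuracy (Kendall’s tau) by information condition; the tau values below are simulated for illustration and are not the study’s data:

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# 159 students per arm gives the F(1, 316) degrees of freedom reported above.
basic_info     = np.clip(rng.normal(loc=0.05, scale=0.40, size=159), -1, 1)  # basic info only
scorecard_info = np.clip(rng.normal(loc=0.45, scale=0.40, size=159), -1, 1)  # basic + Scorecard

f_stat, p_value = f_oneway(basic_info, scorecard_info)
df_error = len(basic_info) + len(scorecard_info) - 2
print(f"F(1, {df_error}) = {f_stat:.2f}, p = {p_value:.4g}")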
I also investigate the effect of task difficulty on the impact of Scorecard information. Hoxby and Avery (2012) identified the puzzle of high-achieving, low-income students failing to apply to highly selective private colleges, where tuition subsidies can mean very low costs of attendance. Not only are these colleges often cheaper; their average graduation rates are usually higher than those of the public four-years to which high-achieving, low-income students frequently apply. Thus they offer “more for less” and are an obviously better investment. This was obvious to students in this study as well: 76 percent of students with Scorecard information ranked a “more for less” private college #1, while only 15 percent of students without that information did so.
But this is an unusual choice situation. Students will generally need to trade off net price, graduation rates, and average salary to rank colleges correctly, and the task is harder still if community college transfer rates must also be considered. I find evidence that, when such tradeoffs are required, College Scorecard information has less impact on ranking accuracy. This evidence is not causal, but it nonetheless raises questions about how much the average student will benefit from College Scorecard information.
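To make the tradeoff concrete, here is a toy scoring rule, an assumption for illustration only and not the paper’s human capital model, that values a college as graduation-weighted expected earnings minus expected net cost; the colleges and figures are hypothetical:

def bang_for_buck(net_price, grad_rate, mid_career_salary, years_earning=30):
    """Hypothetical value metric: earnings scaled by graduation probability,
    minus four years of net price. Illustrative only."""
    expected_earnings = grad_rate * mid_career_salary * years_earning
    expected_cost = 4 * net_price
    return expected_earnings - expected_cost

# A genuine tradeoff: a cheaper public four-year versus a pricier private
# college with a higher graduation rate and higher mid-career salary.
public_u  = bang_for_buck(net_price=8_000,  grad_rate=0.55, mid_career_salary=55_000)
private_u = bang_for_buck(net_price=15_000, grad_rate=0.85, mid_career_salary=65_000)
ranking = sorted([("Public U", public_u), ("Private U", private_u)], key=lambda c: -c[1])
print(ranking)  # under this toy rule, the pricier private college ranks first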