Panel Paper:
Plagued By Selection: The Value of Using Post-Schooling Earnings Outcomes in the College Scorecard to Guide College Decisions
Prior work on the appropriateness of using post-schooling labor market outcomes to evaluate higher education institutions has relied on small institutional samples or administrative data from a single state to examine earnings differences across schools (Cunha & Miller, 2014; Eide, Hilmer, & Showalter, 2015; Minaya & Scott-Clayton, 2016). As a result, we are not aware of any study that has examined the determinants of earnings differences across the census of public and private, non-profit four-year degree-granting institutions in the United States or evaluated whether the earnings data in the Scorecard enable students to compare schools accurately.
Recent evidence has also shown that the labor market returns to college are larger for some programs than for others (Hastings, Neilson, & Zimmerman, 2013; Hershbein & Kearney, 2014; Kirkeboen, Leuven, & Mogstad, 2016; Weber, 2014; Weber, 2016), yet we believe we are the first to examine the degree to which earnings differences across schools can be explained by differential selection into majors. In doing so, we shed light on the potential value of disaggregating earnings by school and program for consumers, which is particularly timely given that President Trump issued an executive order in March 2019 directing the U.S. Department of Education to publish program-level earnings data for each college.
Using the universe of public and private, non-profit four-year institutions in the Scorecard dataset, we find that three-quarters of the variation in median earnings across schools is explained by selection factors. Accounting for differences in major composition explains approximately 15 percent of the variation in median earnings across schools that remains after controlling for institutional selectivity, student composition, and local cost-of-living differences. Our results imply that ignoring selection into majors overstates the between-school variation in median earnings possibly attributable to institutions by over 30 percent. Furthermore, we show that using earnings measures to evaluate school quality is highly sensitive to the choice of metric. For example, nearly three-quarters of schools move at least 10 percentiles in the earnings distribution after we control for selection factors. Even after controlling for observable differences between schools, between one-third and one-half of schools move 10 percentiles or more when either the 25th or 75th percentile of school earnings is used instead of the median to evaluate institutional quality. Taken together, our findings suggest that improving the accuracy and efficacy of college search tools requires helping students and families understand and distinguish between multiple adjusted earnings metrics.
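For readers who want to see the mechanics behind a decomposition like the one described above, the sketch below illustrates one way to estimate how much of the between-school variation in earnings is absorbed by selection controls and then by major composition, using nested regressions and incremental R-squared. It is a minimal illustration only: the file name, variable names (e.g., log_median_earnings, sat_avg, share_stem), and the specific set of controls are hypothetical placeholders, not the paper's actual data or specification.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical school-level extract of Scorecard-style data; the file and
# column names are illustrative placeholders, not the paper's variables.
df = pd.read_csv("scorecard_four_year_schools.csv")

# Step 1: regress (log) median earnings on selection controls only
# (institutional selectivity, student composition, local cost of living).
base = smf.ols(
    "log_median_earnings ~ sat_avg + admit_rate + pct_pell"
    " + pct_first_gen + local_cost_index",
    data=df,
).fit()

# Step 2: add the shares of graduates in broad major fields.
full = smf.ols(
    "log_median_earnings ~ sat_avg + admit_rate + pct_pell"
    " + pct_first_gen + local_cost_index"
    " + share_stem + share_business + share_health + share_education",
    data=df,
).fit()

# Share of total between-school variation explained by selection controls alone.
explained_by_selection = base.rsquared

# Share of the *remaining* variation explained by major composition.
explained_by_majors = (full.rsquared - base.rsquared) / (1 - base.rsquared)

print(f"Selection controls explain {explained_by_selection:.0%} of the variation")
print(f"Major composition explains {explained_by_majors:.0%} of what remains")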