We draw on administrative data collected by the California Community College Chancellor’s Office on the full population of students observed in California’s community colleges from the 2006-07 through the 2011-12 school years. We observe students’ course registrations, their performance in those courses, and the delivery method (online vs. face-to-face) of each course. This allows us to compare the performance of students in online versus face-to-face courses on a variety of outcomes, including the likelihood of course completion, the likelihood of course failure conditional on completion, and course grade conditional on completion.
There are two main reasons to worry that simple comparisons of student performance in face-to-face versus online classes might be biased. First, online enrollment might be concentrated in courses that are either more or less challenging than the average face-to-face course. In other words, our estimates might be biased by sorting in how online courses are offered across different types of classes and across institutions. To address this concern, one of our main sets of models uses course-by-college fixed effects.
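The logic of course-by-college fixed effects can be sketched as a within-cell comparison: demeaning grades within each course-by-college cell removes cell-level differences in difficulty, so online and face-to-face sections are only ever compared within the same cell. The sketch below is illustrative only; the cell names, enrollment probabilities, and the -0.5 "true" online effect are invented, and the estimator is simple OLS on demeaned data rather than the paper's actual specification.

```python
# Illustrative sketch (not the paper's actual model): demeaning grades
# within each course-by-college cell removes cell-level differences in
# difficulty, so the online/face-to-face comparison is made within cells.
import random
from collections import defaultdict

random.seed(0)

TRUE_ONLINE_EFFECT = -0.5          # invented "true" effect for the demo
cell_baselines = {                 # invented course-by-college difficulty
    ("Math101", "CollegeA"): 3.0,
    ("Math101", "CollegeB"): 2.0,
    ("Eng201", "CollegeA"): 3.5,
}

rows = []
for cell, base in cell_baselines.items():
    # Suppose online sections are concentrated in easier (higher-grade)
    # cells; this sorting biases a naive pooled comparison.
    p_online = 0.8 if base > 2.5 else 0.2
    for _ in range(2000):
        online = 1 if random.random() < p_online else 0
        grade = base + TRUE_ONLINE_EFFECT * online + random.gauss(0, 0.3)
        rows.append((cell, online, grade))

def ols_slope(pairs):
    """OLS slope of y on x for (x, y) pairs (a difference in means here)."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    var = sum((x - mx) ** 2 for x, _ in pairs)
    return cov / var

# Naive estimate: pooled online vs. face-to-face, ignoring sorting.
naive = ols_slope([(online, grade) for _, online, grade in rows])

# Fixed-effects estimate: demean within each course-by-college cell.
by_cell = defaultdict(list)
for cell, online, grade in rows:
    by_cell[cell].append((online, grade))

demeaned = []
for obs in by_cell.values():
    mo = sum(o for o, _ in obs) / len(obs)
    mg = sum(g for _, g in obs) / len(obs)
    demeaned.extend((o - mo, g - mg) for o, g in obs)

fe = ols_slope(demeaned)
print(f"naive: {naive:+.2f}   within course-by-college: {fe:+.2f}")
```

In this simulation the naive pooled comparison is pushed away from the true effect because online sections sit in easier cells, while the within-cell estimate recovers it.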
Even if we hold the course constant, however, students who opt into online sections of a course might systematically differ from their peers in face-to-face sections of the same course. For instance, suppose students who prefer face-to-face courses are more engaged with college life in general, and that engagement is correlated with performance either positively (e.g., if engaged students are more motivated to do well) or negatively (e.g., if engaged students are distracted by other college activities). Such differences between the types of individuals prone to enroll in face-to-face versus online sections would bias comparisons of the relative performance of online and face-to-face students. To address this concern, we use a second analytic strategy: individual fixed effects.
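The individual fixed-effects logic can be sketched the same way: demean each student's outcomes across their own courses, so that stable traits such as engagement drop out of the online/face-to-face comparison. Everything in the sketch (the engagement term, the enrollment probabilities, the -0.5 "true" effect) is invented for illustration and is not the paper's specification.

```python
# Illustrative sketch (invented numbers): each student's stable traits
# (e.g., engagement) affect both grades and the propensity to take online
# courses. Demeaning within student removes those traits from the comparison.
import random
from collections import defaultdict

random.seed(1)

TRUE_ONLINE_EFFECT = -0.5
rows = []  # (student_id, online, grade)
for sid in range(1000):
    engagement = random.gauss(0, 1)              # unobserved, fixed trait
    p_online = 0.7 if engagement < 0 else 0.3    # less engaged -> more online
    for _ in range(6):                           # six courses per student
        online = 1 if random.random() < p_online else 0
        grade = (2.5 + 0.4 * engagement
                 + TRUE_ONLINE_EFFECT * online + random.gauss(0, 0.3))
        rows.append((sid, online, grade))

def ols_slope(pairs):
    """OLS slope of y on x for (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    var = sum((x - mx) ** 2 for x, _ in pairs)
    return cov / var

# Naive estimate is contaminated by engagement differences across students.
naive = ols_slope([(online, grade) for _, online, grade in rows])

# Individual fixed effects: demean within each student.
by_student = defaultdict(list)
for sid, online, grade in rows:
    by_student[sid].append((online, grade))

demeaned = []
for obs in by_student.values():
    mo = sum(o for o, _ in obs) / len(obs)
    mg = sum(g for _, g in obs) / len(obs)
    demeaned.extend((o - mo, g - mg) for o, g in obs)

fe = ols_slope(demeaned)
print(f"naive: {naive:+.2f}   within student: {fe:+.2f}")
```

Here the naive comparison overstates the online penalty, because less engaged (lower-performing) students choose online sections more often; the within-student estimate recovers the true effect.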
We find evidence that students, on average, have significantly poorer outcomes in online courses in terms of the likelihood of course completion, the likelihood of course failure conditional on completion, and course grade conditional on completion. These estimates are robust across estimation techniques, student subgroups, and course types. Additional tests suggest that these results suffer from relatively little bias and are, if anything, conservative estimates of the association between online course-taking and student performance. Future iterations will include additional analytic strategies, including instrumental variables analysis (candidate instruments include the distance from a student’s home to the college), to tease out causal estimates of the effects of online course-taking on performance.