This study examines the role of information effects in public opinion polling on education policy. Specifically, I capitalize on a randomized "split-half" experiment embedded in the Education Next–Program on Education Policy and Governance (PEPG) Survey to examine whether providing participants with accurate information about current levels of education funding alters public opinion on whether spending should increase. The polling firm Knowledge Networks administered the survey in the spring of 2011, asking a nationally representative sample of adults (n=2,625) questions on a variety of education-related topics. Participants were recruited by phone, and the surveys themselves were administered online in English and Spanish; Knowledge Networks provided internet access to participants as needed. The firm also oversampled subgroups of interest, including public school teachers, parents of school-aged children, and "elites" holding at least a B.A. and falling in the top 10% of their state's income distribution (n=350 for each subgroup).
I find that providing an estimate of current per-pupil spending in the survey question nearly halves the odds that a respondent will choose a response more supportive of higher spending. This shift does not appear to vary by whether the respondent is a teacher, homeowner, or elite. Interestingly, the effect of including the local spending estimate is larger for parents of school-aged children than for respondents without children enrolled in school. Parents in the control group are more supportive of spending than non-parents in the control group, but providing the information nearly closes the gap between parents' and non-parents' support for spending.
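To make the "odds cut almost in half" finding concrete: it corresponds to an odds ratio of roughly 0.5 between the informed (treatment) and uninformed (control) halves of the sample. The sketch below uses simulated data, not the survey's, and the baseline support rate and sample split are assumptions chosen only for illustration; it shows how such an odds ratio is recovered from a simple 2×2 table of treatment by support.

```python
import numpy as np

# Hypothetical illustration (simulated data, NOT the PEPG survey's):
# a binary outcome "supports more spending", randomized exposure to a
# per-pupil spending estimate, and a true treatment odds ratio of 0.5.
rng = np.random.default_rng(0)
n = 2000
treated = rng.integers(0, 2, n)  # 1 = shown the spending estimate

# Assumed baseline: ~63% support in control; treatment halves the odds.
base_odds = 0.63 / 0.37
odds = np.where(treated == 1, base_odds * 0.5, base_odds)
support = rng.random(n) < odds / (1 + odds)

# Estimate the odds ratio directly from the 2x2 table.
a = support[treated == 1].sum()          # treated, supports
b = (treated == 1).sum() - a             # treated, does not support
c = support[treated == 0].sum()          # control, supports
d = (treated == 0).sum() - c             # control, does not support
odds_ratio = (a / b) / (c / d)
print(round(odds_ratio, 2))  # close to 0.5 in expectation
```

In the actual analysis such an estimate would come from a logistic (or ordered logistic) regression with covariates, but the 2×2 table conveys the magnitude of the shift.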
I discuss and address potential threats to construct validity, such as the possibility that the effect results from respondent priming rather than information effects (Krosnick & Miller, 1996), as well as issues of generalizability. I conclude by proposing future research that would test these effects by simulating how new knowledge is acquired and applied in the real world, and I note that opinion shifts may differ if researchers provide respondents with other relevant information, such as how funds are currently allocated. This research may help shift the public dialogue beyond a conversation about simple funding levels to a richer conversation about how to ensure that the public investments we do make translate into real results for children.