Panel Paper: Speaking on the data's behalf: different presentations of the same data lead to different decisions

Saturday, November 4, 2017
Dusable (Hyatt Regency Chicago)


Jesse Chandler, Mariel Finucane, Ignacio Martinez, Alexandra Resch and Jeffrey Terziev, Mathematica Policy Research


Measurement and data collection are the foundation of good decision making, but data do not speak for themselves. Instead, they must be translated by researchers and practitioners into actionable information. For example, a data-driven school district might consult research on the impact of a new product before deciding whether it is likely to work for its students. It may even collect its own data in a pilot before deciding whether to adopt the product district-wide. The district will then make decisions based on information extracted from these data, including descriptive summaries and inferential statistical tests that help determine whether observed differences are meaningful.

Unfortunately, the same data can lead to different decisions depending on which inferential test is used and how the results are presented. In this paper, we assess how presentation influences the decisions people make by showing them the exact same data described either with a traditional frequentist test of the null hypothesis (the default frequentist approach) and an associated confidence interval, or with an alternative Bayesian test that describes the study findings in probabilistic terms.
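
To make the two presentations concrete, here is a minimal sketch (in Python, using scipy) of how a single invented pilot dataset could be summarized both ways. The scores, sample sizes, and the flat-prior Bayesian model are all illustrative assumptions, not the materials used in the experiment.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented pilot data: student outcomes under the current product vs. a new one.
current = rng.normal(70, 10, size=40)
new = rng.normal(74, 10, size=40)

# Default frequentist presentation: null hypothesis test and 95% confidence interval.
t, p = stats.ttest_ind(new, current)
diff = new.mean() - current.mean()
se = np.sqrt(new.var(ddof=1) / len(new) + current.var(ddof=1) / len(current))
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.1f}, t = {t:.2f}, p = {p:.3f}, 95% CI = ({lo:.1f}, {hi:.1f})")

# Probabilistic (Bayesian) presentation: with a flat prior and a normal
# likelihood, the posterior for the mean difference is approximately
# Normal(diff, se**2), so the finding reduces to a single probability.
p_better = 1 - stats.norm.cdf(0, loc=diff, scale=se)
print(f"P(new product is better) = {p_better:.0%}")
```

The same pilot thus yields two summaries of the same evidence: a significance verdict with an interval, and a direct statement of the chance that the new product is the better choice.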

There are many differences between the default frequentist approach and Bayesian methods. One underappreciated difference is that the default frequentist approach is inherently conservative: (often impractically) large sample sizes are needed before it can provide clear guidance on which alternative should be preferred. More often, one outcome will look the most promising yet not differ statistically from the others. While scientists can wait for evidence to accumulate before definitively assessing an idea, and have good reasons to favor the parsimony of the null hypothesis, practitioners must make the best decision they can from the available information and cannot afford to be indifferent among the alternatives at hand. For this reason, probabilistic statements may be preferable for some policy decisions.
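
A hypothetical illustration of this conservatism, under assumed values for the true effect and the noise (none of which come from the paper): even when a pilot happens to observe a genuine advantage exactly, the default test stays inconclusive at modest sample sizes while the probabilistic summary already leans clearly toward the new option.

```python
import numpy as np
from scipy import stats

# Assumed (invented) values: a small true advantage of 3 points, with an
# outcome standard deviation of 10 points in each arm.
true_effect, sd = 3.0, 10.0

for n in (20, 50, 100, 400):
    se = sd * np.sqrt(2 / n)            # standard error of the mean difference
    z = true_effect / se                # z-score if the pilot observes the true effect exactly
    p_two_sided = 2 * stats.norm.sf(z)  # default frequentist verdict
    p_better = stats.norm.cdf(z)        # posterior P(new is better), flat prior
    print(f"n per arm = {n:>3}: p = {p_two_sided:.3f}, P(new is better) = {p_better:.0%}")
```

In this invented setup, 20 students per arm gives p of about 0.34 ("no significant difference") alongside roughly an 83% posterior probability that the new option is better, and the test does not reach conventional significance until around 100 students per arm.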

We provide the first evidence we are aware of that different presentations of the same data lead to different policy decisions. In our initial experiment (conducted on a convenience sample of adult Americans), people were more likely to switch to a new (and superior) educational technology when they saw a probabilistic interpretation of the data than when they saw the more conventional frequentist interpretation. Probabilistic information also increased decision confidence among those who wished to switch, suggesting that the greater willingness to recommend change reflects a different interpretation of the evidence rather than a lowered evidentiary threshold. Presenting graphs to supplement the text increased willingness to adopt the intervention, with no corresponding change in confidence. We then examine how these methods influence the interpretation of evidence of varying degrees of strength.

Understanding how information presentation influences policy choice is important both to practitioners and to those who present the information, because the choice of format is not neutral. This paper is not intended to be the final say on whether one form of analysis is “better” than another, but rather to start a discussion about the role of information presentation in supporting good measurement-based decision making.