Panel Paper: Beyond 'Treatment vs. Control': How Bayesian Design Makes Factorial Experiments Feasible in Education Research

Friday, July 20, 2018
Building 3, Room 210 (ITAM)

*Names in bold indicate Presenter

Steven Glazerman, Mathematica Policy Research


Background: Researchers often wish to test a large set of related interventions or implementation approaches. A factorial experiment accomplishes this by examining not only the basic treatment-control comparison but also the effects of multiple implementation “factors,” such as dosage (e.g., whether to use new lesson plans for 30 or 60 minutes per day) and delivery strategy (e.g., whether to deliver lessons using classroom teachers or math specialists), as well as the interactions between factor levels (e.g., whether math specialists are more effective in a longer teaching session). Traditional methods may require infeasibly large sample sizes to perform complex factorial experiments.
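The treatment arms of such a design are simply the combinations of factor levels. A minimal sketch, using the two illustrative factors above (the factor names and levels are examples, not the study's actual design):

```python
# Enumerate the treatment arms of a small factorial design.
# Factor names and levels are illustrative, not taken from the study.
from itertools import product

factors = {
    "dosage_minutes": [30, 60],
    "deliverer": ["classroom teacher", "math specialist"],
}

# Each treatment arm is one combination of factor levels;
# the control condition would be listed separately.
arms = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(len(arms))  # 4 arms = 2 dosages x 2 delivery strategies
```

Adding factors or levels multiplies the number of arms, which is why sample-size requirements grow so quickly under classical cell-by-cell comparisons.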

Objectives: We present a Bayesian approach to factorial design that substantially increases the power of complex experiments with many factors and factor levels while correcting for multiple comparisons.

Research Design: Using an experiment we conducted for the U.S. Department of Education as a motivating example, we perform power calculations for both classical and Bayesian methods. We repeatedly simulate factorial experiments across a range of sample sizes and numbers of treatment arms to estimate the minimum detectable effect (MDE) for each combination.
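The intuition behind the power gain can be illustrated with a toy simulation. The sketch below is not the paper's actual procedure: it assumes a 72-arm design with hypothetical values for the per-arm sample size, the spread of true arm effects, and the outcome noise, and it stands in for the full Bayesian model with a simple empirical-Bayes shrinkage (partial pooling) of arm means toward the grand mean:

```python
# Toy simulation: compare the error of unpooled ("classical") arm-effect
# estimates with partially pooled ("Bayesian" shrinkage) estimates in a
# many-arm experiment. All numeric settings are illustrative assumptions.
import random
import statistics

random.seed(0)
K = 72            # number of treatment arms, as in the motivating example
n_per_arm = 20    # observations per arm (assumed)
tau = 0.10        # sd of true arm effects, effect-size units (assumed)
sigma = 1.0       # outcome noise sd (assumed)

true_effects = [random.gauss(0, tau) for _ in range(K)]

def rmse(est):
    return statistics.mean((e - t) ** 2 for e, t in zip(est, true_effects)) ** 0.5

def simulate_once():
    # "Classical" estimate: each arm's raw sample mean.
    raw = [statistics.mean(random.gauss(te, sigma) for _ in range(n_per_arm))
           for te in true_effects]
    se2 = sigma ** 2 / n_per_arm
    # Partial pooling: shrink each arm mean toward the grand mean by the
    # empirical-Bayes factor tau^2 / (tau^2 + se^2).
    grand = statistics.mean(raw)
    shrink = tau ** 2 / (tau ** 2 + se2)
    pooled = [grand + shrink * (r - grand) for r in raw]
    return rmse(raw), rmse(pooled)

results = [simulate_once() for _ in range(50)]
raw_rmse, pooled_rmse = (statistics.mean(x) for x in zip(*results))
print(raw_rmse, pooled_rmse)  # pooled RMSE is smaller: pooling buys precision
```

Because the partially pooled estimates borrow strength across arms, they achieve a given precision with fewer observations per arm, which is the mechanism behind the lower MDEs reported below.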

Results: The Bayesian approach yields substantially lower MDEs than classical methods for complex factorial experiments. For example, to test 72 treatment arms at a given MDE, a classical experiment requires nearly twice the sample size of a Bayesian experiment.

Conclusions: Bayesian methods are a valuable tool for researchers interested in studying complex interventions. They make factorial experiments with many treatment arms vastly more feasible.

Full Paper: