
Panel: Power Analysis in Designing Cluster Randomized Trials to Detect the Main, Moderation, and Mediation Effects
(Tools of Analysis: Methods, Data, Informatics and Research Design)

Saturday, November 14, 2015: 1:45 PM-3:15 PM
Pearson I (Hyatt Regency Miami)


Panel Organizer: Nianbo Dong, University of Missouri
Panel Chair: Laura Peck, Abt Associates
Discussant: Spyros Konstantopoulos, Michigan State University


Statistical Power for Short, Comparative Interrupted Time Series Designs with Aggregated Data
Andrew P. Swanlund (American Institutes for Research), Kelly Hallberg (University of Chicago), and Ryan T. Williams (American Institutes for Research)

Power Analysis to Detect the Continuous Moderator Effects in Cluster Randomized Trials
Nianbo Dong (University of Missouri), Jessaca Spybrook (Western Michigan University), and Ben Kelcey (University of Cincinnati)

Statistical Power for Designing Studies of Cluster-Level Mediation
Ben Kelcey and Zuchao Shen (University of Cincinnati)

Cluster randomized trials (CRTs) are widely used in program evaluation to generate evidence for policy making. Policy makers and researchers are interested not only in a program’s main effects (“what works”) but also in its moderation effects (“for whom and under what conditions it works”) and its mediation effects (“the mechanism through which it works”). A critical consideration in designing CRTs to detect practically or theoretically meaningful effects is statistical power analysis: without it, researchers cannot tell whether a failure to detect significant effects reflects program failure or simply too small a sample. Extensive research has addressed power analysis for detecting main effects in CRTs (e.g., Bloom, 1995, 2005, 2006; Hedges & Rhoads, 2010; Konstantopoulos, 2008a, 2008b, 2009, 2010, 2012; Murray, 1998; Raudenbush, 1997; Raudenbush & Liu, 2000; Raudenbush, Martinez, & Spybrook, 2007; Schochet, 2008), and excellent computer programs exist for conducting these analyses, including Optimal Design (Raudenbush, Spybrook, Congdon, Liu, & Martinez, 2011), CRT-Power (Borenstein & Hedges, 2012), and PowerUp! (Dong & Maynard, 2013). By contrast, research on power analysis to detect moderator and mediator effects in CRTs is very limited, and no computational tools are available for conducting such analyses.

The purpose of this panel is to present recent advances in power analysis for CRTs. In the first paper, Jessaca Spybrook, Ben Kelcey, and Ran Shi compare CRTs funded by the Institute of Education Sciences (IES) in its early years (2002-2004) with those funded a decade later (2011-2013) to determine whether there has been a shift in the quality of the design and power analyses of IES-funded CRTs. In the second paper, Nianbo Dong, Jessaca Spybrook, and Ben Kelcey present statistical formulations for calculating power, the minimum detectable effect size and its confidence interval, and the sample sizes required to detect continuous moderator effects in CRTs. Finally, Ben Kelcey and Zuchao Shen present a framework and formulas to help researchers design multilevel mediation studies: they derive formulas for assessing the power of a design, describe the complex and atypical behavior of power in studies of mediation, and delineate the conditions under which power is maximized for a given set of parameter values.

Together, these papers offer researchers an overview of the quality and precision of the CRTs funded by IES over the past decade, along with tools and practical guidance for power analysis when planning CRTs to detect moderator and mediator effects. The panel will help researchers design CRTs with adequate power to produce rigorous evidence about for whom and under what conditions programs work, and about the mechanisms through which they work.
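To make the main-effect baseline these papers build on concrete, consider the standard two-level formulation from the literature cited above (e.g., Raudenbush, 1997; Bloom, 1995); the worked example below is a generic illustration, not a result from the panel papers. For a balanced two-level CRT with J clusters of n individuals each, half of the clusters assigned to treatment, intraclass correlation ρ, and no covariates, the minimum detectable effect size is approximately

$$\mathrm{MDES} \approx M_{J-2}\sqrt{\frac{4\left(\rho + (1-\rho)/n\right)}{J}}, \qquad M_{J-2} = t_{\alpha/2} + t_{1-\beta},$$

where the multiplier M is roughly 2.8 for a two-tailed test at α = .05 with 80% power and moderate degrees of freedom. The same calculation can be scripted directly; the minimal Python sketch below (the function name crt_power and the example design are illustrative and are not drawn from PowerUp!, Optimal Design, or CRT-Power) computes the power of a candidate design from this formula:

    # Power for the main effect in a balanced two-level CRT
    # (standard formulation; see, e.g., Raudenbush, 1997; Bloom, 1995).
    from scipy.stats import nct, t

    def crt_power(delta, J, n, rho, alpha=0.05):
        """Two-tailed power to detect a standardized main effect delta in a
        balanced CRT with J clusters of n people and intraclass correlation rho."""
        se = (4 * (rho + (1 - rho) / n) / J) ** 0.5  # SE of the effect size estimate
        df = J - 2                                   # df for the cluster-level test
        crit = t.ppf(1 - alpha / 2, df)              # two-tailed critical value
        ncp = delta / se                             # noncentrality parameter
        # Power = P(|T| > crit) under the noncentral t distribution
        return nct.sf(crit, df, ncp) + nct.cdf(-crit, df, ncp)

    # Hypothetical design: 60 clusters of 20, rho = 0.15, target effect size 0.25
    print(round(crt_power(0.25, J=60, n=20, rho=0.15), 2))

Because ρ enters the standard error through the cluster-level term, adding clusters typically raises power far more than adding individuals within clusters. The moderator and mediator analyses discussed in the second and third papers follow the same logic but involve different standard errors and degrees of freedom, which is precisely why dedicated formulations and tools are needed.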