Panel: Bayesian Methods: Innovative Applications in Research Design, Program Evaluation, and Policy Analysis
(Methods and Tools of Analysis)

Friday, November 9, 2018: 3:15 PM-4:45 PM
Marriott Balcony A - Mezz Level (Marriott Wardman Park)

*Names in bold indicate Presenter

Panel Chair:  Jenessa Malin, Administration for Children and Families
Discussant:  Timothy Day, Centers for Medicare & Medicaid Services

Streams: Using a Bayesian Adaptive Design in a Study of Text Messaging Interventions
Ankita Patnaik, Diane Paulsell and Mariel Finucane, Mathematica Policy Research

The Right Tool for the Job: A Bayesian Meta-Regression of Employment and Training Studies
Lauren Vollmer, Emily Sama-Miller and Diane Paulsell, Mathematica Policy Research

Making Bayesian Analyses Accessible through Visualization: Case Study: Meta-Evaluation of the Health Care Innovation Awards
Anupa Bir1, Nikki Freeman1, Rob Chew1, Kevin Smith1, Michael Wenger1, Marcia Underwood1, Martijn van Hasselt2 and Timothy Day3, (1)RTI International, Inc., (2)University of North Carolina, Greensboro, (3)Centers for Medicare & Medicaid Services

There is growing recognition that p-values can be misused, misinterpreted, or relied on to support misinformed decisions. Bayesian methods have emerged as a leading alternative to p-value-based inference and offer a number of advantages over traditional frequentist methods. This panel includes four papers on innovative applications of Bayesian methods in a variety of substantive policy areas, including human services, health, and education. Each paper highlights particular advantages of the Bayesian framework and describes how it has been applied to questions of interest to policymakers. The discussant is Renee Mentnech, Director of the Research and Rapid Cycle Evaluation Group at the Center for Medicare and Medicaid Innovation.

The first paper describes a Bayesian adaptive design implemented in the context of a Healthy Marriage and Relationship Education program. Participants are randomly assigned to one of three behaviorally informed text messaging interventions designed to encourage program attendance, or to the control group. The design adapts to accumulating evidence, allowing the researcher to collect a small amount of data, review trends, and re-allocate a larger proportion of participants to the conditions that are more promising. This design increases statistical power and allows the researcher to test a larger number of interventions.
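The re-allocation idea can be illustrated with a minimal Thompson-sampling sketch. This is not the authors' actual design; the Beta-Bernoulli model, uniform priors, and the success/failure counts below are illustrative assumptions. Each incoming participant is assigned to the arm (intervention condition) with the highest draw from its current posterior, so more promising conditions accumulate a larger share of participants as evidence accrues.

```python
import random

def thompson_allocate(successes, failures, n_next):
    """Allocate the next batch of participants across arms by
    Thompson sampling: for each participant, draw one sample per arm
    from its Beta(1 + successes, 1 + failures) posterior and assign
    the participant to the arm with the highest draw."""
    counts = [0] * len(successes)
    for _ in range(n_next):
        draws = [random.betavariate(1 + s, 1 + f)
                 for s, f in zip(successes, failures)]
        counts[draws.index(max(draws))] += 1
    return counts

# Hypothetical interim data: arm 0 looks far more promising,
# so it should receive most of the next 100 participants.
random.seed(0)
print(thompson_allocate([50, 5, 5], [5, 50, 50], 100))
```

In a full adaptive design the posterior would be re-estimated at each interim look; this sketch only shows the allocation step given interim counts.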

The second paper presents a Bayesian meta-regression that synthesizes data from impact evaluations of employment and training interventions for low-income adults. By using a Bayesian approach, the authors are able to “borrow strength” from precisely estimated relationships to inform less precisely estimated ones. This allows them, for example, to unpack whether including a particular strategy in a larger package of employment services improves a specific outcome. Further, the approach yields the probability that a result exceeds a meaningful threshold, conveying both the strength and the magnitude of the finding in intuitive language.
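"Borrowing strength" can be sketched with simple normal-normal partial pooling. This is a stand-in for the paper's meta-regression, not its actual model; the study estimates, standard errors, and between-study scale `tau` below are made-up illustrative values. Each study's estimate is shrunk toward the precision-weighted grand mean, and noisier studies are shrunk more, so they borrow strength from the precisely estimated ones.

```python
def partial_pool(estimates, std_errors, tau):
    """Shrink each study's estimate toward the precision-weighted
    grand mean. The shrinkage weight tau^2 / (se^2 + tau^2) keeps
    precise studies (small se) near their own estimates while
    pulling noisy studies (large se) toward the pooled mean."""
    # Precision-weighted grand mean across studies
    weights = [1.0 / (se**2 + tau**2) for se in std_errors]
    mu = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    # Shrunken (partially pooled) study-level estimates
    return [mu + (tau**2 / (se**2 + tau**2)) * (y - mu)
            for y, se in zip(estimates, std_errors)]

# Hypothetical: a precise study (se = 0.1) and a noisy one (se = 5.0)
print(partial_pool([2.0, 10.0], [0.1, 5.0], tau=1.0))
```

The precise study's estimate barely moves, while the noisy study's estimate is pulled most of the way toward the pooled mean.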

The third paper describes a data visualization tool that communicates the results of a Bayesian meta-analysis through an interactive dashboard. The meta-analysis summarizes the results of 108 Health Care Innovation Awards that aimed to deliver better health, improved care, and lower costs. The dashboard presents the results for four outcomes – total cost, hospital admissions, hospital readmissions, and emergency department visits – and allows users to manipulate sliders to answer their own questions (e.g., “What is the probability that costs were reduced by $10 or more? $20 or more?”). The ability to answer multiple questions in a probabilistic framework contrasts with the static point estimates that a frequentist analysis provides.
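Under the hood, such slider queries reduce to tail probabilities over posterior draws: the probability that the effect exceeds a threshold is just the share of draws beyond it. A minimal sketch, where the Gaussian posterior for per-beneficiary cost savings is a made-up stand-in, not an HCIA result:

```python
import random

def prob_exceeds(posterior_draws, threshold):
    """Pr(effect >= threshold), estimated as the fraction of
    posterior draws at or beyond the threshold."""
    return sum(d >= threshold for d in posterior_draws) / len(posterior_draws)

# Hypothetical posterior for cost savings in dollars (mean 15, sd 8)
random.seed(1)
draws = [random.gauss(15, 8) for _ in range(10_000)]
print(prob_exceeds(draws, 10))  # "savings of $10 or more?"
print(prob_exceeds(draws, 20))  # "savings of $20 or more?"
```

A dashboard slider simply re-evaluates this fraction at whatever threshold the user selects, which is why the same posterior can answer many questions.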

The final paper highlights that Bayesian methods can incorporate multiple sources of uncertainty, including uncertainty in model selection. Bayesian model averaging combines the candidate models, weighting each by its posterior probability. The author examines the predictive performance of Bayesian model averaging, compared to the best single Bayesian model and the best single frequentist model, in forecasting scores on the PISA and TIMSS (international assessments of students’ knowledge). This application provides critical information to policymakers about progress toward education goals, such as reducing the global gender gap in literacy and numeracy.
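The averaging step can be sketched as follows. This is an illustrative sketch, not the author's method: the per-model forecasts and log marginal likelihoods below are invented, and equal prior model weights are assumed, so posterior model probabilities are proportional to the exponentiated log marginal likelihoods.

```python
import math

def bma_forecast(forecasts, log_marginal_liks):
    """Combine per-model forecasts with weights proportional to
    exp(log marginal likelihood) -- the posterior model probabilities
    under equal prior weights. Returns (averaged forecast, weights)."""
    m = max(log_marginal_liks)          # subtract max for numerical stability
    ws = [math.exp(l - m) for l in log_marginal_liks]
    total = sum(ws)
    ws = [w / total for w in ws]        # normalize to posterior probabilities
    return sum(w * f for w, f in zip(ws, forecasts)), ws

# Hypothetical: two well-supported models and one poorly supported one
forecast, weights = bma_forecast([500.0, 520.0, 480.0], [-10.0, -10.0, -30.0])
print(forecast, weights)
```

The poorly supported model receives negligible weight, so the averaged forecast lands between the two well-supported models rather than being dragged toward an implausible one.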
