Panel: Analytic Approaches for Identifying Effective Strategies for Improving Outcomes in Social Programs
(Methods and Tools of Analysis)

Friday, November 9, 2018: 3:15 PM-4:45 PM
Marriott Balcony B - Mezz Level (Marriott Wardman Park)


Panel Chair: Nicole Constance, U.S. Department of Health and Human Services
Discussant: Nirav Mehta, University of Western Ontario


Best Practices for Detecting Treatment Effect Heterogeneity in Multisite Trials
Luke Miratrix, Masha Bertling, Catherine Armstrong and Ben Weidmann, Harvard University



Lessons from New York City’s Small Schools of Choice about High-School Features That Promote Graduation for Disadvantaged Students
Rebecca Unterman, Howard Bloom and Pei Zhu, MDRC; Sean Reardon, Stanford University



Probing Impact Heterogeneity Using Machine Learning Methods in the Evaluation of Early College High Schools in North Carolina
Fatih Unlu, RAND Corporation; Julie Edmunds and Eric M. Grebing, University of North Carolina at Greensboro; Elizabeth Glennie, RTI International, Inc.


In the context of experimental evaluation, there are two broad methodological approaches to learning about what makes programs more or less effective: analytic approaches and experimental design-based approaches. Analytic approaches use incidental variation in program features to infer the effects of varying those features. An advantage of this approach is that a wide range of variation can often be studied; a disadvantage is that the variation may be non-random, so analyses can identify associations between strategies and effect sizes but cannot attribute causality. Experimental design-based approaches rely on evaluations that deliberately build variation in program features into the study design in order to test their relative effects. An advantage is that such designs often allow causal attribution; a disadvantage is that only limited variation can typically be accommodated in the design.


This panel focuses on analytic methods that investigate how variation in program design and implementation across sites in a multi-site randomized controlled trial might be linked to variation in impacts across those sites. When random assignment is conducted at the individual level within program sites, these multi-site experiments produce causal estimates of the average impact and unbiased estimates of the variance of impacts across sites.
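
As a rough illustration of these estimands (a hypothetical sketch in Python, not code from any of the panel papers), a multi-site trial can be analyzed with a mixed model in which the fixed coefficient on treatment estimates the average impact and the random-slope variance estimates the cross-site variance of impacts:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulate a multi-site trial with individual-level random assignment
    # within sites and site-specific impacts (all values are hypothetical).
    rng = np.random.default_rng(0)
    n_sites, n_per_site = 40, 100
    site = np.repeat(np.arange(n_sites), n_per_site)
    treat = rng.integers(0, 2, size=site.size)           # within-site randomization
    site_impact = rng.normal(0.20, 0.10, size=n_sites)   # mean impact 0.20, sd 0.10
    site_mean = rng.normal(0.0, 0.30, size=n_sites)      # site intercepts
    y = site_mean[site] + site_impact[site] * treat + rng.normal(size=site.size)
    df = pd.DataFrame({"y": y, "treat": treat, "site": site})

    # Random intercept and random treatment slope by site: the fixed effect
    # on `treat` is the average impact; the slope variance component is the
    # cross-site variance of impacts.
    fit = smf.mixedlm("y ~ treat", df, groups=df["site"], re_formula="~treat").fit()
    print(fit.summary())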


The first paper is a theoretical methods paper addressing several approaches to detecting variation in impacts in multi-site randomized controlled trials. The paper examines two primary methodological research questions: (1) Which methods are most powerful for detecting cross-site variation, and why? and (2) How can one best exploit covariates that are modestly predictive of variation to improve the power of an overall test?
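
For question (1), one conventional benchmark is a Cochran-style Q test applied to site-level impact estimates (a minimal sketch with hypothetical numbers; the paper compares a broader set of approaches):

    import numpy as np
    from scipy import stats

    def q_test(impacts, ses):
        """Chi-square test of the null that all site impacts are equal."""
        impacts, ses = np.asarray(impacts), np.asarray(ses)
        w = 1.0 / ses**2                                # precision weights
        pooled = np.sum(w * impacts) / np.sum(w)        # precision-weighted mean
        q = np.sum(w * (impacts - pooled) ** 2)         # Cochran's Q statistic
        return q, stats.chi2.sf(q, len(impacts) - 1)

    q, p = q_test(impacts=[0.15, 0.32, 0.05, 0.28], ses=[0.08, 0.10, 0.07, 0.09])
    print(f"Q = {q:.2f}, p = {p:.3f}")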


The second paper develops an empirical specification approach to identify the program characteristics that are most strongly associated with the magnitude of impacts of health-sector-focused job training programs. The paper analyzes data from the Health Professions Opportunity Grants (HPOG) Impact Evaluation, which includes 42 different HPOG programs, each offering a distinct set of training options and services.
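
A minimal sketch of one analytic strategy consistent with this description (assumed for illustration, not taken from the paper) is a precision-weighted regression of site-level impact estimates on program characteristics; as noted above, such estimates are associational rather than causal:

    import numpy as np
    import statsmodels.api as sm

    impacts = np.array([0.10, 0.25, 0.05, 0.30, 0.18])  # hypothetical site impacts
    ses = np.array([0.07, 0.09, 0.06, 0.10, 0.08])      # their standard errors
    feature = np.array([0, 1, 0, 1, 1])                 # hypothetical program feature

    # Weight each site by the precision of its impact estimate.
    wls = sm.WLS(impacts, sm.add_constant(feature), weights=1.0 / ses**2).fit()
    print(wls.params)  # slope: association between the feature and impact size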


The third paper analyzes variation in impacts of New York City’s Small Schools of Choice (SSC) to identify high-school features that promote graduation for disadvantaged students. The paper develops a conceptual framework for the mechanisms through which differences in students’ lottery-induced school experiences may affect their academic achievement, and analyzes the data using a multi-site instrumental variables approach.
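
A stylized version of this identification strategy (hypothetical data and variable names, not the paper’s specification) uses randomized lottery offers as an instrument for actually enrolling in an SSC, absorbing site fixed effects by demeaning within lotteries:

    import numpy as np

    rng = np.random.default_rng(1)
    n_sites, n = 20, 4000
    site = rng.integers(0, n_sites, size=n)               # lottery (site) ids
    offer = rng.integers(0, 2, size=n)                    # randomized lottery offer
    enroll = offer * (rng.random(n) < 0.8)                # imperfect take-up
    y = 0.3 * enroll + rng.normal(size=n)                 # true enrollment effect: 0.3

    def demean_by(x, g):
        # Subtract within-group means (absorbs site fixed effects).
        means = np.bincount(g, weights=x) / np.bincount(g)
        return x - means[g]

    y_t, d_t, z_t = (demean_by(v, site) for v in (y, enroll, offer))
    print((z_t @ y_t) / (z_t @ d_t))  # 2SLS with one instrument; approx. 0.3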


The fourth paper analyzes heterogeneity in the impacts of North Carolina's Early College High School Model using recently developed machine learning methods. The paper focuses on heterogeneity based on individual baseline characteristics, building on prior evidence of larger impacts for students who are disadvantaged and underprepared for high school. The new methods provide a more flexible framework that searches for heterogeneity over data-driven, high-dimensional functions of baseline covariates, potentially revealing impact heterogeneity that conventional subgroup analyses would miss.
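
In the same spirit, a generic data-driven heterogeneity search can be sketched with a simple T-learner (a hedged illustration; the paper uses recently developed machine learning estimators, not necessarily this one):

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(2)
    n, p = 5000, 10
    X = rng.normal(size=(n, p))            # baseline covariates
    T = rng.integers(0, 2, size=n)         # randomized treatment
    tau = 0.5 * (X[:, 0] < 0)              # larger impact for "underprepared" students
    y = X[:, 1] + tau * T + rng.normal(size=n)

    # Fit separate outcome models by arm; differencing the predictions yields
    # individual-level impact estimates over flexible functions of covariates.
    m1 = GradientBoostingRegressor().fit(X[T == 1], y[T == 1])
    m0 = GradientBoostingRegressor().fit(X[T == 0], y[T == 0])
    tau_hat = m1.predict(X) - m0.predict(X)
    print(tau_hat[X[:, 0] < 0].mean(), tau_hat[X[:, 0] >= 0].mean())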


Together, these papers and the accompanying discussion form a coherent panel that should be of keen interest to APPAM conference participants, addressing the cutting-edge topic of analyzing impact variation in multi-site evaluations.


