Beyond Prediction: Machine Learning and Causal Inference in Public Policy Research
(Tools of Analysis: Methods, Data, Informatics and Research Design)
The increasing availability of rich administrative data has helped researchers reduce the cost of data collection for randomized trials. Instead of collecting original data to measure outcomes, researchers can link datasets from government agencies. Two papers on this panel explore how machine learning can improve record linkage in administrative data. In the absence of a common unique identifier, researchers match datasets on demographic characteristics such as name and date of birth, but matching errors are inevitable. To minimize them, researchers often require “exact matches” (the same name and date of birth across datasets) so that speculative matches do not introduce errors into the dataset used to evaluate the intervention.
The first paper provides evidence that this conservative approach to linking administrative data in the absence of a common identifier is likely to produce attenuation bias in the estimation of effects, and that this bias can be substantial in many instances. The paper characterizes the nature of the bias and demonstrates how machine learning methods for record matching can minimize it. The second paper provides a detailed description of how machine learning-based record linkage can be done in practice. The approach minimizes false negatives and false positives by learning the matching rules that a well-trained researcher would apply.
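To illustrate the contrast the panel draws, here is a minimal, hypothetical sketch (not taken from either paper) of why exact matching drops true links that a similarity-based matcher recovers. The toy records, field names, and the 0.9 threshold are all illustrative assumptions; real record-linkage systems learn such thresholds and field weights from training data rather than fixing them by hand.

```python
from difflib import SequenceMatcher

# Hypothetical toy records: (name, date_of_birth). The second file
# contains a spelling variant, so exact matching misses one true link.
file_a = [("maria gonzalez", "1984-03-12"), ("john smith", "1990-07-01")]
file_b = [("maria gonzales", "1984-03-12"), ("john smith", "1990-07-01")]

def exact_match(rec_a, rec_b):
    # The "conservative approach": all fields must agree exactly.
    return rec_a == rec_b

def similarity_score(rec_a, rec_b):
    # Average string similarity across the fields (name, DOB).
    return sum(SequenceMatcher(None, x, y).ratio()
               for x, y in zip(rec_a, rec_b)) / len(rec_a)

def fuzzy_match(rec_a, rec_b, threshold=0.9):
    # Accept near-identical records; the threshold is an assumption here,
    # but would be learned from labeled pairs in an ML-based linker.
    return similarity_score(rec_a, rec_b) >= threshold

exact_links = [(a, b) for a in file_a for b in file_b if exact_match(a, b)]
fuzzy_links = [(a, b) for a in file_a for b in file_b if fuzzy_match(a, b)]

print(len(exact_links))  # 1: exact matching finds only one of two true links
print(len(fuzzy_links))  # 2: similarity-based matching recovers both
```

The unmatched record under exact matching is exactly the kind of dropped true link that produces the attenuation bias the first paper characterizes.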
The remaining two papers leverage machine learning to improve causal estimation. The third paper is concerned with the problem of drawing causal inferences from small-scale case studies, which is extraordinarily common in policy research. The paper presents a new paradigm for implementing synthetic control methods for case studies, arguing that the quality of a counterfactual unit for a given treated unit is testable under mild assumptions already made by implementers of these methods. Using simulated data, the paper shows that the “Super Learner” ensemble prediction algorithm can outperform traditional estimators in which treated and comparison units are closely matched on the basis of pre-treatment trends.
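The core mechanics can be sketched with a minimal, hypothetical example (not the paper's method or data): choose convex weights on control units to reproduce the treated unit's pre-treatment outcomes, then hold out one pre-treatment period to test counterfactual quality before trusting the synthetic unit as a comparison. The data and the grid-search fit below are illustrative assumptions.

```python
# Hypothetical pre-treatment outcomes (4 periods) for one treated unit
# and two control units.
treated  = [10.0, 11.0, 12.0, 13.0]
control1 = [ 8.0,  9.0, 10.0, 11.0]
control2 = [14.0, 15.0, 16.0, 17.0]

def synthetic(w, t):
    # Convex combination of the two controls at period t.
    return w * control1[t] + (1 - w) * control2[t]

# Fit the weight on the first 3 periods by least-squares grid search.
best_w = min((w / 100 for w in range(101)),
             key=lambda w: sum((treated[t] - synthetic(w, t)) ** 2
                               for t in range(3)))

# Held-out pre-period error gauges counterfactual quality: a large error
# here would be evidence against using this synthetic unit at all.
holdout_error = abs(treated[3] - synthetic(best_w, 3))
print(best_w, round(holdout_error, 3))
```

The held-out fit is the testable step the third paper emphasizes; in its framework, a flexible learner such as Super Learner replaces the simple weighted average used here.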
The final paper applies machine learning techniques to estimate treatment heterogeneity within the context of an innovative randomized experiment that tests the efficacy of behavioral science in improving individuals’ compliance with summonses issued by police officers. The paper demonstrates that different nudges can reduce failures to appear in court. But the key question for policymakers is not what works, on average, but what works for whom? Leveraging machine learning, the authors characterize the extent of treatment heterogeneity and show how it can inform the design of “personalized nudges” that improve policy efficiency.
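The logic of targeting can be sketched with a toy, entirely hypothetical example (not the experiment's data): estimate the nudge's effect on court appearance within subgroups, then direct the nudge where its estimated effect is largest. The subgroup variable and outcomes below are invented for illustration; the paper's ML methods learn such subgroups from many covariates rather than taking one as given.

```python
from statistics import mean

# Hypothetical records: (subgroup, treated_with_nudge, appeared_in_court).
records = [
    ("young", 1, 1), ("young", 1, 1), ("young", 0, 0), ("young", 0, 0),
    ("older", 1, 1), ("older", 1, 0), ("older", 0, 1), ("older", 0, 0),
]

def subgroup_effect(group):
    # Difference in appearance rates between treated and control
    # within the subgroup: a simple conditional average treatment effect.
    treated = [y for g, d, y in records if g == group and d == 1]
    control = [y for g, d, y in records if g == group and d == 0]
    return mean(treated) - mean(control)

effects = {g: subgroup_effect(g) for g in ("young", "older")}
# A "personalized nudge" rule: prioritize the subgroup where the
# estimated effect is largest.
target = max(effects, key=effects.get)
print(effects, target)  # the nudge helps "young" here, not "older"
```

In this toy data the nudge raises appearance rates only in one subgroup, which is exactly the kind of heterogeneity that makes targeted delivery more efficient than uniform rollout.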