Quantifying the Policy Reliability of Competing Non-Experimental Methods for Measuring the Impacts of Social Programs
Two decades ago, Bell and Orr (1995) introduced the only known method for formally quantifying the policy reliability of QED findings against an experimental benchmark. Their method, based on Bayesian statistical theory, computes a "maximum risk function" showing the probability of an incorrect policy decision across different magnitudes of true impact that policymakers would consider sufficient to justify continued or expanded funding for the studied intervention. Bell et al. (1995) applied the method to three QED methods used to measure the impact of job training interventions in the face of selection bias. No other applications are known.
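The core idea of a maximum risk function can be illustrated with a stylized sketch. This is not the Bell-Orr Bayesian formulation; it is an assumed simplification in which a policymaker funds the program when the QED estimate exceeds a threshold, the estimator is normally distributed around the true impact plus an unknown selection bias, and "maximum risk" is taken as the worst-case error probability over candidate bias values. All function names and parameters here are hypothetical.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def decision_error_prob(true_impact, threshold, se, bias=0.0):
    """Probability of a wrong funding decision under a threshold rule.

    Assumes the QED estimate ~ Normal(true_impact + bias, se) and the
    program is funded when the estimate exceeds `threshold`.
    """
    p_fund = 1.0 - normal_cdf((threshold - true_impact - bias) / se)
    # Wrong call: funding when the true impact falls short of the
    # threshold, or failing to fund when it meets the threshold.
    return p_fund if true_impact < threshold else 1.0 - p_fund

def max_risk(threshold, se, candidate_biases, true_impacts):
    """Trace a stylized 'maximum risk function': for each true-impact
    magnitude, the worst-case decision-error probability over the
    candidate selection-bias values."""
    return [
        max(decision_error_prob(d, threshold, se, b) for b in candidate_biases)
        for d in true_impacts
    ]
```

For example, `max_risk(0.0, 1.0, [0.0, 0.5], [-1.0, 1.0])` shows higher worst-case risk for a true impact just below the funding threshold than for one comfortably above it, mirroring the intuition that policy decisions are least reliable near the break-even impact.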
The current paper applies the Bell-Orr methodology to today's large body of RCT replication efforts that use QED methods. It reassesses conclusions regarding QED methods judged by the original authors to provide an adequate substitute for an experiment. For any given paper, the adequate-substitute estimate with the smallest standard error is scrutinized as the single most informative case of claimed success. So too is the QED estimate with the smallest standard error among those the authors judged not to provide an adequate substitute for an experiment, thus ensuring balanced examination of both favorable and unfavorable conclusions on the part of contributors to the literature on methods reliability.
Applying the Bell-Orr criterion of policy reliability to the accumulated set of design replication/within-study comparison results indicates the degree of trust the profession should place in claims from the literature regarding reliance on non-experimental methods when measuring impacts of social programs. The results that emerge differ markedly from what one would conclude by taking the past literature at face value.