Panel Paper: Too Many Promises? Social Impact Bonds and ‘Improved’ Social Interventions

Thursday, November 2, 2017
San Francisco (Hyatt Regency Chicago)

*Names in bold indicate Presenter

Eleanor Carter and Clare Fitzgerald, University of Oxford


Measurement matters. For SIBs, measurement fragmentation across projects has complicated tracking the success of the model overall. This paper articulates the justifications given by various stakeholders for initiating SIBs and audits existing evidence through systematic review.

Building on the momentum of outcomes-oriented reforms in public service delivery, Social Impact Bonds (SIBs) are heralded as fixes to some of the most complex and expensive social problems (Social Finance, 2010; Roberts, 2013). Proponents suggest SIBs deliver measurable improvements in social outcomes by aligning the priorities of government, the social sector, and socially motivated investors around a common goal: effective and efficient improvements in social conditions (Hutchison, 2010; Social Finance, 2010).

This concise description of SIBs has been widely adopted by supporters. We contend, however, that its simplicity conceals a significant divergence between the original theoretical description of SIBs (cf. Mulgan et al., 2010), the rationales different stakeholders give for choosing SIBs, and the actual implementation of SIBs. Three research questions guide this paper: theoretically, what are SIBs for? Through what mechanisms would SIBs work? And do we see evidence of SIBs delivering against such claims?

The literature and qualitative practitioner insights serve to answer the first two questions. Early findings demonstrate a diversity of rationales for pursuing SIBs. Commissioners cite cost-effectiveness, delivery innovation, risk transfer, investment in prevention, pursuit of evidence-based interventions, and alignment of interests across levels of government through co-commissioning (Gustafsson-Wright et al., 2015). For investors, SIBs facilitate ‘blended returns,’ offer metrics for impact, allow mission-aligned investment, and/or open new investible opportunities. Service providers aim to improve interventions, demonstrate impact, and secure more stable, longer-run financial support. The full suite of SIB justifications is then tested and refined through the Delphi method with a leading group of SIB practitioners, policymakers, and academics.

Tensions certainly exist within and across stakeholder groups’ motivations, which suggests that not all SIBs will be driven by the same rationale. Instead, each SIB has its own web of motivations, informed by the pressures and preferences of those involved in its setup. Different SIBs exist for different purposes, making it difficult to answer ‘what are SIBs for?’ singularly. This presents challenges in assessing ‘success’: the answer will depend on the SIB in question, especially regarding the higher-level motivations for pursuing the SIB itself. For these, there exists no standard measure.

To answer the last research question – ‘do SIBs work?’ – we undertake a systematic review of the evidence (the first of its kind), paying attention to the degree to which the emergent evidence base, both grey and academic, supports the different rationales. The lack of a single strategic evaluation metric by which all SIBs can be judged is not necessarily a substantive concern, provided those reviewing the success of SIBs (and their suitability for new policy areas) acknowledge that each SIB will carry distinct expectations. In time, as SIBs and their evaluations become more extensive, it may be possible to develop an overarching evaluation framework that captures the distinct imperatives and intentions of each SIB, enabling comparison of success against more strategic objectives.