Social Impact Bonds: Agree or Disagree?

It was a pleasure to chat with Jeff Liebman, Steve Goldberg, Paul Bernstein, and Cathy Clark about social impact bonds as part of the panel on “How to Scale Impact Through Social Impact Bonds” at last week’s Social Impact Exchange Conference. Before the panel, I also had a chance to summarize McKinsey’s recent report, From Potential to Action: Bringing Social Impact Bonds to the U.S., which can be downloaded here.

I’m afraid we got into the weeds a bit during the panel discussion about what we as social sector actors know regarding proven programs, assessment, and scaling – and what’s still a work in progress. Did we agree? Or disagree?

We know that some program interventions developed by nonprofits have been built, tested and refined over time, while others have not.  Evidence-based programs have been studied and assessed to the best of our current ability, and we are constantly building our social impact assessment skills and tools.

We know that evidence-based programs have core elements – what Jeannie Oakes from the Ford Foundation called the “non-negotiables” during her panel at the conference, “Multiple Pathways to Scaling Impact.”  These core elements vary from program to program.  And it’s possible that we may learn over time that what we thought were core elements weren’t, and that something else really was.  But every program has its core, which should be replicated with as much fidelity as possible.  The rest of the program needs to be adapted and iterated to reflect local nuance and the reality of the community, economy and point in time in which the program is being implemented.

We believe that the process of study, testing and refinement – supported by the traits of curiosity, consistency and diligence – will, over time, help the social sector arrive at the best possible solutions to dynamic, wicked problems.  Social impact assessment is a young field.  It’s only 30 years old, according to Rico Catalano at the University of Washington.  We are still figuring out how to measure results and how to refine programs to be more effective and serve more people.

So when I say, “Let’s use SIBs to scale up proven programs,” I mean we should put our resources into growing, studying and scaling the programs that we have reason to believe are our very best.  We think these programs are worth the investment of scaling because the best tools we have today tell us they are.  We should scale these programs up thoughtfully, adapting and iterating as we replicate so that we learn.  We will learn more about whether and how the programs work – and how to scale programs successfully.

And while we continue to improve these programs, we must keep an open mind.  When innovations come along and our testing tells us these new programs are superior, we will put our resources into growing, studying and scaling them in order to deliver the best results for our communities.

Laura Callanan is a consultant at McKinsey & Co. in the Social Innovation Practice.