
Social sector organizations worldwide wrestle with one fundamental issue: How do they know they are having an impact? That’s the sector’s $64,000 question, and it was at the center of an entertaining and thought-provoking panel discussion at the Chicago Booth London campus.

Hosted by Chicago Booth’s Rustandy Center for Social Sector Innovation, the discussion marked the center’s first event in London, following the spring 2017 announcement of a $20 million gift from alumnus Tandean Rustandy, ’07 (AXP-6). Moderator Robert H. Gertner, John Edwardson Faculty Director of the Rustandy Center and Joel F. Gemunder Professor of Strategy at Chicago Booth, opened the event by remarking that “nonprofits must focus on measuring impact in a way that maximizes success and induces governments and funders to make smart decisions.”

Such measurement is proving difficult. Those same funders increasingly want proof of an intervention’s effectiveness before investing in it. Governments struggle to decide where to allocate support. For Julia Grant, CEO of Pro Bono Economics (PBE), a good place to start is focusing on impact management, rather than measurement. “Measurement implies a passive observer without real accountability,” she suggested. “Management is about identifying and driving impact throughout a whole journey of learning and improvement.”

“The British Legion raises a lot of money every year, which we then pass on to other charities helping war veterans. This session made me think about how we decide where to allocate those funds and how we use data to make those decisions as smartly as possible.” – Richard Toolen, CEO of Icon Asset Management AG and volunteer for the Royal British Legion

Naturally, though, there’s no fairy-dust solution. Instead, panelists suggested that organizations should assess their needs and resources, and act accordingly. Just ask Katherine Mathieson, chief executive of the British Science Association, a UK nonprofit that seeks to inspire more young people to consider science and engineering careers and that has benefitted from the assistance of Grant’s PBE.

Initially, Mathieson’s organization considered randomized controlled trials (RCTs) with students. “These are great but expensive,” she said, “and the control group gets all the hassle but none of the benefits.” Eventually, she sought PBE’s help in analyzing multiple external datasets to uncover what really influences attainment and interest in science. This allowed the organization to make a more evidence-based case to funders and to sharpen decisions about how to target its interventions.

And with that determination came a new question: If RCTs aren’t always a cost- or time-effective option—and there’s the added issue of their inability to capture unobservable mechanisms and motivations—then could big data provide the answer instead?

Marianne Bertrand, faculty codirector of the Rustandy Center and Chris P. Dialynas Distinguished Service Professor of Economics at Chicago Booth, believes data could be one solution. During the panel discussion, she cited work from the University of Chicago’s Urban Labs on developing an algorithm capable of deciding whether defendants should receive bail in Chicago. The aim was to eliminate any potential bias and, for Bertrand, it shows how smart data analysis can make a real, positive impact on major societal issues.

Yet she’s also quick to acknowledge the potential limitations.

“Big data will be transformational and can be especially helpful when deciding who to target with social interventions,” she explains. “But if you’re using past data to predict the future, it may contain the exact historical biases you’re trying to eliminate. There’s also the question of when to rely on data analysis and when to use human judgment.”
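Her caution lends itself to a quick illustration. The sketch below is a purely hypothetical simulation in Python with invented synthetic data (it is not the Urban Labs model): a classifier is trained on historical decisions that were biased against one group, and because the past labels encode that bias, the model reproduces it as if it were signal.

```python
# Illustrative sketch only: synthetic data and all numbers are invented.
# Shows how a model trained on biased historical decisions learns the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying risk distributions.
group = rng.integers(0, 2, n)       # hypothetical group membership, 0 or 1
risk = rng.normal(0, 1, n)          # "true" risk, same for both groups

# Historical decisions were biased: at the same risk level,
# group 1 was denied far more often.
denied = (risk + 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

# A model trained on those past outcomes treats the bias as signal.
X = np.column_stack([risk, group])
model = LogisticRegression().fit(X, denied)

for g in (0, 1):
    mask = group == g
    print(f"predicted denial rate, group {g}: {model.predict(X[mask]).mean():.2f}")
# Despite identical true risk, the model denies group 1 much more often,
# because the historical labels it learned from encoded that disparity.
```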

And therein lies the rub. Social sector organizations face an increasingly delicate balancing act: they must know when to turn to science and when to rely on human instinct. At the same time, they must reconcile funders’ demands for up-front evidence with those same funders’ reluctance to pay for major data projects rather than on-the-ground interventions.

Wherever they are along that road, for Grant, the key is to keep sight of the ultimate goal. “It’s critical to understand your logic model and how your mission links to the outcomes you want,” she said. “That’s the only way to determine whether your interventions are actually making a difference to those who need it.”
