Big Issue 3: Evolving Notions of Strategy and Measurement

Date: May 12, 2016

Phil Buchanan

President, CEP

This post is the fourth in a series of seven excerpting CEP President Phil Buchanan’s new essay, Big Issues, Many Questions, which explores five pressing issues facing U.S. foundation leaders and boards at this moment in time.

As sociologist Linsey McGoey points out in her new book, and as historians have observed, the earliest American mega-philanthropists cared deeply about effectiveness and impact. The fiction promulgated by business school types in the 1990s and early 2000s that strategy and measurement were new concepts to philanthropy was, well, fiction — simply “not true,” in McGoey’s words. That said, strategy and measurement have never been — and will never be — easy in philanthropy.

Why did we ever think otherwise? In part because the business-knows-best crew — including consulting firms and business school faculty with a newfound interest in philanthropy — made it out as if it were easy. Strategy was discussed and defined in ways that worked in a competitive business context but that made little sense for work in complex systems in which there were no such dynamics.

The emphasis was on each foundation identifying its “unique value,” a notion promoted in the late 1990s and early 2000s. But this concept has virtually no relevance in an environment without competitive dynamics, where the goal is impact, not organizational profit. The belated realization of this led its proponents to an eventual reversal. Additionally, the focus on “logic models” and “theories of change,” which funders often treated as fixed rather than as working hypotheses to be tested and iterated, didn’t function well in the real world of foundation work.

Measurement was dumbed down. The charts looked good, but what meaning did they really convey? One nonprofit leader, who helped found an organization serving homeless children, told me of a denial of funding — after a grueling process — by a self-styled “venture philanthropy fund” because his organization’s “cost per child served” was too high. But, of course, none of the comparison organizations served homeless children! His frustrated response? “I can give every poor child a f***ing lollipop if you want a low cost-per-lives-served number! But that won’t create impact.”

Examples like this are real but, thankfully, rarer today than a decade ago. After years in which the fantasy was perpetuated that “social return on investment” — surely the right theoretical idea but not the right practical measurement approach — could actually be calculated with precision, we are beginning to see more of an embrace of the reality of foundation performance assessment.

Lately, there is a growing recognition of the risks in focusing too much on a single measure, such as test scores in education. The Obama administration, having arguably pushed the emphasis in the first place, recently sought to limit testing. An overemphasis on one metric — as though there could be an analog to profit or stock appreciation in the business world — creates distorted incentives that lead to gaming or, worse, outright cheating of the kind that has plagued American public school systems. Moreover, a single metric can never capture everything important.

The right approach to assessment isn’t simple or monolithic. It flows from the goals and strategies of the foundation and varies based on context. What is the foundation holding itself accountable for? Changes in outcomes on the ground? Finding and “scaling” new solutions to tough problems? Strengthening nonprofit organizations working in certain areas? Simply getting money out the door? All of the above? The answer tells us which measures make most sense.

Foundations that want to help bring a new, promising approach “to scale” or wide adoption need to ensure that the approach, in fact, works. In these situations, the most rigorous testing possible should be employed — yes, even randomized controlled trials. However, if something has been shown to work, there is no need to test it again and again (although it’s dangerous to assume faithful implementation and a constant context, so some re-testing may be necessary). If something is a new, innovative approach that seems promising, by all means fund it — but fund it in a way that provides support for the data collection and analysis to see if it works and under what conditions.

Whatever the approach to assessment, nonprofits need to influence it — even guide it. Too often, foundations don’t support nonprofits in their efforts to collect the data that both parties need to improve. Our research shows that nonprofits care deeply about performance assessment but often lack the support they need to do that work. (Only a little more than one-third receive any support, financial or nonfinancial, from foundations in this area.)

There are exceptions, of course, and they can serve as exemplars to others. Increasingly, there seems to be at least an acknowledgment of the need to support nonprofits in their efforts to assess. Mario Morino’s “Leap of Reason” campaign has been an important catalyst for this conversation.

Foundation performance assessment is about the outcomes a foundation seeks, but it also has to be about the way the foundation works. It should pierce the “bubble of positivity” in which foundations often comfortably reside, and the best way to do that is through comparative, candid feedback. This is a big part of what CEP does, of course. That means feedback from grantees but also from declined applicants, intended beneficiaries, donors (for community foundations), policymakers you might be seeking to influence, and so on.

The board plays a key role in this area. Foundation boards can embrace the complexity — working with staff to define the indicators that make sense to gauge progress and then learning from the data — rather than taking a punitive approach. Our benchmarking of foundation governance suggests that strategy and assessment are major areas of activity for foundation boards today, and this is as it should be.

Foundations need to ask themselves:

  • What do we hold ourselves accountable for and how will we judge performance?
  • What data can inform that judgment and how will we gather it?
  • Are we utilizing an array of measures, recognizing that no single data point can answer all our questions?
  • Are we supporting nonprofits — financially or otherwise — to collect the data they need to improve and to analyze that data to inform improvement?
  • What information does the board review, annually, to spur discussion about how the foundation is doing?
  • How are we getting candid, comparative feedback from relevant populations (grantees and intended beneficiaries, as well as policymakers and others relevant to a foundation’s strategy) to ensure we don’t reside in a “bubble of positivity”?
  • How are we learning from this data and using it to inform our work?

In my next post, I’ll turn to a fourth, related big issue for foundations: aligning action.

Download Big Issues, Many Questions here.

Phil Buchanan is president of CEP. He can be reached at philb@cep.org. Follow him on Twitter at @PhilCEP.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
