Evaluation matters.
When we surveyed foundation CEOs about performance assessment in 2011, we found that nearly all relied on formal evaluations—of grantees, program areas, and grant clusters—to inform their understanding of their foundations’ effectiveness. At the median foundation in our dataset of respondents, spending on evaluation represented two percent of the grantmaking budget.
We can argue about whether that’s too much or too little, but it is no insignificant sum. Yet in that same survey, fully 65 percent reported that getting meaningful insights for the foundation out of their evaluations was a challenge. Anecdotally, I have heard countless stories of thick stacks of evaluations that appear to have made their most meaningful contributions to foundation effectiveness as doorstops.
That is, in part, because assessing foundation work is exceedingly difficult. Foundations work on society’s toughest problems. They seek to achieve their goals primarily through their grantees, so they are a step removed from impact. Attribution—and even contribution—is hard to pinpoint. The counterfactuals are often impossible to know. There is no common unit of measurement across myriad programs—no analog to ROI—and there never will be. The list of obstacles goes on.
Add to that the reality that evaluations are too often poorly designed, wildly ambitious, or unclear in their purposes, and the temptation to throw up one’s hands is real and understandable!
But that temptation must be resisted. Continual learning and improvement—as well as testing of the logic underlying philanthropic strategies—are vital if foundations are to maximize their impact.
So what tools are available to guide a foundation leader struggling to get the most out of evaluation? I would suggest a close read of Evaluation Principles and Practices: An Internal Working Paper from the William and Flora Hewlett Foundation. It is an outstanding resource.
The report—co-authored by Fay Twersky, director of Hewlett’s Effective Philanthropy Group, and Karen Lindblom, a fellow at the Foundation—emphasizes that it is essential to begin evaluation design early and with a clear purpose in mind, as well as to use evaluation to check assumptions in “the causal chain of a logic model.” [Disclosures: Hewlett is CEP’s second most significant grant supporter, providing $450,000 in general operating support to us in 2012. In addition, Twersky and I have worked together for years, co-founding CEP’s YouthTruth initiative with Valerie Threlfall, and co-authoring a forthcoming piece on listening to beneficiaries in Stanford Social Innovation Review.]
Twersky and Lindblom counsel realism when it comes to evaluation, offering up examples from Hewlett of evaluations that were too ambitious. They advise beginning with “clear, crisp questions” and differentiating among questions that go to implementation, those that explore outcomes, those that seek to understand impact (which they define as long-term, sustainable change), those that explore context relevant to the work, and those that test the assumptions behind a strategy and overall theory of change. It’s not that evaluations won’t cover more than one of these areas, but it’s important to understand exactly what you are going after.
They argue for the use of multiple methods and for a “process of triangulation” that “allows one method to complement the weaknesses of another.”
Twersky and Lindblom discuss the importance of engaging productively with grantees in evaluation—communicating “early and often.” We at CEP have seen the costs of getting this wrong. As we noted in a recent report on nonprofit performance assessment, a majority of foundation grantees we surveyed agreed with the statement, “Our foundation funders are primarily interested in information about my organization’s performance that will be useful to them, rather than information that provides utility to me and my organization.”
But, as the Hewlett report makes clear, it isn’t just grantees that pay the price when funders don’t communicate early and bring grantees into the evaluation design process. Funders suffer, too. “Grantees can serve as a reality check and deepen understanding of the available data and data collection systems,” Twersky and Lindblom write. The result of productive and open dialogue can be an evaluation that will better serve both funders and grantees as they seek to improve their work.
While designed for Hewlett’s use, this “internal working paper” should be a vital resource for any funder wrestling with the challenge of evaluation. The guide is strengthened by many specific, practical examples from Hewlett, which deserves credit for being unusually open about its missteps. Nowhere to be found in this piece is a sense that the folks at Hewlett think they have it all figured out. Rather, the piece is a window into their struggles, and a sharing of wisdom gained the hard way.
If the paper makes one thing clear, it’s how difficult—and important—it is to avoid the traps that so often befall funders. It also drives home the point that really understanding how things are going, and what must be done to improve, takes tremendous commitment. “Typically, no single evaluation can tell us if a strategy has been successful or is on track,” the authors caution. “Such a comprehensive assessment requires synthesis of multiple evaluations, summary and analysis of relevant performance indicators, and active reflection on and interpretation of the results in context.”
In a climate where simple-sounding solutions to the most difficult philanthropic challenges are marketed relentlessly—and one in which assessment is routinely (and unhelpfully) analogized to tracking performance in the corporate world—this clear-eyed, sober guide is a welcome reminder of what it takes to really do this work well.
I hope it is read not just by those whose job it is to lead evaluation at foundations but, perhaps more importantly, by the CEOs and trustees who are too often unaware of what it really takes to do thoughtful and useful evaluation. It should help them understand that it may be, finally, the foundation evaluator, not the CEO, who has the roughest job of all.
Phil Buchanan is President of CEP and a regular columnist for the Chronicle of Philanthropy. You can find him on Twitter @PhilCEP.