Ed Koch, the former Mayor of New York City, was known to ask people on the street, day after day, “How’m I doin’?” New Yorkers readily told him. Koch, native son that he was, then went out and did whatever he had been planning to do before asking, embodying the difference between data gathering and data use.
For decades, we’ve had annual surveys of Americans that ask them about the quality of their public schools and the integrity and effectiveness of the US Congress. Every year the findings reveal a remarkable parallelism: Americans think schools in general are cruddy but their local school is fine. Similarly, Congress overall is corrupt and inept, but our elected representative is doing just fine.
The latest study by The Center for Effective Philanthropy, “How Far Have We Come?” which surveys foundation CEOs about progress toward philanthropic goals, uncovers a similar parallelism: foundations in general could do more to achieve their stated outcomes, but the organizations the surveyed CEOs themselves lead are doing pretty well.
Unlike schools or Congress, there are no international tests, comparative statewide exams, or quantitative measures of legislative progress against which foundation effectiveness (and their leaders’ perceptions thereof) can be assessed. Given the state of the data, the CEP study makes a contribution. And, yes, that’s a backhanded compliment.
The real accomplishment of the CEP study is to shine a light on how hard it is to recognize progress when it’s happening. Foundations don’t know how well they are achieving their goals. And the CEOs are refreshingly candid about this. More than half of the CEO respondents were “somewhat,” “a little,” or “not at all” confident in their assessment of overall progress in the areas in which they work (p. 11, Figure 4). Given this, it would be worth investigating what the other 46% of respondent CEOs – who said they were “extremely” or “very” confident in their assessment of progress – know that the majority doesn’t. Perhaps they’ve got some secret sauce for confidence? Or they’re simply quite sure that they’re not making much progress. Either way, one hopes that foundation leadership – boards, staff, and CEOs – will reflect on their own difficulties in measuring progress the next time they demand such measures from their grantees.
If foundation leaders don’t know if they’re making progress, we should assume no one else does. Some might say, “Throw the bums out.” Others might point to the study as more reason to question the lack of accountability of these organizations. If they can’t even tell if they’re getting anywhere, then how can we? Those who want to find reason to throw tomatoes at the entire enterprise of institutional philanthropy will find a garden’s worth of ammunition in this study. Even the authors seemed to recognize this – the final section of the document – which one would normally refer to as a conclusion – is an artful example of inconclusivity. The authors anticipate critics of the study, critics of the findings, and critics of philanthropy writ large and offer up possible interpretations for each of these and a few more.
Frankly, I would have been more concerned if the survey had found “Yes, we know what we’re doing, we know how to measure it, and we think we’re 59.675% of the way there.” I think the tacked-on inconclusive interpretations are a distraction, if not a bailout. The authors claim the CEOs’ inability to quantify their progress means foundations want more data, more communication with grantees, and more standardized measures. This claim comes despite their own opening methodological caveats about this being a study of perceptions. If ever there were a set of topics that should be discounted for perception bias in this day and age, it’s those that ask, “Don’t you wish you had easier ways of comparing data, better measurement tools, and more insightful information?” It’s like asking Americans if they should eat more greens, get more exercise, and participate more in their communities. The answers will be yes. The actions will be… Kochian.
The study puts percentages around what we already know. Showing progress against tough social issues is hard, and it depends on whom you ask. What about what others think? Is self-perceived progress even useful in the aggregate? For all the information the study presents about its sample, it omits one key variable – are the CEOs surveyed held accountable for programmatic results by their boards? How? What are those measures and how are they assessed? That information would help in interpreting the data that CEP did collect, and it might be more useful information to gather in the aggregate. It would certainly shed light on a foundation practice that falls within the realm of “things foundations can actually control.”
Lucy Bernholz is a Visiting Scholar at Stanford University, where she co-leads the Digital Civil Society Lab, and at The David and Lucile Packard Foundation. She blogs at www.philanthropy2173.com.