

How Can Foundations Learn Better and Share More?

Date: November 13, 2018

Yvonne Belanger

Director of Learning and Evaluation, Barr Foundation


The Center for Effective Philanthropy’s new report Understanding & Sharing What Works explores, from the vantage point of the foundation CEO, how foundations go about assessing their efforts, how they use this knowledge to make decisions, and how they share what they’ve learned, in formal and informal ways, with a variety of stakeholders. As I considered the report, three findings stood out to me, along with a few key lessons for the field.

Three Observations

1. Some widespread learning practices aren’t generating useful insight, and some practices that show promise aren’t yet widespread.

It’s encouraging, yet not too surprising, that activities in which funders and grantees learn together, such as site visits and convenings, rate high for usefulness. It’s more curious that foundation CEOs rate some broadly used approaches as not very useful (p. 10). For instance, the report shows high levels of survey use but considerable skepticism about surveys’ usefulness. Does this reflect a mismatch between the kind of insight foundations want and the kind of insight surveys can reasonably provide? Or perhaps an overuse of surveys? Good survey work is harder than many realize, and even when done well, surveys can raise more questions than they answer.

Another widespread learning practice is evaluation. CEOs in the study more frequently rate evaluations of portfolios as useful than evaluations of individual grants. But usage runs the other way: based on these data, evaluating individual grants remains the more common practice. Does this reflect the need for greater internal foundation capacity and expertise to attempt portfolio evaluations? Or possibly the challenge, noted elsewhere in the report, of finding the right consultants (p. 11)?

Finally, I suspect that low rates of use sometimes reflect low awareness of emergent practices. When CEOs report learning from beneficiaries via surveys and focus groups, most aren’t finding these techniques useful. However, knowing when to gather beneficiary feedback, and how to do it well, is still an emerging area of practice in philanthropy. Given the significant enthusiasm and innovation in this space, I think it’s too soon to conclude that these techniques aren’t useful.

2. Foundation sharing often falls short of intentions.

Two-thirds of CEOs in the CEP study report that they believe foundations should play a large role in advancing knowledge in society (p. 13), but more than half (p. 15) say that communicating about the lessons they learn is hampered by a lack of adequate time, money, and staff expertise. Given this aspiration to play a larger role in social change, it’s interesting to note that 38 percent of foundation CEOs say that they don’t communicate directly with government agencies at all about their learning.

3. Funders pay close attention to the lessons of their peers, but they might not be getting (or sharing) the most important information.

Three-quarters of CEOs surveyed say that evidence of what is and isn’t working at other, similar foundations at least somewhat informs the strategies their foundation uses to achieve its programmatic goals (p. 12). But one-third of CEOs specifically mention that their foundation faces pressure from its board of directors to withhold information about failures (p. 16). The sector can’t hope to learn if foundations withhold bad news; we will simply repeat one another’s missteps under the mistaken impression that no news means good news.

Key Takeaways

1. To generate useful learning, funders should focus on grantee and stakeholder engagement.

Evaluation efforts are more useful if we are intentional on the front end about what could be learned, who might care, and what decisions the knowledge could inform. That exercise alone can often clarify or shift the focus of an evaluation in helpful ways. Ideally, we should also take the opportunity at the beginning of the process to consider how we might engage stakeholders to shape or advise the evaluation on an ongoing basis, so that what’s learned is responsive to their needs and can be applied sooner. Finally, we should engage our trustees early in evaluation work so that they can articulate their areas of interest for strategic learning. The most useful evaluations shed light on challenges and barriers as well as on progress and impact, so engaging all stakeholders up front on the rationale for an evaluation, and keeping the focus on learning, can keep this work from being seen as simply a verdict of success or failure.

2. Less is more.

Given the real capacity constraints for learning and evaluation, at Barr we are asking ourselves not just what more we can do, but what we can stop doing to free up resources and time to learn in more effective ways. For example, how useful are final reports from our grantees? Do they truly inform our decision making? When we conduct evaluations, can we make them more manageable by narrowing the scope of our inquiry to questions that are meaningful for stakeholders and have practical application for decisions?

3. Foundations can and should advance public understanding of how to tackle difficult social problems, but this requires getting comfortable talking about our own shortcomings.

In the current climate of eroding public trust in institutions, foundations can (and probably should) recalibrate their internal conversations to start from a presumption that knowledge should be shared. Hewlett, Bush, and others have demonstrated impressive leadership in this area. Rather than asking the yes/no question of whether to share, we should focus on how much we can share, how soon we can share, and how best to share. One question at the end of the CEP report resonates for us at Barr: “How can your foundation better communicate its knowledge to people and organizations outside the foundation?” We are currently examining our own practices around openness, using GrantCraft’s “Open for Good” guide as a tool to build our capacity for knowledge sharing.

Yvonne Belanger is director of learning & evaluation at the Barr Foundation, where she leads Barr’s efforts to gauge its impact and to support ongoing learning — and application of that learning — among staff, grantees, and the fields in which Barr works. Follow her on Twitter at @ybelanger.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the author, not necessarily those of CEP.
