So you want to become a more intentional learning organization? For those who are building out a new learning and evaluation function at their foundation, Benchmarking Foundation Evaluation Practices, the new research report from the Center for Effective Philanthropy and Center for Evaluation Innovation, is a welcome resource.
I’m one year into leading The Kresge Foundation’s build-out of its strategic learning, evaluation, and research practice, and I’m learning a lot. Reflecting on my first year leading this work, and in light of the new data in this research, what advice would I give to those who are beginning this organizational change journey? Here are some thoughts:
- Know your history. When we are charged with building something new, we need to look first at the history we are building upon and ask what should follow from it. Why do we aspire to become a learning organization? What’s behind this intention? In a recent Foundation Review article about a foundation’s theory of philanthropy, authors Michael Quinn Patton, Nathaniel Foote, and James Radner write about the importance of holistic alignment within a foundation. To this end, it is incumbent upon us to understand our organization’s purposes for evaluation and how they relate to the other core foundation functions. Doing so gives us a fuller, more visible picture of why we aspire to become a learning organization and what it will take to get there. Only then can we make the case for the resources we need to realize our aspiration.
- Link strategy, learning, and evaluation. We want data and insights to inform decision-making. The call for use of evaluative data and thinking underpins what the report identifies as the top three changes evaluation staff hope to see in the next five years: (1) foundations will be more strategic in the way they plan for and design evaluations so that the information collected is more meaningful and useful; (2) foundations will use evaluation data for decision-making and improving practice; and (3) foundations will be more transparent about their evaluations and share what they are learning externally. If I were building out a new learning and evaluation function starting today, this is where I would start. Inquire internally about the extent to which these top three changes align with the way your organization is thinking about the purpose of learning and/or evaluation, its primary audiences, and its uses. Then work backward to operationalize that purpose within your organizational culture.
- Set a priority for evaluation, and learn from individual grants. Ask yourself, “Where will I focus my evaluation efforts and why?” Just over a third of respondents in the CEP and CEI study report that evaluating individual grants is a priority, while 88 percent prioritize evaluating foundation initiatives or strategies. At Kresge, like the majority of our peers among medium and large foundations, we prioritize evaluating initiatives and strategies, given their longer durations, greater demands on time and financial resources, and often higher levels of risk. I wonder, though, what we lose if we stop learning from individual grants. Monitoring reports from individual grants still matter: each can teach us something, and we can cull insights across a portfolio of individual grants within a strategy.
- Model robust dissemination. What happens to evaluation findings? According to the data, 83 percent of respondents report that it is a challenge for evaluations to yield useful lessons for the field. And over two-thirds report that their foundation invests too little in disseminating evaluation lessons externally. I was surprised to read that so few foundations plan their evaluations with dissemination in mind. This is an opportunity for greater transparency in philanthropy, particularly given the critiques we receive for being opaque and changing course. We need to be honest about what we’re finding — where we’re gaining traction, what we’re finding challenging, where we’ve failed — and share our meaning-making. We can model this in our role as funders, especially since we often hold up data-driven decision-making as a requirement for many of our nonprofit partners. How are we walking the walk?
- Make new friends. While the report doesn’t speak to the importance of peer networks, we know that evaluation staff are pulled in many directions: respondents report just one evaluation staff member for every 10 program staff. We need one another. As a growing field, we can accelerate our new colleagues’ efforts by sharing the successes, challenges, and insights from our evaluation work. Informally, we can bounce ideas around before bringing them to our leadership for primetime. And knowing we often long for additional resources, our network relationships help us think about how we might collaborate when we share strategic learning priorities.
- Keep learning. Give your all and make time for your own learning. With most learning and evaluation staff wearing many hats — half of respondents divide their time among at least nine evaluation activities — we need to create space to step back, continue learning, and bring what we are learning to our entire organizations. For example, to support my work at Kresge, I’m working toward certification in emergent learning and introducing this learning practice in the organization. I’m finding great value in both the practice itself, and the community of practice that I’m building through it.
This new report is a great gift to newcomers to the learning and evaluation field in philanthropy because it offers insight on where we are today — and suggests future-focused aspirations. Welcome!
Chera Reid is director of strategic learning, research and evaluation at The Kresge Foundation.
Download Benchmarking Foundation Evaluation Practices here.