How can we help evaluation and learning deliver on its promise?

Date: October 6, 2016

Tanya Beer

Associate Director, Center for Evaluation Innovation (CEI); Co-Director, Evaluation Roundtable

When foundation evaluation trailblazer Patti Patrizi conducted the first benchmarking survey of the philanthropic sector’s evaluation practices in 2009, the field was already several years into its embrace of strategic grantmaking and its call for rigorous attention to results. Yet many foundation evaluation directors, whom Patti had organized into an informal network called the Evaluation Roundtable, remained frustrated by low levels of institutional support for evaluation. To understand whether the sector was putting its money (and attention) where its mouth was, Patti started the evaluation benchmarking survey.

That 2009 survey and interviews with Evaluation Roundtable members confirmed that attention to outcomes and strategy wasn’t always translating into sufficient investment or even interest in real evidence about whether and how results were actually being achieved. Nor were foundation evaluation directors seeing much uptake of evaluation and data by program staff to help them navigate the messy swamp of on-the-ground strategy implementation. (For more on this, see Patti’s articles with Elizabeth Thompson, Beyond the Veneer of Strategic Philanthropy and Eyes Wide Open: Learning as Strategy Under Conditions of Complexity. Both are must-reads.) Evaluation directors cited staff attitudes, time, leadership support, and staff capacity as challenges to the use of evaluation at different stages of the strategy lifecycle.

In short, evaluation directors were experiencing a shortage of demand for evaluative work. In response, they have spent the last decade working hard to make the case for evaluation and rigorous learning to program staff, executives, and boards. They have also been working with evaluators to improve their ability to provide data and insights that are more tightly connected to decision-making.

The good news is that this effort is working. Demand for evaluation is up!

This year, we wanted to expand our survey audience. In addition to benchmarking the evaluation practices of Evaluation Roundtable members, we also benchmarked the practices of U.S. and Canadian foundations giving at least $10 million in grants annually. The resulting data, found in the Center for Evaluation Innovation and the Center for Effective Philanthropy’s new joint publication, Benchmarking Foundation Evaluation Practices, shows that foundation evaluation staff participate in or lead an increasing array of evaluative and learning-oriented activities.

In the interviews we conducted with 38 foundation evaluation directors to complement the benchmarking survey data, we learned that many evaluation directors are feeling excited and positive about the progress they’re making on meaningful evaluation and learning within their institutions. We should take a moment to celebrate! This is a big deal.

However, I worry about a downside to this increased demand. I’m not confident that foundation investments in evaluation are keeping pace with their vision of what evaluation should deliver.

The median foundation has 1.5 full-time employees dedicated to evaluation, averaging one person to support the work of every 10 program staff. According to our data, these 1.5 staff are responsible, on average, for nine fairly distinct and time-consuming evaluative activities, such as providing research or data to inform grantmaking strategy, refining grantmaking strategy during implementation, and designing and/or facilitating learning processes or events within the foundation. As a former foundation evaluation officer, I get anxious just reading this list: missed deadlines, data shortcuts, fluffy learning events, and contract oversight that’s too spotty to ensure that evaluation findings are high quality and useful.

Compounding the mismatch between expectations and evaluation resources, 91 percent of respondents said that program staff don’t have sufficient time to make use of information collected through or resulting from evaluation work. Program staff already are expected to manage dozens of grantees, build networks, convene partners, support grantee capacity development, spend countless hours developing internal strategy documents, and know their content areas deeply. What time can they realistically carve out for serious learning and reflection? There is a bandwidth problem on both sides of the equation.

What can foundations do to increase the likelihood that evaluation and learning investments will deliver on their promise? Here are a few thoughts:

Get serious about evaluation staffing and resources, on both the evaluation and program sides.

Learn what it really costs — both in terms of direct expenses and staff time — for evaluation officers, contractors, program staff, and grantees to produce and use quality data. Especially as foundations add new evaluative tasks, such as creating foundation-wide performance dashboards or implementing a cross-foundation learning agenda, first guesses about the budget and time it will take are probably wrong. If we want better evaluation use and learning, we must treat it as a real job responsibility for both program and evaluation staff — and for the grantees asked to participate.

Make tougher choices about evaluation and learning activities. 

As the current directors of the Evaluation Roundtable, we most often hear from foundations asking how other foundations have structured and focused their evaluation function. While we advocate for learning from peers about what has worked for them, we often wonder whether evaluation directors and CEOs are discerning enough about what kind of evaluation and learning function is the best fit for their particular needs, resources, and culture (see a forthcoming article in the Fall 2016 edition of the Foundation Review on this).

New evaluation directors — or those serving incoming CEOs who almost inevitably re-organize the foundation to match their vision — find themselves managing new demands without letting go of past evaluation practices. The result can be a mishmash of evaluation and learning work that sends conflicting messages about the foundation’s evaluation philosophy and expectations, and that makes each piece suffer in quality because there are too many balls in the air. We encourage foundations to take stock of their evaluative needs with a dispassionate (evaluative!) look at the real value of current practices. Then zero in on the tasks that, if done well, will make the most difference.

Treat evaluation and learning as capacities that must be built over time. 

Over the years, we have watched some foundation evaluation leaders design comprehensive monitoring, evaluation, and learning “systems” out of whole cloth. After furious planning and sketching and calculating, they roll out new dashboards; theories of change and strategy templates; evaluation frameworks, guidelines, and expectations for program areas; grant reporting requirements; learning agendas, etc. Program staff can feel crushed under the weight of it all (see note above about bandwidth) and come to view evaluative work as a bureaucratic exercise — another set of hoops to jump through before they can get to the real work. Once that happens, the evaluative tools and templates have lost their real power, which is to help strengthen the way we think together about what we are trying to accomplish, and to improve our approach for getting there.

Rigorous strategic learning is not a technical problem solved by simply having the right tool, the right template, or even the right data and findings at hand. It is a practice, a way of working and thinking, a set of habits — a capacity. As such, it must be cultivated over time and in a way that clearly connects to program staff needs. Our experience working with foundations suggests that evaluation and learning practices might have more staying power, and might be more useful to program staff, when they have been: (a) introduced as smaller-scale experiments that staff can help shape and adapt; (b) observed and tested for the value they add to the work; and (c) only then scaled or integrated into the foundation-wide workflow, so that staff experience these practices as integral to their work rather than as something that “belongs” to the evaluation staff.

………

The exciting news is that many foundation evaluation leaders have made great strides in building the demand and capacity for evaluation and learning within their institutions, and they have thoughtful approaches for continuing to deepen it. Foundations that have made good progress should share their stories, warts and all, with peers. What was learned about the time and resources it takes to get good value from the work? What tradeoffs were made when setting priorities, and what tensions are still being managed? What does it take to build the habits and capacities of a team to learn together rigorously and regularly? There’s a lot of wisdom out there to build on.

The data in Benchmarking Foundation Evaluation Practices shows that foundations “get it.” They know evaluation and evaluative thinking can add substantial value to strategy development and implementation, and they have integrated evaluation and learning activities accordingly. But now that demand is up and evaluation staffing and resourcing have not grown at the same pace, it is time for many foundations to tailor their activities or rethink roles and responsibilities to ensure that demand is met with a quality response.

Tanya Beer is associate director of the Center for Evaluation Innovation (CEI) and co-director of the Evaluation Roundtable.

Download Benchmarking Foundation Evaluation Practices here.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
