Making Data and Evaluation Work for Foundations and Nonprofits

Johanna Morariu

Research into evaluation use gets me all fired up! It has a way of putting a number on experiences and feelings common to many of us. As the lead author of the State of Evaluation project, one area I care deeply about is how nonprofits engage with, use, and experience evaluation. Reading Benchmarking Foundation Evaluation Practices, the recent report from the Center for Effective Philanthropy (CEP) and the Center for Evaluation Innovation (CEI), I was struck by the nonprofit evaluation story that peeks through and provides some important insights into how foundations can support grantees to get more out of evaluation, and, in turn, help foundations themselves get more out of evaluation.

Many foundations have a heavy data dependency on grantees. Grantees may report data about the quantity and quality of services or activities, the size and characteristics of a service population, estimates of audiences reached with communications and messaging, the impact of programs and initiatives, and advocacy and policy change outcomes, among many other things. Foundations rely on this data for accountability and learning — to gauge if the grant activities were acceptable, if the grantee is effective, if progress is being made toward goals, and to make decisions about future programming and grantmaking. But what do we know about the quality of this data?

Resourcing: Foundations can send mixed signals to grantees about support for evaluation, even when nonprofits are expected to report data as part of a grant requirement. According to the CEP/CEI report, 41 percent of foundation respondents indicated that there was not a common approach to supporting grantees’ evaluation efforts because funding for evaluation efforts differed widely across the foundation’s program or strategy areas, and 9 percent of respondents indicated that no grant dollars were provided to support grantees’ evaluation efforts. (The other 50 percent of responses included optional or required approaches to supporting evaluation — great!) And 59 percent of nonprofits that evaluate spend two percent or less of their organizational budgets on evaluation-related costs (staffing, consultants, tools, etc.), State of Evaluation 2016 reveals.

Staffing: Foundations and nonprofits both struggle with insufficient evaluation capacity. Among the foundations in the CEP/CEI study, the median foundation has about one full-time equivalent (FTE) staff person regularly dedicated to evaluation work for every 10 program staff. A ratio of 1:10 (staff dedicated to evaluation vs. program staff) is meaningful, but may be stretched thin considering the range of evaluation services called for within a foundation (planning and assessing foundation, strategy, and/or grantee performance; commissioning and managing third-party evaluations; supporting learning; etc.).

On the grantee side, only six percent of nonprofits that engage in evaluation have dedicated evaluation staff, State of Evaluation 2016 shows. Six percent! Evaluators needn’t and shouldn’t have a monopoly on data, but the current situation likely reflects an underinvestment in evaluation and an overreliance on already maxed-out executive and program staff to collect just enough data to fulfill grant reporting requirements.

It’s clear that in the areas of resourcing and staffing, foundations and nonprofits alike have room for improvement when it comes to high-quality data and evaluation use. Here is how foundations (and nonprofits) can start getting more from evaluation:

  • Foundations should ask grantees about their evaluation capacity. Who collects data? How and when is it collected? How is it stored, cleaned, and analyzed?
  • Foundations should consider data quality. What were the conditions in which the data were collected? What aspects of the data are likely to be stronger or more reliable? What might be the weaknesses in the data? What is a responsible way to use the data?
  • Foundations should support data and evaluation. What are our data requests of grantees (explicit and implicit)? How can we support grantees? How do we need to build our own capacity?

Let’s up our game on data together. High quality data isn’t just the quickest way to an evaluator’s heart — it’s also one of the most promising ways to accelerate learning, improvement, and impact.

Johanna Morariu is the director of Innovation Network, a consulting firm that supports social sector organizations to collect and use data for decision-making. You can find her on Twitter at @J_Morariu.
