
Putting Evaluation in Perspective: Past and Current Day

Date: October 25, 2016

Jara Dean-Coffey

Founder and principal, Luminare Group


The Benchmarking Foundation Evaluation Practices report released last month continues to add to our collective understanding of what happens behind the green curtain in philanthropy as it pertains to evaluation. Started by Patti Patrizi in 2009 and continued by CEI (this time in partnership with CEP), this thoughtful exploration surveys foundation staff with evaluation-related responsibilities to provide an overview of demographics, practices, challenges, and opportunities in foundations’ evaluation practices. I was not surprised by any of the findings, and, as CEI’s Tanya Beer noted in her recent blog post, progress is being made.

In thinking about my response to this report, I realized I was contextualizing the findings based on the following:

  • Evaluation is a relatively new field; its origins date to around 1950, emanating from health and education research situated in academic institutions.
  • Evaluation in philanthropy, in any organized way, began around 1990.
  • Strategic learning entered philanthropy around 2010 (though Peter Senge introduced the concept of the “learning organization” in 1990).
  • We are at a moment in U.S. history when the institutional and structural barriers that prevent equity, and the consequences of that reality, play out every day via video, print, and human stories. It is a benchmark moment in our evolution as a nation.

Given this historical context, it is no wonder that, as the data in the report shows:

  • There are varied ways in which evaluation is situated, funded, and staffed in foundations. (page 15)
  • Forty-one percent of foundations have no common approach to evaluating efforts across their portfolios. (page 22)
  • Respondents note a variety of factors that make using evaluation information a challenge for program staff. (page 27)

All that being said, what is promising is the degree to which respondents report that program staff are likely to use evaluation data to understand and make decisions pertaining to their work (page 28). Sixty-eight percent of respondents say senior management engages the appropriate amount in communicating to staff that it values the use of evaluation and evaluative information, and 92 percent say their foundation’s board provides moderate or high support for the use of evaluation or evaluative data in decision-making.

Clearly, the component pieces for foundations to become true learning organizations, as Senge defined them, are within reach. So, what is getting in the way? I suspect two shifts are needed.

Shift 1 – Elevate evaluative thinking as a key organizational capacity and leadership competency.

Evaluation at its core is about intention, inquiry, and information, and is typically episodic. As long as this skill rests in the domain of 1.5 full-time employees (the number of staff about half of foundations report regularly dedicating to evaluation work), or, in some cases, a single unit, the usefulness of evaluation is limited. If foundation staff and leadership are not in the habit of engaging in deliberate, consistent, and systematic practices related to evaluative thinking, and if policies and resources do not support doing so, their ability to be users, consumers, and producers of evaluation is compromised.

Shift 2 – Acknowledge and explore the current evaluation paradigm and its relevance and usefulness.

The origins of evaluation in academia more than a half century ago reflect a time, place, and mindset that privilege particular ways of being and knowing, and particular truths (see Mertens for more). The methods we use are artifacts of that reality, as are the data and analysis they yield. As foundations and their nonprofit partners engage in work far different from the controlled environments of medicine and education, and increasingly work toward social justice or equity, we need to ask ourselves: what do we need to do differently to ensure that our evaluation approaches are not inadvertently working at cross purposes under the delusion of objectivity and rigor?

Both of these shifts relate to, and can contribute to, the three things evaluation staff say they hope to see in five years:

  • foundations being more strategic in the way they plan for and design evaluations, so that the information collected is meaningful and useful;
  • foundations using evaluation data for decision-making and improving practice; and
  • foundations being more transparent about their evaluations and sharing what they are learning externally.

Jara Dean-Coffey is the founder and principal of Luminare Group, formerly jdcPartnerships. Follow her on Twitter at @jdeancoffey.

Download Benchmarking Foundation Evaluation Practices here.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
