Any time the philanthropic sector has an opportunity to learn more about each other and the work that we do, we all benefit. A perfect example is the latest report from the Center for Effective Philanthropy and the Center for Evaluation Innovation, Benchmarking Foundation Evaluation Practices.
My key takeaway from this report: the sector has limited capacity to take on meaningful evaluation. And foundations bypass developmental or formative evaluation at their peril. Without undertaking this important work up front, evaluation capacity will remain woefully inadequate.
While the report provides many valuable insights into the evaluation capacities, experience, and expenditures of more than 100 foundations, it also confirms that many foundations have limited capacity, or none at all, to conduct evaluation in a meaningful and appropriate way.
For example, 66 percent of foundations surveyed for the report do not have a dedicated evaluation department. Only five percent of all respondents have an advanced degree in evaluation specifically. And given the variation in rigor and methodological emphasis across graduate programs, we simply cannot assume that the 50 percent of survey respondents who hold advanced degrees of any kind share the same research or evaluation expertise.
At the same time, we all know that foundation staff are already spread thin and many wear multiple hats. The report tells us that at foundations without a dedicated evaluation unit, much of the evaluation-related work is conducted by members of the program or operations staff or members of the executive team. While some of these individuals may have been formally trained as researchers or evaluators, we can assume that some have not, yet are nonetheless responsible for conducting or managing evaluations. Trained or not, they are doing this evaluation-related work on top of their other foundation responsibilities.
So where does that leave us now? How do we take the information and insights from the CEP/CEI report and turn them into action to help build critical research, evaluation, and learning capacities across the philanthropic sector?
The key for me is revealed in the report itself. Consider that 80 percent of survey respondents never or only occasionally engage in developmental or formative evaluation for foundation initiatives, and even fewer engage in this kind of evaluation with their grantees.
It should not be a shock to us that in this era of Impact! Impact! Impact! — or what I call the #impactindustrialcomplex — we see many foundations bypass the critical steps of developmental or formative evaluation. This is where I believe valuable opportunities to build data and evaluation capacities reside, as well as opportunities to alleviate the burden on foundation staff tasked with conducting rigorous and meaningful evaluation.
Build a Vision
I get it. Developmental and formative evaluations are hard and often frustrating — but, alas, essential. They are where we sit down as a team and build consensus around what we want to do, how we plan to do it, and how this all holds up once we implement it. They are also where we do the hard work of researching what outputs, outcomes, and indicators are appropriate for what we plan to do and outline the relationships between them that represent our “story of change.” Additionally, this is where we observe the differences in goals, preferences, expertise, and experience associated with evaluation that exist across (and sometimes within) foundation teams — and where we discuss and reconcile those differences to build consensus on a shared vision for the initiative and our story of change.
But this is why this process is so valuable! We build valuable knowledge and understanding of our initiatives as we research and choose appropriate components (activities, outputs, and outcomes) and meaningful ways to measure them (indicators). We also better understand the context in which they are implemented and the potential factors that may contribute to our overall impact.
Validate an Approach
These often-overlooked evaluations (developmental and formative) also provide us with the opportunity to validate our framework and to test it to make sure that its components are appropriate and meaningful, and remain so after implementation. This is critical because without validation we can have little confidence that the result of a summative evaluation can be attributed solely to our initiative and not to some other factor we did not consider. So, yes, all of those “successful” impact evaluations completed by foundations that bypassed the developmental and formative work may simply reflect factors that have nothing to do with the initiative itself.
Create a Culture of Collaboration
Through these evaluation processes, foundations also learn about their colleagues and better understand their work and how it may contribute to their own. In particular, staff learn about the data and knowledge possessed by different foundation teams, as well as the evaluation expertise and experience of their colleagues. This builds foundational capacity by allowing staff to leverage the knowledge and experience of colleagues, learn from one another, and (hopefully) foster a culture of collaboration.
Learn and Share
The knowledge and information captured during these evaluations can also be the foundation of a knowledge management and sharing system within the foundation. The benchmarking report tells us that two-thirds of respondents say that their foundations share evaluation findings “quite a bit” or “a lot” with foundation staff, which obviously is great news! Yet, by incorporating information and lessons learned from developmental/formative evaluations, we can also begin to better understand why and/or how our initiatives had a specific impact. We can also share critical information about contextual factors that are important for better understanding specific initiatives, but also for work on other initiatives that the foundation is conducting in the same or similar contexts.
Additionally, learning and sharing this information and data helps build foundation capacity by collecting and organizing data and information that would normally reside in the head of a program or grant officer. This helps reduce the time and resources needed to locate and access data, information, or expertise within teams and across the foundation. This is especially important for foundations that traditionally work in silos or have staff that wear many hats. It also helps prevent the loss of capacity when a staff member leaves the foundation or transitions to another role.
Obviously these are just a few examples of how engaging in critical developmental and formative evaluations builds evaluation capacity within a foundation; there are many other ways as well. The key is to take the first step, and the report from CEP and CEI provides us with a great place to start. Any attempt to enhance your foundation’s knowledge, understanding, and use of data or evaluation will begin that process.
We know that evaluation is challenging and often frustrating, especially developmental and formative evaluations, but they are not impossible and definitely not without considerable rewards. There are some great examples of foundations that have engaged in these critical evaluations and will be the first to tell you how beneficial they were. If you are such a foundation, I’d love to hear from you. Let’s join together in the ongoing quest to build the capacities of the sector.
Dr. David Goodman is the director of impact at Fluxx, where he engages with both grantmakers and nonprofits to empower them to use data to turn a single success story into a thousand similar success stories. He can be reached at firstname.lastname@example.org, and on Twitter at @MeasureDoc.