By Meg Long and Clare Nolan
As stakeholders in the social sector, we all care deeply about impact. We are driven to move the needle on the systemic and social conditions that will improve individual and community outcomes in the fields in which our sector works — education, health, community development, the environment, and civic society, among others. Because of this drive, it’s vital that we think deeply and continuously about how we can improve the likelihood that the work we do leads to impact.
Evaluation and continuous learning — the “R&D” of the social sector — can help us in our quest. We have to both improve the quality and influence of evaluation in philanthropy and help philanthropy become more open and responsive to new evaluation perspectives and lessons. There are several trends that show us that the relationship between philanthropy, evaluation, and learning as tools for increased impact is evolving rapidly, and will continue to do so:
- Learning and evaluation needs in philanthropy are shifting. Foundations are placing greater emphasis on achieving measurable results and many are now tackling community and systems change. This has challenged evaluators to develop new skills and approaches that go well beyond evaluating discrete programs.
- There are concerns about the usefulness and influence of evaluation. A recent study by the Center for Effective Philanthropy and the Center for Evaluation Innovation highlighted a number of challenges in philanthropic evaluation, including limitations in generating useful insights for the field, lessons for grantees, and meaningful takeaways for foundation staff. There is also a need for new voices and diverse perspectives to contribute to the field’s thinking about equity issues facing our society.
- Building the evaluation field’s capacity requires new levels of partnership that are not well supported. Most small- and mid-sized evaluation firms operate as intermediaries, providing services and products to strengthen foundations and the social sector. Because most operate as small businesses, however, they are not incentivized to collaborate and are typically ineligible for foundation capacity-building support. Advancing the field will require new partnerships both among evaluators and with funders.
If we are serious about continuing to learn from our work, innovating in the moment, and doggedly advancing a change agenda that produces systemic and societal outcomes, funders and their evaluation partners must think and work differently together. Yet, there appear to be two sets of market forces — driven by issues of supply and demand — standing in our way:
Market Force #1: Supply (i.e., addressing talent needs)
- Old skills, new skills. External consultants and in-house evaluation staff are expected to have wide-ranging skills that go well beyond traditional social science methodologies. Being all things to all people may be an unreasonable challenge. Quality research skills remain essential, even as foundations seek additional skills in areas such as facilitation, equity, change management, adult learning, and systems thinking, to name a few.
- An apprenticeship model. Given the complexity of skills involved in evaluation consulting, talent development takes time, and the apprenticeship model employed by most firms works well. However, funders could do a better job recognizing, accepting, and supporting firms’ talent development efforts, especially since many funders hire former evaluation consultants as staff.
- Need — and readiness — for diversity. The need for new voices and diverse perspectives in the evaluation field is broadly recognized as important to making progress on pressing equity issues. Along with this, foundations must be ready to accept and value different ways of thinking and new perspectives, or diversification efforts will fail. The Equitable Evaluation Project offers a critical perspective on evaluation orthodoxies that get in the way of improved practice on this front.
Market Force #2: Demand (i.e., addressing the influence of evaluation on philanthropic practice)
- Informing expectations. Evaluators are often excluded from early-stage conversations about impact measurement and program design, despite their technical knowledge of research and their experience assessing what’s worked and what hasn’t across programs. Funders can be more flexible in their approach and make space for evaluator feedback in both areas.
- Competition over collaboration. Competition among evaluators impedes collaboration and knowledge-sharing that could advance the collective capacity of the field. While this is often true, part of the problem is that there is a lack of structures or mechanisms that facilitate learning and collaboration among evaluators and funders. For example, Grantmakers for Effective Organizations’ Learning Conference is closed to the vast majority of evaluation consultants despite the relevance of our knowledge to these discussions.
- Clients over field. Evaluation is heavily focused on the needs of individual clients, and thus rarely informs broader social change efforts. Foundations should actively partner with evaluators to share findings with their peers and with the communities they serve, through resources such as IssueLab.
Addressing the market forces of supply and demand requires greater partnership among evaluators and with funders. Unless something is done to change how evaluators and funders work together, we will continue to produce outcomes that at best “tinker” at the margins of improvement and never reach the scale of impact that we want to achieve.
More than two dozen funders and evaluators from across the country met to discuss and develop action items on these topics a few months ago, which you can read more about here. We also invite you to continue this conversation by joining us for a follow-up exploratory dialogue at the American Evaluation Association Conference in November. If you are interested in participating, or would like to support this field-advancing effort, please get in touch with us at email@example.com and firstname.lastname@example.org.