Beyond “Dumbed-Down Metrics”: Grantee-led Evaluation for Better Learning

Date: March 3, 2020

David Shorr

President and CEO, Convergence Center for Policy Resolution

Kathleen Sullivan

Principal, Fine Gauge Strategy

In Phil Buchanan’s recent post on his hopes for philanthropy in the 2020s, he issues a call to “embrace measurement as the crucial and important challenge that it is and reject dumbed-down metrics like overhead ratios that tell us little about results.”

We think that’s right, and we’ll underscore this point: both philanthropy and nonprofits are most likely to learn — and learn well — when grantees are supported to measure the things that matter to them the most.

In our experience, the best learning occurs when funders set parameters and offer thoughtful input, yet hand the measurement reins to the people whose daily work will benefit from the information sought. With the support of external experts (or internal measurement staff), an organization can design a realistic and effective external evaluation, or a plan to gather and analyze data on its own programs.

This approach ensures not only that data and analysis get produced, but also that they answer the most mission-critical questions. That’s because staff are more likely to embrace the evaluation opportunity when the questions asked and the data gathered are truly aimed at supporting their work, rather than merely being boxes to check.

This flexibility to custom-design an evaluation or measurement plan is important in any social change endeavor, but it’s especially crucial in the policy-advocacy arena in which we both work as evaluators and measurement consultants.

Even extremely well-conceived advocacy campaigns or program plans can run into myriad challenges when implemented, due to changes in policy and policymakers, changes in the larger social environment, evolution in funders’ outlooks, and the emergence of new allies and partners, to name just a few. Where flexible evaluation or measurement funding is offered, advocates can better test assumptions, engage in experimentation, course correct, play to their own strengths, adjust to new conditions, or take advantage of any strategic openings that emerge.

At the outset of any evaluation, it’s important for policy strategists and implementers to zero in on powerful questions that get to the heart of what they seek to accomplish — and how they can most effectively get there. Sometimes that’s just one critical question the organization hasn’t yet been able to answer; other times it’s several questions about the rollout of a new strategic direction.

Here are two examples of advocacy work in which evaluation went beyond simplistic metrics and instead identified and reflected critically on questions that truly matter to effectiveness.

Understanding Impact Beyond Just Policy “Wins”

Sometimes tailoring a question involves “pulling the lens back” to reframe organizing or advocacy work in a way that points toward new strategic options. In 2018, one of us (Kathleen) partnered with Four Freedoms Fund to review the work of its Texas Fund, a special donor collaborative initiative supporting community-designed policy reforms to protect immigrants and people of color in Texas from harsh enforcement. At the time, grantees had achieved important policy wins in their cities and counties during the first phase of Texas Fund grantmaking, but many initiatives — especially in smaller cities and rural areas — were just getting off the ground.

Given this, instead of simply asking, “What policy wins have Texas Fund grantees achieved?”, Kathleen worked with Four Freedoms Fund to analyze grantees’ underlying strategy, which began with building power within influential civic institutions. Drawing upon ideas from Innovation Network, the review looked at how grantees built influence and allies among prominent power holders such as school systems, universities, libraries, and businesses. This revealed myriad large and small ways in which grantees were expanding political capital and building momentum for change in local government policies. This wider view of grantee work helped the Fund and its donors identify the most effective ways to prioritize their giving in Texas.

Measuring What’s Most Helpful to Advocacy

Sometimes an organization is ready to reassess its learning process, revamping internal systems to extract the most useful information. Especially when an organization conducts a range of activities, it can be tricky to devise monitoring and evaluation tools to suit a varied program. David’s recent work with the Andrew Goodman Foundation (AGF) and its Vote Everywhere program is a good example.

AGF pursues its goal of increased youth voter turnout through voter registration, civic education, getting out the vote, and advocacy to remove obstacles to voting. Between this variety of activities and work with more than 60 higher education institutions, Vote Everywhere faces a formidable task in monitoring, evaluating, and gathering the information needed to prioritize all this work.

Across the entire field of student voting rights, a consensus has emerged on the importance of institutionalization — making durable changes to the voting process that, in one way or another, open the path for more students to register and vote. For instance, if students convince local election officials to set up a new on-campus polling place, it might pave the way for hundreds of additional student voters. Campus administrators can be instrumental by conducting voter registration at whichever orientation events bring together the largest numbers of students.

What these examples have in common is that they entail swaying decision makers with the authority to make change. In a word, this is classic advocacy to spur changes of policy or process.

When David took stock of AGF’s internal evaluation tools, however, it became clear that they weren’t structured to capture or highlight this kind of advocacy work. For example, AGF’s existing reporting form focused on raw numbers (such as the number of students coming to an event) that weren’t telling much of a story. Given this, David worked with AGF to revise the reporting form so it instead collects more meaningful information. The updated version, for instance, asks students to track their progress with short written summaries. When students meet with university or local government officials, they can record the officials’ receptivity or resistance to their suggestions, as well as the arguments or data they view as most persuasive.

This way, the revamped data-gathering mechanism puts those working on the front lines of this advocacy work in the driver’s seat. It gathers insights from them on a key question at the heart of Vote Everywhere’s work — how can we best influence those in power to make lasting changes that pave the way for more young people to vote?

Why Grantee-Driven Measurement Matters

To put it most simply: measurement should be grantee-driven because scarce funding should be allocated only for work that’s likely to be put to use in a grantee’s strategy and operations. The nonprofit sector faces too many pressing problems to spend precious resources on external reviews or measurement plans that end up sitting on a shelf.

Grantee-driven measurement is also an important step towards forming the type of collaborative relationship between funders and grantees that philanthropies increasingly claim they want, and that experts in “learning organizations” are helping them to achieve.

Sadly, it’s rare in the often-punishing policy arena that advocates or organizers have an opportunity to reflect systematically on their work. When we interview policy actors — even elected decision makers themselves — it’s not unusual for them to voice their appreciation for the chance to consider the ramifications and underpinnings of the work.

When grantees measure what’s important to them, it opens the door to rich grantee-funder conversations that inform the work and goals of both parties. The path from merely checking the box toward a genuine learning relationship will undoubtedly have challenges, but the journey is a necessary one.

David Shorr is a policy advocate and evaluator with experience in varied policy arenas, from high-level diplomacy and presidential politics to local land use. Follow him on Twitter at @David_Shorr.

Kathleen Sullivan is an evaluator and strategist who helps community organizers, advocates, and their funders increase their policy-advocacy impact.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.