Data Point: Feedback Keeps Fueling Change

Date: September 5, 2013

Kevin Bolduc

Vice President, Assessment and Advisory Services, CEP

Mark Chaffin

Former Senior Research Analyst, CEP

Foundation effectiveness isn’t something you can dial up with a little effort and expect to stay there. Like going to the gym, what really counts is consistent attention to gauging progress and working to improve. That’s the lesson we draw from our newest analysis of foundations’ Grantee Perception Report (GPR) results.

In 2011, the Center for Effective Philanthropy published the report Can Feedback Fuel Change at Foundations?, sharing that foundations that had used the Grantee Perception Report more than once “are making changes that are benefiting those organizations they fund…and on average receive substantially improved ratings” across a variety of the measures in CEP’s grantee survey.

At the time, only a handful of early adopters had used the GPR more than twice. We weren’t sure what would happen to those improved ratings over time. Now, as more funders make the GPR a regular piece of their assessment efforts, nearly 100 funders have used the GPR at least twice and 36 have used the tool at least three times.

When we look across the ratings of funders that have used the GPR three times, we don’t see ratings decreasing over time on about a dozen key measures. Typically, the gains seen between a first and second GPR either continue to increase or are at least maintained with a foundation’s third GPR use. The data bear this finding out across a variety of different measures in the survey—those about impact on grantee organizations and fields, understanding of grantees’ strategies and goals, and quality of relationships, just to name a few. While the increase in aggregate ratings between any two assessments may be small, over time they can add up to a much more meaningful change for grantees of funders using the GPR.

Although we see maintained or improved ratings in aggregate across three-time users of the GPR, the story isn’t quite so simple for each individual foundation.

[Figure: http://www.cep.org/wp-content/uploads/2013/09/GPR-3peat.jpg]

This figure displays the change in rating on a particular GPR item related to grantee perceptions of their funder’s impact on their organization. Each horizontal bar in the figure represents a funder that has used the GPR at least three times, and the size of each portion of the stacked bar represents the magnitude of change during a particular time interval.

In the chart above, 50 percent of repeat GPR users saw increases over both repeated uses of the GPR. Most of the others saw ratings move up at one repeat use of the GPR and down at the other. This pattern is fairly consistent across the measures we examined: approximately half of foundations see consistent improvement at both repeated uses of the GPR—the other half do not.

The implication? Improvement is not a given. It doesn’t always last. Repeated assessment is necessary to monitor progress.

The variability in the trajectories of the funder/grantee working relationship underscores something important: the GPR is about assessment, learning, and management. With its comparative data and longitudinal tracking of results, the tool isn’t about determining a “best” or “worst” funder, a once-and-for-all winner or loser. That would be impossible and, frankly, irrelevant. It doesn’t provide a permanent assessment of effectiveness. But the GPR can provide a critical series of snapshots that allow any diligent foundation to use grantee feedback to maximize the use of its limited resources.

We hope that’s why we are seeing more and more foundations adopting a two-, three-, or four-year cycle of GPR use. (On average, repeat users of the GPR reach out to grantees once every three years; a substantial number productively do so more often.) While an important value of the GPR is its ability to highlight areas where improvement is possible at any given time, a deeper value lies in its ability to help any funder assess whether the actions it is taking contribute to noticeable and continual improvements in the quality of its work.

Kevin Bolduc is Vice President of Assessment Tools at the Center for Effective Philanthropy. You can find him on Twitter @kmbolduc.

Mark Chaffin is a Senior Research Analyst at the Center for Effective Philanthropy.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.