Around this time in December 2002 – 10 years ago! – The Rhode Island Foundation sent CEP one of the best emails we’ve ever received. To paraphrase: “We’re intrigued by the value of grantee feedback that can be compared across foundations, so we’d definitely like to participate in this tool you’re calling a Grantee Perception Report.” And with that, The Rhode Island Foundation became the first funder ever to commission a Grantee Perception Report (GPR). This idea of comparative grantee surveys that Phil Buchanan and I had was going to become a reality.
Within a couple of months, 10 other visionary funders signed up for the first round of comparative grantee surveys, which we launched in March of 2003:
- The Boston Foundation
- The Cleveland Foundation
- The Columbus Foundation
- Dyson Foundation
- The George Gund Foundation
- The Greater Cincinnati Foundation
- Lumina Foundation for Education, Inc.
- The Minneapolis Foundation
- Richard & Rhoda Goldman Fund
- The Rhode Island Foundation
- William Penn Foundation
To this day, I’m grateful for their leadership in helping to create a tool that, as I’ll argue in my next post, has created real change over the last decade for hundreds of funders and tens of thousands of grantees.
When we began the GPR, funders often received anecdotal feedback from grantees, and many ran in-house surveys of their grantees. At first, some certainly had reservations about this new tool, particularly about the value of comparative feedback; we repeatedly heard the refrain, “If you’ve seen one foundation, you’ve seen one foundation. We’re too different from others to be compared.”
At the time, people were familiar with Consumer Reports and US News & World Report, which used comparative data to anoint winners and losers among products or organizations. We were proposing something a little different: a private tool—yes, based on comparative data—but interpreted in light of the goals, strategies, and context of individual funders. We wanted to help each funder improve its work over time.
Our commitment to rigor and credibility required us to ensure that the comparative data we were developing represented all types of funders. So for the first four years of offering the GPR, in addition to surveying grantees of funders that asked us to do so, we also randomly selected some foundations and informed them we’d be surveying their grantees independently.
Some funders that were selected, like the McKnight Foundation, became consistent users of the GPR.
Other funders were not always so positive. We received several official cease-and-desist letters that read “You cannot survey our grantees.” One went on: “Please be advised that we represent the XYZ Family Foundation…Your survey asks questions the answers to which the Foundation already knows…You should immediately cease any further contact with the Foundation’s grantees. In addition, please forward all responses which you have received to the Foundation.”
I wanted to ask them (had they been willing to speak with us), “Do you, or do you not, know the answers to these questions?”
My favorite was a fax from a mid-size foundation that had programs, among others, focused on democracy and social justice: “I fully support your free speech rights to survey the Foundation’s grantees. Please don’t.”
Now that we no longer survey grantees independently (our dataset contains more than enough comparative data—responses across several hundred funders of all types and sizes), I don’t get many messages like those. But I am asked to explain the value of grantee perceptions as indicators of effectiveness and impact. “Why should I care about grantee feedback if I’m a responsive/strategic/community/family/corporate funder?”
We’ve worked with dozens of foundations in each category, and my answers, while nuanced, come back to the power dynamics of a funder-grantee relationship and the inherent positivity of receiving a grant. The answer can be more compelling coming from a peer, though. One I turn to often for the logic and passion of his explanation is four-time GPR user Paul Beaudet, Associate Director of Wilburforce Foundation (a private foundation with an anonymous, living donor). Here is his explanation of the incredible importance of grantees to achieving foundations’ missions—and thus the importance of collecting honest feedback from the grantees:
“Since grantees are partners, we must communicate clearly, consistently and frequently to better understand each other’s goals and strategies, develop trust, and address opportunities and/or threats that inevitably arise. We often learn more about issues, strategies and tactics from our grantees than they do from us…. Using what we learn from our grantees, we feel better equipped to make smart investments in their programmatic and operational capacity…. If grantees are receiving the support they need to sustain their operations and programs, these organizations will likely be better able to engage in effective work that creates change…. Wilburforce can only succeed if our grantees succeed. And our grantees can succeed only if they are given the funding, tools and resources they need to do their work.”
Today, we’ve worked with over 200 different funders—many of them multiple times—to gather and rigorously analyze data and create insight from grantee feedback. A decade into this work, I’m more passionate than ever about bringing the candid voices of grantees into foundations. And I believe strongly in the value of comparative data. Without some basis of comparison, feedback from a grantee that is grateful for a foundation’s support is incredibly hard to use: it’s nearly always somewhat positive. The challenge for foundations, unregulated and not in competition with each other, is to find the motivation, information, and tools to push themselves toward excellence. I think the GPR has become one of those tools, thanks in large part to the ongoing engagement and feedback we’ve always received from those we work with. It’s been inspiring to work with funders to create a tool that’s useful to them. And I’m determined to ensure that the GPR remains, long into the future, a relevant, actionable source of feedback that funders rely on to inform change.
Thanks, once again, to The Rhode Island Foundation and those other early funders whose commitment to data and to experimentation helped spur the creation of this tool.
In my next couple of posts, I’ll highlight what we’ve learned about the utility of the GPR, share our most recent feedback from GPR users, and describe the significant evolution of the GPR we have underway to make it even more valuable in its second decade.
Kevin Bolduc is Vice President of Assessment Tools at the Center for Effective Philanthropy. You can find him on Twitter @kmbolduc.