
How Come This Foundation’s Grantees Love Its Reporting Process So Much?

Date: August 4, 2015

Caroline Fiennes

Director, Giving Evidence


This post originally appeared on the blog of Giving Evidence, which encourages and enables charitable giving based on sound evidence.

Most charities hate the reporting that funders make them do. Notionally a learning process, it's often just compliance, box-ticking, and a dead-weight cost. But not so, apparently, for the Inter-American Foundation (IAF), an independent U.S. government agency that grant-funds citizen-led community development across Latin America and the Caribbean. IAF seems to be a positive outlier: it has twice undertaken the Grantee Perception Report (GPR), and both times it received the best rating ever seen for the helpfulness of its reporting process. Both times it was also in the top 1 percent on the all-time list for the usefulness of its selection process and for its transparency, and it sits in the top decile on a whole pile of other indicators.

What on earth is going on? And what (if anything) can other funders learn from this?

Giving Evidence is interested in producing and sharing decent evidence about what makes for effective giving, so we’re delighted to be working with IAF to figure this out. We’ll produce a case study later this year, showing what IAF does and why it seems to be so popular.

This fits with our work with the University of Chicago to generate more evidence about the effects of the various ways of giving. Donors' choices between ways of giving are clearly consequential (e.g., the data Giving Evidence and CEP published on Freakonomics about the costs that small vs. large grants impose on charities), yet there is so little sound evidence available that donors cannot make evidence-based decisions between them. We seek to change this.

Maybe it’s a fluke

Clearly somebody has to be top, even if the whole pack is dismal or it's just random chance. But (i) that's not what's going on: IAF's ratings sit atop a pretty respectable pack; (ii) the fact that IAF has come top twice, and is so high on so many indicators, implies that something interesting may be going on here (though clearly we're aware that a handful of swallows doesn't make a summer); and (iii) IAF does have a very unusual model, which suggests a potential mechanism for these results.

The model

IAF finds tiny grassroots organisations and makes big bets on them — its median grant funds two-thirds of a grantee’s work. IAF sticks with them for longer than most funders, at nearly four years. It visits every applicant it seriously considers, and works intensively with them to shape the proposal (all grants are restricted). Every grantee is visited again after the grant is awarded — and not just once but every six months. Given that IAF has 260 active grants across an entire continent, this effort is, as physicists say, highly non-trivial.

IAF also has an unusual reporting framework. Grantees choose from a menu of 41 metrics, which encompasses both tangible and intangible results, and effects on individuals, communities, and society (see diagram). It’s much more balanced and self-determined than many funders’ systems.

[Sharp-eyed readers will notice the similarities between this framework and Giving Evidence's own, explained in more detail in my book, It Ain't What You Give, It's The Way That You Give It.]

Is it the love or is it the framework?

So IAF’s model is unusual in at least two respects: the sensibleness and flexibility of its framework, and its degree of engagement. It’s not clear — yet — whether the grantee cheeriness arises from one of these, or both, or something else entirely. Maybe those two factors are in fact indistinguishable — maybe you can’t run a flexible and helpful reporting system with this type of organisation without being highly engaged.

But happy grantees aren’t necessarily effective grantees

True. Nobody, to our knowledge, actually knows the connection between grantee satisfaction and grantee effectiveness, and our work with the University of Chicago aims to investigate this properly. But it's also clearly true that the unpopularity of many funders' processes arises from unhelpfulness which impairs grantees' work and could be remedied. And it's also true that much can be learned from positive outliers, which is what we intend to do here.

If you are an IAF grantee, or a funder with other outlying grantee feedback, or fund community development, or have something else useful to contribute to this study, you can get in touch with Giving Evidence by emailing admin@giving-evidence.com.

Caroline Fiennes is founder and director of Giving Evidence and a member of CEP’s advisory board. To ensure that you get the case study when it is published, follow her on Twitter at @CarolineFiennes and sign up for the Giving Evidence newsletter here.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
