In the few months since I last wrote about 100&Change, the John D. and Catherine T. MacArthur Foundation’s $100 million prize in pursuit of one good idea, I’ve been thinking a lot about the transparency of its selection process. One of the biggest changes to the program has been in the number of judges 100&Change has recruited. The competition’s website, which lists all judges, now includes about 110 individuals, their affiliations, and their credentials (full disclosure: CEP President Phil Buchanan is now one of them). To my mind, this expansion of the judging pool puts an even finer point on the potential value — and challenge — of MacArthur documenting and sharing the values and criteria that will bubble up from the judges and drive their choice of the best ideas.
MacArthur has been transparent about the four qualities it’s looking for in applications (meaningfulness, verifiability, feasibility, and durability) and about how judges will differentiate among the scoring levels on each. The traits are detailed in a simple rubric that is publicly available on the competition’s website (even before registering to participate). For example, the rubric defines what it means to score “cautious,” “powerful,” or “compelling” on the “Meaningfulness” scale. In my experience, this is an unusual level of detail for a funder to make transparent.
If MacArthur invests in synthesizing which kinds of comments accompany which scores, the aggregate picture that emerges from judges’ diverse interpretations would be valuable in highlighting how and why certain kinds of ideas are prized. That analysis would help us all reflect on why certain social change issue areas or approaches are collectively given precedence over others. It would be a fascinating and useful picture of how a group of leaders weighs opportunities for social change.
Still, most applicants are going to put in effort with no monetary return. So, will they get value out of the process?
A few weeks ago I had the chance to chat with Jaison Morgan, CEO of Common Pool, the company that MacArthur is working with to design and manage this prize. (Common Pool has worked with a number of other foundations to design, develop, and manage other competitions, as well.)
Jaison mentioned two things that I hadn’t quite caught on to yet. First, MacArthur is hoping that other funders might join the finalists’ public presentations of their ideas and perhaps support some of those other good ideas that MacArthur doesn’t choose. This strikes me as relatively unusual in the foundation world, where the applications that just missed the cut for funding are likely to be invisible to others. (You can hear more about this and other design details from Cecilia Conrad, Managing Director at MacArthur, in her recent appearance on The Business of Giving podcast.)
Second, and more broadly, Jaison and I had a chance to discuss the feedback that applicants will receive. Each applicant will receive scores and comments from each of their five assigned judges, though the judges themselves will remain anonymous. That feedback will be normalized, so that no applicant is penalized unfairly by a particularly harsh or easy “grader.” The hope is that those scores and comments — even if anonymized — will help applicants push their thinking and ideas forward.
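MacArthur hasn’t said exactly how that normalization will work, but one common approach is to rescale each judge’s scores so that systematic harshness or generosity washes out before scores are compared. Here’s a minimal sketch in Python, using hypothetical judges, proposals, and scores, and assuming a simple per-judge z-score normalization (not necessarily the method 100&Change will use):

```python
from statistics import mean, stdev

# Hypothetical raw scores keyed by judge. The actual 100&Change data and
# normalization method are not public; this only illustrates the idea.
raw_scores = {
    "judge_a": {"proposal_1": 4, "proposal_2": 2, "proposal_3": 3},  # a harsh grader
    "judge_b": {"proposal_1": 9, "proposal_2": 7, "proposal_3": 8},  # an easy grader
}

def normalize_per_judge(scores_by_judge):
    """Rescale each judge's scores to zero mean and unit variance,
    so a systematically harsh or generous judge doesn't skew results."""
    normalized = {}
    for judge, scores in scores_by_judge.items():
        mu = mean(scores.values())
        sigma = stdev(scores.values()) or 1.0  # guard against zero spread
        normalized[judge] = {
            proposal: (score - mu) / sigma
            for proposal, score in scores.items()
        }
    return normalized

# After normalization, judge_a's "4" and judge_b's "9" both read as
# "this judge's highest-rated proposal," putting applicants on equal footing.
print(normalize_per_judge(raw_scores))
```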
Of course, the value of that feedback depends on the time each judge spends and on the clarity of the guidance MacArthur gives judges about what feedback to provide and how to frame it so that it’s helpful to applicants. That feedback matters. Increasingly, Common Pool is finding that applicants decide whether to participate in a given competition based on the value proposition for those who are likely not to win. By increasing that value, a funder can strengthen the relevance and quality of submissions.
The fact that every single 100&Change applicant will get feedback is quite different from what we see at CEP in our broader analysis of survey feedback from applicants that were declined funding by foundations. In fact, across the funders that have asked us to survey their declined applicants through our Declined Applicant Perception Report (APR), only about 50 percent of those declined applicants receive any sort of feedback from the foundation. Declined applicants say that when they do receive feedback, they find it quite helpful in strengthening future proposals. (There are some funders that buck that trend. We’ve profiled two exemplars in declining applicants, Quantum Foundation and the M.J. Murdock Charitable Trust, each of which strives to provide nearly all of its declined applicants with specific feedback.)
So what are the barriers to providing more feedback? There are several, but two in particular that I frequently hear mentioned are:
- Many applications fall far outside the funder’s mission, even when clear guidelines are posted on the web; and
- There’s not enough staff time to provide feedback.
I think the design elements of some competitions provide a useful provocation for thinking about how to address these barriers in grant application processes.
For example, the “ItemWriters Algebra Readiness” competition, a contest co-sponsored by The William and Flora Hewlett Foundation to develop assessment items for 7th grade mathematics coursework, was built on a multi-stage review process that included peer-to-peer review and collaboration. That process gave contestants the opportunity to collect feedback on their ideas from other educators and psychometricians, and to make revisions before moving on to the next stage, in which expert judges determined the winners.
Jaison also told us about the National Geographic Society’s Terra Watt Prize, which Common Pool designed. That program offered two competitive grants of $125,000 each to companies that would deliver clean and sustainable electricity to villages in developing countries. During the design of that program, Jaison’s team quickly learned that most of the truly scalable approaches would not stop to consider something as distracting as a prize (much less one with as little funding as was on offer). Instead, as Common Pool surveyed investors and applicants about what did matter to them, it learned that the seemingly random nature of investor interests and biases was a specific funding barrier. So the Terra Watt Prize convened influential investors and authorities as judges and created a shared, standard assessment model through which each of those funders scored potential investment opportunities.
It can be easy for a funder to fall into an “openness trap” — wanting to be accessible to a wide variety of applicants but lacking the right design, capacity, or willingness to filter the right organizations in (and the wrong ones out) and to provide applicants with helpful feedback. That’s not optimal for anyone — not for the nonprofits spending time on applications, nor for the foundations spending time reviewing them.
Just like with unhelpful application processes, there can be obvious problems with badly designed competitions (see, for example, the controversy surrounding a competition planned for the Council on Foundations 2015 conference, which was cancelled after outcry about its “Shark Tank”-like qualities). But the potential design innovations of competitions might help funders think in new ways about mechanisms to maintain openness to a wide variety of ideas, while also providing the structure and feedback that help the right organizations enter a competition funding process and ultimately benefit from it — win or lose.
Kevin Bolduc is vice president, assessment and advisory services, at CEP. Follow him on Twitter at @kmbolduc.