Addressing Concerns: Why We Invited Esther Duflo to CEP’s Conference

Date: May 4, 2011

Phil Buchanan

President, CEP

As CEP’s May 10-11 conference approaches, a few people in the foundation community have contacted me to ask, “What kind of a statement do you intend to make by having Esther Duflo speak at your conference?” Does CEP’s invitation to Duflo – an MIT economist and renowned proponent of “field experiments” as a way to gauge what works and what doesn’t, particularly in poverty alleviation – signal that we are randomized controlled trial zealots?

Fair questions, and I appreciate that they have been raised. I know emotions run high in the evaluation community on these issues, and for good reason. Experimental design has sometimes been promoted as the be-all and end-all in ways that can be harmful; after all, whether a particular evaluative approach makes sense depends very much on the context. Still, I believe there is an important place for experimental design. There is a right time and place for the kind of approach Duflo espouses, and in those contexts her way of analyzing what works can help, quite literally, to save lives. Many lives.

My colleague Ellie Buteau, CEP’s Vice President for Research, wrote about the debate over experimental design in a blog post last year. I want to quote from it at length here because it captures our view on this topic so well.

My experience with foundations and nonprofits tells me that we certainly are at no risk today of over-emphasizing rigor in how assessment is approached. Nor is it the case that a greater emphasis on rigor – and on really understanding what works and what doesn’t – need crowd out other valuable approaches to getting feedback. The promotion of experimental designs often has a polarizing effect. … Proponents sometimes act as if it is the cure for all evaluative ailments; opponents sometimes act as if it is the root of all evil. But being in support of the use of experimental designs is not necessarily in tension with supporting nonexperimental designs, case studies, and the use of qualitative data. …

Any design should be selected because it is the best way to answer a particular question, and the question to be answered should be directly related to the stage of the organization or program being tested. Not all questions in the field are best answered through an experimental design approach. But some are. I see experimental design as an important tool for the field to use to understand the effectiveness of its work. Experimental designs allow us to rule out alternative hypotheses in a way that no other designs do. When testing the effectiveness of a social program being offered to those most in need, doesn’t it behoove us to get as close to an understanding of causation as possible? We should seek to be as confident as possible that a program has positive benefits and isn’t yielding no – or even negative – effects.

Philanthropy should be looking for the models that have potential to really make a difference on our toughest social problems. The field has a moral obligation to demonstrate, to the best of its ability, that a program works before funneling significant resources to expand it. Admittedly, these are weighty statements. Many nonprofits are understaffed and under-resourced, lacking the people, skills, or funds to conduct evaluations or collect data. A small nonprofit might have an excellent innovative idea that deserves to be tried on a larger scale and tested more rigorously. This is where funders come in. They have a crucial responsibility in this.
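Ellie’s point about ruling out alternative hypotheses can be made concrete with a small simulation – a hypothetical illustration of the general logic, not code from her post or from Duflo’s studies. When people select themselves into a program, a naive comparison of participants and non-participants mixes the program’s effect with preexisting differences between the groups; when a coin flip decides who participates, those differences wash out in expectation.

```python
import random

# Hypothetical illustration (not from Ellie's post or Duflo's studies):
# why random assignment lets us rule out alternative explanations.
# Simulate 10,000 people with an unobserved trait ("motivation") that
# raises outcomes on its own, plus a true program effect of +5 points.

random.seed(0)
TRUE_EFFECT = 5.0

people = [{"motivation": random.gauss(50, 10)} for _ in range(10_000)]

def outcome(person, treated):
    """Observed outcome: motivation + program effect (if treated) + noise."""
    return person["motivation"] + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 5)

def diff_in_means(assignments):
    """Naive treatment-minus-control difference in average outcomes."""
    treated = [outcome(p, True) for p, t in assignments if t]
    control = [outcome(p, False) for p, t in assignments if not t]
    return sum(treated) / len(treated) - sum(control) / len(control)

# Self-selection: the more motivated opt in, so the comparison mixes the
# program's effect with the preexisting motivation gap.
self_selected = [(p, p["motivation"] > 55) for p in people]

# Randomization: a coin flip decides, so motivation balances out in expectation.
randomized = [(p, random.random() < 0.5) for p in people]

print(f"True effect:            {TRUE_EFFECT:.1f}")
print(f"Self-selected estimate: {diff_in_means(self_selected):.1f}")  # badly inflated
print(f"Randomized estimate:    {diff_in_means(randomized):.1f}")     # close to 5
```

In a run of this simulation, the self-selected comparison substantially overstates the program’s effect – the groups differed before the program ever started – while the randomized comparison lands near the true value. That gap is exactly the kind of alternative hypothesis an experimental design eliminates.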

Ed Pauly of the Wallace Foundation weighed in with an important comment on Ellie’s post, which also bears quoting at length:

Smart evaluations pierce the fog that often conceals what works, what doesn’t work, and why. Experimental design evaluations, along with other rigorous assessments of results, greatly enrich our understanding of how social innovations can make a difference in people’s lives. Some innovations work as intended, and some don’t – that’s inherent in the innovation process – and innovations will produce very limited benefits for society unless we take pains to map their results.

Are all experimental design studies worthwhile? Of course not; like any other tool, experiments can be designed and executed well, or poorly (meaning too soon, too narrowly, too crudely, with too few people, or even unethically – all faults that experienced users take pains to avoid). That’s no reason to turn our backs on experiments – it’s a reason to do them well!

Too often, legitimate concerns about evaluation aren’t paired with the equally serious question: What will happen if we DON’T evaluate social innovations? History shows that the answer is clear: our best ideas and most promising reforms will be dismissed and discarded with caustic anecdotes and the observation that “there’s really no objective evidence of real results here.” When we gather reliable evidence about results, we’re building a shared heritage of better outcomes for more people.

I think Ellie and Ed have it right. So, no, we’re not zealots. We have, in fact, long advocated for foundations to embrace and develop indicators – which may be indirect – that connect to a foundation’s strategy, so that leaders have timely, actionable performance data. We have argued that the perfect should not be the enemy of the good when it comes to assessment. But we also think the kind of approach Duflo advocates has an important place.

There will be plenty of time during Duflo’s session for questions and discussion, moderated by Ford Foundation President Luis Ubinas. My hope is not that everyone walks away agreeing with Duflo, but that she pushes participants to think about whether they are tapping the potential to learn what works and what doesn’t – and to act on that learning – as fully as possible. And that is why I am delighted she’ll be with us on May 10.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
