Experimental Design Can Be a Powerful Evaluation Tool

Date: February 17, 2010

Ellie Buteau, PhD

Director of Research Projects and Special Advisor on Research Methodology and Analysis, CEP

The White House’s Social Innovation Fund, which will support intermediary grantmaking institutions that “identify and invest in promising organizations to help them build their evidence-base and support their growth,” has been a topic of much discussion and debate. Among the concerns has been the Fund’s focus on evidence of nonprofit program effectiveness – and, in particular, on such evidence being based primarily on experimental design approaches.

In an op-ed in the Chronicle of Philanthropy, Katya Fels Smyth, founder and principal of the Full Frame Initiative, argues that “No one benefits” from the Fund’s proposed approach. “Not the best ideas for helping the nation’s most vulnerable, not the taxpayers, not philanthropy, and, most important, not the communities that most need help achieving a decent quality of life.”

Her critique is rooted in an inaccurate conception of what it means to take an experimental design approach. She asserts that experimental design must “require a very narrow definition of who is being studied, and people who face multiple intertwined challenges—who are the most in need—are excluded. So, for example, if a new approach to helping homeless mothers is under scrutiny, experimental-design evaluation would exclude battered women, those with chronic health problems, or those involved in the criminal-justice system unless everyone had the same problems.”

But that is simply not the case.

An experimental design approach need not be totally removed from the complexities of the real world or prevent innovative approaches from receiving serious consideration. Many factors can be taken into account through the design and statistical analysis processes.

For example, in one of its randomized trials, Nurse-Family Partnership, now a grantee of The Edna McConnell Clark Foundation, had as an objective “To investigate whether the presence of domestic violence limits the effects of nurse home visitation interventions in reducing substantiated reports of child abuse and neglect.” Participants did not fit a “very narrow definition”; they differed in a number of important ways, including the number of domestic violence incidents in the family, the mothers’ race and marital status, and the fathers’ employment status. These differences were taken into account when the data for the study were analyzed.
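To make the point concrete, here is a minimal illustrative sketch in Python, using simulated data rather than anything from the Nurse-Family Partnership trial. Participants vary on a background factor, everyone is randomly assigned to the program or a control group, and the analysis simply includes that factor as a covariate – no one has to be excluded for the design to be “experimental.”

```python
# Illustrative sketch only: simulated data, not results from any real study.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# A background factor that varies across participants (not an exclusion criterion),
# e.g., number of prior incidents of some kind.
prior_incidents = rng.poisson(1.5, size=n)

# Random assignment to program (1) or control (0), independent of the covariate.
treated = rng.integers(0, 2, size=n)

# Simulated outcome: worse with more prior incidents, improved by the program.
outcome = 5.0 - 0.8 * treated + 0.6 * prior_incidents + rng.normal(0, 1, size=n)

# Unadjusted estimate of the program effect: simple difference in mean outcomes.
diff_in_means = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Adjusted estimate: least-squares regression of outcome on treatment and the covariate.
X = np.column_stack([np.ones(n), treated, prior_incidents])
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"Unadjusted effect estimate: {diff_in_means:.2f}")
print(f"Covariate-adjusted effect estimate: {coefs[1]:.2f}")
```

Because assignment is random, both estimates target the program’s effect; including the covariate simply accounts for real-world variation among participants rather than defining it away.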

My experience with foundations and nonprofits tells me that we are certainly at no risk today of over-emphasizing rigor in how assessment is approached. Nor need a greater emphasis on rigor – on really understanding what works and what doesn’t – crowd out other valuable approaches to getting feedback.

The promotion of experimental designs often has a polarizing effect: this has been true in education with the What Works Clearinghouse, in psychology’s approach to the study of social issues, and in the nonprofit community and the field of evaluation as well. Proponents sometimes act as if it is the cure for all evaluative ailments; opponents sometimes act as if it is the root of all evil.

But being in support of the use of experimental designs is not necessarily in tension with supporting nonexperimental designs, case studies, and the use of qualitative data (the importance of which Bob Hughes, from the Robert Wood Johnson Foundation, wrote about in a recent CEP blog post). Any design should be selected because it is the best way to answer a particular question, and the question to be answered should be directly related to the stage of the organization or program being tested. Not all questions in the field are best answered through an experimental design approach. But some are. I see experimental design as an important tool for the field to use to understand the effectiveness of its work.

Experimental designs allow us to rule out alternative hypotheses in a way that no other designs do. When testing the effectiveness of a social program being offered to those most in need, doesn’t it behoove us to get as close to an understanding of causation as possible?

We should seek to be as confident as possible that a program is producing positive effects rather than none at all – or even negative ones. Philanthropy should be looking for the models with the potential to really make a difference on our toughest social problems. The field has a moral obligation to demonstrate, to the best of its ability, that a program works before funneling significant resources into expanding it.

Admittedly, these are weighty statements. Many nonprofits are understaffed and underresourced, lacking the people, skills, or funds to conduct evaluations or collect data. A small nonprofit might have an excellent, innovative idea that deserves to be tried on a larger scale and tested more rigorously. This is where funders come in: they have a crucial responsibility here.

I will take a closer look at that responsibility in my next post.

****

Ellie Buteau, PhD, is Vice President – Research at CEP

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
