Building Equitable Evidence: It’s Time to Look to Participants as Experts in Their Own Experience

Date: August 3, 2021

Lymari Benitez

Senior Director of Program Information and Impact, Pace Center for Girls

Yessica Cancel

Chief Operating Officer, Pace Center for Girls

Mary Marx

Chief Executive Officer, Pace Center for Girls

Katie Smith Milway

Principal, MilwayPLUS; Senior Advisor, The Bridgespan Group


Today, nonprofits and funders alike increasingly use equity-serving and participant-centered approaches in their program design, and it’s time to sharpen the equity lens on building evidence of a program’s impact. The shift calls for making program participants full partners in evaluation, as experts in their own experience rather than as subjects of an experimental study.

To explore a more holistic approach, blending participatory findings with empirical data, Pace Center for Girls and MilwayPLUS social impact advisors conducted focus groups, interviews, and a survey with 15 organizations emphasizing participatory approaches in research and evaluation. We worked with three leaders in participatory measurement on this project — Fund for Shared Insight, a funder collaborative; Feedback Labs, which builds NGO peer-learning networks; and Project Evident, an advisor on evidence strategy.

Our findings showed that participatory measurement can be causally linked to outcomes and can positively influence capacity building and advocacy. When focus-group participants ranked a series of characteristics before and after their organizations implemented participatory methods, they noted their work became more outcome-focused (27.5 percent higher on average), more inclusive (27.4 percent higher), and more data-driven (26.5 percent higher). More than 70 percent of the organizations we studied reported that participatory methods (most often surveys, focus groups, storytelling, and town halls) helped them define relevant outcomes.

Using Participatory Measures to Understand Outcomes and Impact

A core difference between experimental and participatory approaches is at the heart of the equity argument. Experimental approaches like randomized controlled trials (RCTs) follow a treatment group and a control group over time. In contrast, participatory tools, including feedback surveys and focus groups, connect immediate learning with continuous improvements to programs and policies, treating participants as experts in their own experience.

For the last six years, Pace has focused on participatory measures in research and evaluation, uncovering new patterns in girls’ experiences by segmenting responses according to race, age, length of stay in the program, and other variables. In the process, it has identified causal links between feedback on instructor relationships and educational outcomes for the girls. Pace now incorporates participants’ insights into program improvements immediately.

The work to expand the use of participatory approaches requires a mental shift, one that organizations and funders can nurture by pondering several key questions as they gather evidence of program impact:

  1. Who needs the proof? Large-scale funders often suggest multiyear experimental studies, but the nonprofits we talked to put their participants’ views first. Kwame Owusu-Kesse, CEO of Harlem Children’s Zone, emphasized proving the value of interventions to parents and caregivers: “They don’t ask for an RCT before putting their kids in after-school programs.” Tools such as participant councils and town halls shift power to participants, asking them to use data to prioritize near-term action.
  2. Who determines what impact matters most? Bias is inherent in measurement and data interpretation, but participatory methods favor the participants. Ben Goodwin, executive director of Our House, a multiservice shelter-to-independence provider in Arkansas, began surveying clients in 2015 with Listen4Good, a survey tool developed by Shared Insight. Goodwin now gives data from ongoing biannual surveys to a client council, which interprets and distills the information into an action agenda. “We’re trying to take our own unconscious bias [as managers] out of the picture,” he says.
  3. How do you ask the right questions? Nonprofits that focus on participant-centered measures tap community wisdom to ensure they’re asking the most relevant questions. SOLE Colombia, based in Bogotá, hosts learning spaces where participants self-organize around questions that address community challenges, such as education or safety. SOLE then tests those questions by sending them out for community feedback, listening, learning, and adapting them until participants agree the questions are ready for a broader dialogue they feel prepared to facilitate.
  4. What are the links to program outcomes? Several organizations conducted multiyear research exploring how participant feedback connects to outcomes, and preliminary findings suggested causal links. Center for Employment Opportunities, which helps formerly incarcerated people find jobs and rebuild their lives, reports that participant feedback revealed complementary program components that helped participants succeed. “We learned it doesn’t help to have a mentor without a core [job placement] program, but we wouldn’t have learned that through purely quantitative outcome evaluation,” says Ahmed Whitt, director of learning and evaluation.
  5. What are the wins for society? Perhaps the most forward-facing results of participatory research are advocacy wins: changing systems and society. Pace uses direct input from its girls to identify local, state, and federal policies that need reform. One of Pace’s most successful community campaigns lobbied for misdemeanor and civil-citation legislation so law enforcement could cite girls for petty offenses without arresting them, a change that has contributed to a 65 percent decrease in arrests of girls in Florida over the past decade.

Next Steps for Funders

With the expansion of experimental and participatory approaches comes greater opportunity to mix methods. Subarna Mathes, strategy and evaluation officer at the Ford Foundation, describes this inflection point in building evidence of social impact as a “transitional moment in the field of evaluation where what actually counts as truth, what counts as valid, what counts as data is shifting and needs to shift.”

While we can’t generalize outcomes from a single longitudinal study, we must learn under what conditions an intervention can work. We can do this by seeking participants’ insights at every step of evaluation, design, and delivery, always remaining aware of power and context.

This article is excerpted from the report Building Equitable Evidence of Social Impact by Lymari Benitez, senior director of program information and impact at Pace Center for Girls, Yessica Cancel, Pace’s chief operating officer, Mary Marx, Pace’s chief executive officer, and Katie Smith Milway, principal of MilwayPLUS and a senior advisor at The Bridgespan Group. Fund for Shared Insight, Project Evident, and Feedback Labs helped curate focus groups at the heart of the research.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
