From Survey Questions to Action: One Foundation’s Approach To a Targeted Review of Stakeholder Feedback

Mena Boyadzhiev

In my conversations with funders who engage with the Center for Effective Philanthropy (CEP) on assessments and advisory engagements, I’ve heard that what many of our foundation partners find most challenging are the “Now what?” discussions that come after an engagement. After receiving the results of an assessment, it takes real time, effort, and thought to know what to do next: how to share the results internally, how to find the resources to work through the data and develop next steps, and how to then act on what you’ve learned.

In this post, I’ll share the experience of one recent CEP partner who did this well. The Arcus Foundation, a New York- and UK-based funder focused on supporting LGBTQ social justice and the conservation of great apes and gibbons, is a long-time user of both CEP’s Grantee Perception Report and tailored advisory services. The Foundation partnered with CEP in 2021 on a highly customized effort to better understand the perceptions of its grantees and external stakeholders, which they’ve written about here.

I spoke with Lia Parifax, director, Executive Initiatives, at the Arcus Foundation, about the thoughtful process that she and her colleagues followed throughout their most recent CEP engagement. Their approach to collecting grantees’ and external stakeholders’ perceptions and engaging their staff in planning can be a helpful model for others interested in using perceptual data for an inclusive and action-oriented learning process.

Designing the Survey: What Do You Want To Learn?

Mena Boyadzhiev: Could you share a brief description of the project that we worked on together last year?

Lia Parifax: We connected in early 2020 to design a custom survey to assess grantee perceptions about specific aspects of the Arcus Foundation. Arcus has had the benefit of participating in CEP’s GPR for several years: we did GPRs in 2008 and 2015, and we are committed to periodically checking in on grantee perceptions of our approach, our work, and our interactions. As we came to the opportunity to gauge perceptions again in 2020 and 2021, we thought a lot about why we benefit from grantee perception data and how we use it to inform decisions. We really wanted to ground our data collection in an underlying imperative for learning and responsiveness.

So the question was really, in order to learn what? When we’d done the benchmarkable grantee perception survey, it gave us a sense of how we were doing overall compared to other funders, and because we didn’t design the questions, it surfaced issues or opportunities that had not been obvious to us, things we might not even have been looking for. There’s tremendous value in that. What we wanted to do now was think about what we were trying to achieve in our interactions and relationships with grantees.

MB: In this latest customized, targeted survey, what were you aiming to learn?

LP: We sat down with CEP to build a custom survey with a very clear, focused scope. For the last three years, we have been building an organization-wide approach to monitoring and evaluation. That approach covers two definitions of success: our program impact, meaning the extent to which our interventions in the field are contributing to meaningful change consistent with our programmatic goals, and our organizational performance. The latter includes things like having high-quality relationships that are deeply collaborative and based in trust; a low-burden, efficient approach for grantees; and our institutional culture: diversity, equity, and inclusion; clean audits; and well-managed budgets.

We’ve been building out these measures of performance, getting clear on what we mean by success in those goal areas, and determining how we want to measure progress. While we don’t measure our impact using perception data, we do measure our performance using perception data, complemented by other measures developed internally. Altogether, this gives us a snapshot of our performance married up against our grantmaking impact, a view of the whole.

MB: Can you tell me a little about what the process looked like internally for Arcus?

LP: We started the process in January 2020, and I worked closely with our CEO to look at our organizational performance goals, clarify the goal language, and then understand for which goals perception data could be most useful and important. Then, we needed to understand whose perceptions were most relevant to gauging progress toward performance goals. On many of these performance goals, we wanted perception data from both grantees and other external stakeholders.

We reached out to CEP with an emerging vision in the summer, and you all helped us determine that we wanted to interview, rather than survey, our non-grantee external stakeholders and conduct a highly customized survey of our grantees.

We worked very closely with CEP to polish the survey and the eventual report structure so they could be the tools we needed for our learning. Up front, we worked with you to provide the context about each performance goal area we were measuring, so it was clear what we were trying to measure with respect to each goal.

It was important to have our CEO’s leadership; she was committed to making sure that every question was worth grantees’ time and useful to Arcus. Our grantees face real resource scarcity, so we wanted to be judicious about what we asked. Other core partners included the head of our Social Justice Program and our director of grants management, who brought precision to the identification of all grantees and to making sure the questions we were asking were relevant to them.

Learning from the Results

MB: You designed a broad and inclusive approach to sharing the results with Arcus staff and engaging staff in developing recommendations for next steps. Can you describe the key elements of your approach?

LP: We created a structure for an organization-wide learning exercise and invited anyone who wished to participate, knowing that all staff could have really valuable insights.

First, we invited everyone to read the report. Then we created learning logs that included learning questions and guidance designed to help lead each department through a conversation about the key insights worthy of discussion. We were looking for real insights that would be useful given the goals we are trying to achieve. Over the course of four weeks, departments discussed the survey results and the stakeholder interview report and recorded their insights in the learning logs.

Then, our director of grants management and I collected and read through all departments’ learning logs to identify themes and related ideas. We created high-level categories that included insights about communications, collaboration, grantmaking practices, systems, and user experience. We used that structure of high-level themes to facilitate an organization-wide meeting where staff made connections across insights and identified top-level priorities. Anyone who wanted to could be part of the conversation, and each department’s directors were at the table.

The final step was to bring staff insights to the senior management team to identify what they saw as the priorities, informed by the staff’s extensive analysis.

Parifax describes the Arcus Foundation’s approach to organization-wide learning and gathering insights from staff in more detail in two short videos accompanying this post.

MB: You worked on this process for about two years, from partnering with us at CEP on the survey design, administration, and analysis to Arcus’ internal discussions about the results. Can you discuss the importance of this project’s timing?

LP: We wanted to time this learning experience to culminate in making decisions about our work plans and, ultimately, how we choose to allocate our resources. We try to ensure our learning is ripe for application through work plans and budgets. This was a two-year endeavor with CEP from start to finish, timed with 2022 work and budget planning in mind. Our plans and budget were finalized in December, and now we’re acting on changes.

Reflections on Differences in Grantees’ Experiences

MB: An element of this report was examining differences in ratings based on organizational and demographic characteristics — respondents’ gender, for instance, and their language. Arcus wrote about these findings candidly. Can you elaborate on what you learned and how you’re using those findings?

LP: We wanted to include a series of demographic questions and questions about organizational characteristics relevant to how Arcus does our grantmaking, such as whether organizations received general operating or project support, where in the world they were located, and whether they worked in Spanish or English.

For instance, we know that we are eager to fund smaller, emerging organizations. In the areas where we fund, the work likely requires the leadership of local groups and organizations led by people who are affected by, or have lived experience relevant to, the issues we are trying to address in the world. Resources tend to be limited for those groups.

The report included a section dedicated to identifying statistically significant differences in responses across all of those characteristics: CEP flagged for us, across all of our goal areas and the questions associated with each goal, where responses differed significantly by those characteristics. The demographics and organizational characteristics of those we fund matter to us because we are committed to equity and to understanding whether grantees with a certain profile are experiencing us better, worse, or differently.

This helped us understand where there were differences in grantees’ experiences. Now we’re embarking on a learning exercise to unpack why those differences exist. The survey didn’t tell us why; rather, it helped us uncover what is happening and signaled where we need to do more learning.

MB: Finally, what advice do you have for funders in the process of considering grantee feedback?

LP: They should ask themselves: Why do we need grantee perception data? What goals have we articulated that can be measured, at least in part, by grantee perception data?

Funders should also be thoughtful about differentiating perception data from other measures of organizational performance. Honor and hold on to what grantee perceptions are measuring: the many things that enable impact.

I like to use the analogy of a car: performance is measured by gas-mileage efficiency, the reliability of features such as the A/C, the car’s uphill traction, and so on. For me, perception data helps us look at those variables; that is, perception helps us know whether we are equipped to reach the intended destination (to achieve our desired impact) and to do so in a manner that’s resilient, sustainable, and lasting.

Mena Boyadzhiev is director, Assessment and Advisory Services, at CEP. Lia Parifax is director, Executive Initiatives, at Arcus Foundation.
