One Foundation’s Journey from Feedback to Action, Part One: Humbling Lessons

Anu Malipatil & Jon Sotsky

At Overdeck Family Foundation, we aim to open doors for every child in the U.S. by measurably enhancing education both inside and outside the classroom. In our collective work towards our mission, we are guided by three core values, all of which we lean on daily with our grantees and with each other: “connect genuinely,” “learn better, together,” and “think and act with rigor.”

In the spirit of learning better together, we partnered with the Center for Effective Philanthropy in 2019 to conduct our first-ever Grantee Perception Report. The results highlighted several areas of our work that grantees perceived as strengths, but also uncovered areas that needed significant attention. This data, coupled with our regularly scheduled strategic planning process, gave us insights we used to rethink how we did our work and supported our grantees. As a result, we revamped our funding model to be clearer about our goals and more supportive of grantees, two areas that were noted as needing improvement.

Two years later, in 2021, we again partnered with CEP to survey our grantees on our progress and were pleased to see improvement across nearly every category, validating our efforts over the previous two years. In this two-part series, we share our journey and what we learned along the way, in hopes of encouraging other funders seeking to act on critical feedback, strengthen how they work with grantees, and make impactful change.

Forming Insights and Planning

As we dove into the initial Grantee Perception Report in 2019, we first sought to build a shared understanding of the findings and explore the root causes of specific grantee feedback. We approached this in several ways:

  • Organization-Wide Conversation: We shared the full report with our entire team so everyone could engage with the results knowledgeably and with curiosity. We then invited CEP to present the results and answer clarifying questions, ensuring we all operated from a common understanding.
  • Leadership Meetings: Our leadership team discussed the report using exercises like See-Think-Wonder to both confirm what we learned and identify questions we hoped to explore further.
  • Departmental-Level & Individual Reflection: We used department meetings to dive deeper into the most relevant aspects of the report. Program officers received individualized reports that helped them better understand feedback from their grantees, allowing them to identify areas of strength and opportunities to improve grantee relationships.
  • Board Engagement: We reviewed the key findings with our trustees and gathered their feedback on the priority areas where we should focus our improvement efforts.

Our team was grateful for, but humbled by, what we uncovered throughout this process. The discussions revealed disappointment about some of the results and a commitment to do better on behalf of our grantees in the future. The time spent reflecting on the data helped us develop a deeper understanding of our work and clearer hypotheses about the factors behind our lower performance on several survey questions.

Specifically, we dove into the following categories in which we underperformed relative to other funders:

  • Clarity of Goals and Strategy: We ranked in the 9th percentile for “clearly communicating goals and strategies.” This likely contributed to our bottom-quartile ranking on field impact: if it wasn’t clear what the Foundation was aiming to do, it was hard to assess whether we accomplished it!
  • Intensity of Grantmaking Processes: Our grantmaking process was 50 percent more time-intensive than the average funder’s, and grantees rated it low on relevance and adaptability. Forty-one percent of the suggestions grantees provided concerned our grantmaking processes. This data felt both validating and well timed, as we were already planning to revise our grantmaking process in conjunction with implementing a new funding model.
  • Impact on Grantees: We ranked in the bottom quartile on “impact on grantee organizations,” making clear how much room we had to improve in this area. Our internal debriefing and brainstorming led us to believe this was most likely a result of how we spent our time with grantees, which often felt transactional rather than collaborative.
  • Quality of Relationships: We ranked in the 18th percentile for “comfort approaching the foundation if a problem arises” and in the 29th percentile for “perception of fair treatment of grantees.” We hypothesized that this, too, reflected how we spent our time with grantees, and we sought to both deepen our relationships and clarify our intent as a way to improve these metrics.

Many of these results were difficult to process, but they also gave us renewed vigor to improve our processes and build better relationships with our grantees. In the next post, we’ll share more about the actions we took and the results we’ve seen since then.

Anu Malipatil is vice president, Education, and Jon Sotsky is director, Strategic Learning & Impact, at Overdeck Family Foundation. Learn more and follow the Foundation on LinkedIn.
