Lessons from a Risk Taken

Date: April 25, 2013

Kevin Bolduc

Vice President, Assessment and Advisory Services, CEP

We hear a lot about “risk” in the social sector. We’re told that it’s a key function of philanthropy to provide risk capital. We hear about huge risks that pay off. And, once in a while, we hear about some that don’t.

Here at CEP, we try hard to take smart risks. But we're like any organization taking real risks: sometimes they don't work out.

Our efforts over the last couple of years to scale up a data visualization tool we called the “Strategy Landscape” fall into this category. In making the tough decision to stop offering Strategy Landscapes, we learned something about tools for funder coordination and collaboration, and I want to share some thoughts in the hope that they might have value for others seeking to facilitate collaboration.

In 2010, we were actively thinking about how to provide tools that could help funders understand and act on information about funding strategies in the fields and communities in which they work. So when Monitor Institute (now Monitor Deloitte) approached CEP about a tool they were developing to classify and visualize grantmaking strategies across funders, we were excited. It soon became clear that the Monitor Institute Strategy Landscape addressed two important challenges we’d observed:

  • Funders often find it difficult to articulate and communicate strategies internally and with peer funders
  • Collaboration and coordination across funders at the level of strategy seems somewhat limited

In 2011, CEP began a pilot initiative with Monitor Institute to develop Strategy Landscapes to facilitate deeper understanding, enhanced communication, and more collaborative decision making among groups of funders. We recruited three pilot groups—funders working in Detroit, New Mexico, and on K-12 education. By the end of 2012, the tool had been used by five collaborations and one large funder that wanted to map its “internal” strategic landscape.

In total, the tool brought more than forty funders together to focus on their strategies and grantmaking. We are proud of the success of these efforts and the relationship-building and group discussions the tools have helped catalyze. We witnessed some fantastic conversations and we hope the tools will stimulate many more. Nevertheless, we have concluded that there just isn’t enough sustained demand for the kind of comparative, standardized strategy information the Strategy Landscape provides to continue offering it.

A few lessons in particular stand out for me and my colleagues.

The incredible motivation and interest of a few early groups of funders that really craved this kind of information led us to overestimate the demand and believe that examples of use would snowball and generate further enthusiasm. We were wrong. (We should have realized that earlier, and avoided turning a little bet into a somewhat bigger one.) We also didn't accurately anticipate the level of technological flexibility the tool would need when, inevitably, groups wanted to use it differently than we had planned. We fell down on both of those counts.

We also learned some things about the challenges of facilitating coordination and collaboration across funders:

  • Many funders didn’t categorize individual grants by strategy or geographic target. (And for some funders, as we knew from our previous research, strategies didn’t exist at all.) It took more staff time to code grants and strategy information than we expected. We also thought it would be relatively easy for funders to add this level of tracking to their grants management systems to facilitate future updates of the tool, but it wasn’t.
  • Many funders were interested in a strategy visualization tool primarily during moments of major planning. We knew up front that daily or weekly use was probably necessary to motivate continued engagement with, and updating of, the tool. Instead, use patterns pointed toward much more periodic use. I think we succumbed to a classic “if we build it, they will come” temptation. Furthermore, CEP doesn’t provide the on-the-ground presence in fields and communities that could have helped increase use of the tool at other important moments.
  • Some funders believed their grantmaking strategies were unique. We encountered resistance, especially from funders that did have clear, defined strategies, to even the modest generalizations necessary to group strategies from different funders together. There was much more energy for focusing on what made strategies different than on what made them similar. Individual funders wanted the “standardized” taxonomy to reflect their particular language and strategies.
  • Funders were price sensitive about tools to facilitate strategy coordination and collaboration. It’s clear that the Strategy Landscape was viewed as a “nice to have,” not a “need to have.” It also had high fixed costs, because the tool was designed to be quite customized for each group using it in a new content area or community. Overall, each Strategy Landscape cost about $100K. Split across 10 or so foundations, that’s not a huge cost for any one of them, but the initial price tag was scary, and coordinating everyone to chip in was extremely time-consuming.

Put it all together, and we learned that this was too complex and customized a tool—and therefore too expensive and time-consuming—for what most funders want right now. We’re still talking with Monitor Deloitte about whether there’s a way to open-source the Strategy Landscape, making it more cost effective and potentially easier to implement for funders interested in visualizing their strategies. For example, CEP could lightly support a group of funders implementing it on their own. But the time that has been required of CEP for cross-funder coordination, data collection, and data cleaning leaves us a bit skeptical about whether even an open-source version could be practical.

We still believe that visualizing customized information about how grants across foundations roll up into common categories of strategy, geographic targeting, and tactics (policy, direct service, etc.) could be very useful as funders work to address common issues. But this just wasn’t the right tool at the right time.

Kevin Bolduc is Vice President of Assessment Tools at the Center for Effective Philanthropy. You can find him on Twitter @kmbolduc.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
