Searching for Certitude in the Nuance of Numbers

Date: November 27, 2012

Cynthia M. Gibson

Independent Consultant

An alternate, condensed version of this post can be found on the Markets for Good blog.

We all know it. The look of sheer delight that crosses a child’s face the first time he or she toy-hammers that round peg into the round hole. Happiness is the warm blanket of certitude.

As we know, however, the road of life is littered with square pegs. And it can be messy.

That sense of messiness has become more acute in recent years, thanks to the technologically driven deluge of information in which we’re rapidly drowning. Much of that information is increasingly generated in the form of data—most recently, “big data.” The ability of the latter to cut a swath through the quagmire is seductive, particularly when new technologies or tools pop up. Like shiny toys, there’s an understandable eagerness to try them out, particularly when they promise to distill complex concepts into neatly organized patterns, variables, and factors. That, however, can lead to a rush to the toolkit, with nary a glance at what is being studied and why. Data gets collected, en masse, and then churned through some kind of process to aggregate it in various ways, depending largely on who’s asking for it and for what purpose.

That’s certainly been the case in the social sector, which has been eager to figure out how it can use data more efficiently and effectively. The result has been a torrent of data-based systems, platforms, and tools designed to help social sector organizations assess their performance, evaluate their programs, and operationalize their outcomes.

Some see this trend as progress in a sector in which “doing God’s work” was, until recently, enough to substantiate nonprofits’ value.

Others see it as the end of the universe as we know it.

But there are still others who fall somewhere in the middle—those who appreciate rigorous analysis and evidence-based practice but have legitimate questions about where and how data will be used (and is being used) in a sector where outcomes and impact can’t be easily tied up with a big bright bow.

The biggest question is: data for what? To help donors make better philanthropic investment decisions? To help nonprofits benchmark their performance against other organizations or subsectors? To help governments assess which intermediaries are achieving their goals? To provide the public with information about these organizations and what they’re doing? To help inform policy debates?

Each of those questions requires serious thinking about which variables will be used in each circumstance and for each constituency, why, for what purpose, and under which assumptions. They also underscore that data aren’t just a bunch of numbers; the numbers are merely raw material that must be analyzed, contextualized, and applied in ways that help practitioners, policymakers, beneficiaries, and investors make better decisions, improve services, or create more responsive programs or legislation.

Wrestling with all that isn’t for the faint of heart, which is why the usual fallback position is to focus on numbers, and on those that are already available, as proxies for organizational performance and effectiveness. Most of that data derives from IRS Form 990 filings, which have serious limitations, not the least of which is their emphasis on financial factors. Information related to policy or practice issues, arguably the things that nonprofits (and investors, for that matter) might find most useful, is still nonexistent or difficult to extract.

We all know of nonprofits doing extraordinary work, but because that work is complex and multi-layered, it’s difficult to measure. Thus, those groups usually don’t show up on the much-touted lists of “high-performing” organizations, which tend to feature those with bigger budgets, savvy fundraising staff, and boards with more impressive credentials. In other words, the organizations that make the lists have the stuff that’s easier to count and translates well on paper.

Distilling the panoply of “messy” relational processes that nonprofits use to do their work into measurable variables is difficult because they’re usually trying to resolve “wicked problems” that can’t be definitively described. But that doesn’t mean we shouldn’t keep trying. After all, social scientists have been trying to measure the squishy stuff for decades. Perhaps it’s time that we in the nonprofit sector figured out how to do likewise, specifically by finding ways to integrate things like personal values, emotional factors, and cultural ethos into the larger equation: a regression model that might shed more light on why donors give and to what.
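To make the idea concrete, here is one minimal sketch of what such a model might look like; the variables are purely illustrative placeholders, not measures drawn from any existing dataset:

\[
\text{Giving}_i = \beta_0 + \beta_1\,\text{Capacity}_i + \beta_2\,\text{PersonalValues}_i + \beta_3\,\text{EmotionalConnection}_i + \beta_4\,\text{CulturalEthos}_i + \varepsilon_i
\]

The hard part, of course, is operationalizing the squishy terms: turning values, emotional connection, and cultural ethos into variables that can actually be measured (through surveys, interviews, or coded qualitative data) before they can sit alongside the financial ones.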

That’s a more nuanced approach, admittedly, but it’s one that would also help nonprofits because it not only appreciates accountability but also reflects how results can vary depending on circumstances, geography, context, and other fluid variables. This approach also acknowledges that data goes beyond sheer numbers to include qualitative information that helps organizations improve their practice, rather than merely “prove” whether they succeeded or failed.

Developing and testing this more complex model, which has rarely been attempted in a rigorous way, could and should be more of a priority in the social sector. Perhaps a first step would be asking a wide range of organizations and constituencies for detailed, concrete examples of information that would be useful to them, particularly in ways that can be applied toward achieving impact across various fields (and even entire sectors). Public officials, for example, may like having more information about the number of people using subsidized child care centers, but that tells them little about whether those centers’ services are improving people’s quality of life or whether their presence has benefited the larger community.

In short, collecting data of all kinds is one thing, but analyzing it is another, and that requires smart people to put it into context. What does it mean? And is it applicable in “real life”? Just because we have stacks of outputs showing that counseling “improves people’s lives” doesn’t mean that policymakers will rush to fund programs so people have access to that service.

The bottom line? Data isn’t knowledge, let alone knowledge that matters.

Some might say that we need a more centralized way to share information of all kinds, not just numbers. But that assumes information sharing is an unfettered good, which isn’t necessarily true. Yelp might offer helpful reviews, but it’s hardly the place to get reliable information with evidence behind it. The mountain of big data being generated is exciting, but it’s also producing misinformation that can be (and already is being) manipulated by people and institutions to convey the findings they want. That leads to more echo chambers or naïve trust in questionable output.

Rather than databases full of disjointed numbers, perhaps what we really need are more trusted and reliable sources of “what works.” That will require thoughtfully evaluating and analyzing data and information through some sort of vetting process. Instead of relying solely on experts, though, perhaps that process could incorporate some of the open source practices that technology is driving: making information public while pulling in experts to weigh in on what’s real and what’s noise. After all, research shows that the best decisions are made when “real people” and experts work together.

Another thorny problem that existing databases have yet to crack is how to assess impact. While there’s a lot of discussion about this, it’s been daunting to put into practice. To help move that process along, perhaps we could all agree that focusing on numbers alone to assess impact won’t do the trick. That’s because impact goes beyond outputs and outcomes to affect something bigger, whether that’s public policies or cultural practices. Some, like Larry McGill of the Foundation Center, say that even those measures may be inadequate for assessing impact, which should focus on “making change.” Even if an organization did play a role in making policy changes, did those changes make any shred of difference?

Getting to those answers will take time, but one thing we could do now is require organizations (including funding institutions) to give concrete examples of how they envision achieving impact and how they’ll operationalize it, beyond a list of outputs the organization itself drummed up. After all, what better measure of impact is there than investments or activities gaining traction beyond the outcomes organizations or donors stipulated they wanted to see?

Yes, these are messy issues for messy times, but if we can commit to doubling down on them, rather than seeing data, by itself, as a magic bullet, we would take a huge step forward in advancing our understanding and appreciation of how data, in all its forms, can be more thoughtfully applied and used by all those who care about the social sector. That may never get us to the satisfaction of pounding a round peg into a round hole, but it will make us more comfortable with the square ones that will always be with us.

Cynthia M. Gibson, Ph.D., is an independent consultant who serves as a strategist, thought leader, and writer for a wide range of national nonprofits and foundations. You can find her on Twitter @Cingib.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
