Evaluating the 2016 Campaign: Lessons for Assessing Philanthropy?

Bob Hughes

A week before the election, a friend of mine, worried about the outcome, told me she consulted FiveThirtyEight’s updates every day to relieve her anxiety. And I’m sure she wasn’t the only one. During election season, it was just the thing to do, and anyone who was even a tad interested in the results knew how to find a site that claimed to offer the most accurate information about such an important event for our country.

During the days counting down to the election, I couldn’t help but think about my own line of work and what we in philanthropy use to assess how we are doing. Unfortunately, our equivalent of poll results is not very well developed, as documented in the recent CEP/CEI report on foundation evaluation practices. At least the pollsters have a clear day of reckoning (the actual election). That is public accountability. We have too little of it in philanthropy, and what we do have is rarely definitive. Lots of possible reasons are described in the CEP/CEI report; the one that caught my eye is that evaluation findings are shared with the general public “quite a bit or a lot” only 14 percent of the time.

Evaluating our work is a big deal in the philanthropic world! It’s how we determine whether we’re on the right track toward having an impact. Many foundations consider themselves learning organizations, yet we cringe at the thought of sharing where our experiments have gone wrong. The polls, the pollsters, and the pundits who relied on them dominated the discussion of the presidential campaign. And they got it wrong.

Many explanations have been offered for the polling missteps, but here I’d like to focus on the polls’ underlying theory. In a recent article, Michael Quinn Patton and his co-authors quote Kurt Lewin: “There is nothing so practical as a good theory.” That rings true for foundations and the work we do. Much of our work is based on theory and risk-taking, with high hopes of catalyzing change.

The polls in the 2016 election were based on the theory that the best way to assess performance during the campaign, and to predict actual voting behavior, is to ask people how they would vote. The findings (all the data, the polls, the subgroup analyses, and so on) rested on this theory and on the methods used to apply it (sample sizes, question wording, and sampling frames, to name a few).
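Even granting that theory, the methods matter enormously. As a reminder of how just one of them (sample size) constrains what a poll can claim, here is a back-of-the-envelope sketch of the standard 95 percent margin-of-error calculation for a simple random sample. Real polls use weighting and more complex designs, so treat this as illustrative only:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random
    sample of size n (ignores weighting and design effects)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 600-person poll showing a candidate at 48% carries roughly a
# +/- 4-point margin -- wide enough to hide a close race entirely.
for n in (600, 1500, 5000):
    print(n, round(100 * margin_of_error(0.48, n), 1))
# 600 4.0
# 1500 2.5
# 5000 1.4
```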

An alternative theory was developed by American University historian Allan Lichtman, who co-developed (with a geophysicist!) 13 yes/no questions based on an analysis of the fundamentals that have influenced elections since the Civil War. In September, he predicted the election of Donald Trump.
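Lichtman’s model is worth pausing on because it is, at heart, a simple decision rule: each key is a true/false statement, “true” favors the party holding the White House, and when six or more keys come up false, the model predicts the incumbent party loses. A minimal sketch of that rule (the key names are paraphrased, and the sample answers are placeholders, not Lichtman’s actual 2016 calls):

```python
# Lichtman's "13 Keys" as a decision rule: True favors the party
# holding the White House; six or more False keys predict it loses.
KEYS = [
    "party mandate", "no serious primary contest", "incumbent running",
    "no significant third party", "strong short-term economy",
    "strong long-term economy", "major policy change", "no social unrest",
    "no major scandal", "no foreign/military failure",
    "major foreign/military success", "charismatic incumbent-party candidate",
    "uncharismatic challenger",
]

def predict(answers: dict) -> str:
    """Apply the six-false-keys threshold to a dict of True/False calls."""
    false_keys = [k for k in KEYS if not answers[k]]
    verdict = "loses" if len(false_keys) >= 6 else "wins"
    return f"{len(false_keys)} key(s) false -> incumbent party {verdict}"

# Illustrative only: supply your own True/False judgment for each key.
sample = {k: True for k in KEYS}
sample["incumbent running"] = False
print(predict(sample))  # "1 key(s) false -> incumbent party wins"
```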

A noteworthy feature of the theory underlying the polls is that it also supplies the framework for post-hoc explanations of failure. Attention has focused on “non-college-educated whites” as the explanation for Trump’s victory. Of course, segmenting voters by demographic characteristics fits the polling theory. But a recent article in The Economist identifies a composite indicator that explains more of Trump’s improvement over Romney’s 2012 performance than the share of non-college-educated whites does. The indicator: a county-level health index based on life expectancy, obesity prevalence, diabetes, heavy drinking, and low physical activity. (Health philanthropists, take special note!) The article notes, “If an additional 8 percent of people in Pennsylvania engaged in regular physical activity and if heavy drinking in Wisconsin were 5 percent lower, Mrs. Clinton would be set to enter the White House.”
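The Economist does not publish its exact formula, but composite indices of this kind are typically built by standardizing each county-level metric and averaging them. A hypothetical sketch of how such an index might be assembled; the county names, values, equal weights, and sign conventions here are all my own assumptions, not The Economist’s method:

```python
from statistics import mean, pstdev

# Hypothetical county-level metrics (all values invented for illustration).
# Higher index = worse health, following the article's framing.
counties = {
    "County A": {"life_expectancy": 79.1, "obesity": 0.28, "diabetes": 0.09,
                 "heavy_drinking": 0.18, "inactivity": 0.22},
    "County B": {"life_expectancy": 75.4, "obesity": 0.36, "diabetes": 0.13,
                 "heavy_drinking": 0.24, "inactivity": 0.31},
    "County C": {"life_expectancy": 77.8, "obesity": 0.31, "diabetes": 0.11,
                 "heavy_drinking": 0.15, "inactivity": 0.26},
}

METRICS = ["life_expectancy", "obesity", "diabetes",
           "heavy_drinking", "inactivity"]
# Life expectancy is flipped so that higher always means worse health.
SIGNS = {"life_expectancy": -1, "obesity": 1, "diabetes": 1,
         "heavy_drinking": 1, "inactivity": 1}

def health_index(data: dict) -> dict:
    """Equal-weight average of z-scored metrics (an assumed construction)."""
    scores = {name: 0.0 for name in data}
    for m in METRICS:
        values = [data[c][m] for c in data]
        mu, sigma = mean(values), pstdev(values)
        for c in data:
            scores[c] += SIGNS[m] * (data[c][m] - mu) / sigma / len(METRICS)
    return scores

# Rank counties from worst to best on the composite index.
for county, score in sorted(health_index(counties).items(),
                            key=lambda kv: -kv[1]):
    print(f"{county}: {score:+.2f}")
```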

What can we take away from assessing the performance of the polls in this election? One potential lesson is to take theory seriously. In philanthropy we are beginning to appreciate that evaluation needs to align with every other aspect of a foundation’s work to be effective. A foundation’s distinctive theory of philanthropy is essential context for effective evaluation.

Another lesson is to examine the assumptions embedded in our work. Greater public accountability could help establish an atmosphere in which our assumptions are more likely to be brought to light and critically examined.

Finally, I’d like to suggest that we systematically counteract our own biases: in the way we think, the values we espouse, and the theories in which we have invested. Predicting an election is much simpler than assessing the effectiveness of work on complex, evolving problems. Perhaps we should look for new accountability mechanisms, like a philanthropic equivalent of the Catholic Church’s (former) Devil’s Advocate or The New York Times’ public editor. At least we know the polls got this one wrong. In philanthropy, what are our “polls” telling us?

Bob Hughes is president and CEO of the Missouri Foundation for Health. Follow the Foundation on Twitter at @MoFoundHealth.
