Sifting Through the “Research”

An op-ed in The New York Times on March 8 titled “Does This Ad Make Me Fat?” points out that a recent journal article in BMC Public Health, which established a correlation between junk food advertising and obesity, quickly made an unjustified causal leap.

According to the Times piece, the journal article—which in full disclosure I have not read—goes so far as to suggest that “policy approaches may be important to reduce the amount of food advertising in urban areas.”

Christopher Chabris and Daniel Simons, the authors of the op-ed, will have none of it, pointing out that the “policy recommendations rest on a crucial but unjustified assumption: that any link between obesity and advertising occurs because more advertising causes higher rates of obesity. But the study at hand showed only an association: people living in areas with more food ads were more likely to be obese than people living in areas with fewer food ads.”

As Chabris and Simons note, “it is easy to imagine how the causation could run the opposite way … If food vendors believe obese people are more likely than non-obese people to buy their products, they will place more ads in areas where obese people already live.”

To drive home their point, they write, “Suppose we counted ads for fitness-oriented products like bicycles and bottled water, and found more of those ads in places with less obesity. Would it then be wise anti-obesity policy to subsidize such ads? Or would the smarter conclusion be that the fitness companies suspect that the obese are less likely than the fit to buy their products?”
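To see the reverse-causation point concretely, here is a minimal simulation of my own (an illustration, not data from the study): obesity rates are generated first, and advertisers then place more ads where obesity is already high. The resulting association between ads and obesity looks exactly like the one the “ads cause obesity” reading would predict.

```python
# Toy simulation: causation runs from obesity to ads, yet the observed
# correlation is strongly positive, indistinguishable (from the data alone)
# from a world in which the ads were doing the causing.
import random

random.seed(42)

neighborhoods = 1000
obesity_rate = [random.uniform(0.10, 0.40) for _ in range(neighborhoods)]

# Vendors target high-obesity areas: ad counts depend on obesity, plus noise.
ad_count = [round(50 * rate + random.gauss(0, 2)) for rate in obesity_rate]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(f"Correlation between ads and obesity: {pearson(ad_count, obesity_rate):.2f}")
# Prints a strong positive correlation even though, by construction,
# the ads in this simulated world cause nothing at all.
```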

Blurring correlation and causation is an all-too-frequent error—and Chabris and Simons make their argument well.

But this is just one of many research sins committed by those eager to hype their findings. In my latest column in the Chronicle of Philanthropy, I suggest five simple questions readers should ask when reading research about the nonprofit sector: What was the methodology used? Is the conclusion warranted? Is this really research at all? Has other relevant research been done on this topic? Who paid for it?

While I focus in the Chronicle on the nonprofit sector, I’d argue these questions are just as important when reading Harvard Business Review as when reading Stanford Social Innovation Review.

Look, the journal article discussed in the Times op-ed at least sounds as if it is based on real, rigorous, large-scale data collection: the mistake the authors made was in drawing the wrong conclusion (not a trivial error).

But I am amazed by how much is put out in the business and social sector press that purports to be research but simply isn’t—where there is little or no actual data collection or rigorous analysis. Too often, consulting firms, in particular, dress up their anecdotal experiences with clients as “research” when it is nothing of the kind. And too often, it’s difficult for the reader to even find out that what is being touted authoritatively is based on just a few consulting engagements.

As I argue in my Chronicle piece, this matters! It matters because people change their practices in response to this stuff.

We all need to become more discerning readers.


Phil Buchanan is President of CEP and a regular columnist for the Chronicle of Philanthropy. You can find him on Twitter @PhilCEP.
