
Methodology Matters: HBR’s Weak Cover Story on Work vs. Life

Date: March 24, 2014

Phil Buchanan

President, CEP


The cover story of the March 2014 Harvard Business Review (HBR) on work-life balance was of interest to me because, like most of us, I struggle with getting this right. I worry about it for myself, of course, and also for the staff of CEP. So I cracked open the issue with eager anticipation.

I was sorely disappointed, however. Rather than offering real insight into this challenge, the article left me with little faith that its findings are supported by the data collected.

Indeed, the article provides a stark illustration of a problem I am growing increasingly agitated about.

The problem is the tendency for respected publications to present research “findings” without sufficient transparency about methodology and limitations – and in a manner that can lead readers to believe that there is much more “there” there than is the case. I wrote about this problem in my column in the Chronicle of Philanthropy last year. I suggested five key questions any consumer of “research” should ask: What was the methodology used? Is the conclusion warranted? Is the “research” really research at all? Has other relevant research been done on the topic? And who paid for it?

The HBR article, by Boris Groysberg and Robin Abrahams, both of Harvard Business School (HBS), falls especially short on its methodology and conclusions. So much so that it baffles me that HBR published the article, much less put it on its cover.

Groysberg and Abrahams describe how “today’s senior executives…reconcile their personal and professional lives.” Among their “findings” are “intriguing gender differences” including that “women place more value than men do on individual achievement, having passion for their work, receiving respect, and making a difference, but less value on organizational achievement and ongoing learning and development.” They also report that “a lower percentage of women than of men list financial achievement as an aspect of personal or professional success.”

Intriguing, right?

So what formed the basis for these findings? There were two data sources for the study – interviews and a survey. The data on gender differences – indeed, all the quantitative data cited in the article – comes from the latter, according to the article (although the authors make it harder to figure that out than it should be).

So who was surveyed?

“82 executives” in a Harvard Business School leadership class.

No, I am not missing a digit. Not 820. 82.

82.

And among the 82, just 24 were women.

Seriously? The authors are drawing conclusions about gender differences and showing data in a series of charts prominently displayed on the first two pages of the article based on a survey of 24 women and 58 men? This seems downright irresponsible to me.
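
To put the sample-size objection in perspective, here is a rough back-of-the-envelope calculation of my own (it is not anything reported in the article), using the standard margin-of-error formula for a sample proportion and assuming a worst-case 50/50 split:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

women, men = 24, 58  # subgroup sizes reported in the HBR article

print(f"Women (n=24): ±{margin_of_error(women):.0%}")  # roughly ±20 percentage points
print(f"Men   (n=58): ±{margin_of_error(men):.0%}")    # roughly ±13 percentage points

# How large a gap between the two groups would need to be before it clears
# a conventional 95% threshold, again assuming worst-case 50/50 splits.
gap = 1.96 * math.sqrt(0.5 * 0.5 / women + 0.5 * 0.5 / men)
print(f"Detectable gender gap: roughly {gap:.0%}")     # roughly 24 percentage points
```

In other words, a survey this small could not reliably distinguish a gender difference of much less than about 24 percentage points from sampling noise – and that is before even asking whether the sample is representative.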

Furthermore, a group of attendees at an HBS leadership class hardly seems like a sample representative of anything other than, well, attendees at an HBS leadership class. Yet the authors frame their findings as based on what “today’s senior executives will tell you.”

So what about the interviews?

Here the numbers are much larger, thankfully: 3,850 interviews with “c-suite executives and leaders…at companies and nonprofits around the world.” But the interviews were conducted by “more than 600” second-year HBS students over a number of years and the interview approach and methods of analysis seem haphazard, at best.

“The interviews were semi-structured,” Groysberg and Abrahams write, with students “allowed considerable leeway in what to ask and how far to go in following up on responses. That way they could dig into the issues they found most compelling.”

While this approach probably made for great conversations for the second-year students fortunate enough to be able to meet with the executives they interviewed, it hardly lends itself to rigorous data-gathering or analysis. As for how the interview transcripts were analyzed – or whether there even were transcripts at all, or analyses at all – the authors provide no detail.

There is tremendous potential for inconsistency when 600 different interviewers are involved, no matter who they are. Add to that the fact that the students were instructed to “dig into the issues they found most compelling,” and the dynamic of students on the job market, whose first priority was likely landing a good job post-graduation, interviewing corporate leaders in a position to help them with their careers. (Note: I was a second-year HBS student 14 years ago; I do not recall second-year classes being taken too seriously. First year? Yes. Second year? Not so much. Second year was all about figuring out what you’d do next.)

Making matters worse, the criteria for selecting interview subjects are not explained, nor is much demographic data provided. So it’s difficult to know who these 3,850 executives are – or were – and the degree to which they are representative of leaders more broadly.

There is plenty else to critique in the article, including that it is structured in a way that will mislead most readers into thinking the statistics presented are drawn from the sample of 3,850 interviews rather than the survey of 82. I also think the piece doesn’t actually offer much in the way of insight on how to manage work vs. life (beyond “make deliberate choices,” which doesn’t seem hugely helpful). But I’ll stop there.

Why am I getting all hung up on this methodological stuff, anyway? Who cares, you might ask?

We all should, I think, because good information is necessary for good decisions. If HR managers or other leaders across companies, nonprofits, and philanthropic organizations make decisions based on the authors’ “findings” that women don’t value organizational achievement, ongoing learning and development, or financial results as much as men, they’ll be making a big mistake.

Because, the fact is, the authors don’t have enough data to support their claims.

The article was authored by a Harvard Business School professor and a research associate and published in the Harvard Business Review. My guess is that, as a result, readers would assume that what is reported is based on high-quality, rigorous research – not on the data collection methods you might expect to see in a so-so class project.

Sadly, they would be mistaken.

Phil Buchanan is President of the Center for Effective Philanthropy. You can find him on Twitter @PhilCEP.

Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.
