Improving the transparency of management research

The last three posts have described the susceptibility of management research to distortion – confusing correlation for causality, suffering from historians’ fallacies and disregarding inherent bias in the data collected. This susceptibility suggests that the posited conclusions of any management research should be treated with an extra dose of scepticism (over and above the normal levels of scepticism required to maintain the focus on refutation that defines the scientific method), especially by the channels that communicate these conclusions to the managerial classes. But this sceptical perspective appears to be missing.

The term management ‘science’ is a bit misleading, as it is more quasi-academic pop science than the genuine article. This relegation stems from its in-built conflicts. Most management research is undertaken by business school professors, yet it needs to be accessible to business people and written in everyday language rather than academic-speak. To gain traction it also needs to be compelling – single-minded in its conclusions, certain in its tone and prescriptive in its recommended actions.

This does not always sit well with a hypothesis-driven approach which recognises incomplete data and reflects it in nuanced conclusions that frequently recommend more data be collected or generated. (As the behavioural economist Dan Ariely has noted, business people like recommendations that enable them to take action – even if the advice is based more on intuition than hard evidence – rather than recommendations that they should run experiments to collect the information required to make a definitive decision.) The result is a hybrid – a method which apes the scientific approach, for example in the emphasis given to research, without always sustaining the necessary rigour in the evaluation of the findings.

This dichotomy is encapsulated by the Harvard Business Review. In its notes to authors for magazine submissions, it asks ‘What research have you conducted to support the argument in your article?’ It also lists a number of questions the editorial team will ask itself when evaluating submissions, none of which covers the validity of the research findings or the rigour of the methodology employed. As a result, there is no peer review, nor any insistence that the research should previously have been published in a peer-reviewed journal.

Is that a problem? Not in the view of HBR editor Adi Ignatius, who responded:

“I understand your concern that we could end up relying on poor research. But that hasn’t happened, at least in any sense that I’m aware of. If I felt that people were abusing our trust, or if our authors’ results were being questioned by knowledgeable readers, I’d have more of a concern. But the system seems to be working in terms of both accessibility and rigor, as far as I can tell.”

As the above comment makes clear, HBR relies on a combination of trust and self-regulation. Trust assumes integrity on the part of authors. That may seem old-fashioned (and all the more refreshing because of it), but the more pertinent question is whether it is outdated – not so much quaint as erroneous in these post-modernist times.

One reason why it may be was summarized in an article by the economist John Kay a couple of years back on the rise of pseudo-empiricism, or “the age of the bogus survey”1 as he called it. Kay compared real research, which “has the objective of yielding new information”, with bogus surveys that “are designed to generate publicity” through eye-catching statistics. And for a certain type of professional, where better to publicize your questionable insights and pet theories than in a prestigious journal that confers credibility but doesn’t dig too deeply into their basis?

Such bias may seem more of a risk with authors from commercial institutions, who account for only a minority of published articles in HBR – and then more often to showcase competence and case studies than research. But it is not hard to picture an ambitious academic (the type who likes seeing his or her name in HBR) who makes great intuitive leaps but finds the data inconclusive, yet sticks with the original instincts if there is no risk of the findings being challenged.

It is easy to create a caricature then cast aspersions at it, but the point is that we are all irrational and biased to a certain degree, even those of us who hold rationality in the highest esteem. As Michael Schrage of MIT (and a regular HBR contributor) has argued: “Science as an enterprise may be objective; scientists as individuals are not. Anyone who has participated in peer reviews or research grant committees knows this. Scientists can be as vulgar, pig-headed and contemptuously dismissive of contrary evidence as any lawyer, civil servant, journalist or elite professional.”2

What Schrage’s argument implies is that a policy of trust is appropriate at the collective level but is inappropriate for submissions from individuals. Indeed one could argue from his premise that trusting – as in the absence of scepticism about – the conclusions of one person or a small group of co-authors should undermine our trust in the rationality of the collective.

This brings us on to Ignatius’s second pillar – self-regulation, specifically “if our authors’ results were being questioned by knowledgeable readers, I’d have more of a concern.” The problem here is that questioning results is hard to do when you are presented only with conclusions, without any insight into the research process – what was measured, how it was measured and what was found. As Bad Science author Ben Goldacre has described: “in a research paper there is a very clear format: you have the methods and results section, the meat, where you describe what was done, and what was measured; and then you have the conclusions section, quite separately… Often you cannot trust researchers to come up with a satisfactory conclusion on their results – they might be really excited about one theory – and you need to check their actual experiments to form your own view.” HBR readers are informed by authors that “my research shows…” without being enabled to see for themselves whether such confident assertion is justified. And if there is no requirement that data be made available for readers to scrutinize, to see whether alternative conclusions are plausible, the defence from self-regulation is much weakened.

The good news is that this pillar can be strengthened relatively easily. All the editorial team need do is insist that, for a research-based article to be published, the authors supply the underlying data so that anyone with a desire to review it can do so. (Ideally, non-aggregated, respondent-level data would be made available: the more detail provided, the better this self-regulation would work.) The datasets would be uploaded to the HBR web site, so the published article could retain its appeal to those more interested in the ideas than in the intellectual integrity behind them. The only burden the article would have to bear is a URL pointing to where the dataset has been posted, so that those with an interest could find it.

Complementing this (and removing any questions about lack of peer review) would be a web-based forum for wiki-review of such research. In contrast to peer review, which is anonymous, wiki-reviewers would be identifiable and would have to meet various criteria before being allowed to comment. To ensure balance, there would also have to be a facility for authors to respond to any criticisms. These discussions would create more content for HBR and draw more people to its web site (nothing draws a crowd like a good argument).

Management research has a tendency toward being huff (hot air) and puff (self-promotion). Avoiding both fates requires more transparency. Greater transparency of research findings would reduce the huff, specifically the ability of authors to peddle pet theories on the basis of inconclusive research. Reducing puff – specifically business school professors using companies they have consulted to as case studies for their own ideas without revealing this commercial relationship – requires greater levels of disclosure (as argued here in an earlier post). Together such actions would help move management research from cod/pop/pseudo-science to closer to the genuine article.

1 John Kay, ‘Research that aids publicists and not the public’, Financial Times, 30 October 2007

2 Michael Schrage, ‘Science must be more political’, Financial Times, 25 September 2007

Elusive Growth: Why prevailing practices in strategy, marketing and management education are the problem, not the solution, by Jack Springman, will be published in Summer 2011




About Jack Springman

I am a consultant with experience in business strategy and customer strategy development, customer management and customer service transformation.