The New Rules of Strategy – Part 1

Employ analysis to inform intuitions, not justify them…

As argued in the previous post, the value of analysis in the strategy development process has become exaggerated.  This does not mean it is valueless; merely that it is an overemphasized part of the toolkit.  Understanding where it is valuable helps define the limits of what it brings.  And there are three areas where it is definitely helpful to strategy development – firstly in understanding the current situation, secondly in using statistical techniques to support shorter-term strategic decisions and finally (and most importantly) in how the analytical process helps to inform the intuitions on which longer-term decisions need to be based.

Analysis is powerful when it is backward facing and based upon facts.  And the starting point of any strategy development process should be a thorough understanding of the business as it currently stands – financially, competitively and operationally.  This requires detailed analyses of recent performance in all areas.

This includes understanding where the business makes money – which products, markets, segments or customers are most profitable and which are least, what the trends in volumes, prices and costs have been and what the key drivers of those trends are.  It also means knowing which served markets have been growing fastest, how well the business is positioned in each, its market-share trend and which competitors have been most successful (and why).  Finally, it requires an appreciation of capabilities relative to principal competitors – where the business is ahead and where it is behind – as well as the opportunities for improving either efficiency or effectiveness.

All these insights are delivered by analysis.  And the resulting understanding provides a solid foundation for strategy development, ensuring that decisions are made – as much as they ever can be – on truths rather than myths about current performance.

Secondly, analysis improves the quality of short-term strategic decisions. The assumption underlying any data-driven decision-making is that the period over which the decision will play out will be very similar to the period for which data has been analyzed.  Hence the shorter the timescale being considered, the more reasonable the assumption that the future will resemble the past.  And the richer and more plentiful the data, the more valid the analysis can be.

One example is banks profitably using their experience of which customers have repaid loans and which have defaulted to build regression-based algorithms – based on criteria that can be captured before a lending decision is made – that define who should be offered loans and who should not.  Similarly, marketing functions use existing data to identify the characteristics of the customers they can most easily acquire.
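To make the mechanics concrete, here is a minimal sketch of such a regression-based scoring model.  The post does not describe any particular implementation, so the library choice (scikit-learn), the applicant features and the approval threshold are all illustrative assumptions rather than anyone's actual method:

```python
# A minimal sketch (not any bank's actual model): training a regression-based
# credit-scoring model on historical lending outcomes. The feature names,
# the data and the approval threshold are all invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: criteria that can be captured before the
# lending decision. Columns: income, years_at_address, debt_ratio, prior_defaults.
X_history = np.array([
    [52_000, 6, 0.25, 0],
    [31_000, 1, 0.60, 1],
    [78_000, 9, 0.15, 0],
    [24_000, 2, 0.70, 2],
])
# Outcomes observed after the fact: 1 = repaid, 0 = defaulted.
y_history = np.array([1, 0, 1, 0])

model = LogisticRegression(max_iter=1_000).fit(X_history, y_history)

# Score a new applicant: offer a loan only if the predicted repayment
# probability clears an (assumed) business threshold.
applicant = np.array([[45_000, 3, 0.35, 0]])
p_repay = model.predict_proba(applicant)[0, 1]
offer_loan = p_repay >= 0.8
print(f"P(repay) = {p_repay:.2f}, offer loan: {offer_loan}")
```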

Marketing functions can go further: by comparing a customer’s purchase history with those of similar customers, marketers can identify which product or service that customer is most likely to buy next.  Finally, by capturing the patterns of behaviour that preceded the defection of past customers, they can spot which existing customers are at risk and should be given special attention.  In all these cases, the assumption is that current customers will behave much as comparable customers behaved in the past – usually a reasonable one to make.  The current popularity of analytics is testament to the validity of statistics-based models, especially when the algorithms are refined and improve over time.   And such approaches help to improve strategic decision-making on product and customer prioritization.
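Again purely as an illustration – the customers, products and purchase histories below are invented – a similarity-based “next best product” suggestion might be sketched like this:

```python
# A minimal sketch, not a production recommender: inferring a customer's
# likely next purchase from the purchase histories of similar customers.
import numpy as np

products = ["current_acct", "credit_card", "mortgage", "insurance"]

# Rows = existing customers, columns = products (1 = already holds it).
history = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [1, 1, 1, 1],
])

target = np.array([1, 1, 0, 0])  # the customer we want to advise

# Cosine similarity between the target and each past customer.
sims = history @ target / (np.linalg.norm(history, axis=1) * np.linalg.norm(target))

# Weight each product by the similarity of the customers who hold it,
# then suggest the highest-scoring product the target does not yet own.
scores = sims @ history
scores[target == 1] = -np.inf  # exclude products already held
print("Next best product:", products[int(np.argmax(scores))])
```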

The longer the timescales being considered, the greater the part intuition plays in decision-making.  Evidence appears only after the fact, so assumptions about what will transpire have to be made, and that requires judgment.  As outlined below, treating the future as something that can be analyzed creates blinkers and false confidence in decision-making.  As Warren Buffett dryly noted: “Any business craving of the leader, however foolish, will quickly be supported by…studies prepared by his troops.”  Because its assumptions are so variable, forward-facing analysis is flexible and can deliver whatever answer is desired.  If those assumptions are treated as facts rather than as estimates with a potentially significant margin of error, overconfidence and poor decisions are likely to ensue.

But used appropriately, a structured analytical approach helps strategy development reverse the flow Buffett noted – rather than analysis justifying pre-conceived intuitions, analysis informs intuition.  Both the frequency and the scale of errors of intuition are reduced when a structured discovery process ensures a fully rounded perspective is taken.  (While this might seem too obvious to state, it runs counter to the appetite for visionary leaders, for whom no amount of structured process will change what they believe will happen.)

Whereas the answer generated by any attempted analysis of the long-term future has limited validity, due to the pliable nature of its assumptions, the research process itself is valuable.  Collecting evidence from comparable situations in the past highlights which assumptions are most robust (with a relatively low margin of error) and which are little more than shots in the dark.  It is this knowledge that helps to improve the accuracy of intuitive judgments.   When the focus is on the answer, the risk that analytical studies simply confirm existing preconceptions is high.  But when there is recognition that the value resides in the process, the chances of more valid decisions are higher.

Daniel Kahneman describes one example of this in Thinking, Fast and Slow. When doing his national service in the Israeli army, Kahneman developed a scorecard for assessing recruits.  It focused on objective criteria but did not exclude subjective judgment entirely: the final question asked interviewers to close their eyes, imagine the recruit as a soldier and give a score.  The combination of highly structured and intuitive evaluation worked well and kept everyone happy.  The importance of intuitive judgment was preserved, ensuring the interviewers supported the new approach – most people prefer to trust their instincts over an algorithm.  In turn, the structured, objective process that preceded it improved the predictive effectiveness of the ‘close your eyes’ score.

If analyses are used to justify existing intuitions, they are of limited value at best and, by hardening false prejudices, very damaging at worst.  But if structured processes of research and discovery are used to genuinely inform intuitive judgments, those judgments are likely to be better than they would otherwise have been.

Such a change requires a fundamental shift in attitude and corporate culture.  A hubristic need for certitude must be replaced by an acceptance that constructive skepticism is a more sustainable approach.   It may also require organizational changes, particularly with regard to the roles and responsibilities of the strategy function.

Strategy departments often fall between two stools. Either they are glorified analysis departments, providing pretty graphs to back up the Board’s hunches, or they are expected – not least by themselves – to define completely the strategy that operational teams will implement, when they rarely have the insight into operational realities to do so.

The above suggests a third alternative, with the strategy function acting like an internal audit team – reviewing management proposals, testing them for analytical robustness and screening them for personal bias.  Such a role would involve a different and formally defined brief.  In the same way that internal auditors are given formal independence and report directly to the CEO, the team performing this strategic audit role would probably need to report to the Chairman of the Board rather than the CEO.

…And use scorecards to ensure a structured approach to strategic evaluations…

One way to ensure a structured approach to evaluations is to use checklists or scorecards.  These ensure that all relevant factors are taken into account and help to reduce time-based inconsistencies – treating identical opportunities or candidates differently when there is a gap between evaluations (for example, accepting on one occasion but rejecting on another).

Scorecards based on objective data with statistically valid predictive characteristics – such as those used to determine whether a customer should be allowed a loan or offered a particular product – are obviously the most robust.  Those based on subjective criteria offer more opportunity for bias as judgment rather than facts provides the scores.

Despite this, scorecards or checklists still ensure that whatever is being evaluated is viewed from all perspectives.  They counter the risk of disproportionate weight being given to just one or two factors due to the affect heuristic (a positive score in one area leading to the whole being viewed positively) – a risk with less formalized processes.  Completed scorecards also provide a record for future assessment of decision quality – which variables were incorrectly assessed, which ones were given too much weight and which ones were missed altogether.

So how could strategists use scorecards?  The most obvious answer is in supporting acquisitions and divestments.  In the case of divestments, the endowment effect encourages overvaluation of businesses that are currently owned.  Generating a simple score (i.e. one in which none of the factors is weighted) would help counterbalance this effect.

Equally, 70-90% of acquisitions fail to create value for the acquiring company.  Many reasons are given for this (integration failure is currently the most fashionable), but common to them all is one simple factor – the assumptions used to predict future profits, whether from future growth or from synergies from combining operations, were over-optimistic and the confidence placed in them was too high.

Existing M&A due diligence practices involve an assessment of multiple external and internal factors.   These include the growth of served markets; the degree of market fragmentation or consolidation; the company’s competitive position in each one; customer satisfaction; profitability relative to competitors; the industry experience of the management team; the quality of plant, machinery and other physical assets; the quality of the enterprise IT systems and the management information they generate; the opportunity for savings (or risk of increase) in cost of goods sold; the opportunity for savings (or risk of increase) in selling, general and administration costs; the opportunity to grow revenue and profits; and the threat to existing revenues from new entrants or increased competition.

All these factors could be formalized in a scorecard by turning each dimension into a five-point scale (labeled -2 to +2 so that weaknesses generate negative scores) based on pre-determined benchmarks covering the full range of positive and negative possibilities.  (For multi-business companies, a scorecard would be developed for each business and an overall one then created by weighting according to revenues.)

The unadjusted overall score would be a simple measure of attractiveness.  Weights could be applied to different factors for more sophistication, though at the risk of increasing bias – there would be a temptation to adjust the weighting until the desired outcome is achieved.  Even so, doing this helps to highlight the factors that are implicitly being given greatest (and least) weight.   Both unweighted and weighted scores could be tracked to see which provided the more accurate prediction of success; the sketch below shows how the arithmetic could work.
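As a simple illustration of that arithmetic – the factors, scores, factor weights and revenue split below are all invented for the purpose – the unweighted and weighted roll-ups might be computed as follows:

```python
# A minimal sketch of the scorecard arithmetic described above; the factors,
# scores (-2 to +2), factor weights and revenue split are illustrative only.
business_units = {
    # unit: (annual revenue, {factor: score on the -2..+2 scale})
    "Unit A": (300, {"market growth": +2, "competitive position": +1,
                     "management experience": 0, "cost synergies": -1}),
    "Unit B": (100, {"market growth": -1, "competitive position": +1,
                     "management experience": +2, "cost synergies": 0}),
}

# Optional factor weights; the unweighted case treats every factor equally.
factor_weights = {"market growth": 2.0, "competitive position": 1.5,
                  "management experience": 1.0, "cost synergies": 1.0}

def unit_score(scores, weights=None):
    """Sum of factor scores, optionally weighted per factor."""
    if weights is None:
        return sum(scores.values())
    return sum(weights[f] * s for f, s in scores.items())

total_revenue = sum(rev for rev, _ in business_units.values())

for label, weights in [("unweighted", None), ("weighted", factor_weights)]:
    # Roll up: each unit's score weighted by its share of group revenue.
    overall = sum((rev / total_revenue) * unit_score(scores, weights)
                  for rev, scores in business_units.values())
    print(f"{label} overall score: {overall:+.2f}")
```

Tracking both totals against eventual outcomes, as the paragraph above suggests, is then just a matter of recording them alongside the decision.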

Scorecards of this type provide a structured approach to decision-making but are not an alternative to the judgment of experienced managers.  People tend to trust their intuition (hence the likelihood of weightings being adjusted so that the scorecard reflects gut instinct).  As the recruitment scorecard story told by Kahneman shows, people are reluctant to trust an algorithm.  He only gained support for his suggestion when he allowed the final score to be an intuitive one and for it to carry significant weight in the final evaluation.  And the predictive quality of that final assessment was much improved by the process that preceded it.

M&A will continue to be important for renewing or enhancing capabilities and expanding reach.  Making a success of acquisitions will be critical to corporate longevity.  Yet the dismal success rate highlights that current evaluation processes are not effective.  Sceptics could argue that a simple scorecard will add little to what existing practices already seek to achieve.  But it is this very simplicity that helps to highlight where bias is potentially distorting the evaluation process.  And anything that reduces the risk of bias will provide a much-needed boost to the chances of success.

… But recognize the limitations of analysis

Analysis helps illuminate the past but it has limited value for predicting how the future will unfold.  Even worse, the illusion of validity creates false confidence in whatever answer is generated.

A number of years ago Gary Hamel observed: “The dirty little secret of the strategy industry is that it doesn’t have a theory of strategy creation.”  Hamel obviously only gets dirty on a very theoretical level, because on a practical level there is a far dirtier secret – despite all attempts to make it appear evidence-based, all strategy development is fundamentally assumptive, and no amount of research changes that.

Not that you would think this from the websites of the leading strategy consulting firms, where much is written about fact-based decision-making and rigorous analysis.  The problem is that there are no facts about the future, just predictions.  But when these predictions are purportedly justified by rigorous analysis, they take on an undeserved plausibility.  That they are little more than finely attired assumptions is ignored.

Our inability to know what will happen in the future reflects how many factors are outside our control.  The decisions of external parties – customers, channel partners, bloggers, current competitors, potential competitors, university research and development teams, suppliers, regulators, lobby groups and terrorists – all shape the business environment in which a firm has to operate.  Some may be subject to influence, very few to control.  But rather than simply acknowledge the extent of our ignorance about the future, we prefer to believe that it can be forecast.  Trust is placed in the predictions of experts – the analyses of think-tanks and the analogies promoted by business academics.  Yet as Philip Tetlock’s 20-year research study showed, expert predictions tend to be far less accurate than the assurance with which they are espoused would suggest.

This misplaced confidence arises from how such predictions are generated.  Forecasts based on detailed analysis of recent history, or on analogical reasoning – drawing parallels from how one industry has developed as guidance for the future of another – rest on the assumption that past experience is a good guide to the future.  That may be the case, but the question is which of multiple pasts across multiple industries will be the best guide to the future of the specific one in question.  The pasts that most easily spring to mind (availability bias) or seem to fit best (affect bias) are only a subset of the possibilities – something overlooked by predictors seeking only confirmatory evidence for a pet theory.

A good discipline for countering the dangers of this analogical reasoning is to make explicit the logical step that is usually glossed over.  Suppose we are considering doing X and wish to draw on the powerful analogy of Apple’s wildly successful experience with doing X.  Rather than glibly writing in our Board paper, “Apple’s experience shows that X is a good thing to do”, we should see if we are prepared to be explicit about the implicit lesson we are drawing and write “Apple’s successful experience with X proves that, for all time, in all situations and for all companies, X is a sure-fire winner.”  If we are not prepared to write this, it should make us consider more carefully the complex ways in which Apple’s external and internal situation differs from ours.

On top of these predictions of the future are layered hypothesized cause-and-effect relationships – that if the business does A it will increase revenue by B and profits by C.   As highlighted in the section on fallacies of causation, our discomfort with uncertainty encourages us to deduce causality where its existence is questionable.  We want to believe that the future can be controlled, that actions taken by us (or by someone in whom we have placed our trust) will generate predictable results, and we rely on confirmation bias to support this illusion.  In the process the degree of correlation becomes overstated and correlation is assumed to mean causation.  The forecast future or hypothesized causality may be roughly right, but the possibility of their being wrong is far greater than is recognized.

It is this combination of biases – errors of prediction and fallacies of causation, compounded by confirmation bias – that gives analysis- or analogy-based strategies the property of the stereotypical Chinese meal: bloated expectations are soon replaced by the hunger of disappointment as the unstated assumptions fail to survive their first contact with reality.

The limitations of the analytical approach to dealing with uncertainty – whether that uncertainty concerns how the future will unfold or the causes of past effects when complete data is not available – need to be recognized.   Otherwise analysis becomes a tool that increases confidence by more than it increases validity, raising the risk of major strategic mistakes.
