The New Rules of Strategy – Part 2

Accept uncertainty but seek to reduce it…

The other prevailing way of dealing with uncertainty is for businesses to conclude that strategy is irrelevant.  The argument is that as a business cannot know what will happen on a 3-5 year view, it should ignore that horizon and concentrate on being operationally effective in the here and now.  This approach carries a number of risks, notably that omission bias prevails and a focus on doing things right rather than doing the right things results in becoming stuck in markets that will inevitably shrink and become increasingly price sensitive.

There is one further alternative.  This involves strategists accepting uncertainty and recognizing it can never be eliminated but working systematically to reduce its degree, for example by running experiments.  This leads to scaling investments inversely to the level of uncertainty that exists and only committing significant funds once the process of discovery has bounded the unknowns.

Such an approach increases the chances that the strategic bets necessary for evolving with (or ahead of) the times will be successful.  Managing uncertainty is critical for corporate longevity and doing so better than competitors is critical to generating superior returns.

Uncertainty management requires scenario planning and assumption management.  The first step in scenario planning defines the areas of uncertainty – the external factors beyond the business’s control that will have a significant impact on financial success.  This will include, among other factors, customer needs, technology trends, competitive intensity, raw material costs, government regulation and societal pressures.

Next comes identifying the range of possible outcomes on each dimension of uncertainty and assessing the likelihood of each one.  For example, with technology trends the possibilities may run from incremental enhancements of what exists today to the rise of a currently unknown, disruptive technology.  Scenarios for the competitive environment could include an increase in intensity from consolidation, the arrival of a strong new entrant, or the weakening or implosion of a leading competitor due to financial difficulties.

Different scenarios will generate different optimal strategies – the markets or segments the business should focus on serving, how it should differentiate itself from competitors and the capabilities and offerings it needs to develop as a consequence.

This highlights a second level of uncertainty around cause and effect – specifically the relationship between the actions taken, the differentiation they deliver and the impact that differentiation has on profitability.  For example, will enhancing the customer experience – re-designing customer-facing processes, changing the metrics on which customer-facing staff are measured, paying more to attract staff with the right attitude, extending staff benefits to improve retention, training staff to deal empathetically with complaints, etc. – be a source of differentiation or uniqueness that competitors will find difficult to copy and customers will value?  And how will that value show up – in higher prices, lower customer churn, more business from existing customers?

Ultimately, assumptions need to be made about which future scenarios are most likely and which cause-and-effect relationships are most plausible so the best strategy – the one that performs best when all potential futures are considered – can be selected.  The optimal strategy defines the bets the company should place – the investments it will make in R&D, new production facilities, competency development and process or systems enhancements.
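
To make the selection logic concrete, here is a minimal sketch in Python – with entirely hypothetical scenarios, strategies and payoff figures – of how assumed scenario probabilities can be combined with estimated payoffs to identify the strategy that performs best across all the futures considered:

```python
# Hypothetical scenario probabilities - assumptions that should be made explicit.
scenarios = {"status_quo": 0.5, "disruptive_tech": 0.3, "new_entrant": 0.2}

# Estimated payoff (NPV, $m) of each candidate strategy under each scenario.
payoffs = {
    "defend_core":    {"status_quo": 120, "disruptive_tech": -40, "new_entrant": 30},
    "invest_in_tech": {"status_quo":  60, "disruptive_tech": 150, "new_entrant": 70},
    "diversify":      {"status_quo":  80, "disruptive_tech":  40, "new_entrant": 60},
}

# Expected payoff of each strategy across every scenario considered.
expected = {
    strategy: sum(scenarios[s] * payoff for s, payoff in by_scenario.items())
    for strategy, by_scenario in payoffs.items()
}

best = max(expected, key=expected.get)
print({k: round(v, 1) for k, v in expected.items()})
# {'defend_core': 54.0, 'invest_in_tech': 89.0, 'diversify': 64.0}
print("Selected bet:", best)  # invest_in_tech
```

The probabilities and payoffs here are themselves assumptions – which is exactly why they belong in the assumption log described below.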

Assumption management is a common practice in transformation projects but, ironically, it is not a staple of strategy development, where it is most necessary.  It starts with capturing all assumptions in a log so they are explicit and everyone is aware of the foundations on which the selected strategy is predicated.

The next step involves identifying what future streams of data or types of event will confirm or falsify the assumptions that have been made.  This information can then be actively sought – the aim being to continue the process of uncertainty reduction even after a strategic decision has been taken so corrective action can be taken if necessary.  The result is the strategic equivalent of an early warning system for monitoring whether events and causal relationships are unfolding in expected or unexpected ways.  It defines the information that would change the assessment of the best strategy to follow – the trigger point for contingency actions so that the strategy can be unwound once it becomes clear it was based on false premises.

As a result, even if the assumptions initially made are wrong, the error can be quickly spotted and corrected (and certainly not perpetuated) so the damage is limited.
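
As an illustration only, the sketch below shows what a minimal, machine-checkable version of such an assumption log might look like – all field names, statements and trigger thresholds are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str        # the explicit assumption underpinning the strategy
    metric: str           # the data stream that will confirm or falsify it
    trigger_below: float  # the level at which contingency action is triggered

    def check(self, observed: float) -> bool:
        """Return True if the assumption still holds for this observation."""
        holds = observed >= self.trigger_below
        status = "holds" if holds else "TRIGGERED - review strategy"
        print(f"{self.statement}: {self.metric} = {observed} -> {status}")
        return holds

# A miniature assumption log (statements and thresholds are illustrative).
log = [
    Assumption("Market grows at 8%+ a year", "quarterly market growth (%)", 4.0),
    Assumption("Churn stays low post-launch", "customer retention (%)", 85.0),
]

# Feed in the latest observations; the first fires the trigger, the second holds.
for assumption, observed in zip(log, [3.1, 91.0]):
    assumption.check(observed)
```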

…By embracing a crawl, walk, run philosophy…

As highlighted above, a key component of uncertainty management involves linking increased investment to reduced uncertainty.  This requires patience – an acceptance that it will take time for such investments to have a meaningful impact (which is easier said than practiced when quarterly reporting drives much business activity).  In particular, it requires accepting that the pay-off from the initial stages of an investment comes primarily in the form of the insights generated.  These early stages need to be seen as a series of experiments that yield data which improves the validity of the final decision.

The value of this ‘crawl, walk, run’ approach (as Jim Collins described it in Good to Great) is a significant reduction in the risk of resources being allocated to initiatives where they will not generate the required return.  (The downside is that it slows time to market.)  It is commonly used in R&D: for example, at each stage of its development, a pharmaceutical compound must pass pre-determined tests for further research to be funded.

But the same approach can be applied to strategic initiatives – for example, whether or not to enter a new geographic market.  The starting point could be as simple as creating a trading relationship with an agent in a new country.  The next stage would be a more formal partnership or joint venture – again with limited investment.  In certain markets, particularly those still in a high-growth phase with fragmented competition, the best entry route may be buying a small, existing player.  Since the objective is learning – the immediate revenue or profit impact of any move is secondary – the best form of entry is the one that yields the most knowledge.  What is learnt determines whether the company scales up its investment or exits.

This has major implications for the acquisition process.  An effective corporate acquisition process is critical to sustaining growth as no company can expand forever by organic means alone.  It is no surprise that research shows the most successful acquirers are those that acquire frequently, with transaction sizes typically relatively small, while the least successful are those that make occasional, large, transformational mergers.

The latter are often made necessary by a failure of strategic thinking, with the merger the required remedial action.  With time perceived as short, desperation takes hold and over-inflated prices are paid to ensure the bid succeeds.  This is especially the case when herding causes a stampede towards a select few companies with capabilities suddenly deemed critical.

Those companies that make smaller, more regular acquisitions achieve better returns for a number of reasons.  Firstly, they have more honed processes.  Secondly, they are less likely to catch deal fever (when the time and money sunk into a deal generates a determination to complete it, irrespective of whether the acquisition is still the right move at the right price).  Regular acquirers avoid the sunk cost fallacy, recognizing that such expenditures come with the territory, and hence are happier to walk away from unattractive deals, even at the last minute.

More importantly, companies that make small (relative to their own size) but frequent acquisitions take a longer-term perspective.  There are limited expectations of immediate, significant financial returns – the synergies that are systematically over-estimated where larger acquisitions are concerned.  Implicit in the approach is a recognition that value comes from incremental steps rather than grand ambitions.  And systematically reducing uncertainty ensures a higher return on investment once the decision to invest significantly is made.

The crawl, walk, run philosophy requires a gated process for strategic investments.  At each stage gate, certain criteria need to be met for funding for the next stage to be released.  The uncertainty inherent at each stage gate can be captured using Monte Carlo simulations as part of the financial modeling process.

Monte Carlo simulations reflect uncertainty by providing a range of estimates (with associated probabilities) for key assumptions rather than a point estimate.  The model is then run thousands of times generating a probability distribution across a range of results, not just a single outcome.  With a single result, the base case acts as a powerful anchoring bias, even if sensitivities to changes in key variables are also calculated.  When a range of results is the outcome, the uncertainty is made explicit.
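
As a minimal sketch of the mechanics – using a toy single-product model, invented distributions for each assumption, and numpy for the random draws:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 10_000  # number of simulation runs

# Ranges (with associated probabilities) for key assumptions, not point estimates.
units_sold  = rng.triangular(left=50_000, mode=80_000, right=140_000, size=n)
price       = rng.normal(loc=25.0, scale=3.0, size=n)   # $ per unit
unit_cost   = rng.normal(loc=14.0, scale=2.0, size=n)   # $ per unit
fixed_costs = 600_000                                   # $ per year
investment  = 900_000                                   # upfront $

# Toy three-year model: identical annual cash flows, 10% discount rate.
annual_cash = units_sold * (price - unit_cost) - fixed_costs
discount    = sum(1 / 1.10**t for t in (1, 2, 3))
npv         = annual_cash * discount - investment

# The output is a distribution of results, not a single anchoring base case.
print(f"Median NPV:      ${np.median(npv):,.0f}")
print(f"10th percentile: ${np.percentile(npv, 10):,.0f}")
print(f"90th percentile: ${np.percentile(npv, 90):,.0f}")
print(f"P(NPV > 0):      {np.mean(npv > 0):.0%}")
```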

Despite first coming to the fore with the rise of the PC in the late 1980s, Monte Carlo simulations are not as omnipresent as might be expected, not least because of the preference for a sense of false certainty over an acknowledgement that uncertainty exists.  Psychologically we prefer being exactly wrong to roughly right, as it helps make decision-making more black and white.

A range of estimates increases doubt, so the likelihood of inaction increases.  So while Monte Carlo simulations reduce the risk of proceeding with a flawed investment, they increase the risk of not proceeding when it would be the right course of action.

But when used as part of a gated process, this risk of incorrect inaction is significantly mitigated.  In the first couple of stages, the probability distribution will be wide and flat, reflecting the large range of possible results and the low probability attached to each one.  But at that point all that is required is a decision to proceed to the next stage, which should not involve a big investment.

For an initiative to progress, uncertainty should reduce at each stage: the distribution attached to each assumption narrows, as does the range of estimated outcomes (with the output distribution showing a more defined peak).  The range will never disappear, but decision-makers’ psychological comfort with its existence will increase.  As a result uncertainty is accurately represented – with all the benefits that entails – but the chances of it becoming a psychological block are reduced.
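
Continuing the sketch above, one (purely illustrative) way to express a gate criterion is to demand a higher simulated probability of a positive return before each successively larger tranche of funding is released, while the input distributions narrow as experiments resolve the unknowns:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def simulated_returns(mean: float, spread: float, n: int = 10_000):
    """Toy model: simulated return (%) given the current uncertainty (spread)."""
    return rng.normal(loc=mean, scale=spread, size=n)

# Illustrative gates: later stages demand more confidence for bigger cheques.
# (stage name, input spread remaining at that gate, required P(return > 0))
gates = [("crawl", 5.0, 0.50), ("walk", 2.5, 0.70), ("run", 1.0, 0.90)]

for stage, spread, hurdle in gates:
    p_positive = np.mean(simulated_returns(mean=3.0, spread=spread) > 0)
    verdict = "fund next stage" if p_positive >= hurdle else "hold / kill"
    print(f"{stage:5s}: P(return>0) = {p_positive:.0%} vs hurdle {hurdle:.0%} -> {verdict}")
```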

…That embodies a flexible approach to strategy development…

While uncertainty can be reduced, it cannot be eliminated.  The risk of external events or causal relationships turning out differently from what was hypothesized still needs to be mitigated.  This can only be achieved by adopting a mindset of strategic flexibility.

The starting point for this is accepting that strategy is just a bet, built on a hypothesis about the world which is probably wrong in some important ways.  This has two implications.  Firstly, diversity – of inputs and hypotheses – is a good thing: the wider the range of ideas under consideration, the greater the chances of the optimal one being selected (if not at first, then at least later).  Secondly, each and every idea needs to be treated with scepticism.

In the same way that F. Scott Fitzgerald defined the test of a first-rate intelligence as being “the ability to hold two opposed ideas in mind at the same time and still retain the ability to function,” the test of a first-rate strategy is holding a number of opposing strategic hypotheses in mind while effectively executing on the basis of one in particular.  This entails maintaining options by continually considering alternative (even conflicting) approaches so swift changes of direction can be made if the pursued strategy is failing, but not to the detriment of commitment to the current course of action.

One way to achieve this is through taking a genuinely scientific approach to management. The concept of scientific management has been around for over a century, its father being Frederick Winslow Taylor, who set up a consulting practice to spread his ideas about manufacturing efficiency in the last decade of the 19th Century.  But what is called scientific management is just a rather grand title for data-driven decision-making.  This is a necessary part of science, but not sufficient in itself for the method to be called scientific.  Other elements of the scientific method – hypothesis generation, experimentation and the focus on refutation rather than confirmation – are typically absent from what is termed scientific management.

In part this omission stems from science and management having fundamentally different objectives – scientists seek to increase knowledge while managers seek to increase profits.  The two are not incompatible as increased knowledge should lead to increased profitability, albeit over the longer term.

A genuinely scientific approach to management requires a fundamental change in attitude, particularly towards experimentation.  Experiments test what works and, more critically, what doesn’t.  When strictly applied, the scientific approach focuses on refutation on the basis it is easier to prove a negative than a positive.  In this way knowledge advances through the elimination of bad theories and ideas.  Such an attitude does not come naturally as we are programmed to be confirmation-seeking.

Also, experimentation may require doing something that is not optimal according to our current understanding.  That does not sit well with a business ethos focused on short-term maximization.  Equally, it will require treating people – staff, customers, suppliers – differently, so that a proposition can be tested with a sample population against a control group.  This contravenes our natural sense of fairness – some people will be treated worse than they might otherwise have been.  In advance we don’t know which group that will be (though a hypothesis should exist), but we do know that, by definition, some people will be disadvantaged relative to others.  And while it is easy to pay lip service to the idea that in the long term everyone should benefit from the knowledge gained, these benefits appear distant and abstract compared to the short-term, concrete discomfort that comes from violating accepted norms.
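
For illustration, here is a minimal sketch of how such a test-versus-control comparison might be read, using fabricated conversion counts and a standard two-proportion z-test (standard library only):

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return p_a, p_b, z, p_value

# Fabricated results: control kept the old proposition, test group got the new one.
p_a, p_b, z, p = two_proportion_z_test(conv_a=120, n_a=2000,   # control group
                                       conv_b=161, n_b=2000)   # test group
print(f"control {p_a:.1%} vs test {p_b:.1%}, z = {z:.2f}, p = {p:.3f}")
# A small p-value counts against the null hypothesis of 'no difference' -
# the refutation-first logic described above.
```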

For all the discomfort caused, experimentation is critical to expanding knowledge and mitigating the uncertainty inherent in strategy development.  Ultimately the only way to test a strategy is to implement it through staged experiments and carefully and honestly interpret the results.  Without it, the chances of making costly false moves are much increased.

…And keep searching for the Next Big Thing

When Steve Jobs returned to Apple in 1997, it was a niche player with a loyal following in a market that was becoming increasingly crowded with strong competitors.  When asked the following year, in an interview with a leading strategy professor, what his long-term strategy was, Jobs coolly replied, “We’re waiting for the next big thing.”

Jobs’ point was that he was prepared to wait until the right opportunity arose rather than chase after those he felt were not right for Apple.  And given the state of flux in the technology industry, he knew the right one would come along in a while.  In other industries the pace of change won’t be as fast, so such opportunities will be fewer, but (at the risk of falling into the analogical reasoning fallacy outlined in the previous post!) looking for the next big thing is still a discipline every business should consider.  Not least because it is a good mindset for countering omission bias – one of the least talked about but most damaging cognitive distortions.

Omission bias exists because the distress arising from a problem caused by not doing something – an act of omission – is less than that caused by the same problem if it results from an act we have committed (an act of commission).  The intention of avoiding increased regret creates a bias towards inaction over action.

Omission bias is a less noticeable form of strategic failure than bold moves that fail, though more likely to be fatal.  Sticking rigidly to existing products, customer groups or business models when their profitability has started to decline will inevitably result in a downward spiral from which it is difficult to recover.

The greater drama that failed moves engender ensures they receive far more attention – notably in the studies of acquisition failure.  The resulting erosion of value will set the company back, but won’t destroy it unless it was a ‘bet the company’ move.  Most acquisitions do not come into this category, but the oft-repeated negative commentary about them is likely to deter the conservative businesses that are most likely to fall prey to omission bias.

When seeking to identify the next big thing, the key question to ask is ‘what will solve (or help solve) customers’ problems significantly better than they are being solved today?’  Perhaps counter-intuitively, the best place to find the answer is in risk assessments.  Whatever is deemed a big threat to a business will also be a threat to its competitors – moving first and fast increases the chances of securing a larger share of the new market than is held of the existing one.

Of course there can be no definitive answer to questions about what the next big thing will be, merely probabilistic assessments.  But these estimated probabilities – combined with the estimated impact in each case – help to define priorities.  There is no guarantee that the estimates of probability or impact will be accurate in the first instance (though they should improve over time) nor that the action taken will be optimal, but at least action will be taken.  As a consequence, the risk of corporate decline due to omission bias is much reduced.
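
As a final minimal sketch – candidates and numbers wholly invented – estimated probabilities and impacts can be combined into a simple expected-impact ranking that forces prioritization:

```python
# Candidate 'next big things': (name, estimated probability, estimated impact $m).
candidates = [
    ("AI-driven self-service", 0.6, 200),
    ("New low-cost entrant",   0.3, 350),
    ("Regulatory price caps",  0.2, 150),
]

# Priority = probability x impact: rough and revisable, but it prompts action.
for name, prob, impact in sorted(candidates, key=lambda c: -c[1] * c[2]):
    print(f"{name:<24} expected impact ${prob * impact:5.0f}m")
```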
