Making customer research an enjoyable part of the customer experience

Earlier this month I received an email with the title ‘Give your opinion about [company name omitted]’ – an invitation to participate in the company’s annual survey of its customers.

Mostly I ignore such opportunities, tending only to ‘participate’ (or give my time, as I prefer to call it) if delighted or exasperated – either to say thank you and provide recognition of the high service levels received to encourage their continuation, or to vent my anger.  (As a result, I am suspicious of the accuracy of such research, believing self-selection introduces bias: only those at the extremes of the satisfaction spectrum respond, the indifferent majority remaining silent.)

This particular company’s products – both hardware and consumable – are elegant, simple, reliable and convenient to use.  In benefit terms they provide me with a treat at home for a very reasonable price.  And the service provided has been superb – re-orders of the consumable element being delivered quickly and without fuss.  So despite the rather forbidding estimated completion time of 35 minutes, I embarked upon the on-line survey.

It all started rather well.  The second question asked me what the company could do to improve the quality of its service, providing a large, free-text box for my answer.  Qualitative insights are frequently ignored in such surveys in favour of what can be displayed graphically.  But quantitative data always needs a qualitative explanation for it to be meaningful (to ‘complete the feedback loop’ as analysts like to say).  Scores will tell whether you are doing well or badly, but not why – what specifically you should continue and build on; what needs to be stopped or fixed.  I mentioned a couple of things, the second of which was that we were a little haphazard in our re-ordering, often failing to do so before running out, so would appreciate receiving prompts based on analysis of our past ordering patterns.  (Essentially I was asking them to market to me more aggressively.)
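The reorder-prompt idea above – inferring when a customer is about to run out from their past ordering pattern – can be sketched in a few lines. Everything here is illustrative: the function name, the three-day lead time and the dates are my assumptions, not anything the company actually offers.

```python
from datetime import date, timedelta

def reorder_prompt_due(order_dates, today, lead_time_days=3):
    """Estimate the customer's typical reorder interval from past order
    dates and decide whether a reorder prompt is due.

    order_dates: sorted list of past order dates.
    Returns True when 'today' falls within lead_time_days of the
    predicted run-out date (last order plus the average gap).
    """
    if len(order_dates) < 2:
        return False  # not enough history to infer a pattern
    gaps = [(b - a).days for a, b in zip(order_dates, order_dates[1:])]
    avg_gap = sum(gaps) / len(gaps)
    predicted_runout = order_dates[-1] + timedelta(days=round(avg_gap))
    return today >= predicted_runout - timedelta(days=lead_time_days)

# Hypothetical roughly-monthly history, last order on 1 March
history = [date(2023, 1, 1), date(2023, 1, 30), date(2023, 3, 1)]
print(reorder_prompt_due(history, date(2023, 3, 28)))  # → True
```

A real implementation would weight recent orders more heavily and account for order size, but even this crude average captures the ‘help me before I run out’ benefit being asked for.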

But my pleasure was short-lived.  Some fifteen minutes later, with the on-screen indicator telling me that 36% of the survey was complete, I exited, my willingness-to-recommend score having plummeted from 9 at the beginning to 6 as a result of the questions asked.  Few of these appeared to be about the service provided, the majority being related to intangible attributes that I neither recognised nor cared about.  It was clear that the objective was to obtain some brand score and a basis for future communications campaigns rather than to improve the experience provided.  And in so doing the survey violated the implicit quid pro quo of customer research, where participants give their time for free – help us to help you.

Frustration at the type of questioning was compounded by the style – the use of a 6 point scale – ‘strongly agree’ to ‘strongly disagree’ – which provided no opportunity for ‘don’t know’ or ‘don’t care’.  I was forced into agreeing or disagreeing even when I had never considered what I was being asked or felt it relevant to the degree required to have an opinion. (By definition, my answers were an exaggeration of what I felt, piling inaccuracy on top of the selection bias described above.)  Even more irritating, there were no further opportunities for qualitative input to complete the feedback loop where I did have a strong opinion.

On top of this was the sheer number of questions being asked.  There is a rule of thumb for metrics which applies equally to questions in research surveys – if it is not actionable, don’t measure it (or ask it).  If a high or low score will not result in different actions being pursued, computing the metric or asking the question is a waste of time.  And there is no way that multitudes of questions are actionable, especially if no qualitative information is collected to shape the actions required.  The focus, style and sheer volume of questions made it clear that the survey had been designed with zero empathy for those who would be completing it.

This is not to argue that intangible benefits associated with brands are unimportant, or that the impact of campaigns to establish these benefits in the minds of consumers should not be measured.  But such questions – indeed any questions where the answer benefits the business but not the customer – have a cost.  That cost can either be recognised up-front, in the form of remuneration paid to focus group attendees, or amortised over the longer term through damage to customer satisfaction.

The irony of brand-focused surveys is that they risk damaging the very perception they are seeking to measure.  The 3-point drop in my willingness to recommend reflected a seismic shift in my perception of the company.  Where previously I had perceived it to be highly customer-centric, I now saw it as brand-centric and selfish.  Rather than feeling a valued customer, I was left with a sense that I was just a tiny piece of a mirror which the company was holding up for self-admiration.

Now I am a bit of a sceptic when it comes to branding and brand strategy, particularly when it is elevated above customer experience design.  (My theory is that a focus on brand encourages egocentricity, its focus on an internal construct rather than an external constituency leading to an inside-out rather than outside-in perspective.  It also exaggerates the importance of intangible over tangible benefits, as these are what marketing can control through communications campaigns.  And this leads to excessive focus on awareness and recognition – often at the cost of setting expectations so high that failure to meet them is inevitable – rather than the more substantive work of developing the capabilities needed to deliver the right functional and emotional benefits at each interaction point across the customer life-cycle, which is what adds up to an excellent experience.)  So it is unlikely that anyone else would have downgraded their score in such draconian fashion.

But the general point still stands – customer research is an interaction and its impact on the customer experience needs to be taken into account when it is being designed.  As a starting point, it should contribute to a positive experience.  Most people are flattered when interest is shown in them and enjoy talking about themselves (innate egocentricity again).  And there is a big difference between a conversation along the lines of ‘tell me about yourself and what you like’ and one which has a premise of ‘tell me how much you like me’.

So with the above in mind, let me suggest the following as some simple rules for customer experience-enhancing customer research.

  1. Consider the expectations of those being researched. An experience is always judged relative to expectations.  (Four-star service feels great if only a two-star level was expected, but lousy if five-star or better was anticipated.)  Customers will expect some benefit for giving up their time.  If you wish to ask questions for the benefit of your company rather than your customers, it is far better to recognise this and pay up front for customers’ time.
  2. If not paying customers for their time, be sparing with the questions you ask. If the answer to a question does not lead you to take a specific, identifiable action – particularly when the score is very high or very low – challenge whether the question merits inclusion.  Avoid the impression of being cavalier with customers’ valuable time.
  3. Ensure there is as much emphasis on the qualitative as the quantitative. Give customers the opportunity to express themselves rather than feel boxed in by the questions asked.  Also, genuine voice-of-the-customer comments will provide far more ‘a-ha’ moments for enhancing the service provided and encouraging innovation than score-keeping ever will.
  4. Avoid forcing an answer where one doesn’t exist. Researchers appear terrified that if they provide a ‘don’t know’ or ‘neutral’ option, people will use it the whole time.  But those who would cop out in that way will also be those who answer randomly if no such option exists.  Far better to know that – and exclude them – than for their indifference to be masked by a completed but inaccurate answer.  Also, a ‘don’t know’ or ‘don’t care’ is far more revealing about knowledge and priorities than forced agreement or disagreement.
  5. Make it fun. One of the advantages of a research technique like conjoint analysis (in which respondents are asked to trade off different combinations of attributes) is that it is genuinely engaging for interviewees.  It also reveals their true priorities, often to the surprise of participants, so they find out a little bit about themselves in the process.  For customers it will be a good experience if they feel the research provides a mirror for them to look at themselves.
  6. Finally, balance any research on features with that on benefits sought. Asking customers what features they would like and designing to that specification will frequently yield lemons – they are not experts in your business.  But they are experts in what they want to achieve (e.g. saving time, saving money) and what costs them most time and most money currently.  They can define the problem that needs solving but only you can design the optimal solution.
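The conjoint-analysis technique mentioned in rule 5 can be illustrated with a minimal rating-based sketch.  The attributes, levels and ratings below are entirely hypothetical, and the estimation (level mean minus grand mean, valid only for a balanced full-factorial design) is a deliberate simplification of what real conjoint studies do with choice tasks and proper statistical models.

```python
def part_worths(profiles, ratings):
    """Estimate attribute part-worths from a rating-based conjoint
    exercise with a balanced (full-factorial) design: each level's
    part-worth is its mean rating minus the grand mean.

    profiles: list of dicts mapping attribute -> level
    ratings:  list of numeric ratings, one per profile
    """
    grand_mean = sum(ratings) / len(ratings)
    worths = {}
    for attr in profiles[0]:
        by_level = {}
        for profile, rating in zip(profiles, ratings):
            by_level.setdefault(profile[attr], []).append(rating)
        worths[attr] = {lvl: sum(rs) / len(rs) - grand_mean
                        for lvl, rs in by_level.items()}
    return worths

# Hypothetical 2x2 design: price level vs delivery speed
profiles = [
    {"price": "low",  "delivery": "next-day"},
    {"price": "low",  "delivery": "standard"},
    {"price": "high", "delivery": "next-day"},
    {"price": "high", "delivery": "standard"},
]
ratings = [9, 7, 6, 2]

pw = part_worths(profiles, ratings)
# The spread of part-worths within an attribute indicates its
# relative importance to this respondent.
importance = {a: max(w.values()) - min(w.values()) for a, w in pw.items()}
print(importance)  # price outweighs delivery for this respondent
```

This is the ‘mirror’ effect described above: the derived importances often surprise respondents, revealing trade-offs they did not know they were making.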



About Jack Springman

I am a consultant with experience in business strategy and customer strategy development, customer management and customer service transformation.