Do Your Customer Surveys Get Real Insights or Fake Views?
Most companies want to understand their customers. Many of them conduct market research to ask customers about their preferences, the prices they would be prepared to pay, or their experience of the brand.
The only problem is that some of this research just can’t be trusted.
Let’s look at price research first: asking the customer, ‘What would you be prepared to pay?’
Two researchers, Jerod Penn and Wuyang Hu[1], conducted a meta-analysis of 132 studies that asked hypothetical questions about prices and then compared the answers with behaviour in scenarios similar to real life - such as mock stores where the subjects spent their own money. What they found was that, when asked the hypothetical question, subjects said they would pay around twice as much as they actually did.
We see similar inaccuracies in political polling. For example, in the UK 2015 General Election the polls forecast no overall winner, but the Conservatives actually ended up with a clear majority. In the 2016 US election the average of the polls got the overall result right for the popular vote, but was wildly wrong when forecasting the individual states.
This isn’t really a surprise.
Research about past events (‘How good was our service?’) should be fairly accurate because we are asking our customers to recall something that actually happened and to describe their experience.
When we ask customers about what they would do - whether that’s the price they would pay, whether they would use a particular product or service, or even if they would recommend us - we are asking them to imagine themselves in the future. When anyone does that they imagine an idealised version of themselves. In their heads the dialogue goes like this: yes, of course I would buy the environmentally friendly version; yes, of course I would pay extra for that premium solution; yes, of course I would recommend you to anyone who asked.
But when that customer gets to the real situation a million other factors influence their decision, and they do something different.
In one interesting piece of research, householders in America were asked what would make them recycle more. They were given four options, which went along the lines of: one, to save the planet; two, for our children; three, to save money; or four, because the neighbours are doing it. Everyone surveyed said one of the first three would motivate them; no one said the fourth would. But when actual behaviours were measured, the only factor that had any influence was what the neighbours were doing!
Who cares about fake news - as marketers it’s fake views that we have to worry about!
So what do we do about it?
Well, one easy solution might be to divide any price research by two!
More seriously, there are a couple of ways to improve research.
First, ask comparative questions. For example, don’t ask whether the customer would buy product A or service X; instead, ask them if they would be more or less likely to buy product A than product B, or service X than service Y. It’s much easier for someone to imagine how they would react in one scenario compared to another, than to imagine a scenario in isolation. So the smart thing to do would be to give your customers a series of A vs B, B vs C and C vs A questions to get a clear idea of how attractive a product or service is compared to everything else.
Of course, exactly the same approach works for pricing questions.
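If you do run that kind of head-to-head survey, the analysis is straightforward: count how often each option wins the comparisons it appears in. Here is a minimal sketch in Python, assuming you have recorded each answer as a (first option, second option, preferred option) tuple - the data and option names are purely illustrative.

```python
from collections import defaultdict

# Illustrative answers: (option shown first, option shown second,
# the one the respondent said they would be more likely to buy).
responses = [
    ("A", "B", "A"),
    ("B", "C", "B"),
    ("C", "A", "A"),
    ("A", "B", "A"),
    ("B", "C", "B"),
    ("C", "A", "C"),
]

def pairwise_ranking(responses):
    """Rank options by how often they win the comparisons they appear in."""
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for first, second, preferred in responses:
        wins[preferred] += 1
        appearances[first] += 1
        appearances[second] += 1
    # Win rate = wins / number of comparisons the option was shown in.
    return sorted(
        ((option, wins[option] / appearances[option]) for option in appearances),
        key=lambda pair: pair[1],
        reverse=True,
    )

for option, win_rate in pairwise_ranking(responses):
    print(f"{option}: preferred in {win_rate:.0%} of its comparisons")
```

A higher win rate simply means that option came out on top more often. With a larger set of options you could feed the same pairwise data into a formal choice model, but a simple win count is often enough to see which product or price point stands out.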
Second, ask what is called a certainty follow-up. In Penn and Hu’s analysis, asking participants to rate how certain they were about a given response improved accuracy tremendously, reducing the difference between the survey answers and the observed behaviour by 99%.
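Penn and Hu describe their own calibration method in the paper; as a rough illustration of the general idea, the sketch below treats a stated ‘yes’ as a real ‘yes’ only when the respondent also reports high certainty. The data, the 10-point scale and the 8-out-of-10 threshold are assumptions for the example, not figures from the study.

```python
# Illustrative survey answers: did they say they'd buy, and how certain were
# they on a 10-point scale? (Hypothetical data, not from Penn and Hu.)
responses = [
    {"would_buy": True,  "certainty": 9},
    {"would_buy": True,  "certainty": 5},
    {"would_buy": True,  "certainty": 10},
    {"would_buy": False, "certainty": 7},
    {"would_buy": True,  "certainty": 6},
]

def calibrated_intent(responses, threshold=8):
    """Share of respondents whose 'yes' also clears the certainty threshold."""
    confident_yes = sum(
        1 for r in responses if r["would_buy"] and r["certainty"] >= threshold
    )
    return confident_yes / len(responses)

raw_intent = sum(r["would_buy"] for r in responses) / len(responses)
print(f"Raw stated intent:      {raw_intent:.0%}")                    # 80%
print(f"Certainty-calibrated:   {calibrated_intent(responses):.0%}")  # 40%
```

The gap between the two numbers is a useful sanity check on any stated-intent figure before you build a forecast on it.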
Or you could simply do what the fake news does and just make it up! (Hint - that was a fake view!!)
[1] Jerod Penn and Wuyang Hu, American Journal of Agricultural Economics, June 2018