Do these things to ensure that your research will be a meaningless waste of everyone's time.
Asterisks, in our reports, typically reference the same customary note. Light grey italics read ‘caution: low base size’. We appreciate and can calculate the gamble. The same cannot be said of another far larger source of error, one that cannot be absolved by asterisk. If it could, its reference might read ‘caution: we fudged the questionnaire’.
Determined to glimpse the best, I was prepared to stomach the worst. Onward I soldiered, through mind-numbing loops and screen after screen of inhumane grids. Clumsy leading questions with contrived, hacky response options would not shake my resolve. Though my better angels prevented me from submitting fraudulent surveys, I slithered and snaked past screeners with unnerving ease. I had to imagine their authors as aware of their sins, or at the very least, getting what they wanted.
Should you feel slightly aggrieved, or suspect I embellish in any way, do spend some time completing surveys yourself. That sweet flicker of hope will fast abandon you. My own breathed its last in a riptide of studies which, by their very design, made it impossible to answer accurately.
These approaches proliferate because they excrete believable data. They can’t, however, claim to reflect the faintest shade of reality. If there were apologetic asterisks in the offing, I’d place them here, in full faith you’d meet them with equal contempt.
Irrational comparisons
When researchers throw everything against the wall, respondents are presented with nonsense. To escape to the next screen, I found myself pretending to rank an unspecified discount against friendliness of staff. Another asked me to trade off the cleanliness of a gym with a free members’ magazine. Yuck.
Stated importance
The worst offenders allowed me to rate everything as equally important, wilfully in denial that a purchase decision is a choice amongst alternatives. It is not a post-rationalised audit that lends itself to box ticking. Instead, relevance may be derived indirectly by analysing preference.
Forced anchors
Some framed various bundles of premium subscriptions against unattractive options, without dropping their prices proportionately. I could picture blinking neon arrows pointing out acceptable answers. Good marketers introduce such comparisons to nudge customers toward higher margins. It’s called ‘the decoy effect.’ But if the decoy will be absent at launch, even its subtlest touch plays havoc in market tests.
The first concepts I evaluated also became benchmarks for whatever followed. On occasion, I felt that pang to reverse engineer responses for consistency – one of many reasons why testing multiple propositions requires sorting, monadic or trade-off designs. When responses are relative, context is everything.
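The monadic remedy is mechanically trivial: each respondent sees exactly one concept, so nothing earlier on can serve as a benchmark. A minimal sketch of balanced cell assignment, with hypothetical concept names:

```python
# Monadic design sketch: each respondent evaluates exactly one concept,
# so no earlier concept can anchor their answer. Concept names are
# hypothetical placeholders.
import random

concepts = ["Concept A", "Concept B", "Concept C"]

def assign_cells(respondent_ids, concepts, seed=0):
    """Randomise respondent order, then deal concepts out like cards,
    so each cell ends up with a near-equal base size."""
    rng = random.Random(seed)
    ids = list(respondent_ids)
    rng.shuffle(ids)
    return {rid: concepts[i % len(concepts)] for i, rid in enumerate(ids)}

cells = assign_cells(range(300), concepts)
counts = {c: sum(1 for v in cells.values() if v == c) for c in concepts}
print(counts)  # 100 respondents per concept
```

The cost, of course, is sample: three concepts means three cells, each needing a respectable base size of its own.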
This is best illustrated by an ill-fated survey, fielded in the late 1990s. A scripting error prompted doorstep interviewers to pitch markedly different prices in different regions. No matter: the demand curves it produced were relative. The response patterns were identical in every region; the prices were, of course, just anchors.
Fortunately, these quirks are as reliable as they are hazardous. The price a respondent expects to see, if not a telling indication of value, is the perfect MacGuffin for follow-up questioning. Alternatively, if an anchor is a real proposition, we can calibrate answers to predict launches with astounding accuracy.
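Calibrating against a real anchor can be sketched in a few lines. If one of the tested price points is a live proposition with a known take-up rate, the whole stated-intent curve can be scaled through that point. All figures below are hypothetical, and real calibration models are considerably more sophisticated than a single multiplier.

```python
# Calibration sketch: stated purchase intent is relative, not absolute.
# If one tested price point is a live proposition with known uptake,
# scale the whole stated-intent curve through that anchor.
# All figures are hypothetical.

stated_intent = {9.99: 0.62, 12.99: 0.48, 15.99: 0.31, 19.99: 0.18}

current_price = 12.99
actual_uptake = 0.12  # observed take-up at the live price

scale = actual_uptake / stated_intent[current_price]  # 0.12 / 0.48 = 0.25
calibrated = {p: round(i * scale, 3) for p, i in stated_intent.items()}
print(calibrated)
```

The shape of the curve (the relative drop-off as price rises) comes from respondents; the absolute level comes from the market.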
I’ll defend the merits of rough and ready research any day of the week, but this is not that. The above involve propositions, purchase intent and willingness to pay. These are the very studies where millions of pounds, careers, and perhaps a company’s future hang in the balance. When we fly our flag here, we must be able to look investors in the eye. With hard-fought reputations on the line, surviving boardroom cross-examination merely grants us the luxury of awaiting potential humiliation by actual sales performance.
The tragedy is that these aren’t unwitting oversights but the lazy farming out of research problems. It is never a respondent’s job to do maths; to disentangle different types of percentage discounts, flat fees, interest rates and upfront costs from contract lengths, cashbacks and introductory offers. That’s a pricing table to make a spreadsheet blush. Oh, now repeat that 10 times with four options on each screen. Unforgivable.
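The maths being farmed out is not even hard; it simply belongs in the script, not the respondent's head. A minimal sketch of collapsing one such tangle into a single effective monthly price, using a hypothetical offer structure:

```python
# It is the researcher's job, not the respondent's, to do this maths:
# collapse fees, discounts and offers into one effective monthly price
# before the questionnaire is fielded. The offer structure is hypothetical.

def effective_monthly_price(monthly_fee, contract_months, upfront_fee=0.0,
                            intro_months=0, intro_discount=0.0, cashback=0.0):
    """Total cost of the contract, spread evenly per month."""
    intro_cost = intro_months * monthly_fee * (1 - intro_discount)
    full_cost = (contract_months - intro_months) * monthly_fee
    total = upfront_fee + intro_cost + full_cost - cashback
    return total / contract_months

# '50% off for 3 months, £25 upfront, £30 cashback' on a £20 x 24-month deal:
print(round(effective_monthly_price(20.0, 24, upfront_fee=25.0,
                                    intro_months=3, intro_discount=0.5,
                                    cashback=30.0), 2))  # → 18.54
```

Show respondents the £18.54, or the clean price grid it implies, and let the spreadsheet do the blushing behind the scenes.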