When you make your quant too flexible, or put numbers in your qual, the result is always embarrassing. This tells us everything we need to know about getting to grips with Big Qual.
Today’s job boards prize candidates who ‘bridge the divide’, the long-awaited recognition that researchers can be credible in both the qual and quant disciplines. This is a far cry from specialists standing toe-to-toe along a demilitarised zone, exchanging the occasional slide. We speak about business outcomes, not methodology. Good researchers hide such detail; the best create the illusion that methodology doesn’t matter. The worst believe it.
When we integrate methodologies, our work carries clout. This is easily done when quant doesn’t gloss over gaps, overinterpret, or misrepresent, and qual doesn’t add or subtract. However, when we test their bounds, even in language, egg finds faces, careers pause, agencies lose accounts and brands tank.
See quallies shift in their seats as a quant researcher breathes life into snooze-worthy bar charts. Home-brewed conjecture sets them up for the fall, come their first focus group. They exchange fleeting glances. The divide is hardly academic.
Conversely, watch the quanties, when the interim qual debrief leans into words like ‘better’, ‘more’ or ‘stronger’. Heads tilted, they are unable to hide their wincing, ‘Well, let’s hope the survey…’ Oh no. It’s too late. The chief marketing officer just left the room, absolutely loves the good news and is telling everyone, with the fizz of a genie that won’t see another bottle.
The frequency with which quantitative analysis contradicts intuition is unnerving; so too is the amount of nuance lost to questionnaires and concealed within aggregated data. It’s here that all of marketing’s finest blunders can be traced: to the unplugging of the insight from the methodology. When made public, they fill the chapters of Malcolm Gladwell bestsellers.
Yet, we depreciate the divide when it serves us, exaggerating consistencies across methodologies wherever they arise. Contradictions are snuffed out. We’ve all done it. It’s too tempting. From the outside looking in, the difference in methodology appears to hang on sample size alone. In some quarters, the difference is believed to be a function of how comfortable researchers are with ‘creative interpretation’, a talent seemingly limited by ethics and commitment, rather than by common sense.
In truth, our methodologies can’t agree (or disagree) – the only time they seem to do so is when one masquerades as the other.
Enter ‘big qual’, digital qual dressed up as quant, resplendent in slick dashboards, with bar charts and artificial intelligence. It’s the same revolution that boosted quant, now reaching qual. Under the right conditions, say when piggybacking a traditional survey, it’s certainly capable of delivering representative rigour and tactile granularity… but not in the same breath.
As data grows, we are forced to introduce structure. Photos and conversations become tags and word counts, and ultimately taxonomies and topics – a representation of ideas, not an estimate of their prevalence in the market. Here, the numbers are scaffolding and navigation toward the result, but never the result itself. As data grows bigger still, say when observing behaviour, we may explore qualitatively until the moment the content is codified and aggregated. At that point, it is firmly quantitative, its juicy goodness and flexibility exchanged for other powers and limitations.
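The codification step above can be made concrete with a minimal sketch. The coding frame, the `codify` helper and the sample responses below are all hypothetical illustrations (a real study would use human or AI-assisted coding, not naive keyword matching); the point is that the output counts describe the coded sample, never market prevalence.

```python
from collections import Counter

# Hypothetical coding frame: keyword -> tag. Purely illustrative;
# real coding frames are built iteratively by researchers.
CODING_FRAME = {
    "price": "cost",
    "expensive": "cost",
    "taste": "flavour",
    "flavour": "flavour",
    "packaging": "pack design",
}

def codify(responses):
    """Turn free-text responses into aggregated tag counts.

    Each response contributes a tag at most once, so the counts
    answer 'how many responses touched this idea' within the sample.
    """
    counts = Counter()
    for text in responses:
        tags = {tag for word, tag in CODING_FRAME.items()
                if word in text.lower()}
        counts.update(tags)
    return counts

responses = [
    "Too expensive for what it is",
    "Love the flavour, hate the packaging",
    "Price is fine, taste could be better",
]
print(codify(responses))
```

Once the conversation is reduced to these counts, any further analysis is quantitative: the scaffolding has replaced the raw material, and the flexibility of the original text is gone.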
In this light, methodology boils down to what can be objectively inferred at any one time. There is no middle ground where it is meaningful to put numbers on qual to feel more confident about qual. The authenticity of qualitative research is evaluated in other ways. When the skillset of the researcher, the data, and even the work product appear identical, the divide, and how it’s bridged, matters more than ever before.
Big qual explodes our scope, capturing what would otherwise go unnoticed. We explore concepts that can only be seen at enormous scale, chart differences across participants at creepy magnification, reveal trends over time, and integrate multiple sources. Moreover, insights are now evidenced at a click. We are no longer beholden to another’s summary interpretation. Surely, these are game changers enough, readily achieved without undermining the credibility of research, or inviting the attention of Malcolm Gladwell.
Commercial research is only valuable to the extent it remains an applied science, albeit the softest kind. With this comes baggage, the grand bargain we made with reason, hypothesis testing, the law of averages and probability. It’s the last vestige of our nerdy origin story, one we downplay at every opportunity, abuse if needs must, but can’t totally shirk. It is how we get to rise above the chaos, then land on correct, defensible answers.
Breaking this accord creates a far greater, unbridgeable divide. Those on one side say: ‘Come down from your ivory towers, decision-makers don’t care about this stuff…’ On the other, they turn away and whisper, ‘…yeah, because they trust we do’.
I am a marketing scientist with 24 years of experience working with sales, media spend, customer, web and survey data. I help brands and insight agencies around the world get the most out of data, by combining traditional statistics with the latest innovations in data science. Follow me on LinkedIn for more of this sort of thing.