The people at Ascribe kindly asked me to be their keynote speaker at their European conference last week in London. It was a welcome soapbox for me, since I’ve long been critical of the dismissive approach market research takes to computer-based text processing, and of its dogged attachment to manual coding as the ‘gold standard’ for finding meaning and truth in a pile of unstructured comments. All power to Ascribe (which provides software to handle open questions in surveys) and its clients, in my view.
I called my talk “Getting Ready for the Decade of the Comment” (you can view it here). My point was that research needs to put coding in its place, and supplement it with other computer-based methods that reach further, faster, and with less effort. At a time when MR is shifting its focus from delivering data, in an era when data has become super-abundant, to generating insight and providing explanations, the time is ripe for the humble comment to take centre-stage. But it won’t if research continues to insist on turning words into numbers.
I suspected my soapbox would be before the wrong audience. Sure enough, it was heartening to hear the experience of three presenters from two different research firms – Heather Dalton from Market Strategies, and Jeanette Bushman and Dino Perrota, both of Nielsen – who are already embracing these hybrid approaches. All three spoke of the work they had had to do to convince researchers in their organisations of the value of these methods before gaining acceptance.
It struck me that we were hearing from two firms where the message had not only been received but acted on, and offered to clients as a new service. I was thinking of all the firms that weren’t in the room – where an initiative coming from coding would be pooh-poohed, talked down or resisted in the many ways employed by those accustomed to holding the levers of power.
Traditional coding has two major problems: one, it’s the only method most researchers are familiar with; and two, it’s just too expensive to administer on all but a select few surveys. It’s like leaving the archaeology in the ground. Sure, one day someone will be able to analyse it, but that’s no help if you need to know about it now.
Dino Perrota likened moving to these new methods to altering the level of focus you can get. He showed a highly pixellated image of Da Vinci’s Mona Lisa against a fully resolved image. He said: “Everyone is used to the fine brushed results. With [text analytics] we just provide the brush-stroke analysis. We are doing the same thing, but we are just painting a picture for our client with a slightly broader brush.”
But isn’t that what research does as a matter of course? By the same logic: sure, it would be lovely to conduct a census, but since we can only do a sample, let’s not bother at all. Research is not judging these new methods by the same standards it judges the rest of its work.
Perhaps the other problem was revealed in Heather Dalton’s talk, when she gave a perfect illustration of how the hybrid approach responds to client demands to analyse all the rest of the data that was currently being ignored, without increasing the cost. Tellingly, she said: “I found analysts and coders need to work together closely – especially to [work through the data and interpret it]. Text analytics does not mimic coding and has to be sold as an entirely different product.”
It is not clear, in the current hierarchy or production line, where this activity sits. Researchers aren’t used to working shoulder-to-shoulder with coders, still less to having coders make interpretive decisions. ‘Coding’ is one of those below-stairs functions in most organisations. It’s rare for anyone from coding to be brought out, blinking, into the glare of the client debrief – but as these new methods start to take hold, that will follow as a natural consequence.
Coding is a process, and it’s only going to be part of a portfolio of methods in future. It’s time to drop the name, and come up with something that might mean something to clients and other buyers of research.