The latest news from the meaning blog

The bump and grind of MR’s tectonic plates

Technology platforms are merging: will we see earthquakes, new mountain ranges or find familiar landscapes heading for the bottom of the ocean?

Is it a good thing or a bad thing when major players in the world of MR technology merge or get acquired? It really depends who’s asking the question, as the answer is invariably a bit of both. But it’s a question that is alive right now.

Last February, Anglo-French software provider Askia announced that Ipsos had bought a controlling interest in the company. Days later, Confirmit announced it was merging with dashboard provider Dapresy. Yet before there were any real announcements about product directions, news broke this year that Confirmit was now in merger talks with FocusVision, a head-to-head competitor.

Mergers and acquisitions can be good if they mean that development investment can be intensified by unlocking new capital or pooling resources. After all, it generally takes two continents to collide before a mountain range can be born! We need our technology to reach higher and further, and much of what is out there shows signs of stunted growth because of limited investment. This, in turn, creates a productivity issue for the research industry, because the technology it depends on has developed piecemeal. Fewer, more intensively developed products should boost productivity – but that relies on the focus of the acquisition being to deliver growth, not to achieve economies of scale and extract value. Only time will tell on that one.

But there are other more immediate downsides. The people who are already “no longer working for the company” can tell you about that. It also creates anxiety for customers. Where rival products come under the same ownership, it is clear that not all the products will survive. The developer will have to pick winners, and hard luck if it’s the tools you rely on that are destined for obsolescence.

Sometimes companies spend a year or two figuring out how to integrate the incompatible, sometimes reaching the conclusion that they are better off starting again. This is what happened twenty years ago when SPSS went on an acquisition spree. That resulted in a development trajectory that saw existing products stand still for several years before their replacement ‘Dimensions’ product line started to emerge, piece by piece. But the replacements were slow to appear and, when they did, they did not completely fill the gap. That hiatus probably encouraged rivals such as Confirmit (then FIRM), Askia and subsequently Decipher (later acquired by FocusVision) to get going.

Confirmit seems to have done a better job of folding the products it acquired into its own platform when it bought Pulse Train over a decade ago. That technology is still there, at least, and the lineage from Pulse Train, with its CATI and data processing capabilities, can still be seen in Confirmit today. Perhaps that bodes well for its acquisition of Dapresy. Confirmit has long had its own dashboard tool, Reportal, which had a cute name but not much else to commend it: it was a beast that was hard to tame. MR clients appeared to be defecting to generalist business intelligence tools like Tableau and Power BI, which are a pain to use for an entirely different set of reasons.

But the path is much less clear for Confirmit versus FocusVision’s Decipher, each sitting at the centre of its own constellation of different – and inevitably incompatible – products. Research firms using these tools can ill afford another lengthy development hiatus while these continents grind together.

At the moment, the merger has been referred to the competition regulators in the US on monopoly or ‘antitrust’ grounds. If it is eventually allowed, let’s hope the lessons of SPSS have been learned, and that they figure out a way for all their customers to scale the mountain range that will start to emerge in a year or two’s time.


Breaking the link – does MR have a problem with testing its surveys?

It’s not a good look: a grammatical error in the intro, a spelling mistake at Q2, no obvious place in the list of options for the answer you want to give for Q4.

Too many of the surveys I see from either side of the fence, as a pro or as a punter, contain the kinds of mistakes that should not have got through testing.

Some wording errors might just make you look stupid, but others introduce needless ambiguity and bias. All research firms have QA processes, so what is going wrong? My hunch is that it all comes down to a combination of factors in that twilight zone at the interface of technology and people.

First, it may not always be clear who is responsible for catching these errors. I’ve met researchers who think it’s the responsibility of the scripter to test the survey. I associate it with what I call the “link checking” mentality. The survey is ready to test. The technician scripting the survey provides the researcher with a test link or URL to access the survey and asks them to feed back any problems. The researcher clicks the link, looks at the first few questions and reports back that it seems to be working fine. The technician has deadlines and other projects to work on, so has rushed it through, anticipating the researcher will pick up anything really dreadful. The researcher is busy working on another presentation and has limited time to go all forensic. A few quick glances at the first two or three questions confirm that everything seems to be up and running.

So everyone sees what they are looking for – that the survey is ready to go live on time, even though it isn’t. I hasten to add – this isn’t everyone, always, but it does happen.

I have also come across researchers who delegate (or is it abdicate?) responsibility for taking a really good look to their panel provider. Of course, good panel providers have many reasons for not wanting surveys that are badly worded, or that take 20 minutes to complete rather than the ten that were agreed in advance. But they won’t necessarily notice the brand that is missing, or even the entire question that is missing, or sort out minor typos. That’s not the deal.

What this does illustrate, though, is that there isn’t one ideal person to do the testing. Testing is a team effort, and those involved need to be selected for their different perspectives and skills. The researcher who designed the survey must always view it to check that what has been created delivers what they designed. Ideally, a research assistant should run several tests, and with the questionnaire in hand (or open in another window) check that all the wording is correct and complete. It also makes a lot of sense to involve an external tester not previously associated with the project, who will run through the survey a few times to check that all the wording is complete and makes sense. In multi-language surveys this is vital, and the tester should ideally be a native speaker.

The technology can help too. “Dummy data” generated from random answers provides a very useful way to intercept logic and routing errors, and this should also be a routine part of the testing process. The survey tool may incorporate spell checkers too, allowing scripters to find or even prevent minor typos.
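
As an illustration of the mechanism – no particular survey platform is assumed – here is a minimal Python sketch of what dummy-data testing amounts to: generate random answers, push them through the routing rules many times, and flag any question that is never reached. The question ids, options and routing are invented.

```python
import random
from collections import Counter

# A toy survey definition: each question has answer options and a routing
# rule that returns the id of the next question (or None to finish).
# Question ids, options and routing are purely illustrative.
SURVEY = {
    "Q1": {"options": ["Yes", "No"],
           "route": lambda answers: "Q2" if answers["Q1"] == "Yes" else "Q4"},
    "Q2": {"options": ["Brand A", "Brand B", "Brand C"],
           "route": lambda answers: "Q3"},
    "Q3": {"options": ["Daily", "Weekly", "Monthly"],
           "route": lambda answers: "Q4"},
    "Q4": {"options": ["Very satisfied", "Satisfied", "Dissatisfied"],
           "route": lambda answers: None},
}

def simulate_respondent(survey, start="Q1", max_steps=100):
    """Answer every question at random and follow the routing."""
    answers, current, steps = {}, start, 0
    while current is not None and steps < max_steps:
        question = survey[current]
        answers[current] = random.choice(question["options"])
        current = question["route"](answers)
        steps += 1
    return answers

def dummy_data_check(survey, n_runs=1000):
    """Run many random respondents and flag questions that are never reached."""
    visits = Counter()
    for _ in range(n_runs):
        visits.update(simulate_respondent(survey).keys())
    for qid in survey:
        if visits[qid] == 0:
            print(f"WARNING: {qid} was never reached - check the routing")
        else:
            print(f"{qid}: reached in {visits[qid]} of {n_runs} runs")

if __name__ == "__main__":
    dummy_data_check(SURVEY)
```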

Many survey platforms also provide a suite of tools to help with survey checking and flag errors that need correcting directly to the scripter. Yet it seems the tool most often used for feeding back corrections is still the humble email.

Researchers are used to thinking about motivation for survey participants. But in this situation, we need to think about motivation in the researcher. Repeatedly testing a survey is boring and takes up more time than researchers often feel they can afford. I suspect this explains more than anything why so many errors in surveys get through.

It comes as a surprise to many that in the field of book publishing there are professional proofreaders who love finding the errors in written copy. In software, too, there are professional testers who get a kick out of finding what doesn’t work. We don’t yet have a professional association of survey testers, but maybe we should. I’ve never found it hard to line up external people to test surveys for a very modest reward. They invariably find things that no-one else has noticed. And they invariably say thank you – that was interesting.

Typos are embarrassing and make us look unprofessional. But there are much more expensive mistakes that occur due to inadequate testing. Is your testing fit for purpose? If it relies on you and some hope-for-the-best link checking, the answer is very likely no.

MRX and technology – on the road and cautiously edging forward

The belief is that technology moves fast. Our annual survey of technology in the market research industry – done in partnership with FocusVision – is released on June 8th and offers an alternative view. As in previous years, it shows an industry on the move – but rather than travelling light with a laptop and a few essentials and heading for the high-speed train, MRX is found taking to the road in a mixture of cars, vans and trailers, as there is so much luggage everyone needs to take with them.

The resourceful and the lucky get away early to speed along an almost empty highway; for the rest, progress is of the crowded highway type, as some lanes inexplicably slow to near stationary, while others keep edging forward. If only you knew in advance which one to pick!

Our survey has been tracking the progress of market research in adapting to new technologies to collect, process and present data for 13 years now. Considerable change can be seen – the line-up of data collection methods in 2016 would be unrecognisable to someone in 2004. But most areas we look at in depth – whether it is mobile research for collecting data, or the long-assumed decline of CATI, or the rise of data visualization and dashboards – move slower than you might expect if you listen to the buzz in the industry.

The same is true of the players. We tend to find that the large firms are further ahead with new techniques and innovation – their size and greater means let them risk the odd detour. But we often also detect pockets of innovation among the smaller firms too. This year, for example, we found that large and small firms alike are neck-and-neck in the extent to which they are incorporating analytics based around Big Data into their reporting. We also uncovered another success story for the industry around how it has taken to storytelling across the board.

Ours is rightly a cautious industry. Much of that baggage is there because it is needed. But we hope our annual survey also lets research companies examine how they are doing in relation to their technology, and check their own direction of travel. We also hope it might stimulate some frank discussion about just what gets taken on the journey, and what needs to be put out for recycling instead.

Conference Report: Challenges of Automation, ASC, May 2017

Tim Macer reports on ASC’s May 2017 one-day conference in London: “Satisfaction Guaranteed? The Challenges of Automation in Survey Research”

In the conference keynote, Colin Strong of Ipsos MORI said that techniques and technology developed in the 20th century have brought about a certain view as to how humans behave. These also form the assumptions behind artificial intelligence. Here he noted an interesting paradox: “Underneath these AI apps – behind the scene you have a whole bunch of humans. Humans pretending to be machines, with machines pretending to be humans. What is going on?” he asked.

We still need people

Strong’s answer is that as AI techniques become more sophisticated and advanced, the need for humans to keep them meaningful and to identify what makes sense only intensifies. He notes that, in parallel with our tendency to project human attributes onto technology, we have also been trying to project machine qualities onto humans – behavioural models being one obvious example. He predicts that human skills will be very much in demand as AI advances. He tips Process Optimiser as the go-to profession of the next decade, after Data Scientist.

Established jobs will go, Strong predicts, but “different jobs, not fewer jobs” will emerge, as they have before, with each major technological revolution of the past. He has no doubt that computers will pass the Turing Test, and in some ways already do. Yet that, he sees, is also because we are becoming a “bit more machine like” – and it is that, and not the mass unemployment that some fear, which he predicts will pose more fundamental political and social challenges in the future.

Rolling the R in Research

A glimpse of the changing skills and jobs in research emerged from the two speakers that followed. Ian Roberts from Nebu championed R, the open source statistical package that’s a favourite among academic researchers, as a practical way to automate research processes. Nebu had used it to create a self-regulating system that worked out how to optimise despatching survey invitations.

Roberts considers R especially suitable because of the access it provides to a comprehensive library of routines to perform the kind of machine-learning modelling that can monitor the performance of survey invitations to different subpopulations, or by different distribution channels such as email and SMS – and then deliver subsequent invitations in ways that will achieve the best completion rates. In the case Roberts described, as the system was able to learn, the need for human supervision was reduced from one person’s constant involvement to tactical monitoring for a few hours per week.
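
Nebu built this in R; purely to illustrate the underlying idea in compact form, here is a hypothetical Python sketch of one standard approach to this kind of problem, an epsilon-greedy bandit that learns, per subpopulation, which channel yields the best completion rate. The channels, segments and completion rates are invented, and the real system Roberts described is certainly more sophisticated.

```python
import random

# Invented channels and subpopulations, purely for illustration.
CHANNELS = ["email", "sms"]
SEGMENTS = ["18-34", "35-54", "55+"]

# "True" completion rates the learner does not know - stand-ins for the field.
TRUE_RATES = {("18-34", "email"): 0.08, ("18-34", "sms"): 0.15,
              ("35-54", "email"): 0.12, ("35-54", "sms"): 0.10,
              ("55+", "email"): 0.14, ("55+", "sms"): 0.05}

class EpsilonGreedyInviter:
    """Learns, per segment, which channel yields the most completions."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.sent = {(s, c): 0 for s in SEGMENTS for c in CHANNELS}
        self.completed = {(s, c): 0 for s in SEGMENTS for c in CHANNELS}

    def choose_channel(self, segment):
        # Mostly exploit the best-performing channel so far, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(CHANNELS)
        rates = {c: self.completed[(segment, c)] / max(1, self.sent[(segment, c)])
                 for c in CHANNELS}
        return max(rates, key=rates.get)

    def record(self, segment, channel, completed):
        self.sent[(segment, channel)] += 1
        if completed:
            self.completed[(segment, channel)] += 1

def run_fieldwork(invites_per_segment=2000):
    inviter = EpsilonGreedyInviter()
    for segment in SEGMENTS:
        for _ in range(invites_per_segment):
            channel = inviter.choose_channel(segment)
            completed = random.random() < TRUE_RATES[(segment, channel)]
            inviter.record(segment, channel, completed)
    for segment in SEGMENTS:
        rates = {c: inviter.completed[(segment, c)] / max(1, inviter.sent[(segment, c)])
                 for c in CHANNELS}
        best = max(rates, key=rates.get)
        print(f"{segment}: learned to favour {best} "
              f"(observed completion rate {rates[best]:.1%})")

if __name__ == "__main__":
    run_fieldwork()
```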

“Without learning [within your system] you will never get to the next stage of how do you improve what you are doing”, he said.

Watching what those pesky processes get up to

John McConnell from Knowledge Navigators presented a paper by Dale Chant of Red Centre Software in which R also made an appearance, alongside Red Centre’s Ruby platform, as a vehicle for automating many of the interrelated processes involved in running a large-scale tracking study, from sampling through to extracting and reporting the data.

Chant categorised automation tasks into three levels, from ‘micro’ for a single process, through ‘midi’ in which several micro-automations are consolidated into one, to ‘macro’ where there are typically many decision points that straddle a number of different processes. The risk in automation is in creating black boxes, said McConnell, where “a broken process can run amok and do real damage”.

The key to success, McConnell and Chant advocate, lies in exposing decision points within the system to human supervision. Echoing Colin Strong’s earlier warnings on not seeking to eliminate people altogether but instead to apply them to sense-making, Chant reported that whenever he had been tempted to skimp on the manual checking of the decision points he builds in, he has always come to regret it. According to McConnell, the risks are low with micro-automation, as that is what most tools currently do very successfully. But when moving up the scale of integration, and bringing disparate processes together, the risks magnify. “Here, you have to think of quality assurance – and crucially about the people side, the staff and the skills”, said McConnell.
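
Purely as an illustration of that principle, the hypothetical Python sketch below has a pipeline step report a summary of what it is about to do and wait for a reviewer’s sign-off before continuing; the step name, threshold and check are invented for the example.

```python
def check_sample(sample_size, expected=1000, tolerance=0.05):
    """A hypothetical decision point: does the drawn sample look sane?"""
    drift = abs(sample_size - expected) / expected
    return drift <= tolerance, f"sample of {sample_size} vs expected {expected}"

def require_sign_off(step_name, ok, summary):
    """Expose the decision to a human instead of burying it in the pipeline."""
    print(f"[{step_name}] {summary} -> {'looks OK' if ok else 'OUT OF RANGE'}")
    answer = input(f"Approve '{step_name}' and continue? (y/n): ")
    if answer.strip().lower() != "y":
        raise SystemExit(f"Pipeline halted by reviewer at '{step_name}'")

def run_tracker_wave(sample_size):
    ok, summary = check_sample(sample_size)
    require_sign_off("draw sample", ok, summary)
    # ...subsequent micro-automations (scripting, fieldwork, tabulation)
    # would each expose their own decision point in the same way.
    print("Continuing with fieldwork...")

if __name__ == "__main__":
    run_tracker_wave(sample_size=940)
```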

Lose the linear: get adaptive

Two papers looked at automating, within the survey, decisions over which questions to serve to participants. The first, jointly presented by Jérôme Sopoçko from the research software provider Askia and Chris Davison from research consultancy KPMG Nunwood, mused over the benefits of building adaptive surveys. Sopoçko asserts these are made more feasible now thanks to the APIs (software-to-software interfaces) in most of today’s data collection platforms, which allow them to reach out to open-source routines that can perform, for example, text translation or sentiment analysis in real time, and then determine where the interview goes next.

Davison welcomed the opportunity to break out of linear surveys by starting with open, unstructured questions and then applying text analytics in real time to interpret the result and select from a pool of predefined questions “to ask the next most appropriate question for [the participant].” He continued: “It starts to challenge that traditional paradigm. It can also help with trust. We cannot know how long that survey will take for that individual – if you put the most relevant questions first you can actually stop at 10 minutes. This has to be better than simply putting more traditional surveys into a new environment.”
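
Purely to illustrate the pattern Davison describes – interpret an open-ended answer, rank a pool of predefined follow-up questions by relevance, and stop once a time budget is reached – here is a hypothetical Python sketch. The keyword matching, question pool and timings are invented stand-ins for real-time text analytics; the actual Askia and KPMG Nunwood implementation will differ.

```python
# A hypothetical pool of follow-up questions, each tagged with the themes
# it is relevant to and a rough time cost in seconds.
QUESTION_POOL = [
    {"id": "Q_DELIVERY", "themes": {"delivery", "late", "courier"},
     "text": "How could we improve delivery?", "seconds": 45},
    {"id": "Q_PRICE", "themes": {"price", "expensive", "cost"},
     "text": "How do our prices compare with others you considered?", "seconds": 40},
    {"id": "Q_STAFF", "themes": {"staff", "rude", "helpful", "service"},
     "text": "Tell us more about the service you received.", "seconds": 60},
]

def extract_themes(open_text):
    """Crude stand-in for real-time text analytics: match on keywords."""
    return set(open_text.lower().split())

def next_questions(open_text, time_budget_seconds=600):
    """Rank the pool by relevance to the open answer, within a time budget."""
    themes = extract_themes(open_text)
    ranked = sorted(QUESTION_POOL,
                    key=lambda q: len(q["themes"] & themes),
                    reverse=True)
    chosen, used = [], 0
    for q in ranked:
        if len(q["themes"] & themes) == 0:
            break  # nothing left that is relevant to this participant
        if used + q["seconds"] > time_budget_seconds:
            break  # stop at the time budget rather than padding the survey
        chosen.append(q)
        used += q["seconds"]
    return chosen

if __name__ == "__main__":
    answer = "The delivery was late and the courier was rude"
    for q in next_questions(answer):
        print(q["id"], "->", q["text"])
```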

…and get chatting too – without being derogatory

Simon Neve, from software provider FusionSoft, described how he has put a similar model into practice in customer satisfaction surveys, based on sentiment analysis performed on the fly on verbatim responses to questions about service experience. This then allows the software to probe selectively, so that initial responses that would otherwise be ambiguous or impossible to interpret are clarified and made intelligible. The aim is to provide a survey that appears as a conversation with a chatbot – not a human, though. Neve said: “Our learning is the more context you can provide the better experience you can provide to the respondent.”

However, automated interpretation has its limitations. Software or survey designers need to be vigilant for irony, and to be especially careful when repeating back anything if the response turns into something of a rant. “You have to be careful about playing back derogatory comments,” Neve cautioned. “We have a list of 2,500 derogatory words to avoid.” But he also quipped: “If you also put an emoji up with your probe, you are immediately forgiven for anything you say.”
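
Purely as an illustration of that safeguard – not FusionSoft’s actual implementation – a hypothetical Python sketch might check a verbatim answer against a blocklist before any of it is echoed back in a probe. The word list and probe wording below are placeholders.

```python
import re

# Placeholder blocklist - Neve mentioned a list of around 2,500 words;
# in practice this would be loaded from a maintained file.
DEROGATORY_WORDS = {"useless", "idiots", "rubbish"}

def safe_probe(verbatim, max_words=12):
    """Build a follow-up probe, refusing to play back derogatory wording."""
    words = re.findall(r"[a-zA-Z']+", verbatim.lower())
    if any(w in DEROGATORY_WORDS for w in words):
        # Fall back to a neutral probe rather than repeating the rant.
        return "Thanks for being so candid - what one thing would improve this most?"
    snippet = " ".join(verbatim.split()[:max_words])
    return f'You mentioned "{snippet}" - could you tell us a little more?'

if __name__ == "__main__":
    print(safe_probe("The staff were friendly but the queue was far too long"))
    print(safe_probe("Your delivery team are useless idiots"))
```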

Smartphone surveys – the rule of thumb

It’s two years since I presented a paper at ASC’s international conference in Bristol about mobile survey technology. According to Twitter, it came in for a few namechecks at last night’s MRS Live event on smartphone surveys, at which Google’s Mario Callegaro was presenting. My 2011 paper seems to have earned the name “the paper on mobile apps”, which is probably, as Mario no doubt said last night too, because very little actual research has been done on survey techniques and methodology. But the paper covers much more than that.

The paper was based on a two-step survey looking at mobile self-completion methods and tools. First, I spoke with mobile survey practitioners and did a literature search to see what kinds of issues people had mentioned with respect to mobile research, and from this came up with a list of make-or-break capabilities needed to deal with those issues. I then contacted all the main software vendors to see if they offered that kind of support. Because there are, in essence, two ways you can do research on a smartphone – using a dedicated app, or pushing the survey to the phone’s built-in browser – I covered both.

“Smartphone surveys” or mobile research still seems to polarize the research community into advocates on the one hand, and skeptics and agnostics on the other. But our annual technology survey among research companies shows that respondents are voting with their thumbs, judging by those now attempting to take online surveys on mobile devices. Across the participants reporting this data, the share of online surveys being attempted on a mobile device averaged 6.7% in 2011 and had jumped to 13.1% in 2012. We’re just about to repeat the question in the 2013 survey. Anyone want to hazard a guess?

Like it or not, mobile is here to stay – it’s time to start looking after our new little brother, because he’s growing up fast.