Tim is a world-renowned specialist in the application of technology in the field of market and opinion research and is probably the most widely published writer in the field. His roots are in data analysis, programming, training and technical writing. These days, as principal at meaning, he works with researchers, users of research data and technology providers around the globe as an independent advisor. He is passionate about improving the research process and empowering people through better use of technology.

It’s not a good look: a grammatical error in the intro, a spelling mistake at Q2, no obvious place in the list of options for the answer you want to give for Q4.

Too many of the surveys I see from either side of the fence, as a pro or as a punter, contain the kinds of mistakes that should not have got through testing.

Some wording errors might just make you look stupid, but others introduce needless ambiguity and bias. All research firms have QA processes, so what is going wrong? My hunch is that it all comes down to a combination of factors in that twilight zone at the interface of technology and people.

First, it may not always be clear who is responsible for catching these errors. I’ve met researchers who think it’s the responsibility of the scripter to test the survey. I associate it with what I call the “link checking” mentality. The survey is ready to test. The technician scripting the survey provides the researcher with a test link or URL to access the survey and asks them to feed back any problems. The researcher clicks the link, looks at the first few questions and reports back that it seems to be working fine. The technician has deadlines and other projects to work on, so has rushed it through, anticipating the researcher will pick up anything really dreadful. The researcher is busy working on another presentation and has limited time to go all forensic. A few quick glances at the first two or three questions confirm everything seems to be up and running.

So everyone sees what they are looking for – that the survey is ready to go live on time, even though it isn’t. I hasten to add – this isn’t everyone, always, but it does happen.

I have also heard of researchers who delegate (or is it abdicate?) responsibility for taking a really good look to their panel provider. Of course, good panel providers have many reasons for not wanting surveys that are badly worded, or that take 20 minutes to complete rather than the ten that were agreed in advance. But they won’t necessarily notice the brand that is missing, or even the entire question that is missing, or sort out minor typos. That’s not the deal.

What this does illustrate, though, is that there isn’t one ideal person to do the testing. Testing is a team effort and those involved need to be selected for their different perspectives and skills. The researcher who designed the survey must always view it to check that what has been created delivers what they designed. Ideally, a research assistant should run several tests and, with the questionnaire in hand (or open in another window), check that all the wording is correct and complete. It also makes a lot of sense to involve an external tester not previously associated with the project who will run through the survey a few times to check all the wording is complete and makes sense. In multi-language surveys this is vital, and the tester should ideally be a native speaker.

The technology can help too. “Dummy data” from randomly generated questions can provide a very useful way to intercept logic and routing errors and this should also be a routine part of the testing process. The survey tool may incorporate spell checkers too, to allow the scripters to find or even prevent minor typos.
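To make the dummy-data idea concrete, here is a minimal sketch of how randomly generated responses can flush out a routing error that a couple of manual click-throughs would probably miss. It is written in Python against a made-up three-question survey with hypothetical routing rules, not any particular survey platform’s API.

```python
import random

# A toy three-question survey: each question lists its options and, for each
# option, the next question to route to (None means end of survey).
# The routing for Q2 deliberately contains a mistake: the "No" option routes
# back to Q1, creating a loop that a quick manual test may never stumble into.
SURVEY = {
    "Q1": {"options": {"Under 35": "Q2", "35 or over": "Q3"}},
    "Q2": {"options": {"Yes": "Q3", "No": "Q1"}},   # <- routing bug
    "Q3": {"options": {"Agree": None, "Disagree": None}},
}

def run_dummy_respondent(max_steps=20):
    """Answer every question at random and report the path taken."""
    path, current = [], "Q1"
    while current is not None and len(path) < max_steps:
        path.append(current)
        answer = random.choice(list(SURVEY[current]["options"]))
        current = SURVEY[current]["options"][answer]
    completed = current is None
    return path, completed

# Generate a batch of dummy interviews and flag anything suspicious.
seen = set()
for i in range(200):
    path, completed = run_dummy_respondent()
    seen.update(path)
    if not completed:
        print(f"Respondent {i}: never reached the end, path = {path}")

unreached = set(SURVEY) - seen
if unreached:
    print("Questions no dummy respondent ever saw:", unreached)
```

Most survey platforms offer some form of simulated data generation built in; the point of the sketch is simply that a few hundred random runs will explore branches and combinations that two or three manual tests never reach.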

Many survey platforms also provide a suite of tools to help with survey checking and flag errors that need correcting directly to the scripter. Yet it seems the tool most often used for feeding back corrections is still the humble email.

Researchers are used to thinking about motivation for survey participants. But in this situation, we need to think about motivation in the researcher. Repeatedly testing a survey is boring and takes up more time than researchers often feel they can afford. I suspect this explains more than anything why so many errors in surveys get through.

It comes as a surprise to many that in the field of book publishing there are actually professional proofreaders who love finding the errors in written copy. In software too, there are professional testers who get a kick out of finding what doesn’t work.  We don’t yet have a professional association of survey testers, but maybe we should. I’ve never found it hard to line up external people to test surveys for a very modest reward. They invariably find things that no-one else has noticed. And they also invariably say thank you, that was interesting.

Typos are embarrassing and make us look unprofessional. But there are much more expensive mistakes that occur due to inadequate testing. Is your testing fit for purpose? If it relies on you and some hope-for-the-best link checking, the answer is very likely no.
