Mobile research, as a method, may still be in its infancy, but researchers already need to be aware of the fallout from the growing phenomenon of mobile communications, in telephony, data communications and the mobile web alike. The effects cannot be avoided and need to be understood. It was clear from last week’s Mobile Research Conference, organised by software firm Globalpark, that respondents are already taking online surveys designed for conventional PCs and laptops on their web-enabled smartphones in small but significant numbers. Responding by simply excluding them, closing the survey whenever an iPhone is detected, is not a neutral decision from a sampling perspective.
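For readers unfamiliar with how such exclusion happens in practice, here is a minimal sketch, not from the article, of the kind of User-Agent check a survey platform might run at survey entry; the marker list and function name are illustrative assumptions, not any vendor's actual implementation:

```python
# Illustrative sketch: detecting a smartphone respondent from the
# HTTP User-Agent string. The keyword list is an assumption for
# demonstration and is far from exhaustive.

MOBILE_MARKERS = ("iphone", "ipod", "android", "blackberry",
                  "windows phone", "opera mini")

def is_mobile_browser(user_agent: str) -> bool:
    """Return True if the User-Agent string suggests a mobile device."""
    ua = user_agent.lower()
    return any(marker in ua for marker in MOBILE_MARKERS)
```

The sampling question is what a platform does with the result: closing the survey on a positive match silently drops a non-random slice of the sample, which is exactly the non-neutral decision at issue.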
But neither is it smart to exclude them by stealth, when the survey behaves in a way that does not let them continue, or that makes it difficult for them to select some responses. It is no longer safe to assume survey participants are using a conventional browser as their preferred means of accessing the Internet, and that trend will only accelerate as other portable devices, such as Apple’s iPad and the imitators it will spawn, come to market.
The same is happening with mobiles replacing landlines. The figures I found show that 20% of households in the USA were mobile-only last year, a share likely to be 25% now, so a quarter of the population now falls outside any RDD sampling frame in the USA. Marek Fuchs from the Technical University of Darmstadt, in one of the sessions at the event that I was chairing, presented some astonishing figures showing that people in many other European countries are giving up their landlines at an even faster rate. He cited a Europe-wide average of 30% from Eurobarometer data in late 2009. It is even higher in some countries: 66% of people in Finland and 75% in the Czech Republic have only a cellphone to answer.
The mobile web may not quite be moth-holing sampling frames to the extent that mobile voice is for CATI, but the greater cause for concern here is how the respondents who do participate online actually behave. Mick Couper, who knows more about interview effects than anyone, warned that the effect on completion is barely understood yet, but one thing is clear: extrapolating from conventional web surveys would be very risky. Even if survey tools are set up to convert full-screen surveys gracefully to the smaller format, as Google’s Mario Callegaro said, a concern for him would be to know on what basis that conversion was being done and what lay behind the decision-making process adopted by web survey developers or software providers.
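To make Callegaro's concern concrete, here is a hypothetical sketch of one such conversion rule: rendering a multi-row grid question as a single page on large screens but as one item per page on small ones. The function name, the 480-pixel threshold and the pagination rule are all assumptions for illustration, not anything described at the conference:

```python
# Hypothetical "graceful conversion" rule: paginate a grid question
# for small screens. The threshold and strategy are illustrative
# assumptions, not a documented vendor behaviour.

SMALL_SCREEN_MAX_WIDTH = 480  # pixels; an assumed cut-off

def layout_grid(items: list[str], screen_width: int) -> list[list[str]]:
    """Return pages of question items: one grid page on desktop,
    one item per page on a small screen."""
    if screen_width <= SMALL_SCREEN_MAX_WIDTH:
        return [[item] for item in items]  # one rating item per screen
    return [items]  # the whole grid on a single page
```

The measurement worry follows directly: the two layouts may elicit different answers, so the conversion rule itself quietly becomes part of the instrument, which is why knowing the basis for the conversion matters.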
The uncomfortable truth is that we just don’t know the answers to many of these questions yet.