The latest news from the meaning blog

Mobile research growing up fast

Globalpark, organisers of the 2011 Mobile Research Conference, asked me to chair day two of the event. I decided, rather ambitiously, to close the conference with a round-up of all the presentations that day. Here, in prose, is what I said at the end of a long day of very interesting presentations.

Don’t be surprised if you don’t recognise many of the names here. The early adopters of this method are not necessarily the usual suspects – there were some familiar firms present – but while the industry as a whole continues to see only problems with mobile research, it was illuminating to hear from those who are not only convinced of the value of mobile research, but are developing expertise, best practice and clients hungry for more.

Though organised by Globalpark, the event was software vendor-agnostic, with examples from rival software providers presented too. In a few sentences for each session, here is what came up:

Bruce Hoang (Orange Advertising Network) – presented a multi-country study of mobile media consumption by mobile data users in the UK, France, Spain and Poland – countries with marked differences in adoption and usage. He concluded that web-optimised sites are more popular with consumers in mature markets than apps for accessing content. “The web browser in the mobile device is the killer app,” according to Bruce. He advocates sticking to web browser-based surveys on the mobile, as this most closely aligns with respondents’ preferences and experiences.

Guy Rolfe (Kantar Operations) – asserted that mobile apps for surveys definitely have their place. Kantar find participants are willing to download survey apps, which can enrich their survey experience. In parallel, many consumer product manufacturers and retailers are now creating lifestyle apps that capture a lot of useful data and are proving very popular with consumers – they don’t have a research purpose at their heart, but the data they collect could be very useful to researchers, if they can get their hands on it.

Jeroen de Rooij (ValueWait) – presented a lifestyle case study showing that it is possible to field 60-question surveys on mobile, with modest incentives, if you do it with care. The survey also asked respondents to email in pictures after completing the survey, and a very high proportion were willing to go to this effort.

Peter Lynn (University of Essex) – explained that, from a social science and statistical perspective, the scientific survey literature has tended to emphasise the negative, seeing mobile samples as a problem. This needs to be questioned. If you take a Total Quality perspective, there are many areas in which mobile samples are no better or worse than others – coverage may even be better. There is also a one-to-one relationship between respondent and phone, unlike landlines. Other sources of error are reduced: people are more willing to answer some kinds of questions, and because responses are captured in the moment, recall error is avoided; overall, the responses are not fundamentally different from other modes. Its strength surely lies in complementing other modes.

Michael Bosnjak (Free University of Bozen-Bolzano) and Sven Scherrer (Globalpark) – took us through some early results of a study on how useful voice capture and voice recognition might be in overcoming the Achilles’ heel of mobile research: capturing lengthy verbatim responses. Low response and high drop-off are often observed in mobile surveys when these questions are asked. The study pitched standard touch-screen entry against voice capture and voice recognition. From the preliminary results presented, voice did not come out well from a respondent perception point of view. Touch-screen entry was preferred over voice entry; voice recognition was the least preferred, and the spread of responses indicated a divergence of opinions here. Interestingly, respondents seemed to warm to those methods, particularly voice capture, when asked about them five days later. The actual effect on the data has not been measured yet – those results are due out soon.

Justin Bailey (The Nielsen Company) and Sean Conry (Techneos) – presented a case study using BlackBerry Curve devices with a recruited panel of South Africans during the World Cup, which showed that low response really does not need to be a feature of survey-based research. The study monitored media and advertising consumption, and some brand recognition, over the tournament. Very high response rates were sustained throughout an extended survey. Pictures were collected too, and Nielsen ended with a library of 60,000 submitted pictures. The case study offered a real feel-good moment for mobile research.

Thaddeus Fulford-Jones (Locately) – has created a panel of mobile phone users in the US who are willing to allow the firm to capture location data and use this to model actual behaviour. You can learn the extent to which consumers favour some outlets, often driving past rivals in order to reach them. Raw location data is used to identify locations such as retailers, leisure destinations and other important consumer touchpoints. It tends to be most powerful when combined with other data to provide context. Location data also reveal useful temporal information – e.g. how long people really spend in a store, or even waiting at the checkout.

Hannu Verkasalo (Zokem) – spoke of “on-device measurement”, or using the mobile phone for passive data gathering. What came out was just how much you can measure passively, free from the response bias of a survey, when using a mobile device: from sites accessed, search terms entered and time spent on different activities to location data – what was accessed at home, at work or on the road. He also revealed the very different ways that people consume content on mobile devices compared to the web, and again the different profile of apps versus browsers in the content that people access. Hannu’s prognosis is that the mobile app is in the ascendant – which contradicted Bruce Hoang’s earlier analysis.

A. J. Johnson (Ipsos MORI) – chaired a panel session entitled “Debunking the myths of mobile research” and asserted that research needs to treat mobile very differently. People will engage if you approach them via mobile research, but as researchers we have to be very transparent, open and honest with respondents.

Paul Berney (Mobile Marketing Association) – challenged research to take greater interest in mobile. Mobile is the natural home of the digital native – the under-30s who have grown up knowing nothing other than the internet and the mobile phone. It is already changing the way that retailers work and it fundamentally changes the engagement model for brands. It is a mistake to think that mobile is about the technology – it is about people. Mobile is a two-way channel, and if we don’t go there with our research, then others will.

To round up, here are a few common themes that emerged across the event and struck me as fresh:

  1. Mobile surveys can be a bit longer than we may have first thought. 8-12 questions is a common-sense length, but examples were presented of 30 and 60 questions, and much longer when spread over an extended period. But is trying to push up the limit the start of the same slippery slope that led to the downfall of online research?
  2. The experience in emerging markets and less mature markets is very different. The penetration of mobile is so high in emerging markets that it far exceeds every other channel except face-to-face – it is the natural equivalent of online research.
  3. In developed economies, there is an assumption that mobile research is a replacement for online. In reality, it seems to supplement online, and is more of a replacement for telephone and face-to-face.
  4. Mobile research is not one thing – it is a multimodal channel in its own right, embracing self-completion or interviewer-administered, quantitative or qualitative, visual, textual, voice and image, or passive and observational, all of which can be augmented with location or temporal data.
  5. The sphere of mobile research is changing fast and continuing to evolve. It is not something that research can afford to ignore.

The curse of seeing everything

From Research 2010, MRS, London, 23-24 March 2010

A major issue with post-modern research methods, or ‘new MR’ as it is sometimes called – a recurrent theme at the Research 2010 conference – is the amount of data and the consequent effort that goes into extracting any meaning from it. This came home in the new technology session, chaired by Robert Bain and billed as ‘Research Unlimited’. Not that any of the technology being presented was essentially new – naming the session “incremental developments in technologies based around memory and newly applied to market research” might have added precision, but it would not have made the message any clearer.

The pursuit of clarity should be at the heart of any new method – and that is a challenge with two of the methods showcased, both based on neurometrics, from Nunwood’s head of R&D Ian Addie and Millward Brown’s new head of ‘Consumer Neuroscience’, Graham Page. Page is probably the first MR staffer to have the N-word in their job title.

Neurometrics

Improvements in EEG measurement and analysis technology make the approach more affordable and slightly more applicable to surveys in the real world, but they still have a long way to go. The electrode caps and camera-rigged spectacles modelled on stage by Addie, and even the slimmed-down version shown by Page, are still pretty clunky and intrusive. Addie also cautioned that ‘noise’ in the data collection meant that 30 per cent of the data they had collected had to be discarded.

Positivism with a big P

Both speakers showed that this kind of data can aid understanding, and can usefully cast new light on some deeply held assumptions about consumer behaviour, which is no bad thing. Nunwood respondents who had been wired up with electrodes for supermarket visits revealed that a significant amount of the time spent selecting products seemed to go into rejecting other products – not something that is much questioned in conventional recall studies. As research was busy going po-mo in other sessions, this looked like a rallying call for Positivism with a big P.

Page cautioned: “Hype means it is very easy to get carried away with exaggerated claims [for neuroscience]. The results don’t stand on their own: you have to combine this with something else.”

Not only that, but you quickly accumulate a vast amount of data that takes time and effort to process. Furthermore, to give any meaning to it, you must apply the qualitative judgement of the researcher or neuroscientist. This additional burden was also true of the other novel method in the session. Here, Bob Cook from Firefly presented an interesting extension to diary research – particularly those studies that lean towards the auto-ethnographic – with a methodology based on lifelogging, or ‘glogging’, using a small fish-eye camera worn by the participant around the neck. It takes a shot at one-minute intervals throughout the day, capturing everything the respondent sees. Cook reckons it can overcome the usual problems of incomplete recall that arise over the more mundane and automatic activities respondents may be asked about.

Making sense of the data

The problem, in trying to move such techniques into the mainstream, comes at the analysis stage. Getting meaning from these techniques takes extraordinary effort – and they are not amenable to the analytical methods conventionally applied to either qual or quant. We are not usually short of data these days, but we are short of tools to make sense of these new streams of data. Without them, analysis is inordinately time-consuming. Technology makes it easy to collect data in volume, but with all these new methods, it falls heavily on the researcher to bring out the message.

Mobile fallout – to be ignored at your peril

From the Mobile Research Conference 2010, Globalpark, London, 8-9 March 2010.

Mobile research, as a method, may still be in its infancy, but researchers already need to be aware of the fallout from the growing phenomenon of mobile communications, in telephony, in data communications and on the mobile web. The effects cannot be avoided and need to be understood. It was clear from last week’s Mobile Research Conference, organised by software firm Globalpark, that respondents are already taking online surveys designed for conventional PCs and laptops on their web-enabled smartphones, in small but significant numbers. Responding by simply excluding them – closing the survey when an iPhone is detected – is not a neutral decision from the sampling perspective.

But neither is it smart to exclude them simply because the survey then behaves in a way that does not let them continue, or which makes it difficult for them to select some responses. It is no longer safe to assume survey participants are using a conventional browser as their preferred means of accessing the Internet, and that trend will accelerate as other portable devices, such as Apple’s iPad and the imitators it will spawn, start to emerge.
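In practice, that exclusion is often nothing more than a crude user-agent test at the survey’s entry point. Here is a minimal sketch of the sort of check involved – illustrative only, not any particular vendor’s implementation, and the pattern of device names is my own guess:

    import re

    # Hypothetical gatekeeper check of the kind some survey systems apply.
    MOBILE_PATTERN = re.compile(r"iPhone|iPod|Android|BlackBerry|Opera Mini", re.I)

    def is_mobile(user_agent: str) -> bool:
        """Return True if the browser identifies itself as a mobile device."""
        return bool(MOBILE_PATTERN.search(user_agent))

    # A respondent arriving on an iPhone would simply be screened out:
    ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 3_1_3 like Mac OS X) AppleWebKit/528.18"
    if is_mobile(ua):
        print("Sorry, this survey cannot be completed on a mobile device.")

A string match like this silently removes a non-random slice of the sample, which is exactly why the decision is not neutral.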

The same is the case with mobiles replacing landlines – the figures I found were that 20% of households in the USA were mobile-only last year, and that is likely to be 25% now – so a quarter of the population will now fall outside any landline RDD sampling frame in the USA. Marek Fuchs from the Technical University of Darmstadt, in one of the sessions at the event that I was chairing, presented some astonishing figures on the extent to which people are giving up their landlines in many other European countries at an even faster rate. He presented a Europe-wide average from Eurobarometer data in late 2009 of 30%. It is even higher in some countries: notably Finland, where 66% of people have only a cellphone to answer, and the Czech Republic, where the figure is 75%.

Mobile web may not quite be moth-holing sampling frames to the extent that mobile voice is for CATI, but the greater cause for concern here is what those respondents who do participate online actually do. Mick Couper, who knows more about interview effects than anyone, warned that the effect on completion is barely understood yet, but one thing is clear – making assumptions from web surveys would be very risky. Even if survey tools are set up to convert full-screen surveys gracefully to the smaller format, as Google’s Mario Callegaro said, a concern for him would be to know on what basis this conversion was being done and what lay behind the decision-making process adopted by web survey developers or software providers.

The uncomfortable truth is that we just don’t know the answers to many of these questions yet.

Getting a better response online (and offline)

The second day of the IWIS09 Internet Workshop in South Korea focused on practical measures and findings for improving response in online surveys (in addition to those already reported here and here).

Jan Zajac (University of Warsaw) gave an overview of factors which can drive participation rates in online surveys – both boosting them and, in some cases, diminishing them. His own experiments, carried out in Poland to optimise email survey invitations, found that including a picture of ‘the researcher’ made a surprisingly large improvement to response. Less surprisingly, pretty, young, female researchers seem best at pulling in respondents – and not only male ones, but female respondents too.

Pat Converse (Florida Institute of Technology) revisited Dillman’s Tailored Design Method to examine differences in response rates in mixed-mode paper and web surveys, and the extent to which combining the two improves response. It seems paper is far from dead. His analysis across a wide range of published survey results seems to show that a 34% response rate is about middling for Internet-only surveys, whereas mail surveys still typically achieve a 45% response. In his experiment, he looked at how effective using a second mode to follow up non-response in the first mode can be – and clearly it will improve response. Surprisingly, the greatest improvement came from following up a web survey invitation that had got nowhere with an approach by mail: almost 50% of those approached responded, taking overall response to 72%. The best response came from mail first with web as the fall-back, though this is likely to be the most costly per interview. Web first, with a switch to mail, could hit the sweet spot in terms of cost when a high response really matters – such as for a low-incidence sample.
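The mode-switching arithmetic is worth making explicit: if a fraction r1 respond to the first mode, and a fraction r2 of the remaining non-respondents respond to the follow-up mode, overall response is r1 + (1 − r1) × r2. A quick sketch – note the 44% web-first figure is inferred from the numbers above to make the sums work, not quoted from Converse:

    def combined_response(r1: float, r2: float) -> float:
        """Overall response when a second mode follows up the first
        mode's non-respondents."""
        return r1 + (1 - r1) * r2

    # Illustrative: a 44% web-first rate with a ~50% mail follow-up among
    # non-respondents reproduces the 72% overall response reported.
    print(combined_response(0.44, 0.50))  # 0.72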

Presenters from national statistics services in New Zealand, Singapore, Estonia and Colombia all provided insights into how web-based research had been helping them, and how they had been ensuring both high quality and acceptably high response in order to reach the entire population. This, too, was typically achieved by using the web as one channel in a multimodal approach. Web was generally favoured for its cost and convenience, and, empirically, speakers had observed little significant variation in responses across modes. Even where Internet penetration is still low, as in Colombia, where only around 12% of the population enjoy an Internet connection, online is used to supplement fieldwork carried out using 10,000 PDAs equipped with geo-location.

As an event, these two days have effectively provided a cross-section of the state of current knowledge and inquiry into Internet research. There was talk of making the papers and presentations available; if that happens, I’ll provide a link here.

Korean insights into MR

Statistical Center, Daejeon, home of Statistics Korea

More insights into market and social research in Korea emerged in day two of the Internet Survey International Workshop, hosted by Statistics Korea.

South Korea is one of the most technically advanced nations in the world, with a young and growing population. Virtually 100% of those aged under 40 are Internet users and across the board, South Korea ranks eighth globally for Internet penetration: higher than both the USA and the UK. Using Internet panels is therefore very appealing for national statisticians and social researchers – if only ways could be found to overcome coverage and non-response bias.

Sunghee Lee (UCLA) proposed an advance on the Harris Interactive style of propensity weighting, to nudge panels towards national representativeness by supplementing propensity weights with a stage of calibration against a reference dataset which is nationally representative, or drawn from a true random probability sample. Her model was capable of halving the observed discrepancy, but at a cost, as the sample variability tended to increase.
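To make the two stages concrete, here is a minimal sketch of the general approach – propensity weighting followed by calibration against a reference distribution. It is an illustration of the technique as described, not Lee’s actual estimator; the function names and the single-variable calibration step are my own simplifications.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def propensity_weights(X_panel, X_ref):
        """Stage 1: model the odds that a case comes from the volunteer
        panel (1) rather than the reference sample (0), then weight each
        panel case by the inverse of those odds."""
        X = np.vstack([X_panel, X_ref])
        y = np.r_[np.ones(len(X_panel)), np.zeros(len(X_ref))]
        p = LogisticRegression().fit(X, y).predict_proba(X_panel)[:, 1]
        return (1 - p) / p

    def calibrate(weights, categories, ref_shares):
        """Stage 2: rescale weights so the weighted share of each category
        matches the reference distribution exactly (post-stratification;
        iterating this over several variables gives raking)."""
        w = weights.copy()
        total = w.sum()
        for cat, share in ref_shares.items():
            mask = categories == cat
            w[mask] *= share * total / w[mask].sum()
        return w

The trade-off Lee reported falls out naturally: forcing the margins to match pushes some weights further from one, and more variable weights mean a larger design effect – the increase in sample variability she observed.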

Prof. Cho Sung Kyum (Chungnam National University, Korea) had noticed that others’ attempts to weight panels towards national representativity tended to rely on demographic data, including some measures that are hard to calibrate, such as occupation – and there is often frustration in getting hold of robust reference data. Many national statistics offices around the world, however, conduct a time use study among the general population. These meet most of the criteria for good reference data: large, robust, random probability samples that are representative of the population. They also cover Internet use, as one of the uses of time tracked in these studies, in some detail.

In his test online surveys, he asked respondents about time characteristics that could be cross-matched, such as the typical time they get home from work, typical bedtime and time spent online. Matching by six measures, his model provided near-perfect adjustments for questions relating to leisure, education or media consumption, but it offered no improvement for income or work-related questions. However, his work is ongoing, and he hopes to identify other variables that could narrow the gap in future.

Online on a slow burn

In MR, online research has only a ten per cent share in Korea – an astonishingly low figure given the country’s very high Internet penetration – stated Choi In Su, CEO of Embrain, an Asian panel provider. Face-to-face still tends to dominate, as telephone is not particularly useful either, with fewer than 70% of Koreans having a fixed phone line. However, he predicted quite rapid change, expecting the share to reach 20% or more.

The reluctance among MR firms also stems from the same concerns the statisticians had been airing – coverage and non-response error, and low quality in participation. Mr Choi outlined a number of unusual characteristics of the Embrain panels designed to combat these limitations, which include a combination of online and offline recruitment, rigorous verification of new recruits against registers or other trusted sources, a range of fraud detection measures, and good conduct training for panel members. A key measure of success is the consistent 60% response rate from survey invitations.

It felt as if the social statisticians were ahead of the game. Kim Ui Young from the Social Surveys division of Statistics Korea spoke of two successful online introductions of large-scale regular surveys. A key driver had been to reduce measurement error and respondent burden, and a diary study of household economic activity provided a good example of this. In fact, Kostat had gone as far as to work with online banking portals to allow respondents to access their bank statements securely and import specific transactions directly into the online survey – something many respondents found much easier to do.

In my concluding blog entry, tomorrow, I will cover the highlights from international case studies and new research on research, which were also presented today.

Online is the future for national statistics

I’m at the First International Workshop on Internet Survey in Daejeon, Korea. It is hosted by Statistics Korea (or Kostat), which has put together an impressive roster of presentations on leading-edge thinking in using online research for public policy research and other nationally representative surveys: eighteen speakers, fourteen from around the world, and a nice fat 320-page book of scholarly papers to accompany the event.

My own talk was on software and technology (what else?) and how appropriate technology can help control measurement and non-response error; but unlike at many of these events, I did not find myself the pariah for speaking technology. There has been explicit acknowledgment throughout the first day of this two-day event of the need for researchers to be more discriminating and more demanding of the technology being used, in order to improve response, reduce respondent burden and control error more effectively – as well as reducing cost.

The event started with Yi Insill, the Commissioner of Statistics Korea, who predicted “a significant increase in demand for Internet Surveys” in National Statistics work in Korea. “We are expecting them to reduce non-participation and make them engaging for participants,” she stated. She also acknowledged that national statisticians had been reluctant to use online surveys because they were not based on random probability samples and “have been criticised for poor quality”, but that was now changing as the methodology was being understood and tested. Preparations were well advanced for the 2010 e-Survey in Korea, and we heard more of this later on.

One good paper followed another, but I will pull out a few highlights. Frederik Funke (Tübingen University) showed how Visual Analog Scales (VAS), applied to online surveys, can dramatically reduce measurement error, whereas conventional 5-point scales – used online by convention, and possibly for no better reason – can force measurement error on participants by restricting their options, to the extent that a VAS will produce different results which appear to be more accurate.
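The restriction effect is easy to demonstrate with a toy simulation – mine, not Funke’s study design: snap a continuous latent opinion onto five scale points and a rounding error appears that a (near-)continuous VAS would not impose.

    import numpy as np

    rng = np.random.default_rng(0)
    latent = rng.uniform(0, 1, 10_000)   # latent opinions on a 0-1 continuum

    vas = latent                          # VAS: captured (near-)continuously
    likert = np.round(latent * 4) / 4     # 5-point scale: forced into 5 bins

    # Average distortion introduced purely by the coarser scale (~0.0625,
    # i.e. a sixteenth of the scale on every answer, before any other error).
    print(np.abs(likert - latent).mean())

Over a whole sample some of that rounding cancels out, but wherever opinion is skewed it will not cancel completely – which is how the two formats can produce different results.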

Surveys that leak cash

Lars Kaczmirek (GESIS, Mannheim) followed with three practical changes to survey design that would improve response and reduce error. His experiments showed that, compared with the effect of offering an incentive or not, some simple changes to survey design were actually more effective. In other words, you could chop the incentive, improve the design, and still be slightly better off in terms of response.

Kaczmirek was also critical of the way in which new technology was sometimes applied to surveys uncritically, even though it would increase non-response. One example was the automatic progress bar: inaccurate or misleading progress bars, particularly those that jump due to routing, are such a turn-off to respondents that removing them altogether will often improve response. Accurate bars, or bars where jumps are smoothed and averaged out, do better than no bar, though.
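To illustrate the smoothing idea (my sketch, not Kaczmirek’s code): rather than displaying the raw position, blend it with the value last shown and never let the bar move backwards, so a skip caused by routing appears as a smooth step rather than a jolt.

    class SmoothedProgress:
        """Illustrative progress indicator that damps routing jumps."""

        def __init__(self, smoothing: float = 0.5):
            self.smoothing = smoothing  # 0 = raw bar, 1 = frozen bar
            self.shown = 0.0

        def update(self, answered: int, remaining_estimate: int) -> float:
            """Blend the raw completion fraction with the last value shown,
            and keep the bar monotonic so it never runs backwards when
            routing changes the estimate of questions remaining."""
            raw = answered / (answered + remaining_estimate)
            blended = self.smoothing * self.shown + (1 - self.smoothing) * raw
            self.shown = max(self.shown, blended)
            return self.shown

When routing suddenly drops ten questions, the raw fraction would leap; the blended bar advances only part of the way and subsequent updates close the gap – feedback that stays broadly honest without the jumps respondents find off-putting.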

Boxes for Goldilocks

Marek Fuchs (University of Kassel) gave us the latest thinking on verbatim response box size and design in online surveys: getting the size right can mean more characters and, potentially, more concepts – like Goldilocks and the porridge, boxes should be neither too small nor too large. Adding a Twitter-style count of how many characters remain can also boost response length, provided the starting number is realistic (a couple of hundred characters, not a thousand). However, too much trickery, such as dynamically appearing or extending boxes, will send any gains into reverse. As with the wonky progress bars, the point is that any feedback must be realistic and honest for it to act as a positive motivator.

Questionnaires with added AJAX

Peter Clark (Australian Bureau of Statistics) talked us through the 10 per cent uptake of the online census option in Australia for the 2006 Census, and the plans being made to increase this to 25% for the 2011 Census. The ABS had appointed IBM as its technology partner for 2006 and again for 2011. IBM had pioneered adding browser-based processing in AJAX (a Web 2.0 technology) to the 2011 e-Census form, to cut down server load. It has saved them a fortune in hardware requirements, as the server load is now a third of what it was. For the many participants on slower dial-up connections, the form took longer to load but, once loaded, was actually faster, as all further traffic to the server was minimal and therefore very fast for the user.

Australia, like Singapore and Estonia, whose speakers also described their e-census strategies, had added an online option to the national census as a means of reducing cost. For obvious coverage reasons, the e-census is offered as an option alongside self-completion by mail, with face-to-face follow-up of non-responders as a last resort.

Priit Potter (Webmedia, Estonia) spoke of the work he had done in providing novel respondent validation methods for the forthcoming e-census in Estonia, which included using trusted third parties such as banks to offer verification through their normal online banking security and then pass key identification data on to the census bureau. Another method offered to respondents is mobile phone verification (provided that the phone is registered). Both methods have the advantage that the public can respond to ads in the media, visit a website and self-verify, instead of the census bureau having to send out numerous unique passcodes.

And there is more in store tomorrow…