The latest news from the meaning blog

 
Technology, sustainability and the perils of sat-nav thinking

Why are we continuing to field half-hour or even longer interviews, when we know 15 minutes is the natural limit for participants?

I gave a presentation at last week’s Confirmit Community Conference in which I looked at some of the results from our recent software survey through an ethics and best practice lens. Confirmit are not only one of the major players in the research technology space; they also sponsored our research, and were keen for me to share some of the findings at their client gathering in London.

More than one observer has pointed out that over the years our survey has strayed somewhat beyond the narrow remit of technology into wider research issues, such as methodology, best practice and commercial considerations. I’m not sure we can make that separation any more. Technology no longer sits in the hands of the specialists – it is ubiquitous. And in my defence, everything in our survey does very much relate to technology and its effects on the industry. But that does indeed give us quite a broad remit.

Technology is an enabler, but it often imposes a certain way of doing things on people, and takes away some elements of choice. There is always a risk that it also erodes the user’s discretion, resulting in ill-considered and ultimately self-defeating behaviour. Think, for example, of the hilarious cases of people putting so much faith in their satellite navigation systems that they end up driving the wrong way along one-way streets, or even into a river.

Technology has shoved research in a particular direction of travel – towards online research using panels, and incentivising those panels. That is a technology-induced shift, and it brings with it a very real set of concerns around ethics and best practice that have been rumbling around the industry since at least 2006.

Researchers cannot afford to take a sat-nav approach to their research and let the technology blindly steer them through their work. They must be fully in charge of the decisions and aware of the consequences. They must not lose sight of the two fundamental principles on which all research codes and standards rest – being honest with clients and being fair to participants.

Delivering surveys without checking whether 30% of your responses were invented by fraudulent respondents or survey-taking bots is no more acceptable than having a member of staff fabricate responses to ensure you hit the target. Ignorance is no defence in law. Yet this is almost certainly what is happening in the many cases our survey uncovered where the quality regimes reported were of the superficial, light-touch and easily achieved variety.
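
To make that concrete, here is a minimal sketch of the kind of checks a less superficial quality regime might run. It assumes completes sit in a pandas DataFrame with hypothetical column names; a real regime would go much further, with digital fingerprinting, trap questions and open-end review.

```python
import pandas as pd

def flag_suspect_completes(df, grid_cols, median_secs):
    """Return a boolean Series marking completes that fail basic checks."""
    flags = pd.DataFrame(index=df.index)
    # Speeders: finished in under a third of the median interview length
    flags["speeder"] = df["duration_secs"] < median_secs / 3
    # Straightliners: gave the identical answer to every item in a grid
    flags["straightliner"] = df[grid_cols].nunique(axis=1) == 1
    # Duplicates: the same hashed IP/device signature appearing repeatedly
    flags["duplicate"] = df.duplicated(subset="ip_hash", keep=False)
    return flags.any(axis=1)
```

None of these checks is onerous to script, which is rather the point: the difference between a light-touch regime and a real one is mostly the willingness to look, and to act on what the flags reveal.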

Pushing ahead with surveys that will take half an hour or an hour to complete, when there is a good shared understanding that 15 minutes is the natural limit for an online survey, sounds like an act of desperation reserved for extreme cases. Yet it is the 15-minute online interview that appears to be the exception rather than the norm. This is crassly inconsiderate of survey participants. It’s sat-nav thinking.

The real issue, beyond all this, is sustainability. The cost savings achieved from web surveys are now being squandered on incentives and the related admin. Long, boring surveys lead to attrition, and the respondents lost have to be replaced, very expensively, from an ever-dwindling pool.

So yes, I make no apology for being a technologist talking about research ethics. Sat-navs and survey tools aren’t intrinsically wicked – they just need to be used responsibly.

Online samples – where lighting a few fires might help

In Greek mythology, it was Prometheus who stole the secret of fire from Zeus and gave it to humankind. The new non-profit organisation being launched by rival research giants GfK and Kantar to address industry-wide concerns about online survey quality seems to make a nod to the myth in its chosen name: the Promedius Group.

The industry’s concerns about online research are many and various, but a common complaint is the lack of transparency of sample providers in the composition of their samples and the extent to which these overlap. It’s worrying enough that, as response rates dwindle, research firms are probably already relying on less than 20% of the population to answer more than 80% of their surveys. But what if it is the hyperactive 0.1% of the population that turn out to be answering 10%, or 20%, as some fear, turning survey data into junk? Without the vantage point of the gods, no-one can really tell what is happening.
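
Some back-of-envelope arithmetic, with invented but plausible figures, shows how little it takes for that fear to come true:

```python
# Hypothetical numbers for illustration only.
population = 50_000_000                 # an adult population, say
hyper = 0.001 * population * 365        # 0.1% completing one survey a day
ordinary = 0.20 * population * 10       # 20% completing ten surveys a year

share = hyper / (hyper + ordinary)
print(f"Hyperactive share of all completes: {share:.0%}")  # about 15%
```

One survey a day from a hyperactive 0.1% is enough to rival ten surveys a year from a co-operative fifth of the population.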

Good research is always a balance of art, craft and science. The risk is that if survey results are no longer generally reproducible, any claim to scientific validity is lost. Those who spend the big money on research, like General Mills and P&G, have noticed this, and are highly likely to start putting their money into other forms of consumer intelligence gathering unless research can be made more accountable again.

The solution is staring at us from the problem. There is a vast trail of data that can be associated with every web-based interaction – put it all together and it becomes possible to pinpoint individuals and identify, within reasonable probabilities, that they do seem to be taking 20 surveys a day, or that they are very unlikely to be a gynaecologist because the digital wake emanating from the same PC speaks more of college student. Getting at this data, however, is much more difficult. If you are a big player, with a large volume of interactions, you can do this – but even the industry’s own demi-gods face a major hindrance, in that most of the panel providers don’t reveal the key information you need to start putting this information together, like respondent IDs or IP addresses.
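
If those identifiers were shared, even in hashed form, the matching itself would not be conceptually difficult. Here is a rough sketch, with hypothetical field names, of how cross-provider overlap detection could work – an illustration of the general idea, not of how Promedius actually operates:

```python
import hashlib
from collections import defaultdict

def fingerprint(record):
    """Hash identifying fields so raw IPs never need to change hands."""
    raw = f"{record['ip']}|{record['user_agent']}".encode()
    return hashlib.sha256(raw).hexdigest()

def find_overlap(completes_by_provider):
    """Map each fingerprint to every provider it has completed surveys for."""
    seen = defaultdict(set)
    for provider, completes in completes_by_provider.items():
        for record in completes:
            seen[fingerprint(record)].add(provider)
    # Fingerprints spanning several providers, or recurring dozens of
    # times a day at one, are the 'taking 20 surveys a day' candidates.
    return {fp: providers for fp, providers in seen.items()
            if len(providers) > 1}
```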

Promedius will, it appears, be making use of a lot of technology to match data and perform checks on it, and will be making this technology available for other research companies to use. This is welcome news, as the problem has been proving too big for anyone to solve on their own. There are already commercial services – MarketTools’ TrueSample and Peanut Labs’ Optimus, to name two – and these have gained some traction. But they also add cost, and are restricted to some extent by only ever showing part of the picture: that from the samples and providers that have opted in.

With three major players backing this initiative (Ipsos were involved in the development of the technology behind Promedius), it is likely to have the critical mass needed to become established. What the technology does, and how affordable and convenient it is (the announcements do not say that it will be offered to the industry for free), remains to be seen. I’ll be looking to secure a software review as soon as it becomes available. But there is a good chance that Promedius will be putting fire into the hands of the people, as far as panel and online survey transparency is concerned.

Hopefully Promedius will enjoy a better fate than its near namesake, who, after several other skirmishes, found himself bound to a rock by the vengeful Zeus, with an eagle visiting him every day for eternity to peck away at his liver.

Panel crazy

Just before the end of last year, I decided to join one of the online panels to get a better understanding of their workings. I have to say that the whole experience has left me open-mouthed in shock. In the short time I have been a member, I have encountered error after error, from broken invitations to frankly poor questionnaire design and basic scripting mistakes.

Typical errors I have experienced are:

  • Emails in various foreign languages inviting me to participate in surveys
  • Numerous broken links in emails
  • Requests to enter passwords that I don’t have
  • Links that said ‘this survey has not started yet’, only to report, when I went back later, that I had already completed the survey

For those few surveys where I have not been screened out, I have encountered numerous impossible lines of questioning. There never seems to be a way of skipping a question, so your only options are to click anything (to make sure you get your payment for completion) or to give up. Some questionnaires allow you to make a comment at the end, so you can at least explain that you were forced to give an opinion on the personalities of different washing-up liquid brands, even though you have none, because you regard washing-up liquid as a commodity and just buy the cheapest, irrespective of its name, colour, perfume and so on.

In another survey, which was about radio stations, I was presented with a list of radio stations and asked which one I had most recently listened to. I picked a talk and news station. From then on, I was asked endless, detailed, non-applicable questions such as whether the presenters interrupted the music or how fun the competitions were.

And, perhaps even worse, I recently saw a question which asked me which of a list of brands I had bought. The one I had bought was not on the list, and there was no ‘other’ option! Again, I was forced to select an incorrect answer or give up.
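
All three complaints come down to routing and escape options that any scripting tool can express. A hypothetical sketch – not any real survey platform’s API – of the safeguards that were missing:

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    options: list
    allow_other: bool = True                  # always offer an escape route
    applies: callable = lambda answers: True  # routing rule; default: always ask

def run(survey):
    answers = {}
    for i, q in enumerate(survey):
        if not q.applies(answers):
            continue                          # route past non-applicable items
        options = q.options + (["Other / no opinion"] if q.allow_other else [])
        answers[i] = options[0]               # stand-in for real respondent input
    return answers

survey = [
    Question("Which station have you listened to most recently?",
             ["Music FM", "Talk & News"]),
    # Only asked if the respondent chose a music station at question 0
    Question("Did the presenters interrupt the music?",
             ["Yes", "No", "No opinion"],
             applies=lambda a: a.get(0) == "Music FM"),
]
```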

I did think that perhaps I had just bitten into the one bruised apple in the basket, but coincidentally, I spent New Year’s Eve with a friend who is managing a new panel. She has worked with numerous panel suppliers and was not in the least bit surprised by my tale of woe.

Having had this discussion with my friend, I began to wonder whether market researchers are accepting unrealistic deadlines. Clients know that, with the technology we now have, it is possible to script a project and complete the fieldwork online in a matter of hours. But perhaps they don’t appreciate the amount of thought, effort and creativity required for decent questionnaire design and, in particular, for thorough quality control. There’s no point in conducting a survey if the questions are impossible to answer honestly and accurately. Presumably, some important business decisions are being based on some rather dubious survey results.

Korean insights into MR

Statistical Center, Daejeon, home of Statistics Korea

More insights into market and social research in Korea emerged in day two of the Internet Survey International Workshop, hosted by Statistics Korea.

South Korea is one of the most technically advanced nations in the world, with a young and growing population. Virtually 100% of those aged under 40 are Internet users, and across the board South Korea ranks eighth globally for Internet penetration: higher than both the USA and the UK. Using Internet panels is therefore very appealing for national statisticians and social researchers – if only ways could be found to overcome coverage and non-response bias.

Sunghill Lee (UCLA) proposed an advance on the Harris Interactive style of propensity weighting, which nudges panels towards national representativeness by supplementing propensity weights with a stage of calibration against a reference dataset that is nationally representative, or drawn from a true random probability sample. Her model was capable of halving the observed discrepancy, but at a cost, as sample variability tended to increase.
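
In outline, the two stages might look like the following sketch – made-up column names, and an illustration of the general propensity-plus-calibration idea rather than of Lee’s actual model:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def propensity_weights(panel, reference, covariates):
    """Stage 1: inverse-propensity weights for the volunteer web panel."""
    combined = pd.concat([panel[covariates], reference[covariates]])
    in_panel = np.r_[np.ones(len(panel)), np.zeros(len(reference))]
    model = LogisticRegression().fit(combined, in_panel)
    p = model.predict_proba(panel[covariates])[:, 1]
    return pd.Series((1 - p) / p, index=panel.index)  # odds of 'reference'

def rake(panel, weights, margins, max_iter=50, tol=1e-6):
    """Stage 2: calibrate the weights to reference margins by raking."""
    w = weights.copy()
    for _ in range(max_iter):
        before = w.copy()
        for var, targets in margins.items():  # e.g. {"age_band": {"18-29": 0.2}}
            for level, target in targets.items():
                mask = panel[var] == level
                current = w[mask].sum() / w.sum()
                if current > 0:
                    w[mask] *= target / current
        if np.abs(w - before).max() < tol:
            break
    return w
```

The trade-off Lee reported falls straight out of this kind of adjustment: the more aggressively the weights correct the discrepancy, the more variable they become, and the wider the confidence intervals on any estimate they produce.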

Prof. Cho Sung Kyum (Chungnam National University, Korea) had noticed that others’ attempts to weight their panels towards national representativity tended to rely on demographic data, including some measures, such as occupation, that are hard to calibrate. There is often frustration in getting hold of robust reference data. Prof. Cho had observed that many national statistics offices around the world conduct a Time Use study among the general population. These meet most of the criteria for good reference data: large, robust, random probability samples that are representative of the population. They also cover Internet use, as one of the uses of time tracked in these studies, in some detail.

In his test online surveys, he asked respondents about time characteristics that could be cross-matched, such as the typical time they get home from work, their typical bedtime, and their time spent online. Matching on six measures, his model provided near-perfect adjustments for questions relating to leisure, education or media consumption, but it offered no improvement for income or work-related questions. However, his work is ongoing, and he hopes to identify other variables that could narrow the gap in future.
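
The underlying adjustment can be as simple as cell weighting on banded time-use variables. A rough illustration of the idea – not Prof. Cho’s model, and with invented column names:

```python
import pandas as pd

def timeuse_weights(online, reference, cells):
    """Weight online respondents so cell shares match the time-use study."""
    # cells might be ["bedtime_band", "home_time_band", "hours_online_band"]
    ref_share = reference.groupby(cells).size() / len(reference)
    obs_share = online.groupby(cells).size() / len(online)
    ratio = (ref_share / obs_share).rename("weight")
    return online.join(ratio, on=cells)["weight"].fillna(0.0)
```

Why such weights would fix leisure and media estimates but not income ones is intuitive: time-use variables are strongly correlated with how people spend their leisure time, and only weakly with what they earn.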

Online on a slow burn

Online research has only a ten per cent share of MR in Korea – an astonishingly low figure given the country’s very high Internet penetration – stated Choi In Su, CEO of Embrain, an Asian panel provider. Face-to-face still tends to dominate, as telephone is not particularly useful either, with fewer than 70% of Koreans having a fixed phone line. However, he predicted quite rapid change, expecting online’s share to reach 20% or more.

The reluctance among MR firms also stems from the same concerns the statisticians had been airing: coverage and non-response error, and low-quality participation. Mr Choi outlined a number of unusual characteristics of the Embrain panels designed to combat these limitations, including a combination of online and offline recruitment, rigorous verification of new recruits against registers and other trusted sources, a range of fraud detection measures, and good conduct training for panel members. A key measure of success is a consistent 60% response rate to survey invitations.

It felt as if the social statisticians were ahead of the game. Kim Ui Young from the Social Surveys division of Statistics Korea spoke of two successful online introductions of large-scale regular surveys. A key driver had been to reduce measurement error and respondent burden, and one diary study of household economic activity provided a good example of this. In fact, Kostat had gone as far as to work with online banking portals to allow respondents to access their bank statements securely, and then import specific transactions directly into the online survey, which a lot of respondents found much easier to do.

In my concluding blog entry, tomorrow, I will cover the highlights from international case studies and new research on research, which were also presented today.

Has the Insight Show overheated?

Technology was an aspect of this week’s Insight Show that the exhibition’s promoters were majoring on, yet on the ground the number of technology providers exhibiting was thinner than ever – I found just 13. Who was there? End-to-end mixed-mode providers were represented by Askia, Confirmit, Merlinco, Nebu and Snap, plus online specialists Itracks and the newcomers on the block, ebox software. The niche providers were represented by E-Tabs (a niche maker in their own right for report automation), Centurion and Cint for panel management, Intellex Dynamic Reporting for interactive analysis, OnePoint for mobile data collection, Think Eyetracking for, well, eye tracking, and Visions Live, a new qualitative research platform, plus, rather strangely, a presence from Panasonic, featuring their Toughbooks as a rugged CAPI device.

Part of the reason for the shift of the Insight Show from the back end of the year to the middle (last year’s show was barely seven months ago, in November) was to merge four of Centaur’s marketing-related shows under one roof, where they were colour coded and branded as MarketingWeekLive! Insight was in the orange corner. But lo and behold! Over in the blue corner was SPSS, a big fish in the diminutive Data Marketing Show. They weren’t the only MR-relevant supplier to show up in the other quadrants – some research and fieldwork firms had taken up positions elsewhere too. To the visitor, it was a bit of a muddle.

The Insight Show does have the feel of being on the wane since its heyday, if you listen to the crowd. But then I hear exhibitors moan each year that traffic is very slow, and that most time is spent standing around in an excruciatingly expensive way; identifying its heyday is elusive, and probably illusory. This year, day one was apparently busier than day two, when I was there. Yet I can remember being told there wasn’t a busy day at all in past years. Still, the day I was there seemed to be the one when competing sales teams converged on the orange carpet between their stands to chat about who was up to what and complain about the heat.

I had assumed much of the reason for the merged format was that the Insight Show (which used to be big and standalone) was in danger of disappearing altogether, and that alongside the other shows it would find itself in the naughty corner. Not so. The Insight Show was second in size only to the big and bold In-Store show. If the point-of-sale people can’t put on a good show, what hope is there for us research boffins? But it did make me wonder how many people out shopping for illuminated fascias and storefront signage might find some online focus groups coming in handy, or how many of those looking for a decent panel provider would be wowed by the ‘innovative trolley and basket systems’ on display next door.

Apart from the exhibitors, what was hot in the orange corner? 2009 seems to be the year of online qual. Not only does Visions Live have a very interesting new multilingual real-time and asynchronous (or bulletin board) product, which has come out of New Zealand and already has a significant footprint in Asia Pacific, but the other newcomers, ebox, seem to have put as much effort into developing qual tools as into their quant online data collection. It’s all very Research 2.0, although Itracks, who were also there, would make the point that they’ve been doing online qual since the days when people were still discovering their @ signs. And today I was given a private preview of yet another virtual qualie tool (a very nice one in the making, too) that locates the group experience in a virtual-worlds paradigm.

Beyond that, software providers are talking seriously about automation – as they have for a long time – but they were also showing me things that were starting to make sense in simplifying tasks and saving time. Centurion have a new web-based interface for their panel and sampling platform, called Marsc.net, which looked very nice, and they have built in lots of heuristic models for drawing samples for trackers. Intellex Dynamic Reporting had a number of smart new reporting goodies on display to make life easier, and can now go straight out to PowerPoint for report automation. The bright people at Nebu, on the other hand, have simplified the panel set-up process so that someone using their panel solution can create and start populating a new online panel or custom community in just an hour or so – or as long as it takes to create the branding and imagery, in fact – their ‘panel in a box’.

But as I left, I was wondering whether someone at Centaur had misheard what I certainly heard last year – that ‘the show would make more sense as a biennial event’ – and optimistically decided to make it a biannual one instead. Barely more than six months on was really too soon for this event, and the show definitely suffered as a result from the visitor’s point of view.