The latest news from the meaning blog

New book on Online Panels – with a chapter from meaning MD Tim Macer

How good is the quality of the access panel you are using to feed participants into your online research? How would you begin to assess that quality? How can you tell good practice from bad? And how do you create and sustain a panel that will deliver robust and reliable samples for market research or social research?

These are the kinds of questions that a new book published by Wiley sets out to answer. The book comprises 19 chapters which together form an encyclopaedia of the issues relating to both the use and the operation of panels for online research. The chapters were curated by a team of six editors – Mario Callegaro, Reg Baker, Jelke Bethlehem, Anja S. Göritz, Jon Krosnick and Paul J. Lavrakas – and contain the contributions of 50 authors from around the world with a wide range of experience and expertise in the field of online research.

Each chapter is based on original research, and in the same spirit of transparency that the book espouses in the operation of online panels, all of the datasets are also made public for anyone to use in their own research.

Lead editor and contributor to the book, Mario Callegaro, said: “The book is trying to answer many questions on the quality of such data. It is amazing that online panels have been used for the past 15 years and yet there is no textbook out there. This is the first book to focus on online quality, something that everyone is struggling with.”

Managing Director of meaning ltd, Tim Macer, was asked to provide the chapter on the technology of online panels. He carried out a survey of all the providers offering software or technology for operating panels, and put together a model of good practice in technology support for panels, based around ESOMAR’s 26 Questions to Help Research Buyers of Online Samples.

Tim Macer commented: “As Mario says, little had been published on good practice in the use of online panels, but still less was known about the software people were using, and whether they were up to the task of supporting quality panels with the kind of tools, data recording and reports required. This chapter attempts to provide a framework for panel operators, panel users and software developers to use to promote best practice.”

The book may be purchased online and individual chapters may also be purchased for download directly from the publisher.

Technology, sustainability and the perils of sat-nav thinking

Why are we continuing to field half-hour or even longer interviews, when we know 15 minutes is the natural limit for participants?

I gave a presentation at last week’s Confirmit Community Conference in which I looked at some of the survey results from our recent software survey through an ethics and best practice lens. Confirmit are not only one of the major players in the research technology space; they also sponsored our research, and were keen that I share some of the findings at their client gathering in London.

More than one observer has pointed out that over the years our survey has strayed somewhat beyond the narrow remit of technology into wider research issues, such as methodology, best practice and commercial considerations. I’m not sure we can make that separation any more. Technology no longer sits in the hands of the specialists – it is ubiquitous. And in my defence, I’d point out that everything in our survey does very much relate to technology and its effects on the industry. But that does indeed give us quite a broad remit.

Technology is an enabler, but it also often imposes a certain way of doing things on people, and takes away some elements of choice. There is always a risk that it also takes away discretion from the user, resulting in ill-considered and ultimately self-defeating behaviour. Think, for example, of the hilarious cases of people putting so much faith in their satellite navigation systems that they end up driving the wrong way along one-way streets, or even into a river.

Technology has shoved research in a particular direction of travel – towards online research using panels, and incentivising those panels. That is a technology-induced shift, and it brings with it a very real set of concerns around ethics and best practice which has been rumbling round the industry since 2006 at least.

Researchers cannot afford to take a sat-nav approach to their research and let the technology blindly steer them through their work. They must be fully in charge of the decisions and aware of the consequences. They must not lose sight of the two fundamental principles on which all research codes and standards rest – being honest with clients and being fair to participants.

Delivering surveys without checking whether 30% of your responses were invented by fraudulent respondents or survey-taking bots is no more acceptable than having a member of staff fabricate responses to ensure you hit the target. Ignorance is no defence in law. Yet this is what is certainly happening in the many cases our survey uncovered, where the quality regimes reported are too often of the superficial, light-touch and easily achieved variety.
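By way of illustration, here is a minimal sketch of the kind of checks that can be automated before data is delivered – flagging speeders, straight-liners and duplicate fingerprints. The field names and thresholds are my assumptions for the example, not any particular vendor’s API:

```python
import statistics

# A minimal sketch only: the field names ("duration_secs", "grid_ratings",
# "ip", "user_agent") and the thresholds are invented for illustration.

def flag_suspect_completes(completes, min_seconds=180):
    """Flag interviews that look fabricated: speeders, straight-liners
    and duplicate browser fingerprints."""
    flagged = []
    seen = set()
    for c in completes:
        reasons = []
        # Speeder: a half-hour questionnaire finished in a few minutes
        if c["duration_secs"] < min_seconds:
            reasons.append("speeder")
        # Straight-liner: zero variance across a grid of rating scales
        ratings = c["grid_ratings"]
        if len(ratings) > 1 and statistics.pstdev(ratings) == 0:
            reasons.append("straight-liner")
        # Duplicate: the same IP and browser seen earlier in the job
        fingerprint = (c["ip"], c["user_agent"])
        if fingerprint in seen:
            reasons.append("duplicate")
        seen.add(fingerprint)
        if reasons:
            flagged.append((c["respondent_id"], reasons))
    return flagged
```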

Pushing ahead with surveys that will take half an hour or an hour to complete, when there is a good shared understanding that 15 minutes is the natural limit for an online survey, sounds like an act of desperation reserved for extreme cases. Yet it is the 15-minute online interview that appears to be the exception rather than the norm. This is crassly inconsiderate of survey participants. It’s sat-nav thinking.

The real issue, beyond all this, is one of sustainability. Cost savings achieved from web surveys are now being squandered on incentives and on related admin. Long, boring surveys lead to attrition. Lost respondents have to be replaced, very expensively, from an ever-dwindling pool.

So yes, I make no apology for being a technologist talking about research ethics. Sat-navs and survey tools aren’t intrinsically wicked – they just need to be used responsibly.

Online samples – where lighting a few fires might help

In Greek mythology, it was Prometheus who stole the secret of fire from Zeus and gave it to humankind. The new non-profit organisation being launched by rival research giants GfK and Kantar to address industry-wide concerns about online survey quality seems to make a nod to the myth in its chosen name: the Promedius Group.

The industry’s concerns about online research are many and various, but a common complaint is the lack of transparency of sample providers in the composition of their samples and the extent to which these overlap. It’s worrying enough that, as response rates dwindle, research firms are probably already relying on less than 20% of the population to answer more than 80% of their surveys. But what if it is the hyperactive 0.1% of the population that turn out to be answering 10%, or 20%, as some fear, turning survey data into junk? Without the vantage point of the gods, no-one can really tell what is happening.
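A back-of-envelope calculation shows how plausible that fear is. The numbers below are invented purely to illustrate the arithmetic – they are not estimates of any real panel:

```python
# Illustrative only: how a tiny hyperactive minority can supply a large
# share of completes. All figures here are invented for the example.
panel_size = 100_000                   # active panellists
hyperactive = int(panel_size * 0.001)  # the feared 0.1% = 100 people
completes_hyper = hyperactive * 20     # "20 surveys a day" each = 2,000
completes_rest = (panel_size - hyperactive) * 0.16  # everyone else = 15,984

share = completes_hyper / (completes_hyper + completes_rest)
print(f"0.1% of the panel supplies {share:.0%} of all completes")  # ~11%
```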

Good research is always a balance of art, craft and science. The risk is that if survey results are no longer generally reproducible, any claims to scientific validity are lost. Those who spend the big money in research, like General Mills and P&G, have noticed this, and are highly likely to start putting their money into consumer intelligence gathering elsewhere unless research can be made more accountable again.

The solution is staring at us from the problem. There is a vast trail of data that can be associated with every web-based interaction – put it all together and it becomes possible to pinpoint individuals and identify, within reasonable probabilities, that they do seem to be taking 20 surveys a day, or that they are very unlikely to be a gynaecologist because the digital wake emanating from the same PC speaks more of a college student. Getting at this data, however, is much more difficult. If you are a big player, with a large volume of interactions, you can do this – but even the industry’s own demi-gods face a major hindrance: most of the panel providers don’t reveal the key information you need to start putting this picture together, such as respondent IDs or IP addresses.
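To make the idea concrete, here is a hedged sketch of the kind of matching this implies, keyed on the identifiers mentioned above. The hashing scheme and function names are assumptions for illustration, not a description of anyone’s actual system:

```python
import hashlib

def fingerprint(respondent_id: str, ip: str, user_agent: str) -> str:
    """Derive a stable, pseudonymous key for one web interaction."""
    raw = f"{respondent_id}|{ip}|{user_agent}".encode("utf-8")
    return hashlib.sha256(raw).hexdigest()

def find_overlap(sample_a, sample_b):
    """Return fingerprints appearing in both providers' samples: the
    same person, or the same PC, answering for two 'different' panels."""
    keys_a = {fingerprint(**r) for r in sample_a}
    keys_b = {fingerprint(**r) for r in sample_b}
    return keys_a & keys_b
```

Without the respondent IDs and IP addresses, of course, there is nothing to feed into such a function – which is precisely the hindrance described above.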

Promedius will, it appears, be making use of a lot of technology to match data and perform checks on it, and it will be making this technology available for other research companies to use. This is welcome news, as the problem has been proving too big for anyone to solve on their own. There are already commercial services – MarketTools’ TrueSample and Peanut Labs’ Optimus, to name two – and these have gained some traction. But they also add cost, and are restricted to some extent by only ever showing part of the picture – from those samples and providers that have opted in.

With three major players backing this initiative (Ipsos were involved in the development of the technology behind Promedius), it is likely that it will have the critical mass needed for it to become established. What the technology does, and how affordable and convenient it is (the announcements do not say that it will be offered to the industry for free), remains to be seen. I’ll be looking to secure a software review as soon as it becomes available. But there is a good chance that Promedius will be putting fire into the hands of the people, as far as panel and online survey transparency is concerned.

Hopefully Promedius will enjoy a better fate than its near namesake, who, after several other skirmishes, found himself bound to a rock by the vengeful Zeus, with an eagle visiting him every day for all eternity to peck away his liver.

Where are the tools to enable Web 2.0 research?

Researchers cannot afford to ignore Web 2.0 approaches to research, as Forrester analyst Tamara Barber makes clear in a persuasive article on Research Live, in which she settles on market research online communities (MROCs) as the most effective way to achieve this. How to do Web 2.0 research, from a methodological point of view, is generating a great deal of discussion at MR events this year.

In her piece, Ms Barber focuses on the social or participatory characteristics of Web 2.0, where there is obvious value to research. But the other characteristics of Web 2.0 lie in the technological changes that have emerged from its 1.0 antecedents – the Internet becoming a platform for software, rather than a delivery channel for information. Indeed, the technology – Ajax, Web services, content integration and powerful server-side applications – is as much the hallmark of Web 2.0 as the outward manifestations of the social web. It is on the technology side that market research has a lot of catching up to do, and until this gets sorted out, Web 2.0 research will remain an activity for the few – for patient clients with deep pockets.

The specialist tools we use in research are starting to incorporate some Web 2.0 features, but nowhere does this yet approach a fully integrated platform for Research 2.0 – far from it. Panel management software is morphing into community management software, but the web survey tools it links to do not yet make it easy to create the kind of fluid and interactive surveys the Web 2.0 researcher dreams of. Nor are the tools for analysing the rich textual data that comes out of these new kinds of research truly optimised for Web 2.0 data. There are pockets of innovation, but multi-channel content integration – a key feature of Web 2.0 sites – is still difficult, so researchers are left drowning in data and running to catch up on the analytical side.

Another problem arises as more ambitious interactive activities and research methods emerge: the demands on both the respondent and the respondent’s technology increase, and some are getting left behind. Participants find themselves excluded because their PC at home or at work won’t run the Java or other components needed to complete the activity – whether it’s a survey, a trip into virtual reality or a co-creation exercise – or won’t let them upload what you are asking them to upload. Even relatively modest innovations, such as presenting an interactive sort board in the context of an online survey or focus group, will exclude some participants because their browser or their bandwidth won’t handle it. Others simply get lost because they don’t understand the exercise – there is a growing body of studies into the extent to which respondents fail to understand the research activities they are being asked to engage in.

New Scientist recently reported on innovations in gaming technology where the game learns from the level of competence demonstrated by the player and uses this to adjust the game’s behaviour. It’s the kind of approach that could help considerably in research. Unlike online gamers, we can’t ask participants to spend more than a few seconds learning a new task, and we can’t afford to lose respondents because of the obvious bias that introduces into our samples.
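As a toy illustration of how that gaming idea might translate to a research task – every name and threshold below is invented, and this is not a description of any existing tool:

```python
# A toy sketch: step a survey exercise's complexity up or down based on
# the competence the respondent has just demonstrated. Invented logic.
def next_task_level(level: int, errors: int, seconds_taken: float) -> int:
    if errors == 0 and seconds_taken < 10:
        return min(level + 1, 5)   # coping well: richer interaction
    if errors > 2 or seconds_taken > 60:
        return max(level - 1, 1)   # struggling: simpler fallback version
    return level                   # otherwise, hold steady
```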

For Web 2.0 research to move beyond its current early-adopter phase, not only do researchers need to take on these new methods, but research software developers also need to be encouraged to take a Web 2.0-centric approach to their developments.

Getting a better response online (and offline)

The second day of the IWIS09 Internet Workshop in South Korea focused on practical measures and findings on improving response in online surveys (in addition to those already reported here and here).

Jan Zajac (University of Warsaw) gave an overview of the factors which can drive participation rates in online surveys – boosting them and, in some cases, diminishing them too. His own experiments, carried out in Poland, on optimising email survey invitations to boost response found that including a picture of ‘the researcher’ made a surprisingly large improvement to response. Less surprisingly, pretty, young, female researchers seemed best at pulling in the respondents – and not only males, but females too.

Pat Converse (Florida Institute of Technology) revisited Dillman’s Tailored Design Method to examine the differences in response rates in mixed-mode paper and web surveys, and the extent to which combining the two improves response. It seems paper is far from dead. His analysis across a wide range of published survey results seems to show that a 34% response rate is about middling for Internet-only surveys, whereas mail surveys still typically achieve a 45% response. In his experiment, he looked at how effective using a second mode to follow up non-response in the first mode can be – and clearly it will improve response. Surprisingly, the greatest improvement came from following up a web survey invitation that had got nowhere with an approach by mail: almost 50% of those approached responded, taking overall response to 72%. The best response came from mail first with web as the fall-back, though this is likely to be the most costly per interview. Web first, with a switch to mail, could hit the sweet spot in terms of cost when a high response really matters – such as for a low-incidence sample.
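Those figures hang together arithmetically, as a quick check shows. The 44% first-mode figure is inferred from the numbers quoted, not stated in the talk:

```python
# Back-of-envelope check of the web-first, mail-follow-up figures above.
web_first = 0.44              # inferred initial web response (assumption)
mail_follow_up = 0.50         # "almost 50% of those approached responded"
overall = web_first + (1 - web_first) * mail_follow_up
print(f"overall response: {overall:.0%}")  # 72%, matching the quoted figure
```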

Presenters from national statistics services in New Zealand, Singapore, Estonia and Colombia all provided insights into how web-based research had been helping them, and how they had been ensuring both high quality and acceptably high response in order to reach the entire population. This too was typically achieved by using the web as one channel in a multimodal approach. The web was generally favoured for its cost and convenience, and, empirically, speakers had observed little significant variation in responses between modes. Even where Internet penetration is still low, as it is in Colombia, where only around 12% of the population enjoy an Internet connection, online is used to supplement fieldwork carried out using 10,000 PDAs with geolocation.

As an event, these two days have effectively provided a cross-section of the state of current knowledge and inquiry into Internet research. There was talk of making the papers and presentations available; if that happens, I’ll provide a link here.
