The latest news from the meaning blog

 
MRX and technology – on the road and cautiously edging forward

The belief is that technology moves fast. Our annual survey of technology in the market research industry – done in partnership with FocusVision – is released on June 8th and offers an alternative view. As in previous years, it shows an industry on the move – but rather than travelling light with a laptop and a few essentials and heading for the high-speed train, MRX is found taking to the road in a mixture of cars, vans and trailers, as there is so much luggage everyone needs to take with them.

The resourceful and the lucky get away early to speed along an almost empty highway; for the rest, progress is of the crowded highway type, as some lanes inexplicably slow to near stationary, while others keep edging forward. If only you knew in advance which one to pick!

Our survey has been tracking the progress of market research in adapting to new technologies to collect, process and present data for 13 years now. Considerable change can be seen – the line-up of data collection methods in 2016 would be unrecognisable to someone in 2004. But most areas we look at in depth – whether it is mobile research for collecting data, or the long-assumed decline of CATI, or the rise of data visualization and dashboards – move slower than you might expect if you listen to the buzz in the industry.

The same is true of the players. We tend to find that the large firms are further ahead with new techniques and innovation – their size and greater means let them risk the odd detour. But we often also detect pockets of innovation among the smaller firms too. This year, for example, we found that large and small firms alike are neck-and-neck in the extent to which they are incorporating analytics based around Big Data into their reporting. We also uncovered another success story for the industry around how it has taken to storytelling across the board.

Ours is rightly a cautious industry. Much of that baggage is there because it is needed. But we hope our annual survey also lets research companies examine how they are doing in relation to their technology, and check their own direction of travel. We also hope it might stimulate some frank discussion about just what gets taken on the journey, and what needs to be put out for recycling instead.

Clichés we all need to ditch in technology marketing

How many technology companies use photos like this? Even photos can be clichés!

I’m certainly guilty of falling into the temptation of littering my writing with clichés. From my regular perusals of technology company websites I can see others fall into this trap too. There are just so many powerful, flexible one-stop shops in the B2B marketplace, but I’m often no wiser as to what they actually do!  Let’s consider a few of the biggest cliché traps we all fall into, and let me offer you some possible escape routes too.

Plain ol’ corporate gobbledegook

Trap: The world’s most flexible and innovative software solution

So, you’re reading through some technology company’s website and all you can glean through the haze of the corporate jargon is that the company claims that their product is the greatest. Of course they do!

But what you really want to know is, firstly, what the product does, and, secondly, how it differs from anything else on the market. If you are Apple, you might get away with a sprinkling of unsubstantiated superlatives, but the rest of us need to state clearly what we believe we do best. To gain maximum understanding, we need to write in the way we might explain it to someone we’d just met who had asked ‘so what does your company do?’.

Escape route: Online research software that promotes your creativity

Features dressed up as benefits

Trap: Gain the power and flexibility you need to engage customers

Most people in sales or marketing know by now that if you talk about software or services, you need to major on the benefits, not the features. This example is an attempt at writing a benefit, but in truth few people are saying to themselves that what they really need is some power and flexibility; many will be thinking that they need to engage the right customers – the ones that have a need and money to spend. So always put yourself in the shoes of the customer or prospect and write what is going to have the most impact. Focus on the actual benefits.

Escape route: Use our software to engage with the right customers

Claims that are vague platitudes

Trap: We go the extra mile

If you catch yourself writing vague platitudes such as ‘one-stop shop’, ‘easy to use’, ‘cutting edge’ or the dreaded ‘we go the extra mile’… it’s time for a rewrite. Don’t simply voice an aspiration that any company is likely to share. Be more precise. Essentially, you need to explain exactly what makes you different from, and better than, your competitors. It might be as simple as listing what you do to achieve what you claim. Claims become credible once you explain how you achieve them.

Escape route: Your satisfaction is our top priority, so our consultants will work with you until your targets are met

Next time you find yourself writing about a ‘cutting-edge technology product’ that is ‘built from the ground up’, please pause and ask yourself what your customers really care about. You can be pretty sure that they are much less interested in how you built your product than in how it might impact their productivity and profits. Write the answers to the questions they are likely to ask, in the language they are likely to use.

 

Conference Report: Challenges of Automation, ASC, May 2017

Tim Macer reports on ASC’s May 2017 one-day conference in London: “Satisfaction Guaranteed? The Challenges of Automation in Survey Research”

Colin Strong, Ipsos MORI, in the conference keynote, said that techniques and technology developed in the 20th Century have brought about a certain view as to how humans behave. These also form the assumptions behind artificial intelligence. Here, he notes an interesting paradox: “Underneath these AI apps – behind the scene you have a whole bunch of humans. Humans pretending to be machines, with machines pretending to be humans. What is going on?” he asked.

We still need people

Strong’s answer is that as AI techniques become more sophisticated and advanced, the need for humans to keep them meaningful, and to identify what makes sense, intensifies. He notes that in parallel with our tendency to project human attributes onto technology, we have also been trying to project machine qualities onto humans – behavioural models being one obvious example. He predicts that human skills will be very much in demand as AI advances. He tips Process Optimiser as the go-to profession of the next decade, after Data Scientist.

Established jobs will go, Strong predicts, but “different jobs, not fewer jobs” will emerge, as they have before, with each major technological revolution of the past. He has no doubt that computers will pass the Turing Test, and in some ways already do. Yet that, he sees, is also because we are becoming a “bit more machine like” – and it is that, and not the mass unemployment that some fear, which he predicts will pose more fundamental political and social challenges in the future.

Rolling the R in Research

A glimpse of the changing skills and jobs in research emerged from the two speakers that followed. Ian Roberts from Nebu championed R, the open source statistical package that’s a favourite among academic researchers, as a practical way to automate research processes. Nebu had used it to create a self-regulating system that worked out how to optimise despatching survey invitations.

Roberts considers R especially suitable because of the access it provides to a comprehensive library of routines to perform the kind of machine-learning modelling that can monitor the performance of survey invitations to different subpopulations, or by different distribution channels such as email and SMS – and then deliver subsequent invitations in ways that will achieve the best completion rates. In the case Roberts described, as the system was able to learn, the need for human supervision was reduced from one person’s constant involvement, to tactical monitoring for a few hours per week.

“Without learning [within your system] you will never get to the next stage of how do you improve what you are doing”, he said.
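
As a rough illustration only – the paper did not show code, and Nebu’s actual system was built in R rather than Python – the kind of learning loop Roberts describes can be sketched as a simple bandit-style chooser: it tracks completed and sent counts per subgroup and channel, and uses them to pick where each new invitation goes while still exploring the alternatives. All names and figures below are hypothetical.

```python
import random
from collections import defaultdict

class InvitationOptimiser:
    """Thompson-sampling-style chooser for survey invitation channels.

    Tracks completed/sent counts per (subgroup, channel) pair and favours
    the channel with the best observed completion rate, while still
    exploring the alternatives.
    """

    def __init__(self, channels=("email", "sms")):
        self.channels = channels
        # stats[(subgroup, channel)] = [completed, sent]
        self.stats = defaultdict(lambda: [0, 0])

    def choose_channel(self, subgroup):
        # Sample a plausible completion rate for each channel from a Beta
        # posterior and pick the channel with the highest sample.
        def sample(channel):
            completed, sent = self.stats[(subgroup, channel)]
            return random.betavariate(completed + 1, (sent - completed) + 1)
        return max(self.channels, key=sample)

    def record_outcome(self, subgroup, channel, completed):
        self.stats[(subgroup, channel)][1] += 1
        if completed:
            self.stats[(subgroup, channel)][0] += 1


# Hypothetical usage: choose a channel for each panel member, then feed
# the observed outcome back in so later choices improve.
optimiser = InvitationOptimiser()
for member in ({"id": 1, "subgroup": "18-34"}, {"id": 2, "subgroup": "55+"}):
    channel = optimiser.choose_channel(member["subgroup"])
    # ... send the invitation via `channel`; later, record whether it completed:
    optimiser.record_outcome(member["subgroup"], channel, completed=True)
```

In a real system the completions would arrive asynchronously and the model would be richer, but the essential idea is the one Roberts describes: every invitation the system sends teaches it something, so the need for constant human supervision falls away.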

Watching what those pesky processes get up to

John McConnell from Knowledge Navigators presented a paper by Dale Chant of Red Centre Software, in which R also made an appearance alongside Red Centre’s Ruby platform, as a vehicle for automating many of the interrelated processes involved in running a large-scale tracking study, from sampling through to extracting and reporting the data.

Chant categorised automation tasks into three levels, from ‘micro’ for a single process, through ‘midi’ in which several micro-automations are consolidated into one, to ‘macro’ where there are typically many decision points that straddle a number of different processes. The risk in automation is in creating black boxes, said McConnell, where ‘a broken process can run amok and do real damage’.

The key to success, McConnell and Chant advocate, is to expose decision points within the system to human supervision. Echoing Colin Strong’s earlier warnings on not seeking to eliminate people altogether but instead to apply them to sense-making, Chant reported that whenever he had been tempted to skimp on the manual checking of the decision points he builds in, he has always come to regret it. According to McConnell, the risks are low with micro-automation, as that is what most tools currently do very successfully. But when moving up the scale of integration, and bringing disparate processes together, the risks magnify. “Here, you have to think of quality assurance – and crucially about the people side, the staff and the skills”, said McConnell.
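
As a minimal sketch of that principle – nothing like this appears in the paper, and the quota check below is invented purely for illustration – an automated tracking pipeline in Python might expose a decision point like this rather than running unattended:

```python
def check_sample_counts(expected, actual, tolerance=0.05):
    """Decision point: flag any quota cell whose achieved count strays too far."""
    return {
        cell: (expected[cell], actual.get(cell, 0))
        for cell in expected
        if abs(actual.get(cell, 0) - expected[cell]) > tolerance * expected[cell]
    }


def run_tracking_wave(expected, actual):
    flagged = check_sample_counts(expected, actual)
    if flagged:
        # Surface the decision to a human rather than ploughing on:
        # this is the point where the automation stops being a black box.
        print("Human review needed before reporting:", flagged)
        return False
    # ... otherwise continue to extraction and reporting ...
    return True


run_tracking_wave(expected={"North": 200, "South": 200},
                  actual={"North": 204, "South": 150})
```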

Lose the linear: get adaptive

Two papers looked at automating within the survey over what questions to serve to participants. The first, jointly presented by Jérôme Sopoçko from the research software provider Askia and Chris Davison from research consultancy KPMG Nunwood, mused over the benefits of building adaptive surveys. Sopoçko asserts these are made more feasible now thanks to the APIs (software-to-software interfaces) in most of today’s data collection platforms that allow them to reach out to open source routines that can perform, for example, text translation, or sentiment analysis in real time, and then determine where the interview goes next.

Davison welcomed the opportunity to break out of linear surveys by starting with open, unstructured questions and then applying text analytics in real time to interpret the result and select from a pool of predefined questions “to ask the next most appropriate question for [the participant].” He continued: “It starts to challenge that traditional paradigm. It can also help with trust. We cannot know how long that survey will take for that individual – if you put the most relevant questions first you can actually stop at 10 minutes. This has to be better than simply putting more traditional surveys into a new environment.”
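
Neither speaker showed code, but the pattern they describe might be sketched roughly as follows; the keyword-overlap scorer is only a stand-in for the real-time text analytics a platform would reach through its API:

```python
from typing import Callable, Dict


def choose_next_question(open_text: str,
                         question_pool: Dict[str, str],
                         relevance_scorer: Callable[[str, str], float]) -> str:
    """Pick the pooled question most relevant to what the participant just said.

    `relevance_scorer` stands in for a real-time text-analytics call (topic or
    sentiment scoring reached over the survey platform's API).
    """
    return max(question_pool,
               key=lambda qid: relevance_scorer(open_text, question_pool[qid]))


# Hypothetical usage with a naive keyword-overlap scorer in place of a real service.
def keyword_overlap(answer: str, question: str) -> float:
    return float(len(set(answer.lower().split()) & set(question.lower().split())))


pool = {
    "Q_delivery": "How satisfied were you with the delivery of your order?",
    "Q_support": "How easy was it to reach our support team?",
}
next_q = choose_next_question("The delivery was late and the box was damaged",
                              pool, keyword_overlap)
print(next_q)  # -> Q_delivery
```

The point of the pattern is that the route through the survey is decided at run time, which is what makes it possible to stop once the most relevant questions have been asked.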

…and get chatting too – without being derogatory

Simon Neve from software provider FusionSoft described how he has put a similar model into practice in customer satisfaction surveys, based on sentiment analysis performed on the fly on verbatim answers to questions about service experience. This then allows the software to probe selectively, so that initial responses that would otherwise be ambiguous or impossible to interpret are clarified and made intelligible. The aim is to provide a survey which appears as a conversation with a chatbot – not a human, though. Neve said: “Our learning is the more context you can provide the better experience you can provide to the respondent.”

However, automated interpretation has its limitations. Software or survey designers need to be vigilant for irony, and to be especially careful when repeating back anything if the response turns into something of a rant. “You have to be careful about playing back derogatory comments,” Neve cautioned. “We have a list of 2,500 derogatory words to avoid.” But he also quipped: “If you also put an emoji up with your probe, you are immediately forgiven for anything you say.”
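
A simplified sketch of the selective probing and playback safety Neve describes – the sentiment threshold, probe wording and two-word ‘derogatory’ list are hypothetical stand-ins, not FusionSoft’s actual implementation:

```python
DEROGATORY_WORDS = {"useless", "idiots"}  # stand-in for the real 2,500-word list


def needs_probe(verbatim: str, sentiment_score: float) -> bool:
    """Probe when the comment is too short or its sentiment is too ambiguous."""
    return len(verbatim.split()) < 4 or abs(sentiment_score) < 0.2


def safe_playback(verbatim: str) -> str:
    """Only echo the respondent's own words back if nothing derogatory appears."""
    if set(verbatim.lower().split()) & DEROGATORY_WORDS:
        return "Thanks - could you tell us a little more about that? 🙂"
    return f'You said "{verbatim}" - what was the main reason for that?'


comment = "Fine I guess"
if needs_probe(comment, sentiment_score=0.1):  # score from an on-the-fly sentiment call
    print(safe_playback(comment))
```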

IJMR publishes paper from Tim Macer and Sheila Wilson

A paper first presented by Sheila Wilson and Tim Macer at the ASC’s International Conference in 2016 was among those picked to feature in a special issue of the highly rated International Journal of Market Research devoted to the ASC Conference.

IJMR Editor-in-Chief Peter Mouncey notes in the issue’s editorial: “The conference presented delegates with a wide range of challenges and opportunities that technology is providing. Some of the key ones are presented in the papers in this issue. Macer and Wilson, in the lead paper, demonstrate clearly where the sector has, and hasn’t, embraced and applied technology to good effect.”

Commenting specifically on the topics the authors addressed in the paper, he said: “Macer and Wilson (meaning ltd) provide a fascinating picture of how the market research sector has adopted, or otherwise, the advances in technology over the past 12 years – a period of significant change fuelled by the mobile revolution. The basis for the paper is the annual global survey of research companies on technology conducted by meaning ltd since 2006. So, quoting part of the conference theme, ‘Are we there yet?’, probably the main conclusion is that the research sector is trailing consumers in adapting to, and exploiting, the mobile revolution.”

Tim Macer talks about the future of Data Collection at ESOMAR 2015

There is a future for collecting data in market research, contrary to what some Big Data analysts may say, but that future will be very different from the data collection practiced today, said Tim Macer at the first session of the day at ESOMAR’s 2015 Congress in Dublin. Invited by research technology provider Qualtrics, Macer focused on the two disruptive influences he considers are going to reshape data collection in the future.

He recalled how, 15 or more years previously, the introduction of online research had acted as a major disrupter to established research (then predominantly CATI), although looking back, it is easy to see it now as a logical, incremental evolution. Yet now the industry faced two significant disrupters: mobile or smartphone participation in research – whether research companies actively allow it or not – and the extent to which Big Data and data analytics from existing data sources is likely to bypass the need to collect new data. “Research cannot afford to ignore the tornado that these forces will unleash on data collection,” he said. The future, he continued, lay in providing “high frequency data” that aligns better with the rhythm of the marketplace and constant availability of Big Data.

Embracing mobile, going ‘high frequency’ and achieving better integration were the key recommendations from Macer’s presentation.

“For too long, the survey has been treated as a silo, and that is no longer sustainable, and it does not need to be,” he said, urging research companies to focus on how they can integrate both their data and their information systems with those of their clients, as well as their network of suppliers and collaborators.

  • View Tim Macer’s 2015 ESOMAR presentation here.

New book on Online Panels – with a chapter from meaning MD Tim Macer

Online Panel Research book cover

How good is the quality of the access panel you are using to feed participants into your online research? How would you begin to assess quality? How can you tell good practice from bad practice? How do you create and sustain a panel that will create robust and reliable samples for market research or social research?

These are the kinds of questions that a new book published by Wiley sets out to answer. The book comprises 19 chapters which form an encyclopaedia of the issues relating to the use and the operation of panels for online research. These chapters were curated by a team of six editors – Mario Callegaro, Reg Baker, Jelke Bethlehem, Anja S. Göritz, Jon Krosnick, and Paul J. Lavrakas – and contain the contributions of 50 authors from around the world with a wide range of experience and expertise in the field of online research.

Each chapter is based on original research, and in the same spirit of transparency that the book espouses in the operation of online panels, all of the datasets are also made public for anyone to use in their own research.

Lead editor and contributor to the book, Mario Callegaro said: “The book is trying to answer many questions on the quality of such data. It is amazing that online panels have been used for the past 15 years and yet there is no textbook out there. This is the first book to focus on online quality, something that everyone is struggling with.”

Managing Director of meaning ltd, Tim Macer, was asked to provide the chapter on the technology of online panels. He carried out a survey of all of the technology providers offering software or technology for operating panels, and put together a model of good practice in the technology support for panels, based around ESOMAR’s 26 Questions To Help Research Buyers Of Online Samples.

Tim Macer commented: “As Mario says, little had been published on good practice in the use of online panels, but still less was known about the software people were using, and whether they were up to the task of supporting quality panels with the kind of tools, data recording and reports required. This chapter attempts to provide a framework for panel operators, panel users and software developers to use to promote best practice.”

The book may be purchased online and individual chapters may also be purchased for download directly from the publisher.

Revealing MR technology in 2023!

Having just helped to complete the (rather weighty) 2013 Confirmit MR Technology Report, I am definitely in a celebratory mood, especially as it’s the tenth anniversary of the project. To mark the occasion, we added some extra juicy questions this time. We normally use only closed questions, except for the occasional ‘other specify’, but for our 2013 survey, we challenged our participants (and ourselves!) with some open ends, but not just ordinary open ends – they were gamified.

In one of the gamified questions, we told respondents to imagine they were seeing a copy of our report in 2023 and we asked what would be the biggest technological advance within it.

Our participants, all of whom were senior decision makers at market research companies around the world, were predicting that many of the current emerging trends would continue to expand – for example mobile data collection and big data analysis – and many were also expecting more exotic developments such as augmented reality. For example, one person wrote:

“Providing useful Insights from Big Data – establishing online interaction with respondents in the social community and businesses from continuous analysis of Big Data having captured their interest in participating on a daily basis to make better decisions on their areas of interest. Big data = social media (Facebook, Twitter, Google etc.) plus business own data, respondent market research, personal blogs, purchase and location monitoring on wearable devices.”

Or, somewhat more futuristically, another commented:

“Google Glass will change our lives radically. For the well-educated, it will be trendy to have the internet and its services right in front of our eyes, continuously. Thus, artificial intelligence will accompany us everywhere we go and provide us with all the information we need along the way. It only lacks the thought control, but even that is already in the early stages. At first we will be able to control these devices with glances, without having to speak. Advertising and services that pop up when we go about our daily lives with internet glasses will occupy a large part of the market research of the future.”

In contrast, a few respondents were pessimistic about the future of traditional market research. For example:

“I think most of the data we’ll be reporting on will be passively collected. I think primary research data will be a thing of the past….”

I hope to be writing about a vibrant and healthy industry in 2023, but one thing I feel sure about is that the industry will be different. In the meanwhile, I recommend reading our 2013 report. Here’s to the next ten years!

  • Read the 2013 Confirmit MR Technology Report.

Smartphone surveys – the rule of thumb

It’s two years since I presented a paper at ASC’s international conference in Bristol about mobile survey technology. According to Twitter, it came in for a few namechecks at last night’s MRS Live event on smartphone surveys, at which Google’s Mario Callegaro was presenting. My 2011 paper seems to have earned the name “the paper on mobile apps”, which is due, as Mario no doubt said last night too, to the fact that very little actual research has been done on survey techniques and methodology. But the paper covers much more than that.

The paper was based on a two-step survey looking at mobile self-completion methods and tools. First, I spoke with mobile survey practitioners and did a lit search to see what kinds of issues people had mentioned with respect to mobile research, and from this came up with a list of make-or-break capabilities needed to deal with these issues. I then contacted all the main software vendors to see if they offered that kind of support. Because there are, in essence, two ways you can do research on a smartphone – using a dedicated app, or pushing the survey to the phone’s built-in browser – I covered both.

“Smartphone surveys” or mobile research still seems to polarize the research community into advocates on the one hand, and skeptics and agnostics on the other. But our annual technology survey among research companies shows that respondents are voting with their thumbs, judging by those now attempting to take online surveys on mobile devices. Across participants reporting this data, it averaged 6.7% in 2011. This had jumped to 13.1% in 2012. We’re just about to repeat the question in the 2013 survey. Anyone want to hazard a guess?

Like it or not, mobile is here to stay – it’s time to start looking after our new little brother because he’s growing up fast.

 

 

Technology, sustainability and the perils of sat-nav thinking

Why are we continuing to field half-hour or even longer interviews, when we know 15 minutes is the natural limit for participants?

I gave a presentation at last week’s Confirmit Community Conference in which I looked at some of the results from our recent software survey through an ethics and best practice lens. Confirmit are not only one of the major players in the research technology space, but they also sponsored our research, and were keen that I share some of the findings at their client gathering in London.

More than one observer has pointed out that over the years our survey has strayed somewhat beyond the narrow remit of technology into wider research issues, such as methodology, best practice and commercial considerations. I’m not sure we can make that separation any more. Technology no longer sits in the hands of the specialists – it is ubiquitous. And in defence, I point out that everything in our survey does very much relate to technology, and the effects of technology on the industry. But that does indeed give us quite a broad remit.

Technology is an enabler, but it also often imposes a certain way of doing things on people, and takes away some elements of choice. There is always a risk that it also takes away discretion in the user, resulting in ill-considered and ultimately self-defeating behaviour. Think, for example, of the hilarious cases of people putting so much faith in their satellite navigation systems that they end up driving the wrong way along one-way streets, or even into a river.

Technology has shoved research in a particular direction of travel – towards online research using panels, and incentivising those panels. That is a technology-induced shift, and it brings with it a very real set of concerns around ethics and best practice that has been rumbling round the industry since at least 2006.

Researchers cannot afford to take a sat-nav approach to their research, and let the technology blindly steer them through their work. They must be fully in charge of the decisions and aware of the consequences. They must not lose sight of the two fundamental principles on which all research codes and standards rest – being honest to clients and being fair to participants.

Delivering surveys without checking whether 30% of your responses were invented by fraudulent respondents or survey-taking bots is no more acceptable than having a member of staff fabricate responses to ensure you hit the target. Ignorance is no defence in law. Yet this is certainly what is happening in the many cases our survey uncovered, where the quality regimes reported are too often of the superficial, light-touch and easily-achieved variety.

Pushing ahead with surveys that will take half an hour or an hour to complete, when there is a good shared understanding that 15 minutes is the natural limit for an online survey, sounds like an act of desperation reserved for extreme cases. Yet it is the 15-minute online interview that appears to be the exception rather than the norm. This is crassly inconsiderate of survey participants. It’s sat-nav thinking.

The real issue, beyond all this, is one of sustainability. Cost savings achieved from web surveys are now being squandered on incentives and related admin. Long, boring surveys lead to attrition. Respondents are lost and have to be replaced, very expensively, from an ever-dwindling pool.

So yes, I make no apology for being a technologist talking about research ethics. Sat-navs and survey tools aren’t intrinsically wicked – they just need to be used responsibly.