The latest news from the meaning blog

Technology, sustainability and the perils of sat-nav thinking

Why are we continuing to field half-hour or even longer interviews, when we know 15 minutes is the natural limit for participants?

I gave a presentation at last week’s Confirmit Community Conference in which I looked at some of the results from our recent software survey through an ethics and best practice lens. Confirmit are not only one of the major players in the research technology space; they also sponsored our research, and were keen that I share some of the findings at their client gathering in London.

More than one observer has pointed out that over the years our survey has strayed somewhat beyond the narrow remit of technology into wider research issues, such as methodology, best practice and commercial considerations. I’m not sure we can make that separation any more. Technology no longer sits in the hands of the specialists – it is ubiquitous. And in our defence, I would point out that everything in our survey does very much relate to technology and its effects on the industry. But that does indeed give us quite a broad remit.

Technology is an enabler, but it also often imposes a certain way of doing things on people, and takes away some elements of choice. There is always a risk that it also takes away the user’s discretion, resulting in ill-considered and ultimately self-defeating behaviour. Think, for example, of the hilarious cases of people putting so much faith in their satellite navigation systems that they end up driving the wrong way along one-way streets, or even into a river.

Technology has shoved research in a particular direction of travel – towards online research using panels, and incentivising those panels. That is a technology-induced shift, and it brings with it a very real set of concerns around ethics and best practice that have been rumbling around the industry since at least 2006.

Researchers cannot afford to take a sat-nav approach to their research, and let the technology blindly steer them through their work. They must be fully in charge of the decisions and aware of the consequences. They must not lose sight of the two fundamental principles on which all research codes and standards rest – being honest to clients and being fair to participants.

Delivering surveys without checking whether 30% of your responses were invented by fraudulent respondents or survey-taking bots is no more acceptable than having a member of staff fabricate responses to ensure you hit the target. Ignorance is no defence in law. Yet this is almost certainly what is happening in the many cases our survey uncovered where the quality regimes reported are of the superficial, light-touch and easily achieved variety.
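
To make that concrete, here is a minimal sketch of the kind of lightweight screening that can catch speeders, straight-liners and junk open-ends before delivery. It is illustrative only – the thresholds and field names are my own invention, not any vendor’s method:

    # Illustrative pre-delivery quality screen (thresholds and fields invented)
    def flag_suspect(response, median_seconds):
        flags = []
        # Speeders: finished in under a third of the median interview time
        if response["seconds"] < median_seconds / 3:
            flags.append("speeder")
        # Straight-liners: the same answer to every item in a rating grid
        if len(set(response["grid_answers"])) == 1:
            flags.append("straight-liner")
        # Empty or throwaway open-ended comments
        comment = response["comment"].strip().lower()
        if len(comment) < 3 or comment in {"asdf", "none", "good"}:
            flags.append("junk open-end")
        return flags

    def screen(responses):
        durations = sorted(r["seconds"] for r in responses)
        median = durations[len(durations) // 2]
        return {r["id"]: flag_suspect(r, median) for r in responses}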

Pushing ahead with surveys that take half an hour or an hour to complete, when there is a good shared understanding that 15 minutes is the natural limit for an online survey, sounds like an act of desperation that should be reserved for extreme cases. Yet it is the 15-minute online interview that appears to be the exception rather than the norm. This is crassly inconsiderate of survey participants. It’s sat-nav thinking.

The real issue, beyond all this, is sustainability. The cost savings achieved from web surveys are now being squandered on incentives and on the related admin. Long, boring surveys lead to attrition. Respondents are lost and have to be replaced, very expensively, from an ever-dwindling pool.

So yes, I make no apology for being a technologist talking about research ethics. Sat-navs and survey tools aren’t intrinsically wicked – they just need to be used responsibly.

We need to put coding in a different category

The people at Ascribe kindly asked me to be their keynote speaker at their European conference last week in London. It was a welcome soapbox for me, since I’ve long been critical of the dismissive approach market research takes to computer-based text processing, and its dogged attachment to manual coding as the ‘gold standard’ for finding meaning and truth in a pile of unstructured comments. All power to Ascribe (which provides software to handle open questions in surveys) and its clients, in my view.

I called my talk “Getting Ready for the Decade of the Comment” (you can view it here). My point was that research needs to put coding in its place, and supplement it with other computer-based methods that reach further, faster and with less effort. With data now super-abundant, MR is adjusting its focus from delivering data to generating insight and providing explanations – the time is ripe for the humble comment to take centre-stage. But it won’t if research continues to insist on turning words into numbers.
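
As a hedged illustration of what “reaching further, faster” can mean in practice – the categories and keyword lists below are invented for the example, and a real text analytics tool would use far richer linguistic models – even a crude first-pass classifier can sweep thousands of comments in seconds, leaving human coders to arbitrate the residue:

    # Naive keyword-based first pass over open-ended comments (illustrative
    # only; categories and terms invented, not how any particular product works)
    CATEGORIES = {
        "price": {"expensive", "cheap", "cost", "price", "value"},
        "service": {"staff", "rude", "helpful", "service", "wait"},
        "quality": {"broken", "faulty", "reliable", "quality"},
    }

    def first_pass(comment):
        words = set(comment.lower().split())
        hits = [cat for cat, terms in CATEGORIES.items() if words & terms]
        return hits or ["uncoded"]  # leave the residue for human coders

    for c in ["Far too expensive for what you get",
              "Staff were rude and the wait was long",
              "No complaints at all"]:
        print(first_pass(c), "<-", c)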

I suspected my soapbox would be set up before the wrong audience. Sure enough, it was heartening to hear the experiences of three presenters from two different research firms – Heather Dalton from Market Strategies, and Jeanette Bushman and Dino Perrota, both of Nielsen – who are already embracing these hybrid approaches. All three spoke of the work they had had to do to convince researchers in their own organisations of the value of these methods before they gained acceptance.

It struck me we were hearing from two firms where the message had not only been received but acted on, and offered to clients as a new service. I was thinking of all the firms that weren’t in the room – where an initiative from coding would be pooh-poohed, talked down or resisted in the many ways employed by those accustomed to holding the levers of power.

Traditional coding has two major problems: one, it’s the only method most researchers are familiar with; and two, it’s just too expensive to administer on all but a select few surveys. It’s like leaving the archaeology in the ground. Sure, one day someone will be able to analyze it, but that’s no help if you need to know about it now.

Dino Perrota likened moving to these new methods to altering the level of focus you can get. He showed a highly pixellated image of Da Vinci’s Mona Lisa alongside a fully resolved one. He said: “Everyone is used to the fine-brushed results. With [text analytics] we just provide the brush-stroke analysis. We are doing the same thing, but we are just painting a picture for our client with a slightly broader brush.”

But isn’t that what research does as a matter of course? The objection amounts to saying: sure, it would be lovely to conduct a census, but since we can only do a sample, let’s not bother at all. Research is not judging these new methods by the same standards it judges the rest of its work.

Perhaps the other problem was revealed in Heather Dalton’s talk, when she gave a perfect illustration of how the hybrid approach answers client demands to analyse all the data currently being ignored, without increasing the cost. Tellingly, she said: “I found analysts and coders need to work together closely – especially to [work through the data and interpret it]. Text analytics does not mimic coding and has to be sold as an entirely different product.”

It is not clear, in the current hierarchy or production line, where this activity sits. Researchers aren’t used to working shoulder-to-shoulder with coders, still less to having coders make interpretive decisions. ‘Coding’ is one of those below-stairs functions in most organisations. It’s rare for anyone from coding to be brought out, blinking, into the glare of the client debrief – but as these new methods start to take hold, that will follow as a natural consequence.

Coding is a process, and it’s only going to be part of a portfolio of methods in future. It’s time to drop the name, and come up with something that might mean something to clients and other buyers of research.

Qi from Manthan Services reviewed

In Brief

Qi

Manthan Services, India
Date of review: August 2012

What it does

An online platform for creating advanced dashboards based on survey data, delivering to the end user an online environment for data exploration, review and collaboration.

Our ratings

Ease of use: 3 out of 5

Compatibility with other software: 4 out of 5

Value for money: 4 out of 5

Cost

SaaS, with an annual subscription based on volumes. Example cost: $8,000 for up to 5 projects (approx. 5,000 cases and 250 variables), with discounts available for higher volumes.

Pros

  • Very comprehensive offering
  • Understands the specifics of market research data
  • Focus on collaboration and knowledge sharing
  • Takes care of any complex web- and database programming

Cons

  • Works on IE8 and IE9, but some formatting problems experienced on other browsers
  • Online documentation/help is fairly basic
  • Set-up requires some skill

In Depth

Dashboards tend to be among the most advanced and also the most treacherous of deliverables for research companies to provide. Tucked away at the end of an RFP, an innocuous-sounding request for “dashboard-style reporting for managers and team leaders across the enterprise, with drill-down capabilities for self-service problem solving” will almost certainly mean something vastly more sprawling and costly to provide than anyone imagined.

Dashboard delivery can be a trap for the unwary. Many an online dashboard has become the constantly-leaking plughole in the project budget through which profits keep draining away.

What makes them difficult to control is that they are usually tackled as custom developments, built using tools designed for corporate database systems and business intelligence (BI). Any custom development is both costly and unpredictable, and research companies often don’t have the skills in-house to manage a software development project effectively. Worse than that, survey data is difficult to handle with these BI tools. They aren’t designed to cope smoothly with monthly waves of data, with new questions being added, or with weighting, or percentages that need to add up to a constant respondent base. It’s not just a matter of counting the records returned from a SQL query.
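
To make the contrast concrete, here is a minimal sketch – with invented data, not any vendor’s code – of what even the simplest survey figure involves: a weighted percentage on a constant respondent base, rather than a raw record count:

    # Why a raw record count is not enough for survey data (illustrative).
    # Each respondent carries a weight; percentages are computed on the
    # weighted base of everyone asked the question, not on record counts.
    respondents = [
        {"answer": "yes", "weight": 1.4},
        {"answer": "no",  "weight": 0.8},
        {"answer": "yes", "weight": 1.1},
        {"answer": None,  "weight": 0.7},  # not asked: excluded from the base
    ]

    base = sum(r["weight"] for r in respondents if r["answer"] is not None)
    yes = sum(r["weight"] for r in respondents if r["answer"] == "yes")

    print(sum(r["answer"] == "yes" for r in respondents))  # raw count: 2
    print(f"{100 * yes / base:.1f}% of a weighted base of {base:.1f}")  # 75.8% of 3.3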

Manthan Services, an India-based developer, noticed the opportunity to build on the dashboard and business information systems it was providing corporate customers and developed a research-friendly package called Qi (as in “chi” or energy). An online platform for creating advanced dashboards based on survey data, Qi delivers an online environment for data exploration, review and collaboration. It is a tool for building dashboards and an environment in which end-users can then access those dashboards, share, collaborate and even, if allowed to, create their own analyses and dashboards.

It is very smart software that aims to find the middle ground between typical BI dashboard tools such as SAP Crystal Dashboard Design (the new name for Xcelsius) and Tableau, where the possibilities are infinite given enough time and money, and the fairly restrictive online dashboard creation capabilities found in some of the more up-to-date MR analysis tools. If you want the freedom to produce absolutely any kind of dashboard, or have a client that is highly prescriptive about presentation, then you may find Qi is just not flexible enough.

On the other hand, you may be able to use those limits as a useful way of containing what you provide to your client, as Qi is likely to do 99 percent of what they need – just not necessarily in the first way they thought of it. For the real advantage of this product is that you really can produce portals packed with data with relatively little effort, and with no programming expertise required. Furthermore, when you add new waves of data, all the derivative reports are updated too.

There are also built-in modules within the Qi environment for setting up different kinds of dashboards or portals for particular applications. There is one for employee research, for example, and another for mystery shopping, with reporting at an individual case level. In addition, there are models provided for performance management, scorecarding and benchmarking. There is also a tool for building an organization hierarchy, which can then ensure each user is given the relevant view of the data when they log in. Hierarchies can be tied to “group filters”, which reflect the organization’s hierarchical structure in the actual data that get displayed.

There is an integrated alerts publisher, and a user’s portals can be configured with an alerts area or tab. You then define the exceptions or thresholds at which alerts should be generated. These are recalculated for each individual user’s view of the data, so users are only alerted to what is relevant to them.
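
A rough sketch of how these two ideas – group filters and per-user alerts – fit together. The field names, scores and threshold below are invented for illustration, not Qi’s actual schema:

    # Illustrative only: per-user group filtering plus threshold alerts
    # (field names and thresholds invented, not Qi's actual schema)
    records = [
        {"region": "North", "store": "N1", "satisfaction": 7.9},
        {"region": "North", "store": "N2", "satisfaction": 3.2},
        {"region": "South", "store": "S1", "satisfaction": 8.4},
    ]

    users = {
        "regional_mgr_north": {"region": "North"},  # sees only their region
        "head_office": {},                          # sees everything
    }

    ALERT_BELOW = 6.0  # alert when mean satisfaction drops below this

    def user_view(user):
        scope = users[user]
        return [r for r in records if all(r[k] == v for k, v in scope.items())]

    def alerts(user):
        view = user_view(user)
        mean = sum(r["satisfaction"] for r in view) / len(view)
        if mean < ALERT_BELOW:
            return [f"mean satisfaction {mean:.1f} is below {ALERT_BELOW}"]
        return []

    for u in users:
        print(u, "->", alerts(u) or "no alerts")
    # the northern manager is alerted (mean 5.5); head office is not (mean 6.5)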

Elegant concepts

There are some very elegant concepts at the heart of Qi which help to give your work shape. Everything you create is one of three data-based “assets”: charts, dashboards and tables. Dashboards come in a variety of shapes, with placeholders for you to populate with charts or tables. There is also the concept of a “portlet,” which can house a report, an alert, a chart, favorites or messages. You can then arrange your portlets into pages or publish them on their own.
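
A rough Python sketch of how such a model might hang together – the class and field names here are my own shorthand for the concepts just described, not Qi’s internal API:

    # Shorthand model of assets, portlets and pages (illustrative, not Qi's API)
    from dataclasses import dataclass, field

    @dataclass
    class Asset:
        name: str
        kind: str  # "chart", "table" or "dashboard"

    @dataclass
    class Dashboard(Asset):
        kind: str = "dashboard"
        placeholders: list = field(default_factory=list)  # charts or tables

    @dataclass
    class Portlet:  # can house a report, an alert, a chart, favorites or messages
        title: str
        content: Asset

    @dataclass
    class Page:
        portlets: list = field(default_factory=list)

    trend = Asset("Satisfaction trend", "chart")
    board = Dashboard("Quarterly overview", placeholders=[trend])
    page = Page(portlets=[Portlet("Overview", board)])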

There is a reasonable though not especially exotic selection of charts – pretty much what you might find in Excel. There are, however, some nice multidimensional bubble charts.

Behind the scenes is a SQL Server database. It can be loaded with survey data using the survey metadata provided by either SPSS or Triple-S. If you want to work with other kinds of data – which is possible – you may, however, need help from Manthan Services in setting up an appropriate database schema, and also with the database load process.
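
For readers unfamiliar with survey metadata, here is a toy sketch of reading variable definitions from a simplified Triple-S-style XML fragment. The fragment is deliberately cut down for illustration and is not a complete or validated Triple-S document:

    # Toy reader for a simplified Triple-S-style metadata fragment
    # (cut down for illustration; not a complete Triple-S document)
    import xml.etree.ElementTree as ET

    XML = """
    <sss><survey><record ident="A">
      <variable ident="1" type="single">
        <name>Q1</name>
        <label>Overall satisfaction</label>
        <values>
          <value code="1">Very satisfied</value>
          <value code="2">Satisfied</value>
        </values>
      </variable>
    </record></survey></sss>
    """

    root = ET.fromstring(XML)
    for var in root.iter("variable"):
        codes = {v.get("code"): v.text for v in var.iter("value")}
        print(var.findtext("name"), "-", var.findtext("label"), codes)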

A particular snare to be found in many RFPs asking for dashboards is the request for drill-down capabilities. There is often an assumption that deciding what to drill down to is a trivial, automatic choice. It is not: there may be more than one level of detail a user will want to see when a particular KPI turns red or a trend chart shows a worrying dip. In Qi, you have two tools to satisfy this: a drill-down tool that lets the user trace the antecedents or components of any item of data, and a drill-across tool which lets you move up and across in your hierarchy of reporting.
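
A toy illustration of the difference between the two movements, using an invented reporting hierarchy:

    # Drill-down versus drill-across in a reporting tree (hierarchy invented)
    TREE = {
        "Global":   {"parent": None,       "children": ["Europe", "Americas"]},
        "Europe":   {"parent": "Global",   "children": ["UK", "France"]},
        "Americas": {"parent": "Global",   "children": ["US", "Canada"]},
        "UK":       {"parent": "Europe",   "children": []},
        "France":   {"parent": "Europe",   "children": []},
        "US":       {"parent": "Americas", "children": []},
        "Canada":   {"parent": "Americas", "children": []},
    }

    def drill_down(node):
        # trace the components beneath a figure: its child nodes
        return TREE[node]["children"]

    def drill_across(node):
        # move sideways to the node's peers at the same level
        parent = TREE[node]["parent"]
        if parent is None:
            return []
        return [c for c in TREE[parent]["children"] if c != node]

    print(drill_down("Europe"))    # ['UK', 'France']
    print(drill_across("Europe"))  # ['Americas']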

End users are provided with a lot of options out of the box to personalize their dashboards – they can create favorites, apply sticky notes, customize the view of the data, create their own portlets (if you allow this) and republish or share these with others. It can make for a highly collaborative environment both within the enterprise, and equally, between enterprise and research agency.

Overall, this is an industrial-strength platform for research companies to use to create portals and dashboard systems, with a dizzying array of functionality to pick from. The documentation could be made a lot more comprehensive – it is cryptic in places and tends to gloss over some quite advanced capabilities. I also experienced some issues viewing the portals I was given access to on browsers other than IE8 or IE9, though Manthan claims it works with different browsers and tablets.

Same set of tools

Max Zeller is head of the retail insights division for a large global research company in Europe. (His name has been changed at the request of his employer.) His division introduced a white-label version of Qi last year, which it presents to its customers as one of its own branded services. “Many of our clients today require online reporting,” he says. “As a global company we wanted to offer the same set of tools to all clients and also leverage on the one investment across all our companies and for most of our studies. We also wanted something that you could implement quite quickly locally, to create portals and dashboards, which did not require any programming or special skills to run it. Also we wanted a tool that both researchers and users could modify and even create their own views or dashboards for themselves.

“We looked at many different products but eventually chose one from Manthan Services. On all criteria they were on top and they understood market research, which was very important.”

Though the software is very extensive, with quite a lot to learn, he says, in practice his firm’s research and DP teams have found it well within their capabilities to deploy it. “The people in contact with the client – the project managers supported by DP staff – do the technical and setup work. You need someone in the team that champions the product who can translate the requirements of the client in terms of how the software is going to work. Then it can be more junior DP people who do the implementation, because it is all menu-driven – which gives them a new opportunity as well.”

Zeller estimates that setting up a new portal for a client demonstration, comprising 25 different charts and allowing different levels of access, can be achieved in a day or so by his local teams – a pace that was new for the company. “Before this we had to go through IT and the process was not just longer but so much more expensive. It would have taken several days to a week with what we had before. We need to be as lean, as quick and as close to the client as possible – and that’s exactly what we have here. You can give the specs from the client directly to the team – you don’t really have to translate the requirements into a technical specification and that is what saves the time and delay.”

Zeller strongly advises allowing adequate time to learn to use the software, however. “This is not something you can jump into in an hour – it does take two intensive days of training. But overall, I think the trade-off between functionality and ease of use is good. Once you are accustomed to the software it is easy and productive to use.”

He also stresses that everyone, especially those setting client expectations, must be aware that this is a packaged solution. In other words, not all client requests may be achievable. “[When speaking with clients] you need to be aware of what you can and can’t do. Even though it is very flexible, it is working to a standardized framework. There are many things you find you have not thought of first and when you try, you discover there is a way to do it. But it is not fully customizable so there are some areas you cannot change.”

However, in these cost-conscious times, some imposed limits can be an advantage, as Zeller points out: “It is very difficult for research companies to earn money from these portals if what you are doing is fully customized.”

Overall, he concludes, “We are quite happy with this software – and I am working with people who have a lot of experience. We think it is a good solution.”

A version of this review first appeared in Quirk’s Marketing Research Review, January 2013 (p. 28).

Has iTracks killed the online focus group?

So, iTracks has won its action against Artafacts over patent infringement, and Artafacts has paid an undisclosed amount to license its ‘invention’ relating to real-time online focus group messaging.

On the one hand, the patent’s grant by the US Patent Office, and now its upholding, indicate that in the USA at least, anyone with online focus group software should be wary of infringing iTracks’ intellectual property. The invention the patent protects is the ability for online focus group moderators to communicate independently with the group participants and with clients and other observers, through separate message areas. It’s something that every piece of online focus group software I’ve looked at offers. On the other hand, iTracks has so far taken this action against only one of the many companies that offer this capability in their software. Whether others are to be pursued over this infringement rests, at the moment, in the hands of iTracks.

It’s something I intend to look into a bit further, to find out what this really means for the industry.

Big data? Big problem

The Economist recently ran an article on “Big Data” in a special report on International Banking. Its assessment elsewhere in the report is that the industry has been surprisingly resistant to embracing the Internet as an agent of change in banking practice. It reveals, counter-intuitively, that the number of bank branches has actually risen by 10-20% in most developed economies, during a period when most customers pass through their doors once a year rather than once a week.

The newspaper explains this paradox thus: banks with a denser branch network tend to do better, so adding more branches is rewarded by more business. But it’s business on the bank’s terms, not necessarily the customer’s. It does not increase efficiency – it increases cost. And, as The Economist points out, banks’ response in general to customers using mobile phones for banking has been lacklustre, even though customers love it and tend to use it to keep in daily contact with their accounts. It’s a level of engagement that most panel providers would envy.

All of which is to say that there are parallels here with our own industry. Here at Meaning, we have just released the findings of the latest annual MR Software Survey, sponsored by Confirmit. In a sneak peek, Confirmit blogger Ole Andresen focuses on an alarming finding about the lack of smartphone preparedness among most research companies.

But what interests me is the Big Data theme – both in The Economist’s report and in our own. The former offered a fascinating glimpse into the way banks are using technology to read unstructured text and extract meaning, profiling some of the players involved and the relative strengths of the different methods. This is technology which is improving rapidly, and which can already do a better job than humans.

In our annual survey this year, we asked a series of questions on unstructured text. Research companies – in embracing social media, “socialising” their online panels and designing online surveys with more open, exploratory questions – are opening the floodgates to a deluge of words that need analysing: at least, that was what we suspected.

[Chart: analysis methods cited by research companies for handling unstructured text, from the 2011 Confirmit MR Software Survey by meaning ltd]

It turns out that half of the 230 companies surveyed are seeing an increase in the amount of unstructured text they handle from online quant surveys, and slightly more (55%) are seeing an increase from online qual and social media work. Yet the kinds of text analytics technologies that banks and other industry sectors now rely on are barely making an impact in MR.

Even a quick glance at the accompanying chart shows that most research companies are barely scratching the surface of this problem. It’s not the only area where market research looks as if technology has moved on, and opened a gap between what is possible and what is practised. There’s much more on this in our report, which will be publishing in full on the 30th May. Highlights will also be appearing in the June issue of Research magazine.