The latest news from the meaning blog

 

Ascribe ACM Reviewed

In Brief

What it does

Intelligent verbatim content management system and coding environment for researchers and coders, with options for either manually-assisted coding or machine-learning automated coding for higher volumes. Delivered as either web browser-based or web-enabled desktop software modules.

Supplier

Language Logic

Our ratings

Ease of use: 4 out of 5

Compatibility with other software: 5 out of 5

Value for money: 4.5 out of 5

Cost

Conventional coding: between 3 and 5 US cents per verbatim coded. Automated coding: between 10 and 30 US cents per verbatim coded.

Pros

  • Automated coding option will code thousands of open-ends in seconds
  • Machine learning mimics human coders and produces comparable and highly consistent results
  • Many tools to optimise effort when coding manually
  • Web based environment makes it easy to distribute coding work to satellite offices and outworkers

Cons

  • Automated coding only saves time on larger projects such as trackers
  • Web interface is in need of a refresh
  • Windows only – requires Microsoft Internet Explorer

In Depth

A little while ago, Language Logic estimated that their Ascribe online coding product was probably handling over fifty per cent of all the open-ended coding generated by research agencies in the United States, and a decent proportion from the rest of the world too. The challenge is where to go next when you have half the market and no real rivals. One direction is to grow the market for verbatims, by making it possible to code the vast number of open-ends that never get coded – and the new Ascribe Automated Coding Module, or ACM, promises to do just that.

I happen to know something about the technology behind this tool, because I worked on a prototype with the online bank Egg (and even co-presented a paper on it at the 2007 Research conference). Language Logic has subsequently worked with its creators, the Italian government-run research foundation ISTI-CNR, to integrate their technology into Ascribe. Though I am usually hesitant to call anything the best, the ISTI-CNR engine is easily the best I have found: it is the most MR-savvy of any automated text-processing technology. This is not a discovery or text mining tool – it is a coding department in a box.

ACM closely mimics the normal human-intervention coding process, and fits seamlessly into the traditional Ascribe workflow. Because it uses machine learning, it does not attempt to interpret text or extract meaning by looking up words in dictionaries – in fact, it does not use dictionaries at all. Instead, you provide it with examples of how you would classify your data into a codeframe, and then set it to learn from these. In Ascribe, this means you simply start coding the data in the way you normally would. As you code, you are creating the training set that ACM needs. When you have coded enough to create a decent training set, you take your foot off the pedal, and let ACM accelerate through the rest.

First, you build the ‘classifiers’ that will identify matching answers. These work by looking for telltale features in the examples you coded. For any individual answer, it can create thousands of these unique features – patterns of words, letters and so on: so many, in fact, that it easily overcomes problems of poorly spelt words, synonyms and the like. When the classifiers have been built, you can apply them to your uncoded data, and it will categorise those answers too, attaching a confidence score to each coding decision it takes – you can adjust the confidence threshold to make it more or less sensitive. It takes just a few seconds to zip through thousands of verbatims. There is a process for validating the coding decisions ACM has made, and it will helpfully present validation examples in order, starting with those where it was least confident of its coding decision.
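
To make the mechanics concrete, here is a minimal sketch of this train-then-classify-with-confidence pattern, using scikit-learn in Python as a stand-in; it illustrates the general approach only, not Language Logic’s actual engine, and the codes, data and threshold are all invented:

    # A minimal sketch of the train-then-classify pattern, assuming scikit-learn
    # as a stand-in for the real engine; names and threshold are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hand-coded verbatims form the training set; labels are codeframe codes.
    train_texts = ["staff were very freindly", "queue took forever",
                   "friendly, helpful staff", "waited over an hour"]
    train_codes = ["service_positive", "waiting_time",
                   "service_positive", "waiting_time"]

    # Character n-grams tolerate misspellings such as 'freindly'.
    vectoriser = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))
    classifier = LogisticRegression(max_iter=1000)
    classifier.fit(vectoriser.fit_transform(train_texts), train_codes)

    # Classify uncoded verbatims, keeping only confident decisions; the rest
    # are queued for human validation, least confident first.
    THRESHOLD = 0.7  # adjustable sensitivity, like ACM's confidence threshold
    uncoded = ["the staff were so nice", "no idea really"]
    for text, row in zip(uncoded, classifier.predict_proba(vectoriser.transform(uncoded))):
        code, confidence = classifier.classes_[row.argmax()], row.max()
        verdict = "auto-coded" if confidence >= THRESHOLD else "needs validation"
        print(f"{verdict}: {text!r} -> {code} ({confidence:.2f})")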

This validation step makes the system very manageable, as you can understand what it is doing, improve its performance by correcting any assignment errors, and even react to changes over time. It feels uncanny, too, as the marginal decisions it identifies are often the very ones that have the human coders debating where an answer should go.

Not that you have to use the ACM for everything in Ascribe – it does command a premium in pricing over manual coding, and it is only really suitable for larger volumes: the overhead of training and validation is comparable to manually coding a couple of thousand interviews. However, it can also be applied to qualitative projects and web content, such as blogs.

Even manual coding in Ascribe is highly optimised, with tools to let you find similar answers, code by word or phrase matching and, if you wish, re-categorise items at any point. You use it both to create your codeframe and to assign answers to it in one integrated step. It’s a multi-user system, and you can assign responsibilities among the team: some can build codeframes, others only code, and others only analyse. Ascribe also has a surprisingly rich set of analytical tools – even cross-tabbing capabilities. You are not restricted to uploading only the verbatim texts: the entire survey can go in. It handles data from SPSS Dimensions with ease, and it is totally integrated with Confirmit using the Confirmit Web Services interface. Upload routes are provided for most other MR packages.

It’s not the prettiest of tools to use: the interface may be on the web but is hardly of the web, and is in need of a makeover. Language Logic are redesigning some modules as thin-client Windows apps, which have a better-looking interface, but it would improve the approachability of Ascribe if its web interface were better structured and designed. True, it is productive to use, but it does not help you get there as a novice, and the documentation (which is being redone at present) is not as comprehensive as it needs to be. It’s a pity, as both make it a challenge to harness all of the power that is in this otherwise remarkable system.

Customer viewpoint: Joy Boggio, C&R Research Services, Chicago

Joy Boggio is Director of Coding at C&R Research Services, a full-service agency in Chicago. Joy introduced Ascribe to C&R in 2004, having used it previously elsewhere. Ascribe is used for all verbatim coding on quant studies at C&R, and also on some of their qual projects. She explains: “Within a day or two of introducing Ascribe, we immediately cut the delivery time on projects by, in some cases, a week. The features of Ascribe that are the most attractive are it being web based – you can hand out the work very easily to many different people in many different places; if you have had the study before, you can merge it with the previous study and autocode a part of it; you are not restricted in the formats of data you can input, nor are you restricted in how you export the data out; and we can do some rudimentary data processing within the tool.”

Although C&R has a research staff of around 60, Joy is able to support all of the verbatim coding activities with a team of just three coders. But it is not only the coders who use Ascribe – many of the researchers also use it to access the verbatim responses, using its filtering and analytical capabilities to identify examples to include in reports and presentations. “It means they can dive down a little deeper into the data. The problem you have with the process of coding data is that you can flatten out the data – the challenge is always to make sure you retain the richness that is there. With Ascribe you can keep the data vibrant and alive – because the analytical staff can still dive into the data and bring some of that richness to the report in a qualitative way.”

Joy notes that using Ascribe telescopes the coding process, saving precious time at the start. “It’s now a one-step process, instead of having to create the codebook first, before getting everyone working on it. With this, as you work through the verbatims you are automatically creating codes and coding at the same time, so you don’t have to redo that work. When you are happy with the codebook, you can put others onto the project to code the rest. This is where the efficiency comes in.”

Joy estimates that it reduces the hours of coding effort required for a typical ad hoc project by around 50 per cent, but thanks to the ease of allocating work and the oversight the system provides, she remarks: “You are also likely to save at least a day of work on each project in management time too.”

C&R Research makes extensive everyday use of the manual coding optimisation tools Ascribe offers, such as the search for similar words and phrases, but so far has only experimented with the new automated machine-learning coding in ACM. Joy comments: “It seems to be more appropriate for larger volumes of work – more than we typically handle. There is a bit of work up front to train it, but once you get it going, I can see this would rapidly increase your efficiency. It would really lend itself to the larger tracking study, and result in a lot less people-time being required.”

A version of this review first appeared in Research, the magazine of the Market Research Society, December 2009, Issue 523

Dexterity MR Anywhere Reviewed

In Brief

What it does

A multi-user online project management, workflow and collaboration tool for market research projects. It handles all project communications across internal teams and suppliers and subcontractors, and integrates with a range of third party MR software products and services using Web services.

Supplier

Dexterity

Our ratings

Ease of use: 4.5 out of 5

Compatibility with other software: 5 out of 5

Value for money: 4 out of 5

Cost

$200 (US) per month per user with discounts for larger volumes. Integration with other software via web services, from $10,000.

Pros

  • Keeps all communications and documentation in one place
  • See the latest status of all projects at a glance
  • Flexible and adaptable to different workflows
  • Links directly to other software packages and systems from other suppliers

Cons

  • Does not provide any real-time chat or IM support
  • Occasional performance issues can slow usage down

In Depth

Few people bother to count all of the steps involved in a typical research project – they are so well-rehearsed. Perhaps you should: as people scratch around looking for ways to reduce cost, there can be considerable savings to be made in streamlining processes, eliminating bottlenecks and cutting out a few corners. Good communications, good documentation and timely supervision lie at the heart of the most efficient, well-run operations, and this is the approach that MR Anywhere, a web-based research project management tool, actively encourages.

The origins of the tool are in outsourcing – in this case to India. Dexterity, based in Chennai, first developed MR Anywhere to manage communications between their own teams and clients who were outsourcing work to them. It is now available both for their clients to use on all of their projects, and for a monthly subscription, to non-clients who simply wish to use the management tools. Most use their own dedicated site on the MR Anywhere server, but it is possible to have the software installed on your own server.

There are essentially three core functions it performs, each supported within a different area of the tool. First, there is the set-up and request activity, where new jobs are defined and then mapped out with all their timelines, documentation and agreed deliverables. Second, there is the project communications area, which is based around a support ticketing system and offers a vast array of management reports and overviews. Alongside this, within each project, there is a library and information repository that keeps track of all documents relating to the project and provides a range of data reports and management reports too. And beyond these, of course, is an administrator’s interface, to add users and define access rights.

In the setup area, you are stepped through a series of forms and checklists that model all of the activities likely to arise on a project. The framework you provide here then drives the project through the system to completion, and the workflow can be customised for each company. Naturally, the system is designed to allow you to work with subcontractors, and through the admin module you can give them suitably limited access to the system. It works equally well with internal teams, or a combination of the two. It also lets you manage costs and set budgets.

MR Anywhere is very strong on communications. Once work starts on a project in earnest, individuals in different teams are encouraged to pass all their communications through the site, using the built-in ticketing system – modelled on technical support systems where each case is allocated its own ticket number. Instead of using email to communicate, queries or tasks are raised as new tickets, and progress, clarifications and resolutions are all submitted as responses to the ticket. It completely avoids having to wade through emails to see if something has been done, or to try to discover the source of a problem. What it doesn’t do is offer any real-time chat, which is increasingly common in support situations and is also a common way for tech teams to speak to one another, though Dexterity are now considering adding this feature, as those chat trails could then be captured and saved too.

There is an excellent Ticketing Dashboard, which shows the status of all tickets and a summary of them. From here, you can drill down and see the whole trail of messages that went into a ticket. Where tickets have lain unanswered for too long, they are flagged up on the dashboard as needing attention.
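
As an illustration of this ticket-per-query model, here is a minimal sketch in Python; the field names, statuses and idle threshold are invented for the example and are not MR Anywhere’s actual schema:

    # A sketch of the ticket-per-query model; the field names, statuses and
    # idle threshold are invented, not MR Anywhere's actual schema.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    @dataclass
    class Ticket:
        ticket_id: int
        subject: str
        raised_by: str
        status: str = "open"  # open -> resolved -> closed
        responses: list = field(default_factory=list)
        last_activity: datetime = field(default_factory=datetime.utcnow)

        def respond(self, author: str, message: str) -> None:
            # Clarifications and resolutions accumulate on the ticket, not in email.
            self.responses.append((datetime.utcnow(), author, message))
            self.last_activity = datetime.utcnow()

        def needs_attention(self, max_idle: timedelta = timedelta(days=2)) -> bool:
            # The dashboard flag: an open ticket left unanswered for too long.
            return self.status == "open" and datetime.utcnow() - self.last_activity > max_idle

    # Dashboard view: every ticket's status at a glance, stale ones flagged.
    tickets = [Ticket(101, "Routing error at Q12", "scripter@agency"),
               Ticket(102, "Final data map needed", "dp@agency")]
    flagged = [t.ticket_id for t in tickets if t.needs_attention()]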

Another key area is the Project Information Repository. It is here that all of the documents for the project reside, and also where the deliverables arrive. As soon as someone uploads a project deliverable to the site, the project owner automatically receives an email alert that it is available.

At the top level, there is the project dashboard. It is more a leaderboard than a dashboard, being a big table giving the status of all projects and, for each project, detailed information about the stage it has reached and all outstanding items of work to be done. At the set-up stage, you can set key dates for each activity, such as when scripting should be finished, testing signed off, sample loaded and so on. The dashboard flags up any due or overdue items, alerts you to trouble spots and lets you drill down to the individual tickets, where you can usually see what is going on.

This software is much more than a souped-up email system, though. What is almost incredible is the way that Dexterity has integrated its application with other software providers’ products. At the pre-interviewing stage, there is a direct link to Cint’s CPX panel marketplace. It can also link directly to your own panel management system, if it has a web services interface. This means you can build samples, test availability and obtain costs from within MR Anywhere.

For data collection, it interfaces directly with Confirmit, Nebu and IBM SPSS Dimensions. This means that status reports and even data are pulled directly from the data collection software and presented within MR Anywhere’s dashboards and as dynamic reports within the Data Repository. When fieldwork is complete, the data is pulled by MR Anywhere into the Data Repository, where it is held in a generalised format. Here, the data processing team can pick it up and work on it without needing access to the actual data collection system – and it can be exported and converted into other formats, such as Triple-S or SPSS, on the fly. It also supports links to Marketsight, SPSS, Quantum and Wincross – and you can use the tool to request outsourced DP directly from Dexterity in any of these products.
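
The pull pattern behind these integrations is easy to picture. The sketch below shows the general shape of such a web-services status pull in Python; the route and field names are hypothetical, as the real Confirmit, Nebu and Dimensions interfaces are defined by those vendors:

    # The general shape of a web-services status pull; the route and field
    # names here are hypothetical, not any vendor's documented API.
    import requests

    def fetch_fieldwork_status(base_url: str, project_id: str, token: str) -> dict:
        # Pull live fieldwork counts from a data collection server so they can
        # be shown on a dashboard without logging into that system.
        response = requests.get(
            f"{base_url}/projects/{project_id}/status",  # hypothetical route
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()  # e.g. {"completes": 412, "quota_full": False}

    # Example call against an imaginary server:
    # status = fetch_fieldwork_status("https://dc.example.com/api", "P1234", "TOKEN")
    # print(status["completes"], "completes so far")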

Some users have reported occasional sluggish performance, but this appears to be related to the extent to which users are refreshing data dynamically – it’s easy to pull across large amounts of data from other servers and be unaware of the scale of the task. This has not always made Dexterity popular with the other suppliers it integrates with, as it places a load on their servers too. Now the system will refresh data on demand – which is not an entirely satisfactory compromise.

However, it’s clear from the product that the manufacturers understand market research processes from the inside. Start using a tool like this, and there would be no going back.

Customer viewpoint: Guy Sutton, Red Dot Square Solutions, Milton Keynes, UK

Red Dot Square Solutions is a global specialist in shopper insight based in the UK, providing research and consulting services to retailers through virtual simulations of retail environments. Though some of the technology Red Dot Square uses would appear exotic to many research companies – like very large-scale virtual reality and eye tracking – at the heart of their research activities you also find online surveys integrating with the virtual store and eye-tracking technology. There is a lot of data to be matched up and processed, and a lot of complex surveys to program and administer.

Guy Sutton, Head of Research Operations at Red Dot Square, explains: “We decided to keep our operations lean in the UK and outsource various parts of the process, sending the data processing out to Dexterity as well as the survey programming work. We soon worked out how to put MR Anywhere into the process. Having emails going to and fro is not really adequate when working over distance and on a scale like this. Essentially, we use MR Anywhere for keeping track of all of the communications. Having a system that takes care of this means less stress and worry.

“My team had a couple of hours of online training and that was all that was required. It is fantastically simple. We have been disciplined in what we put in, so we have not had issues regarding the quality of documents sent through for scripting. Like any software, what you put into it is what you get out, and we endeavour to be very rigorous in what we get out of it.”

Guy Sutton observes that there are times when the message threads of MR Anywhere are simply not fast enough and the staff switch to IM chat to communicate. “This is a well-rounded application,” he comments, “but if it included IM too, that would be the cherry on the cake.”

He adds: “I was unsure at the outset whether we needed to use it. Because we had the documentation in place, it seemed we were really only replacing email with a more rigorous system. But it also links directly into our Confirmit server. We tick a few boxes and send through some documentation, and our outsourcing partners can get on with the work in the same system. This happens smoothly and with pretty good accuracy. My advice to anyone considering this is to go for it – it provides a defined process and it cuts out the scope for confusion.”

A version of this review first appeared in Research, the magazine of the Market Research Society, November 2009, Issue 522

Globalpark EFS 7 Panel and Communities Reviewed

In Brief

What it does

Fully hosted software-as-a-service online research suite that offers a high level of performance and flexibility, with tightly integrated panel management capabilities. The panel module now offers support for online research communities.

Supplier

Globalpark

Our ratings

Ease of use: 3.5 out of 5

Compatibility with other software: 4 out of 5

Value for money: 4.5 out of 5

Cost

Three components: Set-up and customisation fee for panel typically £10,000-£14,000; plus, annual company-wide licence fee for survey module: £2,700 and for panel on sliding scale, from £6,830 (10,000 members) up to £20,630 (half a million or above); plus, usage fee per complete interview on a sliding scale, e.g. 49p for 10,000-20,000 in a year; 12p for 2 million.

Pros

  • Integrated question and media library for rapid survey development
  • Works with any modern browser or OS
  • Provides a full web content management system (CMS) for multiple panel/community sites
  • Panel can work standalone with other interviewing software, e.g. for other modes

Cons

  • Online and mobile interviewing are the only survey modes supported
  • Steep learning curve
  • A lot of web technical knowledge needed to fully exploit panel customisation
  • Contains qual research elements but no obvious survey workflow for qual projects

In Depth

How a panel differs from a community has become a bit of a topic among the research profession of late: how to avoid influence, whether incentives should be paid or not, or even whether the two differ at all. It’s clear that there is diversity in understanding and practice, and in introducing community support to the Globalpark EFS interviewing suite (the EFS stands for enterprise feedback management) this research software provider leaves those decisions to the individual. You could use the software to run multiple communities, multiple panels or any combination of the two, with different websites for members to use for each, and behind the scenes you may choose to keep all your panel members in one database, and segregate them logically, or physically segregate them into separate databases.

Globalpark EFS splits the task into three essential components: panel (or panels), projects and websites (the panel members’ portal). Therefore, if you had a panel of customers and wanted to create a community of premium customers, as an elite group drawn from the panel, you could create a special website for these customers. Surveys are deployed through the respondent-facing website, and can be deployed to more than one site. They can even be skinned differently, so the survey the premium customers get can be the same survey as in the general panel, but take on a different look, consistent with the premium site’s theme. It also makes this a very appropriate pick for research companies, alongside the corporate EFM customers that Globalpark targets, since panels and surveys can easily be branded for different customers or contexts.

The real power of the system is in its ability to create multiple panel and community websites, and for these sites to contain dynamic content driven from a number of sources. It means that once the site has been configured, no further technical tweaking is required, provided you do not fundamentally change the scope of what you are doing. All the routine activities such as putting surveys live, inviting panellists to participate, collecting demographics and contact updates from members, reward redemption, and the more community-oriented capabilities such as adding content to news feeds, featuring snap polls and results of surveys are simply managed through a set of attractive and straightforward control panels.

The site builder is another matter, though – it is aimed squarely at the web technician, and it will tax even the specialist, as there is a lot to learn and a lot of layers to work through. What Globalpark give you is a fully functioning web content management system (CMS) which conveniently happens to understand surveys and panels. It is HTML- and PHP-based, browser-independent and, following best web practice, rigorously separates presentation from function. To make it a little less complex, rather than having to write any PHP code, most text content can be written in Smarty, a template markup language. This makes it easy to pull fields from the panel database for display, and to put logic into the text too.

It’s a highly accomplished implementation of a CMS, and you could certainly use this software to build big, fast-moving, content-rich sites in which the survey activity was only a small component. It is a clever stance to take, though the trade-off for all this flexibility is the time and expertise required to create a new site. This will not let you pop up a new community in a couple of hours. To be fair, the people at Globalpark recognise that only a minority of customers would be able to do the configuration from scratch, and tend to quote for doing the initial configuration work with new customers.

Version 7 also introduces a number of new Web 2.0-style ‘community’ building blocks. Forums allow you to create threaded discussions, with members contributing responses or, optionally, defining new topics too. Whiteboards let you create a simple single-topic forum. Blogs let you turn the commenting over to your participants, who can add their own content and upload images, documents and so on. You can also feature selected blogs on the home page. Chat lets you hold one-to-one or group discussions in real time, to a limited extent, though it stops well short of a full online group.

You can restrict access to forums and all the community components, so you can work with an invited subset of members only. Whenever content upload is an option, you can restrict the files you permit – e.g. only JPEG images or Word documents – and limit the file size too. It’s all very sensible, but it does not really gel yet for the qualitative researcher wanting to pull panel members into open, semi-structured research. There is no built-in workflow in the way there is for a quant survey, and your data is likely to end up scattered all over the place. This needs more thinking through, and no doubt later versions will improve the situation.

However, praise must go to Globalpark for providing these features and making the software entirely DIY, if you have the skills to do the CMS configuration work behind the scenes, because many other community tools do not give you this degree of control or flexibility. You could do a lot of novel and interesting community-based and collaborative research with what this offers.

Customer perspective

Sony Music in Germany started using Globalpark EFS a year ago for a range of research activities carried out in-house using their own panel. These include new product and concept testing, as well as song cover and artist image tests for upcoming artists and newcomers. Michael Pütz, Director of CRM, Web Strategy and Research, explains: “We also create target group profiles, including information about media usage, which is useful for developing marketing and media plans later on, and we use it to gain additional overall consumer insights.

“It is sometimes said that the music industry is failing to meet consumers’ needs and adapting too slowly to new business models and technologies; our activities with our online panel at www.musikfreund.de (along with other initiatives) show evidence to the contrary. For some years now, our consumers have been a regular part of A&R [artist and repertoire] and marketing decisions, and our reliable partners in developing new business models and proofs of concept.”

The market research team was therefore seeking something that would let them create well-structured and well-designed surveys and offer integrated panel management capabilities too – and to expand some of these into communities – something else EFS offered.

Mr Pütz continues: “The possibilities with EFS are huge. We are constantly challenging EFS and the Globalpark team, and they nearly always come up with good ideas on how to transfer what we want to do into solutions.” He notes in particular the ways in which Globalpark lets users save time and improve consistency, through both standardised, ready-made question types and the ability to set up a media library that makes it easy to insert the audio and video clips which are fundamental to the research he does.

“The basic functionalities of EFS are easy to learn and to teach, however, configuration and tool menus of EFS can be a little bit confusing to beginners – it is not self-explanatory, which is when the help of Globalpark support teams and experts is needed.”

A version of this review first appeared in Research, the magazine of the Market Research Society, October 2009, Issue 521

Confirmit Horizons Reviewed

In Brief

What it does

A complete mixed mode interviewing system with high-quality support for CATI, CAPI and web interviewing, integrated with panel and community management, and telephony integration for CATI, operating in a web-based or cloud computing environment.

Supplier

Confirmit

Our ratings

Ease of use: 5 out of 5

Compatibility with other software: 4.5 out of 5

Value for money: 3.5 out of 5

Cost

Confirmit Horizons CATI hosted solution. Entry-level system from £8,000 per year. Web surveys and other pricing on application.

Pros

  • Captive application for web CATI interviewers and supervisors (rather than a web browser interface)
  • Switch easily from Web to CAPI or CATI and back again
  • Clean, modern and customisable look throughout
  • Open system capabilities through a range of API extensibility kits

Cons

  • Sample loading capabilities are rudimentary
  • No offline scripting capabilities

In Depth

There is a new professional telephone interviewing system on the block, with a name that will surprise those familiar with it – Confirmit. Confirmit Horizons, released earlier this year, extends this granddaddy of a web survey platform into the phone room, and also into the street and shopping mall, with full online/offline CAPI too.

Presumptions that CATI would dwindle away as an interviewing channel in the face of Internet research have so far been wide of the mark, rather like those predictions about the long-awaited paperless office. Industry stats show decline, but CATI is far from collapsing. For many years Confirmit (in its earlier FIRM days) appeared to procrastinate over whether or not it would introduce CATI into its web survey offering. Then, two years ago, it bought up Pulse Train, which appeared to have two jewels in its crown – a superior reporting platform, Pulsar, and a real workhorse of a CATI system, Bellview. The combined firm set about merging the two product lines – an initiative that is not for the faint-hearted.

In a surprisingly short period of time, a merged set of tools for data collection has emerged, and in the process the Confirmit product line has matured into a very comprehensive offering for the whole spectrum of quant research. The manufacturer has clearly learned from the mistakes of others. First, there is an upgrade route for Bellview customers to convert their legacy QSL scripts into Confirmit; second, the revised platform is not a hotch-potch of legacy and new modules – the capabilities of the old Bellview system have almost all been reproduced within Horizons without compromising the Confirmit environment or way of working.

Bellview was a very flexible system for CATI, giving supervisors a lot of control over the management of interviewers, sample and callbacks through its admin interface, and offering limitless possibilities to the script writer. Scripts were written either using a proprietary script language called QSL or a graphical (GUI) authoring tool called Visual QSL – though in reality, most people wrote QSL syntax. Confirmit, on the other hand, has always been a GUI system, and the developer took the brave decision to go with the GUI alone. There are those who will defend the syntax approach to the bitter end on the grounds that it is ‘more efficient’ and quicker for certain tasks. But QSL was quirky and took time to learn – and it was always easier to make mistakes than it was to notice and correct them.

The Confirmit authoring GUI is, in practice, highly efficient, and the developer has worked on optimizing it. There is often a fear that a web-based editing interface will be sluggish in operation. Not so with Horizons – moving from screen to screen is instant and effortless. There will be a fairly steep learning curve for anyone experienced in QSL who has not worked with Confirmit, as the design interface is completely different.

The legacy code bridge will not convert everything – it seems to get you around 80 to 90 per cent of the way to a working CATI script in Confirmit. It deals with all of the more tedious aspects of converting texts (even in multiple languages), variables and logic, but you will probably have to patch up any complex execution logic or clever scripting by hand. Another bonus is that a lot of the concepts are very similar between the two systems, such as the block-structured skip logic, and those that were missing from Confirmit (such as QSL’s ‘pblocks’ for handling beginnings, endings and handover from one mode to another) now appear in Confirmit as ‘call blocks’. QSL users will, however, miss some of the syntax tricks that were possible – the Confirmit interface is not as slick as QSL for writing logic and performing operations on variables.

Better-looking CATI

There is no separate CATI-specific module – all of the interviewing and supervision capabilities are now implemented through the main Confirmit platform, which has been skilfully enhanced to provide a true CATI experience within a web-based environment. However, to avoid the problem of interviewers and supervisors having to work in a web browser, both of these interfaces are provided as dedicated applications, or ‘consoles’. These are easily set up on interviewers’ machines or, if interviewers and supervisors are working remotely, they can be emailed a link to download and install the package.

CATI interviewer screens now take on a very web-like appearance – you can control their exact look and feel through the same template gallery as a web interview, and there are some templates optimised for CATI too. The console environment also allows interviewers to control all aspects of the interview from the keyboard rather than the mouse that is typical of a web interview – though they can use the mouse too if they wish.

Clever reporting

The web environment certainly improves the look of the CATI screens – they can be branded, for instance, with the client’s logo, or made distinctive for different projects. These are not merely cosmetic matters – theming the screens can help with project recognition, and better screen design can aid concentration and legibility. The screens can also contain many more answer options.

The supervisor console similarly packs a punch, with added style. Again, how it does things is a little different for those familiar with Bellview, but the concepts are similar, and a lot of effort has gone into making the features intuitive to use. A real innovation here is the alerting report – triggered by events such as an interviewer not entering any data, or an appointment falling overdue. This is potentially a big money-saver for CATI units, as it makes it feasible for a single supervisor to oversee 15 or more interviewers as effectively as 6-10 with a more conventional system. This could even free up a few monitoring stations for interviewer use instead.
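
To picture how event-triggered supervision works, here is a minimal sketch; the rules and thresholds are invented examples of the kinds of event described, not Confirmit’s actual implementation:

    # Invented examples of event-triggered supervisor alerts of the kind
    # described above; the rules and thresholds are not Confirmit's own.
    from datetime import datetime, timedelta

    def idle_interviewer(session: dict, max_idle: timedelta = timedelta(minutes=5)) -> bool:
        # Flag an interviewer who has not entered any data for too long.
        return datetime.utcnow() - session["last_keystroke"] > max_idle

    def overdue_appointment(callback: dict) -> bool:
        # Flag a scheduled callback whose time has passed without a call.
        return callback["due"] < datetime.utcnow() and not callback["placed"]

    def alerts(sessions, callbacks):
        # Only exceptions surface, so one supervisor can watch many interviewers.
        for s in sessions:
            if idle_interviewer(s):
                yield f"interviewer {s['id']}: no data entered recently"
        for c in callbacks:
            if overdue_appointment(c):
                yield f"appointment {c['id']} is overdue"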

The next-day CATI centre

Telephony integration is also possible through Magnetic North, a telephony specialist, with interfaces for both conventional telephone lines and VoIP. It is a web-based dialler, which means it will support either conventional call centres or satellite offices and homeworkers. When an interviewer logs in, they are asked to enter the number of the phone on their desk, and a call is placed to that number. The line is then kept open, and the dialler starts placing calls. The dialler will support every level of automation, from click-to-dial through to full predictive dialling, with supervisors able to monitor and control the nuisance call rate through the alerting reports.

The telephony support also allows for recording of chosen segments of the interview, such as an open-ended question, or the entire interview. The web telephony also means that you could set up a real or a virtual call centre using the cloud computing model pretty much overnight.

Where does it end?

The sheer depth of functionality becomes apparent when you take a look at the other interfaces offered to let you integrate Confirmit with other software. The software is developed using Microsoft SQL Server and the Microsoft .NET framework, which already makes integration relatively straightforward for other Microsoft users. Confirmit have also embraced Web Services with a passion – these allow applications to exchange data and drive one another across the Internet, effectively going in through the back door, but in a highly secure and efficient way. There are currently nine different APIs for such touch-points as reporting, data transfer, quotas and panel integration. There is also an ‘extensibility framework’ to make it easy to develop specialist applications within Confirmit. One has already been designed by a third party for fraud detection, and another is being developed to support interviewing on iPhones.

It is going to be hard in future for the humble reviewer to find anything that this software cannot do, since there are likely to be many more of these third party plug-ins. Indeed, the few niggles I do have are relatively minor in scope – sample management is still a bit too Web interview-oriented, and you have to be a Windows user to exploit the software. Confirmit are cagey about the pricing too, but it appears to be towards the top end.

Confirmit have pulled off quite a feat with Horizons – it’s a serious product with a friendly face that makes for a hard act to follow.

A version of this review first appeared in Research, the magazine of the Market Research Society, September 2009, Issue 520.

Ruby Reviewed

In Brief

What it does

Modern, GUI-driven cross-tabulation, analysis and charting suite for market research data aimed at the tabulation specialist. Capable of handling large and complex data sets, trackers and other ‘difficult’ kinds of research project.

Supplier

Red Centre Software, Australia

Our ratings

Ease of use: 3 out of 5

Compatibility with other software: 5 out of 5

Value for money: 4.5 out of 5

Cost

Full version $4,800 (allows set-up); additional analyst versions $2,400. Annual costs; volume discounts available.

Pros

  • Cross-tabs and charts of every kind from large or complex datasets, and so much more
  • Quick and efficient to use for the DP specialist, using a choice of GUI access and scripting
  • Push-pull integration with Excel and PowerPoint for report preparation and automation
  • Superb proprietary charting to visualize MR data more effectively than in Excel or PowerPoint
  • Excellent support for managing trackers

Cons

  • Interface is bewildering to beginners: a steep learning curve
  • No simple web-browser interface for end users or to provide clients with portal access to studies

In Depth

We always try to present something new in these software reviews, but this time, we think we are onto something that could break the mold: a new tabulation software package from an Australian producer, Red Centre Software, that leaves most of the existing choices looking decidedly dated. It’s refreshing, because for a while, most efforts in market research software seem to have gone into improving data collection and making it work across an ever-broadening spectrum of research channels. Innovation at the back-end seems to have focused on presentation, and has often left research companies and data processing operations with a mish-mash of technology and a few lash-ups along the way to transform survey data into the range of deliverables that research clients expect today.

Ruby could easily be mistaken for yet another end-user tabulation tool like Confirmit’s Pulsar Web or SPSS’s Desktop Reporter, with its GUI interface and drag-and-drop menus. The reality is that it is a fully-fledged tabulation and reporting system aimed squarely at the data processing professional. If you are looking for a Quantum replacement, this program deserves a test-drive.

As far as I could see, there were no limits on the data you could use. It will import data from most MR data formats, including Quantum, Triple-S and SPSS. Internally, it works with flat ASCII files, but it is blisteringly fast, even when handling massive files. It will handle hierarchical data of any complexity, and offers the tools to analyse multi-level data throughout, which is something modern analysis tools often ignore.

It is equally at home dealing with textual data. The producers provided me with a series of charts and tables they had produced from analyzing Emily Brontë’s Wuthering Heights by treating the text as a data file. The same could be done for blogs, RSS feeds and the mass of other Web 2.0 content that many researchers feel is still beyond their grasp.

More conventionally, Ruby contains a broad range of tools specifically for handling trackers, so that you are not left to automate for yourself the reconciliation of differences between waves caused by variations in the question set and answer lists.

Ruby is a very intelligent tool to use when it comes to processing the data. The data in the tables reported or charted in MR have often gone through a long chain of transformations, and in the old tools there could be yards of ‘spaghetti code’ supporting these transformations. Trying to work out why a particular row on a table is showing zeroes when it shouldn’t can take an age in those tools, as you trace back through the tangle of code, but Ruby will help you track back through the chain of definitions in seconds, and even let you see the values as you go. It is the kind of diagnostic tool that DP professionals deserve but rarely get.

In Ruby, you will probably make most of these data combinations and transformations visually, though it also allows you to write your own syntax, or export the syntax, fiddle with it and import it again – the combination that DP experts often find gives them the best of both worlds. Ruby keeps track of the provenance of every variable, and at any point you can click on a variable and see exactly where the data came from, and even see the values at each stage.
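
A rough sketch of what such provenance tracking involves appears below; the structures are invented for illustration, not Ruby’s internals:

    # A rough sketch of derivation tracking; the structures are invented,
    # not Ruby's internals.
    class Variable:
        def __init__(self, name, source=None, rule=""):
            self.name = name
            self.source = source  # the Variable this one was derived from
            self.rule = rule      # human-readable derivation rule

        def trace(self):
            # Walk back through the chain of definitions.
            step, chain = self, []
            while step is not None:
                chain.append(f"{step.name}  ({step.rule or 'raw data'})")
                step = step.source
            return chain

    raw = Variable("q5_brand_mentions")
    netted = Variable("q5_netted", source=raw, rule="net minor brands into 'Other'")
    top2 = Variable("q5_top2", source=netted, rule="top-two-box recode")
    print("\n".join(top2.trace()))  # q5_top2 -> q5_netted -> q5_brand_mentions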

The range of options for tabulation and data processing is immense, with a broad set of expressions that can be used to manipulate your data or the columns and rows in tables. There is complete flexibility over percentaging and indexing values off other values, or basing one table on another, so it is great for producing all of those really difficult tables where every line seems to have a different definition.

With charting, Ruby gives you the choice of using its own proprietary charting engine, or pushing the data out to PowerPoint or Excel charts. The native Ruby charts are a treat to work with, as the developers seem to have gone out of their way to redress the inadequacies of Excel and PowerPoint charts. For time-series charts, concepts such as smoothing and rolling periods are built-in. You can add trend lines and arbitrary annotations very easily. Charts can be astonishingly complex and can contain thousands of data points or periods, if you have the data. Yet it will always present the data clearly and without labels or points clashing, as so often happens in Excel.
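
To see what the built-in handling saves you elsewhere, here is the rolling-period idea done by hand in Python with pandas and matplotlib, on invented data:

    # Rolling periods done by hand with pandas/matplotlib, on invented data,
    # to show the kind of time-series handling Ruby builds in.
    import matplotlib.pyplot as plt
    import pandas as pd

    waves = pd.date_range("2009-01-01", periods=26, freq="W")
    awareness = pd.Series(range(40, 66), index=waves, dtype=float)

    rolling = awareness.rolling(window=4).mean()  # four-wave rolling average

    plt.plot(awareness.index, awareness, alpha=0.4, label="weekly")
    plt.plot(rolling.index, rolling, label="4-week rolling mean")
    plt.legend()
    plt.title("Brand awareness, % (illustrative data)")
    plt.show()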

Excel and PowerPoint charts are also dynamic, and the Ruby data source will be embedded in the chart, so that the charts can be refreshed and updated, if the underlying data changes.

Amy Lee is DP Manager at Inside Story, a market research and business insights consultancy based in Sydney, Australia, where she has been using Ruby for two years, alongside five other researchers and analysts. Ruby is used to analyze custom quantitative projects and a number of large-scale trackers.

Asked if the program really did allow a DP analyst to do everything they needed to, Amy responds: “We were able to move to Ruby a couple of years ago, and it is now the main program we use, because it can do everything we need to do. I find it is an extremely powerful and flexible tool. Whenever I need to do anything, I always feel I can do it with Ruby. Other tools can be quite restrictive, but Ruby is very powerful and completely flexible.”

Amy considers that the program goes beyond what more traditional DP cross-tab tools allowed her. She observes: “Compared with other programs I have used, Ruby allows me to filter and drill down into the data much more than I could with them. It’s especially good at exporting live charts and tables into documents.

“Once they are in PowerPoint or Word, trend charts can be opened up and adjusted as necessary. When it is a live chart, it means you can update the data: instead of having to go back to Ruby, open it up, find the chart and then read the data, you can just double-click it inside PowerPoint and see all the figures change. And there is even an undo feature, which is good for any unintentional errors.”

Amy freely admits that this is not a program you can feel your way into using without some training, and without allowing some time to get to understand it. “It is really designed for a technical DP person,” she explains. “If you have someone with several years’ experience of another program, they will have no problem picking this up, as everything will be very familiar to them. But we also had a client who wanted to use it – someone with a research rather than a DP background – and they found it a bit overwhelming, because it can do so much, and it is not that simple. It looks complex, but once you get the hang of it, you can do what you need very quickly.”

Among the other distinguishing features Amy points to are the speed of the software, which is very fast at processing large amounts of data and producing large numbers of tables and charts; its in-built handling of time series, allowing you to combine or suppress periods very easily; and the range of charts offered, in particular the perceptual maps.

Some of the research companies I speak with are becoming uneasy that the legacy data processing tools they depend on have fallen so far behind, and are in some cases dead products. They have endured because the GUI-based ‘replacements’ at the back of the more modern data collection tools just don’t cover the breadth of functionality that is needed. You get breadth and depth with Ruby – even if the sheer range of functionality it offers is bewildering to the newcomer.

A version of this review first appeared in Quirk’s Marketing Research Review, August 2009.

SPSS Text Analytics for Surveys Reviewed

In Brief

What it does

Textual analysis software which uses the natural language processing (NLP) method to process textual data from verbatim responses to surveys. It will categorise or group responses, find latent associations and perform classification or coding, if required.

Supplier

SPSS

Our ratings

Ease of use: 3.5 out of 5

Compatibility with other software: 4.5 out of 5

Value for money: 3.5 out of 5

Cost

One-off costs: standalone user £2,794; optional annual maintenance £559; single concurrent network user: £6,985 software, plus maintenance £1,397

Pros

  • Flexible – can use it to discover and review your verbatims individually, or to produce coded data automatically under your supervision
  • User interface is simple, straightforward and productive to use, once you are familiar with the concepts
  • Lets you relate your open-ended data to closed data from other questions or demographics
  • Easy import from and export to SPSS data formats or Microsoft Excel

Cons

  • This is an expert system which requires time and effort to understand
  • System relies on dictionaries, which need to be adjusted for different subject domains
  • Rules-based approach for defining coded data requires learning and using some syntax

In Depth

One of the greatest logistical issues with online research is handling the deluge of open-ended responses that often arrives. While much of the rest of the survey process can be automated, analysing verbatim responses to open questions remains laborious and costly. If anything, the problem gets worse with Web 2.0-style research. A lot of good data gets wasted simply because it takes too long and costs too much to analyse – which is where this ingenious software comes in.

PASW Text Analytics for Surveys (TAfS) operates either as an add-on to the PASW statistical suite – the new name for the entire range of software from SPSS – or as a standalone module. It is designed to work with case data from quantitative surveys containing a mixture of open and closed questions, and will help you produce a dazzling array of tables and charts directly on your verbatim data, or provide you with automatically coded data.

A wizard helps you to start a new project. First, you specify a data source, which can be data directly from PASW Statistics or PASW Data Collection (the new name for Dimensions), an ODBC database, or an Excel file (via PASW Statistics). Next, you select the variables you wish to work with, which can be a combination of verbatim questions, for text analysis, and ‘reference questions’ – any other closed questions you would like to use in comparisons, to classify responses or to discover latent relationships between text and other answers. Another early decision in the process is the selection of a ‘text analysis package’, or TAP.

SPSS designed TAfS around the natural language processing method of text analysis. This is based on recognising words or word stems, and uses their proximity to other word fragments to infer concepts. The method has been developed and researched extensively in the field of computational linguistics, and can perform as well as, if not better than, human readers and classifiers, if used properly.

A particular disadvantage of using NLP with surveys is the amount of set-up that must be done. It needs a lexicon of words or phrases, and also a list of synonyms, so that different ways of expressing the same idea converge into the same concept for analysis. If you then wish to turn all the discovered phrases and synonyms into categorised data, you need classifiers. The best way to think of an individual classifier is as a text label that describes a concept – and, behind it, the set of computer rules used to determine whether an individual verbatim response falls into that concept or not.
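
A toy sketch makes these building blocks easier to picture – a type dictionary (the lexicon), a substitution dictionary (the synonyms) and one rule-based classifier. Everything below is invented for illustration; real packages are vastly richer:

    # A toy version of the three building blocks; real TAfS packages are
    # vastly richer, and everything here is invented for illustration.
    TYPE_DICTIONARY = {"rude": "Negative", "helpful": "Positive", "staff": "Personnel"}
    SUBSTITUTIONS = {"impolite": "rude", "employees": "staff", "team": "staff"}

    def normalise(text):
        # Converge different wordings onto the same concepts.
        return [SUBSTITUTIONS.get(w, w) for w in text.lower().split()]

    def classify(text):
        # A classifier is a label plus the rules deciding membership: here,
        # 'Staff attitude' fires when a Personnel term occurs alongside a
        # Positive or Negative term in the same response.
        types = {TYPE_DICTIONARY.get(w) for w in normalise(text)}
        labels = set()
        if "Personnel" in types and "Negative" in types:
            labels.add("Staff attitude (negative)")
        if "Personnel" in types and "Positive" in types:
            labels.add("Staff attitude (positive)")
        return labels

    print(classify("The employees were impolite"))  # {'Staff attitude (negative)'}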

TAfS overcomes this disadvantage by providing you with ready-built lexicons (it calls them ‘type’ dictionaries), not only in English but also in Dutch, French, German, Spanish and Japanese. It also provides synonym dictionaries (called ‘substitution dictionaries’) in all six supported tongues, and three pre-built sets of classifiers – one for customer satisfaction surveys, another for employee surveys and a third for consumer product research. It has developed these by performing a meta-analysis of verbatim responses in hundreds of actual surveys.

Out of the box, these packages may not do a perfect job, but you will be able to use the analytical tools the software offers to identify answers that are not getting classified, or those that appear to be misclassified, and use these to fine-tune the packages or even develop your own domain-specific ones. Selecting dictionaries and classifiers is done in just a couple more clicks in the wizard; the software then processes your data and you are ready to start analysing the verbatims.

The main screen is divided into different regions. One region lets you select the categories into which the answers have been grouped; another lets you review the ‘features’ – the words and phrases identified; and in the largest region there appears a long scrolling list of all your verbatim responses for the currently selected category or feature. All of the extracted phrases are highlighted and colour-coded. A third panel shows the codeframe, or classifiers, as a hierarchical list. As you click on any section of it, the main window is filtered to show just those responses relating to that item. However, it also shows you all of the cross-references to the other answers, which is very telling. There is much to be learned about your data just from manipulating this screen, but TAfS has much more up its sleeve.

One potentially useful feature is sentiment analysis, in which each verbatim is analysed according to whether it is a positive or a negative comment. Interface was not able to test the practical reliability of this, but SPSS claim that it works particularly well with customer satisfaction type studies. In this version, sentiment analysis is limited to the positive/negative dichotomy, though the engine SPSS uses is capable of other kinds of sentiment analysis too.

The software also lets you use ‘semantic networks’ to uncover connections within the data and build prototype codeframes from your data, simply by analysing the frequency of words, phrases and combinations of words and phrases – rather like performing a cluster analysis on your text data, except that it is already working at the conceptual level, having sorted the words and phrases into concepts.

You can build codeframes with or without help from semantic networks. It’s a fairly straightforward process, but it does involve building some rules using syntax. I was concerned about how transparent and how maintainable these would be as you hand a project from one researcher to another.

Another very useful tool, which takes you beyond anything you would normally consider doing with verbatim data, is a tool to look for latent connections between different answers, and even the textual answers and closed data, such as demographics or other questions.

This may be a tool for coding data, but it is not something you can hand over to the coding department – the tool expects the person in control to have domain expertise and, moreover, to possess not a little understanding of how NLP works; otherwise you will find yourself making some fundamental errors. If you put in a little effort, though, this tool not only has the potential to save hours and hours of work, but to let you dig up those elusive nuggets of insight you probably long suspected were in the heaps of verbatims, if only you could get at them.

A version of this review first appeared in Research, the magazine of the Market Research Society, June 2009, Issue 517