Ascribe ACM reviewed

In Brief

What it does

Intelligent verbatim content management system and coding environment for researchers and coders, with options for either manually assisted coding or machine-learning automated coding for higher volumes. Delivered as web browser-based and web-enabled desktop software modules.

Supplier

Language Logic

Our ratings

Ease of use: 4 out of 5

Compatibility with other software: 5 out of 5

Value for money: 4.5 out of 5

Cost

Conventional coding: between 3 and 5 US cents per verbatim coded. Automated coding: between 10 and 30 US cents per verbatim coded.

Pros

  • Automated coding option will code thousands of open-ends in seconds
  • Machine learning mimics human coders and produces comparable and highly consistent results
  • Many tools to optimise effort when coding manually
  • Web-based environment makes it easy to distribute coding work to satellite offices and outworkers

Cons

  • Automated coding only saves time on larger projects such as trackers
  • Web interface is in need of a refresh
  • Windows only – requires Microsoft Internet Explorer

In Depth

A little while ago, Language Logic estimated that their Ascribe online coding product was probably handling over fifty per cent of all the open-ended coding generated by research agencies in the United States, and a decent proportion from the rest of the world too. The challenge is where you go next when you have half the market and no real rivals. One direction is to grow the market for verbatims, by making it possible to code the vast number of open-ends that never get coded – and the new Ascribe Automated Coding Module, or ACM, promises to do just that.

I happen to know something about the technology behind this tool, because I worked on a prototype with the online bank Egg (and even co-presented a paper on it at the 2007 Research conference). Language Logic has subsequently worked with its creators, the Italian government-run research foundation ISTI-CNR, to integrate their technology into Ascribe. Though I am usually hesitant to call anything the best, the ISTI-CNR engine is easily the best I have found: it is the most MR-savvy of any automated text-processing technology. This is not a discovery or text mining tool – it is a coding department in a box.

ACM closely mimics the normal human-intervention coding process, and fits seamlessly into the traditional Ascribe workflow. Because it uses machine learning, it does not attempt to interpret answers or extract meaning by looking up words in dictionaries – in fact, it does not use dictionaries at all. Instead, you provide it with examples of how you would classify your data into a codeframe, and then set it to learn from them. In Ascribe, this means you simply start coding the data in the way you normally would. As you code, you are creating the training set that ACM needs. When you have coded enough to create a decent training set, you take your foot off the pedal and let ACM accelerate through the rest.
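
To make the learn-by-example idea concrete, here is a minimal sketch using scikit-learn as a stand-in classifier. It illustrates the general technique only – it is not Language Logic’s implementation – and the verbatims and code labels are invented:

```python
# Learn-by-example verbatim coding: train on manually coded answers,
# then let the model code the rest. No dictionaries are involved.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Verbatims you have already coded by hand form the training set.
coded_texts = [
    "the staff were really friendly and helpful",
    "lovely helpful assistant on the till",
    "waited forty minutes before anyone served me",
    "the queue was far too long at lunchtime",
]
coded_labels = ["staff", "staff", "waiting", "waiting"]

# The rest of the file is still uncoded.
uncoded_texts = ["very pleasant and helpful people",
                 "stood in a long queue for ages"]

# The model learns which word features distinguish each code purely
# from the examples it is shown.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(coded_texts, coded_labels)
print(list(zip(uncoded_texts, model.predict(uncoded_texts))))
```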

First, you build the ‘classifiers’ that will identify matching answers. These work by looking for telltale features in the examples you coded. For any individual answer, ACM can generate thousands of these features – patterns of words, letters and so on. So many, in fact, that it easily overcomes problems of poorly spelt words, synonyms and the like. When the classifiers have been built, you can apply them to your uncoded data, and it will categorise those answers too, attaching a confidence score to each coding decision it takes – you can adjust this threshold to make it more or less sensitive. It takes just a few seconds to zip through thousands of verbatims. There is a process for validating the coding decisions ACM has made, and it will helpfully present validation examples in order, starting with those where it was least confident of its decision.
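
Continuing the invented data from the sketch above, the fragment below illustrates the two mechanisms just described – character-level features that shrug off misspellings, and a per-decision confidence score with an adjustable threshold, with validation candidates presented least-confident first. Again, this is a sketch of the general approach, not Ascribe’s internals:

```python
# Confidence-scored auto-coding with a human validation queue (toy data).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

coded_texts = [
    "the staff were really friendly and helpful",
    "lovely helpful assistant on the till",
    "waited forty minutes before anyone served me",
    "the queue was far too long at lunchtime",
]
coded_labels = ["staff", "staff", "waiting", "waiting"]
uncoded_texts = [
    "the staff were freindly",      # note the misspelling
    "long queu at the checkout",
    "it was fine I suppose",
]

# Overlapping character 3-5-grams generate thousands of features, so
# "freindly" still shares most of its features with "friendly".
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(coded_texts, coded_labels)

THRESHOLD = 0.8  # adjust to make auto-coding more or less cautious
probs = model.predict_proba(uncoded_texts)
confidence = probs.max(axis=1)
codes = model.classes_[probs.argmax(axis=1)]

# Present answers for validation least-confident first; decisions above
# the threshold are accepted automatically.
for i in np.argsort(confidence):
    action = "auto-code" if confidence[i] >= THRESHOLD else "validate"
    print(f"{action}: {uncoded_texts[i]!r} -> {codes[i]} ({confidence[i]:.2f})")
```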

This validation step makes the system very manageable, as you can understand what it is doing, improve its performance by correcting any assignment errors, and even react to changes over time. It feels uncanny, too, as the marginal decisions it identifies are often the very ones that have human coders debating where an answer should go.

Not that you have to use ACM on every Ascribe job – it commands a premium in pricing over manual coding and is only really suitable for larger volumes. The overhead of training and validation is comparable to manually coding a couple of thousand interviews. However, it can also be applied to qualitative projects and web content, such as blogs.

Even manual coding in Ascribe is highly optimised, with tools to let you find similar answers, code by word or phrase matching and, if you wish, re-categorise items at any point. You use it both to create your codeframe and to assign answers to it in one integrated step. It’s a multi-user system, and you can assign responsibilities among the team: some can build codeframes, others only code, and others only analyse. Ascribe also has a surprisingly rich set of analytical tools – even cross-tabbing capabilities. You are not restricted to uploading only the verbatim texts: the entire survey can go in. It handles data from SPSS Dimensions with ease, and it is fully integrated with Confirmit using the Confirmit Web Services interface. Upload routes are provided for most other MR packages.
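
As a hypothetical illustration of the code-by-phrase-matching idea (the helper, rules and answers below are invented for the sketch, not Ascribe’s own):

```python
# Batch-code every answer that matches a word or phrase pattern.
import re

answers = {
    101: "Delivery was quick",
    102: "really quick delivery to my door",
    103: "the colours are lovely",
    104: "QUICK DELIVERY, well packaged",
}
assigned = {}  # answer id -> list of codes

def code_matching(pattern, code):
    """Assign `code` to every answer matching `pattern`, in one pass."""
    for aid, text in answers.items():
        if re.search(pattern, text, flags=re.IGNORECASE):
            assigned.setdefault(aid, []).append(code)

code_matching(r"quick\s+delivery|delivery\s+was\s+quick", "FAST_DELIVERY")
print(assigned)  # answers 101, 102 and 104 pick up FAST_DELIVERY
```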

It’s not the prettiest of tools to use: the interface may be on the web, but it is hardly of the web, and is in need of a makeover. Language Logic is redesigning some modules as thin-client Windows apps, which have a better-looking interface, but it would improve the approachability of Ascribe if its web interface were better structured and designed. True, it is productive to use, but it does not help you get there as a novice, and the documentation (which is being redone at present) is not as comprehensive as it needs to be. It’s a pity, as both make it a challenge to harness all of the power in this otherwise remarkable system.

Customer viewpoint: Joy Boggio, C&R Research Services, Chicago

Joy Boggio is Director of Coding at C&R Research Services, a full-service agency in Chicago. Joy introduced Ascribe to C&R in 2004, having used it previously elsewhere. Ascribe is used for all verbatim coding on quant studies at C&R, and also on some of its qual projects. She explains: “Within a day or two of introducing Ascribe, we immediately cut down the delivery time on projects by, in some cases, a week. The features of Ascribe that are the most attractive are it being web based – you can hand out the work very easily to many different people in many different places; if you have had the study before, you can merge it with the previous study and autocode a part of it; you are not restricted in the formats of data you can input, nor are you restricted in how you export the data out; and we can do some rudimentary data processing within the tool.”

Although C&R has a research staff of around 60, Joy is able to support all of the verbatim coding activities with a team of just three coders. But it is not only the coders who use Ascribe – many of the researchers also use it to access the verbatim responses, using its filtering and analytical capabilities to identify examples to include in reports and presentations. “It means they can dive down a little deeper into the data. The problem with the process of coding data is that you can flatten out the data – the challenge is always to make sure you retain the richness that is there. With Ascribe you can keep the data vibrant and alive – because the analytical staff can still dive into the data and bring some of that richness to the report in a qualitative way.”

Joy notes that using Ascribe telescopes the coding process, saving precious time at the start. “It’s now a one-step process, instead of having to create the codebook first, before getting everyone working on it. With this, as you work through the verbatims you are automatically creating codes and coding at the same time, so you don’t have to redo that work. When you are happy with the codebook, you can put others onto the project to code the rest. This is where the efficiency comes in.”

Joy estimates that Ascribe reduces the hours of coding effort required for a typical ad hoc project by around 50 per cent, but due to the ease of allocating work, and the oversight the system provides, she remarks: “You are also likely to save at least a day of work on each project in management time too.”

C&R Research makes extensive everyday use of the manual coding optimisation tools Ascribe offers, such as searching for similar words and phrases, but so far has only experimented with the new automated machine-learning coding in ACM. Joy comments: “It seems to be more appropriate for larger volumes of work – more than we typically handle. There is a bit of work up front to train it, but once you get it going, I can see this would rapidly increase your efficiency. It would really lend itself to the larger tracking study, and result in a lot less people-time being required.”

A version of this review first appeared in Research, the magazine of the Market Research Society, December 2009, Issue 523

SimStat, WordStat & QDA Miner reviewed

In Brief

WordStat with QDA Miner and SimStat

Provalis Research, Canada
Date of review: September 2008

What it does

Windows-based software for analysing textual data, such as answers to open-ended questions alongside other quantitative data, or qualitative transcripts. Uses a range of dictionary-based textual statistical analysis methods.

Our ratings

Ease of use: 3 out of 5

Compatibility with other software: 3.5 out of 5

Value for money: 4.5 out of 5

Cost

SimStat + QDA Miner + WordStat bundle – one-off purchase cost of $4,195, or $14,380 for 5 licences. 75% discount for academic users on most prices. Upgrades offered to licensed users at a discount.

Pros

  • Analyse and cross-tab verbatim responses as you would any standard question
  • Can be used to code data automatically using a machine-learning method
  • Easy to create your own subject-specific dictionaries, which can be reused on similar projects
  • Full versions available to download for a month’s free trial

Cons

  • Standalone system – no collaborative or multi-user capabilities
  • Steep learning curve – requires an expert user
  • Dictionary- and word-based analysis only: does not support natural language processing or learn-by-example methods
  • Does not support Triple S data

In Depth

It’s often the answers to open-ended questions that offer the richest insights in a survey – especially in online surveys, if the questions are specific and well targeted. Yet this valuable resource remains almost unexplored in most quantitative surveys, due to the sheer effort involved in analysing it. The principal method used – manual coding to a codeframe – is virtually unchanged since the 1940s. The only nod to the information age is the number of coding departments that use Excel as a surrogate electronic coding sheet – the IT equivalent of equipping your horse and cart with satnav.

WordStat is a very versatile piece of software that offers countless ways to analyse open-ended textual data with virtually the same ease as analysing normal ‘closed’ questions in a cross-tab package. It comes as an add-on module to SimStat – a feature-rich desktop statistical package for analysing survey data, with decent cross-tab and charting capabilities. Interestingly, WordStat also functions equally well as an add-on to another program: QDA Miner, a code-and-retrieve analysis suite for qualitative researchers. All three are developed by Provalis Research, based in Montreal.

WordStat, combined with the other two programs, offers a bewildering choice of ways to dig deeply into verbatim data and make sense of it. In this review, I will focus on two of the more interesting ones to market researchers, but this versatile program has many more tricks up its sleeve than I can cover here.

First, without doing any conventional coding at all, you can effectively cross-tab or slice your verbatim data according to any demographics or other variables in the file. For this, you would start out in SimStat.

SimStat will let you carry out normal quantitative analysis of data, from simple cross-tabs to a wide range of statistics – factor and cluster analysis, regression, correlation and so on. To do textual analysis, you need to start with the verbatim data in the same data file as the other numeric and coded questions, which is pretty much how most web interviewing packages provide the data these days. SimStat works around the concept of dependent and independent variables: pick any demographic you want as the independent variable, pick the verbatim question as the dependent variable, then pick Content Analysis from the bottom of the Statistic menu, and SimStat will fire up the separate WordStat module to let you cross-tab and dig into your open-ended text.
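
A rough pandas analogue of that dependent/independent setup, with invented column names and data – this is not SimStat itself, just an illustration of slicing verbatims by a demographic:

```python
# Slice open-ended answers (dependent) by a demographic (independent).
import pandas as pd

df = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-54", "35-54", "55+"],
    "q5_open": ["love the mobile app", "the app is great",
                "phone support was slow", "kept on hold for ages",
                "friendly staff in the branch"],
})

for group, verbatims in df.groupby("age_group")["q5_open"]:
    print(group, "->", list(verbatims))
```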

Once within WordStat, there is a wide array of reports and ways to look at the words respondents have given, in aggregate or case by case, against your dependent variable or for the total sample. There are charts, including dendrograms and very informative heatmaps that show the relationships between words, and you can adjust the proximity factor used when looking at words used in the vicinity of other words. Indeed, there is more here than you are ever likely to use.
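
For a flavour of what sits behind those word-relationship charts, the sketch below builds a word co-occurrence matrix and clusters the words hierarchically – the structure a dendrogram or heatmap would display. It is illustrative only; WordStat’s own statistics and proximity options differ:

```python
# Word co-occurrence counts, then hierarchical clustering of words.
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.feature_extraction.text import CountVectorizer

answers = [
    "staff were friendly and helpful",
    "helpful friendly staff in store",
    "the queue was far too long",
    "long queue at the checkout",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(answers)      # documents x words
cooc = (X.T @ X).toarray()          # word x word co-occurrence counts
words = list(vec.get_feature_names_out())

# Cluster words by their co-occurrence profiles; 'ivl' is the leaf
# order a dendrogram plot would show.
tree = linkage(cooc, method="average")
print(dendrogram(tree, labels=words, no_plot=True)["ivl"])
```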

However, if you would like to follow a more conventional coding model, you need to enter your data via QDA Miner, which is equally comfortable dealing with records that contain only unstructured text, such as focus group transcripts, or quant data with a mix of closed and open fields. QDA Miner has the concept of codeframes at its heart: you can create multiple codeframes and code directly into them. You can search for similar items and then code them all together, and you can extend the codeframe as you go too. Any coding you make can be exported back to the data, for analysis in SimStat or other tools. Nothing too unconventional there.

Step into WordStat, though, and you shift from one era to the next, for the capabilities of WordStat are now at your disposal to build automatic machine learning-based verbatim text classifiers. In other words, WordStat will take your coded examples, identify all the words and combinations of words that characterise those examples, and build the algorithms to perform automated text categorisation according to your examples and your codeframe. Again, there are reports and charts available to you, to understand the extent to which your classifiers are accurate. Accuracy depends on many factors, but suffice to say here, the automatic classifiers can be as good as human coders, and on large datasets, will be much more consistent.
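
A sketch of how such an accuracy check can work: hold back some human-coded answers, train on the rest, and measure agreement. Again this is a scikit-learn stand-in with invented data, not WordStat itself:

```python
# Validate a learn-by-example classifier against held-out human coding.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "staff were friendly", "really helpful assistant",
    "friendly helpful people", "staff could not do enough for us",
    "queue was too long", "waited ages to be served",
    "long wait at the till", "stood in the queue forever",
]
labels = ["staff"] * 4 + ["waiting"] * 4

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

# How often does the classifier agree with the human coders?
print("agreement:", accuracy_score(y_test, model.predict(X_test)))
```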

Unlike some other classification models, WordStat is a dictionary-based system, and it works principally on words and the relationship of words to others, rather than on actual phrases. There is a separate module for creating reusable subject-specific dictionaries and the system comes with general dictionaries in about 15 languages. It also contains a range of tools to clean up texts and to overlook elementary spelling mistakes, with its fuzzy matching logic.
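
A toy version of dictionary-based coding with fuzzy matching to overlook spelling mistakes might look like the following – difflib here is only a stand-in, and WordStat’s dictionaries and matching rules are far richer:

```python
# Dictionary-based coding: map words to codes, tolerating misspellings.
from difflib import get_close_matches

dictionary = {
    "SERVICE": {"staff", "service", "helpful", "friendly"},
    "WAITING": {"queue", "wait", "waiting", "slow"},
}
term_to_code = {t: code for code, terms in dictionary.items() for t in terms}
all_terms = list(term_to_code)

def code_answer(text):
    codes = set()
    for word in text.lower().split():
        # Fuzzy match tolerates e.g. "freindly" -> "friendly".
        hit = get_close_matches(word, all_terms, n=1, cutoff=0.8)
        if hit:
            codes.add(term_to_code[hit[0]])
    return codes

print(code_answer("the staff were very freindly"))  # {'SERVICE'}
print(code_answer("long queu at the checkout"))     # {'WAITING'}
```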

There are extensive academic debates about whether this is the best method for coding but, as everything it does is transparent and can be interrogated, changed and improved, it is likely to be as good as any other method – and it is certainly better than throwing away 10,000 verbatim responses because nobody has the time or energy to look at them.

This is, however, an expert’s system. Coders and coding supervisors would probably struggle with it in its present form. Coding is not the only function that WordStat handles, and because it has to be accessed via one of two other programs, that adds another layer of complexity.

The disjunction between these three different programs is slightly awkward, though it is something users report they get used to. Neither does the system offer anything in the way of collaborative tools – though of course, if you are able to code data automatically, the work done by ten people really can be done by one. Don’t expect to be able to use this simply by reading the manuals – you would need some education in text categorisation basics from Provalis Research first. However, with a little effort, this tool could save weeks of work, and even allow you to start including, rather than avoiding, open-ended questions in your questionnaire design.

Customer Viewpoint: US Merit Systems Protection Board, Washington DC

John Ford is a Research Psychologist at the US Merit Systems Protection Board in Washington DC, where he uses WordStat to process large surveys of Federal employees, which often contain vast amounts of open-ended data.

“In the last two large surveys we have done, we have had around 40,000 responses. We have been able to move away from asking the open-end at the end which says ‘do you have any comments’ to doing more targeted open-ended questions that ask about specific things.

“We asked federal employees to identify their most crucial training need and describe it in a few sentences. We used a framework of 27 competencies to classify them. QDA Miner was very flexible in helping us to decide what the framework was, settle on it, and very quickly classify the competencies.

“With WordStat I was able to build a predictive model that would duplicate the manual coders’ performance at about 83% accuracy, and by tweaking a couple of other things we were able to gain another three to four per cent in predictive accuracy.

“We also observed that some of the technical competencies are much easier to classify than some of the soft skills – which is not a surprising result – but we were able to look at the differences and make some decisions around this.

“When you look at a fully automated method, there is always going to be variation according to what kind of question you are working with. Using QDA Miner and WordStat together helps you understand what those differences are.

“With WordStat you can start out with some raw text, and you can do some mining of it; you can create dictionaries, expand them with synonyms, and build yourself a really good dictionary in very little time. If you work in example mode, you can mine the examples you need for your dictionary. The software will tell you the words and phrases that characterise those answers.

“To make the most of automated coding, you have to focus the questions more, and move people away from asking questions that are very general. In educating people about these, I have started to call them the ‘what did you do on your summer vacation?’ question. You can never anticipate where everyone is going to go. I have noticed there is also a role that the length of the response plays. Ideally, the question should be answerable in a sentence or two. You cannot do much with automated classification if the answer goes on for a couple of pages.”

A version of this review first appeared in Research, the magazine of the Market Research Society, September 2008, Issue 507