The latest news from the meaning blog


Big data? Big problem

The Economist recently ran an article on “Big Data” in a special report on International Banking. Its assessment of banking elsewhere in the report is that the industry has been surprisingly resistant to embracing the Internet as an agent of change in banking practice. It reveals, counter-intuitively, that the number of bank branches has actually risen by 10-20% in most developed economies during a period when most customers pass through their doors once a year rather than once a week.

The newspaper explains this paradox thus: banks with a denser branch network tend to do better, so adding more branches is rewarded with more business. But it’s business on the bank’s terms, not necessarily the customer’s. It does not increase efficiency – it increases cost. And, as The Economist points out, banks’ response to customers using mobile phones for banking has generally been lacklustre, even though customers love mobile banking and tend to use it to keep in daily contact with their accounts. It’s a level of engagement that most panel providers would envy.

All of which is to say that there are parallels here with our own industry. Here at Meaning, we have just released the findings of the latest annual MR Software Survey, sponsored by Confirmit. In a sneak peek, Confirmit blogger Ole Andresen focuses on an alarming finding about the lack of smartphone preparedness among most research companies.

But what interests me is the Big Data – both in The Economist’s report and our own. The former offered a fascinating glimpse into the way banks were using technology to read unstructured text and extract meaning, profiling some of the players involved and the relative strengths of different methods. This technology is improving rapidly and, on some classification tasks, can already do a better job than humans.

In our annual survey this year, we have asked a series of questions on unstructured text. Research companies, in embracing social media, “socialising” their online panels and designing online surveys with more open, exploratory questions in them, are opening the floodgates to a deluge of words that need analysing: at least that was what we suspected.

Analysis methods cited by research companies for handling unstructured text, from the 2011 Confirmit MR Software Survey by meaning ltd


It turns out that half of the 230 companies surveyed see an increase in the amount of unstructured text they handle from online quant surveys, and slightly more (55%) from online qual and social media work. Yet the kinds of text analytic technologies that banks and other industry sectors now rely on are barely making an impact in MR.

Even a quick glance at the accompanying chart shows that most research companies are barely scratching the surface of this problem. It’s not the only area where market research looks as if technology has moved on and opened a gap between what is possible and what is practised. There’s much more on this in our report, which will be published in full on 30th May. Highlights will also be appearing in the June issue of Research magazine.

Industry taking a twin-track approach to social media research

Social media research continues to be one of the hottest topics in research. I’ve just been reviewing the abstracts for this year’s CASRO Technology Conference in New York in June, which I will be co-chairing, and of all the topics, it’s the one with the longest string of submissions. Not only that, but there is some diversity of opinion about what it is, how to do it, and whether it adds anything at all to researchers’ existing toolkit. Closer to home, it’s a topic that will be debated at next week’s Research conference in London too.

Analysis technology used on social media research projects, based on the 17% of firms who are active in social media research

Social media research is also one of the new topics we focused on in our 2010 annual software survey, sponsored by Globalpark, the results of which are published today. There are some curious findings – and some predictable ones too – that add perspective to the current debate.

Our survey of over 200 research companies of all sizes around the world shows social media research is still at the early-adopter stage, accounting for revenue-generating activity in just 17% of the firms surveyed. Close to the same number – 19% – say they are unlikely to offer social media research, and the remaining 63% who gave an answer are split between those experimenting with it now (31%) and those considering it for the future (32%). Small firms and research companies in Europe are the least likely to be doing social media research and are also the most likely to have ruled it out, whereas large firms are the most active. The actual volumes of work are still low – we also asked how much revenue social media research accounted for. It is 5% or under for two-thirds of the agencies that do it and tails off beyond that – but there appear to be some specialists emerging, with a handful of firms deriving more than 20% of their income from it.

Many firms are bullish about the future, though, with 20% predicting strong growth and a further 52% anticipating some growth; North America, and again the larger firms, are the most optimistic about its future.

As a technologist, I was most interested to see what technology firms were applying to what is, after all, something born out of technology. Were the tech-savvy gaining the upper hand, or were researchers taking the conventional, low-tech approach beloved of qualitative researchers? Again, it’s a bit of both. Of all the software-based or statistical methods we suggested for data analysis, the one that came top was “manual methods”, used by 57%. This was followed by 54% citing “text mining” (respondents could pick all the methods they used). Text mining, though it uses some computing power, is also very much a hands-on method – but it’s good to see more than half turning to it. Other methods make much less of an appearance, and the method that I consider shows most promise for dealing with the deluge of data, machine learning-based text classification, was bottom of the list, cited by just one in six practitioners.

For data collection, technology was much more apparent – although it is hard to avoid here. We were still intrigued by the massive 54% who say they are using manual methods to harvest their social media data from the web; 57% were using web technologies to collect the data, and the more exotic methods were also fairly abundant, including using bots (43%), crowdsourcing (41%) and avatars (24%).

I’ll pick up on some of the other intriguing findings from the study later. But as the report is out now, you can pick up your own copy by visiting this webpage – and there will be a full report in the May issue of Research magazine.

Ascribe ACM reviewed

In Brief

What it does

Intelligent verbatim content management system and coding environment for researchers and coders, with options for either manually-assisted coding or machine-learning automated coding for higher volumes. Delivered as web browser-based and web-enabled desktop software modules.

Supplier

Language Logic

Our ratings

Ease of use: 4 out of 5

Compatibility with other software: 5 out of 5

Value for money: 4.5 out of 5

Cost

Conventional coding: between 3 and 5 US cents per verbatim coded. Automated coding: between 10 and 30 US cents per verbatim coded.

Pros

  • Automated coding option will code thousands of open-ends in seconds
  • Machine learning mimics human coders and produces comparable and highly consistent results
  • Many tools to optimise effort when coding manually
  • Web based environment makes it easy to distribute coding work to satellite offices and outworkers

Cons

  • Automated coding only saves time on larger projects such as trackers
  • Web interface is in need of a refresh
  • Windows only – requires Microsoft Internet Explorer

In Depth

A little while ago, Language Logic estimated that their Ascribe online coding product was probably handling over fifty per cent of all of the open-ended coding generated by research agencies in the United States, and a decent proportion from the rest of the world too. The challenge is where to go next when you have half the market and no real rivals. One direction is to grow the market for verbatims, by making it possible to code the vast number of open-ends that never get coded – and the new Ascribe Automated Coding Module, or ACM, promises to do just that.

I happen to know something about the technology behind this tool, because I worked on a prototype with the online bank Egg (and even co-presented a paper on it at the 2007 Research conference). Language Logic has subsequently worked with its creators, the Italian government-run research foundation ISTI-CNR, to integrate their technology into Ascribe. Though I am often hesitant to state that anything is the best, the ISTI-CNR engine is easily the best I have found: it is the most MR-savvy of any of the automated text-processing technologies. This is not a discovery or text mining tool – it is a coding department in a box.

ACM closely mimics the normal human-intervention coding process, and fits seamlessly into the traditional Ascribe workflow. Because it uses machine learning, it does not attempt to interpret or extract meaning by looking up words in dictionaries – in fact, it does not use dictionaries at all. Instead, you provide it with examples of how you would classify your data into a codeframe, and then set it to learn from these. In Ascribe, this means you simply start coding the data in the way you normally would. As you code, you are creating the training set that ACM needs. When you have coded enough to create a decent training set, you take your foot off the pedal and let ACM accelerate through the rest.
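To make the learn-by-example idea concrete, here is a minimal sketch of the same principle using the open-source scikit-learn library. The answers and codeframe labels are invented, and ACM’s real engine is ISTI-CNR’s proprietary technology, not this code:

    # A minimal sketch of learn-by-example verbatim coding, assuming
    # scikit-learn; the data and codes are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Verbatims a human coder has already assigned to codes: the training set.
    coded_answers = [
        ("the branch staff were friendly and helpful", "staff"),
        ("waited far too long to speak to anyone", "waiting time"),
        ("the mobile app makes checking my balance easy", "mobile banking"),
        ("very helpful adviser at the counter", "staff"),
        ("endless queues every lunchtime", "waiting time"),
        ("banking on my phone is quick and simple", "mobile banking"),
    ]
    texts, codes = zip(*coded_answers)

    # Character n-grams tolerate misspellings better than whole words alone.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, codes)  # the 'foot off the pedal' step happens next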

First, you build the ‘classifiers’ that will identify matching answers. These work by looking for telltale features in the examples you coded. For any individual answer, it could create thousands of these unique features – patterns of words, letters and so on: so many, in fact, that it easily overcomes problems of poorly spelt words, synonyms and so on. When the classifiers have been built, you can then apply them to your uncoded data, and it will categorise those answers too, applying a confidence score to each coding decision it takes – you can adjust the confidence threshold to make it more or less sensitive. It takes just a few seconds to zip through thousands of verbatims. There is a process for validating the coding decisions the ACM has made, and it will helpfully present validation examples in order, starting with those where it was least confident of its coding decision.
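Continuing the sketch above, applying the trained model with a confidence threshold, and queuing the least confident decisions first for human review, might look like this; the 0.6 cut-off is an arbitrary, adjustable assumption:

    # Auto-code confident answers; queue marginal ones for human validation.
    uncoded = ["the app is grate for checking balances", "staff were lovely"]
    probabilities = model.predict_proba(uncoded)
    confidence = probabilities.max(axis=1)
    labels = model.classes_[probabilities.argmax(axis=1)]

    THRESHOLD = 0.6  # hypothetical cut-off; raising it makes coding stricter
    auto_coded = [(text, label) for text, label, conf
                  in zip(uncoded, labels, confidence) if conf >= THRESHOLD]

    # Least confident first, mirroring the validation ordering described above.
    review_queue = sorted(zip(uncoded, labels, confidence), key=lambda item: item[2])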

This validation step makes the system very manageable, as you can understand what it is doing, improve its performance by correcting any assignment errors, and even react to changes over time. It feels uncanny, too, as the marginal decisions it identifies are often the very ones that have human coders debating where an answer should go.

Not that you have to use the ACM with Ascribe – it does command a premium in pricing over manual coding and it is only really suitable for larger volumes: the overhead of training and validation is comparable to manually coding a couple of thousand interviews. However, it can also be applied to qualitative projects and web content, such as blogs.

Even manual coding in Ascribe is highly optimised, with tools to let you find similar answers, code by word or phrase matching and, if you wish, re-categorise items at any point. You use it both to create your codeframe and to assign answers to it in one integrated step. It’s a multi-user system, and you can assign responsibilities among the team: some can build codeframes, others only code, and others only analyse. Ascribe also has a surprisingly rich set of analytical tools – even cross-tabbing capabilities. You are not restricted to uploading only the verbatim texts: the entire survey can go in. It can now handle data from SPSS Dimensions with ease, and it is totally integrated into Confirmit using the Confirmit Web Services interface. Upload routes are provided for most other MR packages.

It’s not the prettiest of tools to use: the interface may be on the web but is hardly of the web, and is in need of a makeover. Language Logic is redesigning some modules as thin-client Windows apps, which have a better-looking interface, but it would improve the approachability of Ascribe if its web interface were better structured and designed. True, it is productive to use, but it does not help you get there as a novice, and the documentation (which is being redone at present) is not as comprehensive as it needs to be. It’s a pity, as both shortcomings make it a challenge to harness all of the power in this otherwise remarkable system.

Customer viewpoint: Joy Boggio, C&R Research Services, Chicago

Joy Boggio is Director of Coding at C&R Research Services, a full-service agency in Chicago. Joy introduced Ascribe to C&R in 2004, having used it previously elsewhere. Ascribe is used for all verbatim coding on quant studies at C&R and also on some of its qual projects. She explains: “Within a day or two of introducing Ascribe, we immediately cut down the delivery time on projects by, in some cases, a week. The features of Ascribe that are the most attractive are it being web based – you can hand out the work very easily to many different people in many different places; if you have had the study before, you can merge it with the previous study and autocode a part of it; you are not restricted in the formats of data you can input, nor in how you export the data out; and we can do some rudimentary data processing within the tool.”

Although C&R has a research staff of around 60, Joy is able to support all of the verbatim coding activities with a team of just three coders. But it is not only the coders that use Ascribe – many of the researchers also use it to access the verbatim responses, using its filtering and analytical capabilities to identify examples to include in reports and presentations. “It means they can dive down a little deeper into the data. The problem you have with the process of coding data is that you can flatten out the data – the challenge is always to make sure you can retain the richness that is there. With Ascribe you can keep the data vibrant and alive – because the analytical staff can still dive into the data and bring some of that richness to the report in a qualitative way.”

Joy notes that using Ascribe telescopes the coding process, saving precious time at the start. “It’s now a one-step process, instead of having to create the codebook first, before getting everyone working on it. With this, as you work through the verbatims you are automatically creating codes and coding at the same time, so you don’t have to redo that work. When you are happy with the codebook, you can put others onto the project to code the rest. This is where the efficiency comes in.”

Joy estimates that it reduces the hours of coding effort required on a typical ad hoc project by around 50 per cent, and due to the ease of allocating work and the oversight the system provides, she remarks: “You are also likely to save at least a day of work on each project in management time too.”

C&R Research makes extensive everyday use of the manual coding optimisation tools Ascribe offers, such as searching for similar words and phrases, but so far has only experimented with the new automated machine-learning coding in ACM. Joy comments: “It seems to be more appropriate for larger volumes of work – more than we typically handle. There is a bit of work up front to train it, but once you get it going, I can see this would rapidly increase your efficiency. It would really lend itself to the larger tracking study, and result in a lot less people-time being required.”

A version of this review first appeared in Research, the magazine of the Market Research Society, December 2009, Issue 523

SPSS Text Analytics for Surveys Reviewed

In Brief

What it does

Textual analysis software which uses natural language processing to process textual data from verbatim responses to surveys. It will categorise or group responses, find latent associations and perform classification or coding, if required.

Supplier

SPSS

Our ratings

Ease of use: 3.5 out of 5

Compatibility with other software: 4.5 out of 5

Value for money: 3.5 out of 5

Cost

One-off costs: standalone user £2,794; optional annual maintenance £559; single concurrent network user: £6,985 software, plus maintenance £1,397

Pros

  • Flexible – can use it to discover and review your verbatims individually, or to produce coded data automatically under your supervision
  • User interface is simple, straightforward and productive to use, once you are familiar with the concepts
  • Lets you relate your open-ended data to closed data from other questions or demographics
  • Easy import and exports from SPSS data formats or Microsoft Excel

Cons

  • This is an expert system which requires time and effort to understand
  • System relies on dictionaries, which need to be adjusted for different subject domains
  • Rules-based approach for defining coded data requires learning and using some syntax

In Depth

One of the greatest logistical issues with online research is handling the deluge of open-ended responses that often arrive. While much of the rest of the survey process can be automated, analysing verbatim responses to open questions remains laborious and costly. If anything, the problem gets worse with Web 2.0-style research. A lot of good data gets wasted simply because it takes too long and costs too much to analyse – which is where this ingenious software comes in.

PASW Text Analytics for Surveys (TAfS) operates as either an add-on to the PASW statistical suite – the new name for the entire range of software from SPSS (see box) – or as a standalone module. It is designed to work with case data from quantitative surveys containing a mixture of open and closed questions, and will help you produce a dazzling array of tables and charts directly on your verbatim data, or provide you with automatically coded data.

A wizard helps you to start a new project. First, you specify a data source, which can be data directly from PASW Statistics or PASW Data Collection (the new name for Dimensions), an ODBC database, or an Excel file (via PASW Statistics). Next, you select the variables you wish to work with, which can be a combination of verbatim questions, for text analysis, and ‘reference questions’ – any other closed questions you would like to use in comparisons, to classify responses or to discover latent relationships between text and other answers. Another early decision in the process is the selection of a ‘text analysis package’ or TAP.

SPSS designed TAfS around the natural language processing method of text analysis. This is based on recognising words or word stems, and uses their proximity to other word fragments to infer concepts. The method has been developed and researched extensively in the field of computer-based linguistics, and can perform as well as, if not better than, human readers and classifiers, if used properly.

A particular disadvantage of using NLP with surveys is the amount of set-up that must be done. It needs a lexicon of words or phrases and also a list of synonyms so that different ways of expressing the same idea converge into the same concept for analysis. If you wish to then turn all the discovered phrases and synonyms into categorised data, you need to have classifiers. The best way to think of an individual classifier is as a text label that describes a concept – and behind it, the set of computer rules used to determine whether an individual verbatim response falls into that concept or not.
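As a rough illustration of that idea – a classifier as a label backed by matching rules, fed by a synonym dictionary – here is a toy sketch in Python. TAfS has its own, much richer rule syntax, so treat this purely as an analogy:

    # A toy analogy for NLP-style classifiers: a substitution dictionary folds
    # synonyms into one concept, and each label has simple matching rules.
    SYNONYMS = {"cellphone": "mobile", "handset": "mobile", "lengthy": "long"}

    def normalise(text):
        return [SYNONYMS.get(word, word) for word in text.lower().split()]

    CLASSIFIERS = {
        # label -> any-of terms (real rules support and/or/not and proximity)
        "mobile banking": {"mobile", "app"},
        "branch service": {"branch", "counter"},
    }

    def classify(verbatim):
        words = set(normalise(verbatim))
        return [label for label, terms in CLASSIFIERS.items() if words & terms]

    print(classify("I do everything on my cellphone app"))  # ['mobile banking']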

TAfS overcomes this disadvantage by providing you with ready-built lexicons (it calls them ‘type’ dictionaries), not only in English, but also in Dutch, French, German, Spanish and Japanese. It also provides synonym dictionaries (called ‘substitution dictionaries’) in all six supported tongues, and three pre-built sets of classifiers – one for customer satisfaction surveys, another for employee surveys and a third for consumer product research. It has developed these by performing a meta-analysis of verbatim responses in hundreds of actual surveys.

Out of the box, these packages may not do a perfect job, but you will be able to use the analytical tools the software offers to identify answers that are not getting classified, or those that appear to be mis-classified, and use these to fine-tune the packages or even develop your own domain-specific ones. Selecting dictionaries and classifiers takes just a couple more clicks in the wizard; the software then processes your data and you are ready to start analysing the verbatims.

The main screen is divided into different regions. One region lets you select categories into which the answers have been grouped, another lets you review the ‘features’ – the words and phrases identified – and in the largest region there appears a long scrolling list of all your verbatim responses for the currently selected category or feature. All of the extracted phrases are highlighted and colour coded. A third panel shows the codeframe or classifiers as a hierarchical list. As you click on any section of it, the main window is filtered to show just those responses relating to that item. However, it also shows you all of the cross-references to the other answers, which is very telling. There is much to be learned about your data just from manipulating this screen, but TAfS has much more up its sleeve.

One potentially useful feature is sentiment analysis, in which each verbatim is analysed according to whether it is a positive or a negative comment. Interface was not able to test the practical reliability of this, but SPSS claim that it works particularly well with customer satisfaction type studies. In this version, sentiment analysis is limited to the positive/negative dichotomy, though the engine SPSS uses is capable of other kinds of sentiment analysis too.
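For illustration, the simplest form of this dichotomy can be mimicked with a small lexicon-based scorer – a crude stand-in, not the engine SPSS actually uses:

    # A crude stand-in for positive/negative sentiment: a tiny lexicon scorer.
    POSITIVE = {"good", "great", "easy", "helpful", "love"}
    NEGATIVE = {"bad", "slow", "rude", "poor", "broken"}

    def sentiment(verbatim):
        words = set(verbatim.lower().split())
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(sentiment("the staff were helpful and the app is easy"))  # positive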

The software also lets you use ‘semantic networks’ to uncover connections within the data and build prototype codeframes, simply by analysing the frequency of words and phrases, and combinations of words and phrases, across responses – rather like performing a cluster analysis on your text data, except that it is already working at the conceptual level, having sorted the words and phrases into concepts.
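The underlying idea can be sketched as counting how often concepts co-occur in the same response; frequent pairs suggest candidate codeframe groupings. This toy version assumes responses have already been reduced to concepts, as TAfS does before this step:

    # A toy version of the semantic-network idea: concept co-occurrence counts.
    from collections import Counter
    from itertools import combinations

    responses = [
        {"queue", "wait", "branch"},
        {"wait", "phone", "queue"},
        {"app", "mobile", "balance"},
        {"mobile", "app", "login"},
    ]

    cooccurrence = Counter()
    for concepts in responses:
        for pair in combinations(sorted(concepts), 2):
            cooccurrence[pair] += 1

    # The most frequent pairs hint at codeframe groupings.
    print(cooccurrence.most_common(3))  # e.g. [(('queue', 'wait'), 2), ...]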

You can build codeframes with or without help from semantic networks. It’s a fairly straightforward process, but it does involve building some rules using some syntax. I was concerned about how transparent and how maintainable these would be as you handed a project from one researcher to another.

Another very useful tool, which takes you beyond anything you would normally consider doing with verbatim data, is a tool to look for latent connections between different answers, and even the textual answers and closed data, such as demographics or other questions.

This may be a tool for coding data, but it is not something you can hand over to the coding department – the tool expects the person in control to have domain expertise and, moreover, to possess not a little understanding of how NLP works; otherwise you will find yourself making some fundamental errors. If you put in a little effort, though, this tool not only has the potential to save hours and hours of work, but to let you dig up those elusive nuggets of insight you probably long suspected were in the heaps of verbatims, if only you could get at them.

A version of this review first appeared in Research, the magazine of the Market Research Society, June 2009, Issue 517

Simstat, Wordstat & QDA Miner reviewed

In Brief

Wordstat with QDA Miner and SimStat

Provalis Research, Canada
Date of review: September 2008

What it does

Windows-based software for analysing textual data, such as answers to openended questions alongside other quantitative data, or qualitative transcripts. Uses a range of dictionary-based textual statistical analysis methods.

Our ratings

Ease of use: 3 out of 5

Compatibility with other software: 3.5 out of 5

Value for money: 4.5 out of 5

Cost

SimStat + QDA Miner + WordStat bundle – one-off purchase cost of $4,195, or $14,380 for 5 licences. 75% discount for academic users on most prices. Upgrades offered to licensed users at a discount.

Pros

  • Analyse and cross-tab verbatim responses as you would any standard question
  • Can use to code data automatically using a machine learning method
  • Easy to create own subject-specific dictionaries, which can be reused on similar projects
  • Full versions available to download for a month’s free trial

Cons

  • Standalone system – no collaborative or multi-user capabilities
  • Steep learning curve – requires an expert user
  • Dictionary and word based analysis only: does not support natural language processing or learn by example methods
  • Does not support Triple S data

In Depth

It’s often the answers to openended questions that offer the richest insights in a survey – especially in online surveys, if the questions are specific and well targeted. Yet this valuable resource remains almost unexplored in most quantitative surveys due to the sheer effort involved in analysing it. The principal method used – manual coding to a codeframe – is virtually unchanged since the 1940s. The only nod to the information age is the number of coding departments that use Excel as a surrogate electronic coding sheet – the IT equivalent of equipping your horse and cart with satnav.

WordStat is a very versatile bit of software that offers you countless ways to analyse openended textual data with virtually the same ease as analysing normal ‘closed’ questions in a cross-tab package. It comes as an add-on module to Simstat – which is a feature-rich desktop statistical package for analysing survey data with decent cross-tab and charting capabilities. Interestingly, WordStat also functions equally well as an add-on to another program: QDA Miner, a code-and-retrieve analysis suite for qualitative researchers. All three are developed by Provalis Research, based in Montreal.

WordStat, combined with the other two programs, offers a bewildering choice of ways to dig deeply into verbatim data and make sense of it. In this review, I will focus on two of the more interesting ones to market researchers, but this versatile program has many more tricks up its sleeve than I can cover here.

First, without doing any conventional coding at all, you can effectively cross-tab or slice your data according to any demographics or dependent variables in your data. For this, you would start out in SimStat.

SimStat will let you carry out normal quantitative analysis of data as simple cross-tabs or by applying a wide range of statistics – factor and cluster analysis, regression, correlation and so on. To do textual analysis, you need to start with the verbatim data in the same data file as the other numeric and coded questions. This is pretty much how most web interviewing packages provide the data these days. SimStat works around the concept of dependent and independent variables: pick any demographic you want as the independent variable, pick the verbatim question as the dependent variable and pick Content Analysis from the bottom of the Statistic menu, and SimStat will fire up the separate WordStat module to let you cross-tab and dig into your openended text.
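To picture what this achieves, here is a rough equivalent using the pandas library, with invented data; SimStat and WordStat do this interactively and with far more options:

    # A rough sketch of slicing verbatim words by a demographic, assuming pandas.
    import pandas as pd

    df = pd.DataFrame({
        "gender": ["F", "M", "F", "M"],
        "verbatim": ["love the app", "queue too slow", "app is easy", "slow branch"],
    })

    # One row per word, then a word-by-gender cross-tab.
    words = df.assign(word=df["verbatim"].str.split()).explode("word")
    print(pd.crosstab(words["word"], words["gender"]))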

Once within WordStat there is a wide array of reports and ways to look at the words that respondents have given, in aggregate or case by case, against your dependent variable or for the total sample. There are charts, including dendrograms and very informative heatmaps that show the relationships between words, and you can adjust the proximity factor used when looking at words used in the vicinity of other words. Indeed, there is more than you are ever likely to use.
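A word-relationship dendrogram of this kind can be sketched with scipy and matplotlib, clustering words by how similarly they appear across responses – illustrative toy data only, not WordStat’s own algorithm:

    # Sketch of a word-relationship dendrogram, assuming scipy and matplotlib.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import dendrogram, linkage

    words = ["app", "mobile", "queue", "wait"]
    presence = np.array([          # rows = responses, columns = words
        [1, 1, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 1, 1],
        [0, 0, 1, 1],
    ])

    # Cluster the word columns on Jaccard distance between their profiles.
    links = linkage(presence.T, method="average", metric="jaccard")
    dendrogram(links, labels=words)
    plt.show()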

However, if you would like to follow a more conventional coding model, you need to enter your data via QDA Miner, which is equally comfortable dealing with records that contain only unstructured text, such as focus group transcripts, or quant data with a mix of closed and open fields. QDA Miner has the concept of codeframes at its heart, and you can create multiple codeframes and code directly into them. You can search for similar items and then code them all together, and you can extend the codeframe as you go too. Any coding you make can be exported back to the data, for analysis in SimStat or other tools. Nothing too unconventional there.

Step into WordStat, though, and you shift from one era to the next, for the capabilities of WordStat are now at your disposal to build automatic machine learning-based verbatim text classifiers. In other words, WordStat will take your coded examples, identify all the words and combinations of words that characterise those examples, and build the algorithms to perform automated text categorisation according to your examples and your codeframe. Again, there are reports and charts available to you, to understand the extent to which your classifiers are accurate. Accuracy depends on many factors, but suffice to say here, the automatic classifiers can be as good as human coders, and on large datasets, will be much more consistent.
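As a hint of how such accuracy reporting works, a cross-validated accuracy estimate against the human-coded examples can be sketched with scikit-learn; the data is invented, and WordStat’s own reports are considerably more detailed:

    # Sketch of estimating classifier accuracy via cross-validation.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["too slow", "very slow queue", "slow service",
             "great app", "love the app", "app is great"]
    codes = ["speed", "speed", "speed", "mobile", "mobile", "mobile"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    scores = cross_val_score(model, texts, codes, cv=3)
    print(f"estimated coding accuracy: {scores.mean():.0%}")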

Unlike some other classification models, WordStat is a dictionary-based system, and it works principally on words and the relationship of words to others, rather than on actual phrases. There is a separate module for creating reusable subject-specific dictionaries and the system comes with general dictionaries in about 15 languages. It also contains a range of tools to clean up texts and to overlook elementary spelling mistakes, with its fuzzy matching logic.
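Fuzzy matching of this sort can be illustrated with Python’s standard library – a simplified analogue, not WordStat’s actual matching logic:

    # Fuzzy matching to overlook elementary misspellings, using difflib.
    import difflib

    DICTIONARY = ["branch", "mobile", "queue", "statement"]

    def fuzzy_lookup(word, cutoff=0.8):
        matches = difflib.get_close_matches(word.lower(), DICTIONARY, n=1, cutoff=cutoff)
        return matches[0] if matches else None

    print(fuzzy_lookup("moblie"))  # 'mobile'
    print(fuzzy_lookup("qeue"))    # 'queue'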

There are extensive academic debates about whether this is the best method for coding, but as everything it does is transparent and can be interrogated, changed and improved, it is likely to be as good as any other method – and it is certainly better than throwing away 10,000 verbatim responses because nobody has the time or energy to look at them.

This is, however, an expert’s system. Coders and coding supervisors would probably struggle with it in its present form. Coding is not the only function that WordStat handles, and because it has to be accessed via one of two other programs, that adds another layer of complexity.

The disjunction between these three different programs is slightly awkward, though it is something users report they get used to. Neither does the system offer anything in the way of collaborative tools – though of course, if you are able to code data automatically, it does mean the work done by ten people really can be done by one. Don’t expect to be able to use this simply by reading the manuals – you would need to have some education in text categorisation basics from Provalis Research first. However, with a little effort, this tool could save weeks of work, and even allow you to start including, rather than avoiding, openended questions in your questionnaire design.

Customer Viewpoint: US Merit Systems Protection Board, Washington DC

John Ford is a Research Psychologist at the US Merit Systems Protection Board in Washington DC, where he uses WordStat to process large surveys of Federal employees which often contain vast amounts of openended data.

“In the last two large surveys we have done, we have had around 40,000 responses. We have been able to move away from asking the openend at the end which says ‘do you have any comments’ to asking more targeted openended questions about specific things.

“We asked federal employees to identify their most crucial training need and describe it in a few sentences. We used a framework of 27 competencies to classify them. QDA Miner was very flexible in helping us to decide what the framework was, settle on it, and very quickly classify the competencies.

“With WordStat I was able to build a predictive model that would duplicate the manual coders’ performance at about 83% accuracy, and by tweaking a couple of other things we were able to gain another three to four per cent in predictive accuracy.

“We also observed that some of the technical competencies are much easier to classify than some of the soft skills – which is not a surprising result – but we were able to look at the differences and make some decisions around this.

“When you look at a fully automated method, there is always going to be variation according to what kind of question you are working with. Using QDA Miner and WordStat together helps you understand what those differences are.

“With Wordstat you can start out with some raw text, and you can do some mining of it, you can create dictionaries, you can expand them with synonyms and build yourself a really good dictionary in very little time. If you work in example mode, you can mine the examples you need for your dictionary. The software will tell you the words and phrases that characterise those answers.

“To make the most of automated coding, you have to focus the questions more, and move people away from asking questions that are very general. In educating people about this, I have started to call them the ‘what did you do on your summer vacation’ questions. You can never anticipate where everyone is going to go. I have noticed there is also a role that the length of the response plays. Ideally, the question should be answerable in a sentence or two. You cannot do much with automated classification if the answer goes on for a couple of pages.”

A version of this review first appeared in Research, the magazine of the Market Research Society, September 2008, Issue 507