The latest news from the meaning blog

SPSS Dimensions Desktop Reporter 4.1 reviewed

In Brief

What it does

Windows-based cross-tabular reporting program for end-users.

Supplier

SPSS, an IBM company

Our ratings

Ease of use: 4.5 out of 5

Compatibility with other software: 4 out of 5

Value for money: 2.5 out of 5

Cost

$1,500 (approx £750) for a single-user annual licence. Perpetual licence $2,450 (approx £1,225) for a single user, plus 20% annually for support and updates. Volume discounts are offered.

Pros

  • Powerful, but still simple to use
  • Supports hierarchical data
  • Outputs in Excel and PowerPoint can be auto-refreshed from live data link
  • Will work directly on Quanvert databases

Cons

  • Cannot colour-code, or highlight variances or significant values
  • Getting data in can be a challenge (in the current version)
  • Some performance issues with large datasets in XML or SQL

In Depth

There are some old programs that refuse to die. Even the original authors of Quanvert must be taken aback by its astonishing longevity. So must SPSS, its current owners, who had lined up mrTables, its new online cross-tab tool, to be a Quanvert killer. But the convenience of running an application at full tilt on your desktop or laptop continued to give Quanvert the upper hand for many users. Despite a gnarly old interface, Quanvert did many things that were hard, if not impossible, to do in mrTables.

Now, SPSS has launched a second potential Quanvert-buster on the market. It is called Desktop Reporter, thus wearing its USP on its sleeve – and it is clear that this time SPSS are serious about providing the critical mass of functionality that should entice users away from old Q for ever.

Like mrTables, it is a Dimensions product, which means that it sits on top of SPSS’ tiered architecture of a standardised Data Model and a table-creation model called the Tables Object Model, or TOM. The immediate benefits of this to end-users are not always easy to see, but it does make it a lot easier to integrate Dimensions software with other applications, or even to use these products as a springboard to create your own analytical systems, information portals and so on.
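
To make that concrete: the Dimensions models are exposed as component interfaces, so they can in principle be driven from any COM-capable language. Below is a minimal sketch in Python of what a scripted table might look like. The ProgID, file names and member names follow SPSS’ published scripting examples, but treat all of them as assumptions rather than a recipe.

```python
import win32com.client  # pywin32

# Hedged sketch: driving the Tables Object Model (TOM) over COM.
# The ProgID and member names below are assumptions based on SPSS's
# published mrScriptBasic examples; the file paths are invented.
tom = win32com.client.Dispatch("TOM.Document")   # assumed ProgID
tom.DataSet.Load(r"C:\surveys\museum.mdd")       # hypothetical survey metadata

# Define a simple cross-tab: age in the rows, gender in the columns
tom.Tables.AddNew("Table1", "age * gender", "Age by gender")
tom.Populate()                                   # run the tabulation

# Export the populated tables; the export name is also an assumption
tom.Exports("mrHtmlExport").Export(r"C:\out\tables.htm")
```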

Indeed, SPSS has tended to take an engineer’s rather than a designer’s approach to much of its Dimensions line. The user experience was often disjointed, leaving you with the feeling that it was not as easy as it should be, or could be in some of its rivals’ offerings. Desktop Reporter looks to me like a break with the recent past – it’s elegant, sophisticated, sassy and, best of all, highly intuitive.

The main screen has the now typical column on the left where selections take place, a large pane on the right, where the action happens, and buttons and controls on top to effect actions, change options and so on. Right-clicking always seems to bring up a sensible menu of options relevant to what you are doing, and these often duplicate buttons, menu options and keyboard shortcuts, so providing power users with many ways to skip through producing tables.

Some users like to handcraft all their analysis, question by question. A table is simply created by dragging and dropping items on to it. A separate tab in the main window lets you set up filters using parallel techniques.

But it is just as easy to throw all of your questions into the pot and let Desktop Reporter produce one table for each, as a total or with a standard break if you prefer.

Existing tables can be used as templates for new tables, to speed up their definition. Stats and sig. tests are also easily applied, and instead of overwhelming users with options (most of which 99% of users will never use), the more obscure ones are hidden away but generally accessible from a “More” button. Your default output can be a cross-tab or a chart, or both; charts and tables are easily posted into Excel or PowerPoint, and both can retain links to the data for automated refreshing when more data arrives.

It even offers some multivariate analysis for the statistically challenged, in the form of a table of ‘difference attributes’: you throw together any number of variables, and it ranks the combinations of answers to show those where the disparity is greatest at the top. You can alter the parameters to show affinity too.
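
The concept is easier to grasp with a toy example. The sketch below is purely illustrative, not Desktop Reporter’s actual algorithm: it ranks every pair of answers across a handful of variables by how far its observed count departs from what independence would predict, and sorting the other way gives you affinity.

```python
# Illustrative sketch of the 'difference attributes' idea, with toy data.
from itertools import combinations
from collections import Counter

# Each respondent is a dict of variable -> answer (invented sample data)
respondents = [
    {"gender": "female", "brand": "A", "region": "north"},
    {"gender": "male",   "brand": "B", "region": "south"},
    {"gender": "female", "brand": "A", "region": "south"},
    {"gender": "male",   "brand": "A", "region": "north"},
]

n = len(respondents)
marginal = Counter((var, ans) for r in respondents for var, ans in r.items())
joint = Counter(
    pair for r in respondents
    for pair in combinations(sorted(r.items()), 2)
)

scores = []
for (a, b), observed in joint.items():
    expected = marginal[a] * marginal[b] / n  # count expected under independence
    scores.append((observed - expected, a, b))

# Largest disparities first; sort ascending instead to surface affinity
for disparity, a, b in sorted(scores, reverse=True):
    print(f"{a} x {b}: disparity {disparity:+.2f}")
```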

There is much more that the program does very well – sensible defaults, right mouse button menus and tight navigation mean that whether you are exploring your data or finessing your output, a couple of mouse clicks is usually sufficient to get you where you want to be.

Quanvert users are likely to take to the program straight away. It appears to do much of what Quanvert did, with the benefit of a modern graphical user interface. Even hierarchical data are supported. Some Quanvert terminology, like Axes and Levels, has come over too, which may be a puzzle to other users.

There are some constraints in the version I saw, however. The whole business of getting data in is a work in progress, as Jeff Thompson, our interviewee, points out. SPSS promise that version 4.5, due out this month, will remedy much of this. It probably will not remedy some of the other constraints that the Tables Object Model imposes on what tables look like, so some of the manipulation features (often scripted in Quantum) are unachievable here, and are likely to remain so.

An opportunity missed is to update significance testing. A faithful reproduction of the letter-code sig test tables from Quanvert will satisfy some users, but leave many more baffled – those who are used to looking at colour-coded tables based on exceptions. Unlike most of its rivals, Desktop Reporter won’t let you do this.

Despite its version 4 number, this is effectively the first version of Desktop Reporter (the number reflects the generation of the Dimensions suite) and, as a first version, it is most impressive. Whether it turns into a real Quanvert-buster, we must wait and see what the loyal Quanvert users have to say.

Customer viewpoint: Jeff Thompson, Kantar, Austin TX

Kantar Operations provides operational support for Kantar Group research companies, and is an early adopter of SPSS Desktop Reporter. Jeff Thompson, Director of Research Technology, based in Austin, Texas, describes his experiences.

“We are at the beginning of deployment, so it is not on everyone’s desktop yet – we have only used it with a few select teams. But we have now made it our preferred tool for delivering SPSS Dimensions-based output. We are already using it to work on Quanvert data. We did a really thorough gap analysis between this tool, Quanvert and some other in-house tools, and we found it really did do everything we wanted it to. There are some limitations in this version, but 4.5, which is due anytime now, will probably address a number of these.

“We have found that the ease of use of Desktop Reporter is so much greater than that of Quanvert. The user interface design is so much better. What we are seeing, overall, is that the number of things that people can do themselves is increased, so the number of requests to DP is diminished.

“For instance, it is so much easier for users to create nets, to create and edit filters and they can derive new variables very easily. These are often things that users had to come back to DP team for changes. It has some interesting new ways of looking at data too. It allows you to select a whole set of variables and run a report, and look for things that are statistically significant. But for us, the real advantage is the ability it provides for researchers to do things on their own and not have to come back to the operational teams to script new variables or tables.

“We are in very early stages and have only used it with a few select teams. They have been impressed, but these people are the ‘early adopter’ types who are often able to get the most out of new tools. We have yet to see if users as a whole will use it in the same manner.

“In the past, in the Dimensions side of SPSS, there have been some weak user interfaces. The improvement in this over mrTables is enormous – it stands head and shoulders above anything the Dimensions team have produced to date. We have long felt they had good engines behind their software, but the interfaces were poorly, or perhaps quickly, thought out. This interface seems to have been done really well, and I am really hopeful that we are seeing the start of a new age in their development – one where they put more thought into the user interface design of their tools.

“The one awkward area is that there is still no ideal data store. You can easily hit Quanvert data but you cannot easily write out Quanvert; you can use XML data, but it does not perform well; or you can use SQL Server, which performs quite well, but it just isn’t portable. The tools around the whole distribution of SQL Server databases are lacking somewhat. We are expecting this to go away with 4.5.”

A version of this review first appeared in Research, the magazine of the Market Research Society, June 2007, Issue 493

Cluetec mQuest reviewed

In Brief

What it does

Handheld interviewing for Windows mobile devices, with capabilities for mystery shopping, public transport measurement and self-completion diary surveys

Supplier

Cluetec, Germany

Our ratings

Ease of use: 4 out of 5

Compatibility with other software: 3 out of 5

Value for money: 5 out of 5

Cost

Pay-per-use model from €0.30 per interview, with support contracts from €100 (inc 1 hr phone support). Other pricing plans available for longer-term users.

Pros

  • QuestEditor authoring tool very simple and easy to use
  • Special ‘traffic’ version is ideal for transport and travel surveys
  • Robust and reliable in the field
  • Cluetec offers PDA rental and per-survey pricing

Cons

  • Cannot pre-populate interviews with case data
  • No support for quotas
  • Lacks tools to manage the allocation of fieldwork to individuals and devices
  • Windows Mobile/Pocket PC devices only, not Palm

In Depth

“Great idea, but not really practical.” This is a common reaction from experienced fieldwork managers to handheld interviewing for long and complex surveys – and it is fair to say that it has also been the experience of some users when trying to do demanding surveys such as mystery shopping or travel audits. Long surveys with complex routing are often no problem, but surveys where the interviewer needs to follow a routing determined by what they are observing rather than a pre-determined flow of questions can be very tricky to present on a PDA.

Cluetec, a German software company, has developed a palmtop interviewing system which aims to provide support for these problem children of mobile interviewing, as well as the more standard fare of face-to-face interviews.

Surveys are created in a Windows-based tool called QuestEditor. It follows a familiar pattern, with a tree-view on the left and tabbed forms where you define your question texts, answers, conditions and validation. It is virtually syntax-free, though some of the routing and validation logic can be a bit cryptic, and branches are achieved using a ‘goto’ type construct, which has the potential to become very confusing for complex routings. A block-structured approach would be safer all round.
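
A small illustration, with invented question names, shows why. In the goto style, every answer names a jump target, so two unrelated edits can quietly orphan a question; in the block style, related questions are skipped, moved or deleted as a unit.

```python
# Hypothetical contrast between goto-style and block-structured routing.
# All question names are invented for the example.

# Goto style: each answer names a jump target, and nothing stops edits
# from leaving an orphaned question or a dangling jump.
routing = {
    "Q1_owns_car":     {"yes": "Q2_car_brand", "no": "Q4_transport"},
    "Q2_car_brand":    {"*": "Q3_satisfaction"},
    "Q3_satisfaction": {"*": "Q4_transport"},
    "Q4_transport":    {"*": "END"},
}

def run_goto_style(answers, start="Q1_owns_car"):
    q = start
    while q != "END":
        targets = routing[q]
        q = targets.get(answers[q], targets.get("*"))  # follow the jump

# Block style: the car questions live inside one conditional block, so
# the whole block is skipped or kept together and cannot be orphaned.
def run_block_style(answers):
    if answers["Q1_owns_car"] == "yes":
        answers["Q2_car_brand"]        # stand-ins for asking the questions
        answers["Q3_satisfaction"]
    answers["Q4_transport"]

run_goto_style({"Q1_owns_car": "no", "Q4_transport": "bus"})
```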

Cluetec offers a special ‘Traffic’ version of mQuest which is aimed at public transport service measurement, though it could be equally useful in a variety of other mystery shopping situations. Traffic lets the auditor toggle between two surveys, which is ideal for on-board measurement on a train, tram or bus: a full audit of the service and the vehicle can be carried out while the vehicle is moving, and when it reaches a stop, the auditor can toggle to a second survey which lets them count those boarding and alighting, in a number of categories such as age, disability, bikes and pieces of luggage. The boarding survey is already populated with information about the route, which is downloaded to all the devices. All the auditor needs to do is enter the route code and the survey will then anticipate each stop in sequence. There appears to be no practical limit on how many routes can be loaded on the palmtop device for any transportation region.

An auto-completion feature, available in mQuest, makes travel and mystery shopping surveys much more efficient, and overcomes the constraint of typing letters with a stylus on a pop-up on-screen keyboard. As each letter is entered, the list narrows down to the most likely candidates that begin with or contain those letters. It makes light work of lists of several thousand destinations or transport interchange points.
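
The narrowing logic itself is simple enough to sketch in a few lines. Something like the following, using invented station names as sample data, captures the begins-with-or-contains behaviour described:

```python
# Minimal sketch of the narrowing behaviour: as each letter arrives, keep
# only candidates that begin with or contain the typed text, ranking the
# begins-with matches first. Station names are invented sample data.
def narrow(candidates, typed):
    typed = typed.lower()
    starts = [c for c in candidates if c.lower().startswith(typed)]
    contains = [c for c in candidates
                if typed in c.lower() and not c.lower().startswith(typed)]
    return starts + contains

stops = ["Karlsruhe Hbf", "Karlsruhe Marktplatz", "Stuttgart Hbf",
         "Mannheim Hbf", "Neckarsulm"]
print(narrow(stops, "kar"))   # begins-with first, then contains
```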

The software is strong on validation – both rigorous error checking and lighter-touch plausibility checking – and contains several other features to make completion both fast and reliable in the field. Single-click mode moves to the next question as soon as an answer is given, and this can be turned on and off as required. Global variables allow some questions, such as location, to be filled in automatically until the interviewer changes them. Also ideal for mystery shopping is its ability to integrate pictures or short movies taken with the PDA’s built-in camera, or audio recordings, into the interview, so that these images or recordings are passed back along with the rest of the data. There are special question types for each of these, making programming a breeze.

There are also some surprising gaps in the product. There is no support for quota control, which is perhaps forgivable if the emphasis is mystery shopping. Less easy to understand is the absence of any means to pre-populate surveys with case-specific data, such as data pertaining to the places being mystery shopped. Neither is there a decent solution for allocating work to fieldworkers or respondents yet. Improvements are promised here for a version due out in June. Though multiple languages are supported, the authoring tool only shows one at a time: as you enter the translation, the original disappears from view, which is an accident waiting to happen, in my view.

Underneath all of this, the software is developed in Java, and Cluetec have written their own emulator and delivery platform, which means the software has the potential to run on any platform that supports Java, not just Windows. Market demand, however, means that mQuest is currently only supported on Windows Mobile devices, and support for PalmOS has recently been withdrawn. But the emulator, which you can use to test surveys, works in any Java environment. It worked faultlessly on my Mac, for example.

The server, which the handhelds connect to for up- and downloading, works under Linux or UNIX as well as Windows, but the QuestEditor authoring tool insists that you are a Windows user. The experience of users seems to point to the product being rock solid for reliability when used in the field. Data transfer can be achieved wirelessly, using cellular telephony for distributed fieldwork or Bluetooth for short distances such as at a conference centre, or simply by docking the devices’ flash memory cards.

Conveniently, Cluetec also maintains a large stock of loan devices, and allows renters to use its servers for data transfer and fieldwork management. With rental charged on a per-interview basis, it can offer both a low-cost and a low-risk way to dip a toe into the fast-moving waters of mobile interviewing.

Customer Perspective: MindShare, Germany

Media company MindShare has made extensive use of mQuest for its MindSet media survey in Germany. Here, Christian Franzen, Director Advanced Techniques Group and Christian Maerten, Project Manager MindSet, speak of their experiences.

CF: “We belong to a media agency, so we do research on media usage and ad effectiveness. The aim of our MindSet study is to evaluate people’s total media usage, because all we have are highly separated studies in this market. We do not have something that covers all media – not just TV, radio and newspapers but all the other media such as posters, beer mats, internet, email and so on.

CM: “Our aim was to do a study where people filled in a questionnaire every hour for three days. The idea was to have something between a questionnaire and measurement – very near to measurement. We wanted to do this via PDA so [respondents] could fill in the questionnaire on their own. We wanted a solution that was simple for them to use. The software did not do all of this, but Cluetec were very open to change, and created a special version for us.

CF: “With our system we could have a very detailed look at what the person is watching or has seen, and also ask them about their attention and general feeling. This provides a very good combination and overcomes the problem of people either forgetting things or changing things. A lot of things get lost [with traditional methods]. We chose mQuest on the basis that it would be very good for our research.

CM: “The technology for respondents is very easy to use. We did a pilot and in that asked people if they would participate again, and 70 per cent said ‘yes’. Only two per cent said they did not like it. The technique is amazingly stable and Cluetec have made it very robust. For example, if the PDA crashes for some reason – and that usually does not happen – the software immediately starts again. This is very important if you are handing out a PDA to your respondent.

“Working with this software is really easy. It’s no problem to create a questionnaire and it does not take much time to learn – you can learn what you need to make the questionnaire after one or two hours.

CF: “We have been amazed by the technology – when we started, our concerns were about whether people could work with it and whether it was stable enough to do a quick study. It was. We eventually had 200 PDAs out in the field.”

CM: “Our questionnaire is huge and contains a lot of complex filtering, but this means it is not time-consuming for the respondent. It takes about a minute and a half to complete each time. But if you print it out, the questionnaire is 120 pages.”

CF: “We are definitely going to do more studies this way.”

A version of this review first appeared in Research, the magazine of the Market Research Society, May 2007, Issue 492

XSight 2.0 reviewed

In Brief

What it does

Windows-based qualitative researcher’s workbench which helps researchers to organise all their material and their thoughts in one place. It supports thorough and systematic analysis of transcripts or other qualitative data using a variety of approaches, ranging from the quasi-quantitative through to unstructured, intuitive or ideas-led methods.

Supplier

QSR

Our ratings

Ease of use: 3.5 out of 5

Compatibility with other software: 4 out of 5

Value for money: 4.5 out of 5

Cost

£795 for single user with volume discounts up to 25% and special rates for educational and government sector users. Annual support and maintenance is £160.

Pros

  • Much improved ease-of-use over version 1
  • Now incorporates a mapping tool for free-format thinking or mind-mapping
  • Very flexible: offers many different ways to sift and analyse data
  • Can share work and collaborate on the analysis locally or internationally

Cons

  • Hard to analyse groups where participants of individual groups don’t all share the same sample characteristics
  • Performing queries can still be a challenge
  • No easy way to import structured lists or tagged items from Excel or Word
  • Word export uses styles in an illogical way; PowerPoint export virtually unusable

In Depth

XSight, when it launched midway through 2004, was the first serious attempt to bring IT to bear on the work of the qualitative market researcher. It presented a welcome breakaway from the many code-and-tag qual tools used by social researchers. But two and a half years and a couple of minor upgrades on, the program was looking tired – and some researchers were finding fault with it for being a bit stiff and unfriendly.

It is clear that QSR, authors of both XSight and NVivo, which is widely used in academia, have listened carefully to users and critics, because the transition from version 1 to 2 is like taking a flight from wintry Britain and emerging into the warmth and sunshine of the tropics. The program is brighter, more colourful and altogether more fluid in its approach. There are many great new features to save time and effort on the journey that starts with a wall of words and ends with those nuggets of wisdom and understanding that get clients excited.

Not only does the program look much nicer, but it addresses several serious gaps in version 1 – for example, free-form tagging outside of analysis frameworks, whose absence was a serious deficit. Now, you have a stock of coloured circles, stars and the like that you can apply, giving each one your own annotation. You can track these all the way through to your report, and your annotation will appear as a tooltip whenever you mouse over the shape, which is handy if you use a lot of them.

The biggest change is what QSR call maps. These allow you to map out a tree diagram of ideas for analysis. Anyone familiar with mind maps will recognise the similarity, though it is not a true mind map: your concepts sit in discrete but linked boxes, and you cannot annotate the links as you can with a mind map, only the boxes. But it is fit for purpose, as you can grow your ideas organically, as on a whiteboard, and, when you are ready, transform the map into a ready-to-go analysis framework. It partly overcomes another of my bugbears, in that setting up analysis frameworks can be extremely tedious – especially if you have a book of a discussion guide to work to. But the program still lacks a decent import from structured lists in either Word or Excel, for cases where the guide or the analysis data has already been given a structure externally.

The new interface makes the software much more usable overall. It now supports drag and drop throughout. It also provides plenty of toolbars, which are all configurable, so you can put your favourite tools together, or hide the ones you rarely use. All the options are also available from the toolbars, and many by keyboard shortcuts or right mouse clicks.

In the old program, your display would split vertically to let you have two work areas in view at once, but the analysis framework and outputs from queries always had exclusive use of the top pane. In version 2 you can use your two display regions however you like, simply by dragging the tab of a view from one pane to another, which alone makes the application much more productive to use.

Querying is still a bit of a dark art and, in my view, the default display remains somewhat cluttered and unfriendly, though the capability is extremely useful. Perhaps this will be the focus for version 3. However, the report-writing capabilities are greatly improved, and the export to Word works a bit better than it did. Unfortunately it writes out headings as character styles, not paragraph styles, which means you cannot generate a table of contents, and a lot of the rest of the formatting is hard-coded as local overrides, so you could spend hours trying to turn an almost-nice document into something presentable. I cannot imagine who would use the ‘presentation’ feature, which is a bit like PowerPoint, but not enough like it to be much help.

But these niggles are fairly peripheral and should not deter anyone from requesting an evaluation copy. Many users write their presentations in a separate window in Word or PowerPoint, and it is terribly easy to cut and paste between the two.

Now I have used the software for analysing some transcripts myself, I cannot think of a better way to do it. If you are still shuffling documents on paper or trying to organise material in Excel and Word, a copy of this should save you enough time to be able to leave the office on time most days.

Customer perspective: Liz Montgomery at GfK London

GfK NOP now make significant use of XSight, with 23 licences in the UK and further afield. Liz Montgomery, Divisional Director, GfK NOP Business and Technology, is a regular user and has moved from version 1 to version 2. She comments on her experiences with XSight 2:

“One of the things I have noticed is that you can use it in a lot of different ways. Some people like to start by using the whiteboards, which they (QSR) call maps. This works well if you are time-rich and have a more open palette to analyse. Other people will start with a much more structured approach, with a detailed analysis framework; yet others will just put themes in and work their transcript items back into the themes.

“It is a very good collaborative tool, especially for large international projects, and is getting stronger in this. We often have five or more consultants working on a project in different locations; it can really help to give them a structure to write things up, overall. But it also gives them the freedom to add stuff in, and it does not mess up the layout – and we can import it and incorporate and re-analyse it later.

“I particularly like the free search capability, and also ‘ideas’ – if you have an idea, it encourages you to write it down now, when you think of it. The idea may fall over later when you come back to look at it, but sometimes it gives you something very exciting.

“You could do a lot with version 1, but the old interface was clunky and slow. The new version is much faster, and the toolbars seem to be more logical.

“The biggest improvement is maps, which I think of as whiteboards, and along with them the ability to link everything together onto the whiteboard. And tagging is great. It was absent before and was really needed. Tags give you lots of flexibility to organise your data.

“Another big improvement is the way you can select and output quotes, which worked before but was rather ugly in the old program. You can even write your whole presentation in this if you want to, and I know of some people who do just that. For me, it is really useful being able to output the analysis and the quotes into a nicely formatted Word document.

“At GfK NOP, we see it as adding a lot of richness. It is a very positive marketing tool with some clients, for example, but it also genuinely gives you efficiency and the ability to handle the data in ways you could not easily do before. I have found, particularly with some complex studies where there is a lot of data, that it would be quite hard with traditional methods to analyse everything really thoroughly; this lets you do that, and do it quickly too.

“The fear for some qualitative researchers is that this is automating thinking, and it definitely doesn’t do that. You have to do the thinking. It enables you to think more and look at everything quicker – and just get more out of the data.”

A version of this review first appeared in Research, the magazine of the Market Research Society, April 2007, Issue 492

CodingModul reviewed

In Brief

What it does

Windows-based verbatim answer classification management for coding and/or transcription of handwritten scanned images from self-completion surveys or for coding open text responses from CATI, CAPI and Web interviews.

Supplier

StreamBASE GmbH

Our ratings

Ease of use: 4 out of 5

Compatibility with other software: 4 out of 5

Value for money: 4.5 out of 5

Cost

€7,500 one-off for base 2-user system, additional users €500 or less, according to volume; support and maintenance 18% of purchase price annually. Special terms for public sector and academic users.

Pros

  • Well-crafted system full of practical features for coding
  • ‘Packages’ option allows coding work to be distributed to outworkers with a standalone PC
  • Seamless integration with Readsoft Forms (formerly Eyes and Hands)
  • Powerful administrative features to manage workflow and simplify tasks for coders

Cons

  • Windows based only – not web-enabled
  • Automation features for typed texts are limited in current version
  • Documentation not yet in English (due April with version 3)

In Depth

It’s been a while since anyone attempted to provide better software to manage the coding of open-ended questions. Most data collection suites offer rudimentary tools that cope with verbatim responses, but do little to automate the work. One early attempt, Verbastat, has now sadly disappeared from the market – probably shaded out by web-based Ascribe. But neither of these products is much help if the open-ended responses originate as handwritten items on paper. This is the gap that streamBASE GmbH, a German software provider, has plugged with its Coding-Modul.

The program actually consists of two modules – ‘Coding Control’ for administrators, and ‘Coding Station’ for the coder to use. The Control module neatly balances a clean, simple-to-learn interface with a lot of options to provide flexibility in the way coding workflows are managed. A panel to the left provides a tree view of the work, split logically into Surveys and Users. Everything can be found within this tree. A survey contains entries for questions, coding rules and transaction rules, each of which can be added to or altered simply by right-clicking and selecting from a context-sensitive menu.

It is within ‘Questions’ that codeframes are defined, and these can be as simple or as complex as you like, with multiple hierarchies allowed and a wealth of tools for managing codeframe changes over time on continuous studies. You can also preview the quality of scanned images here, to check that coders will be able to work with what they are being given.

At the core of the system are ‘rules’, which define the work to be done. A rule will select and filter verbatims from the pool of work to be done in any survey, and send them to the coders you designate. So one rule could assign one question to one coder, another could assign several questions to one coder, another a question to several coders, and so on. Rules have a rich set of options associated with them which you can switch on to do fancier things. For instance, when working with scanned images, you can have it sort and deliver the open-ended boxes to coders in order of the density of the response, so that non-responses such as a dash or the word ‘nothing’ tend to come at the end, and the coder can decide when no more real data is coming, without having to plough through all of them. Transaction rules determine how the data get exported for analysis.
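
The density trick is a simple idea with a big payoff, and is easy to sketch. The following is a hypothetical illustration of the ordering, using Python’s Pillow imaging library and invented file names, not Coding-Modul’s own implementation:

```python
# Hypothetical sketch of density ordering: estimate how much ink each
# scanned answer box contains and deliver the fullest boxes first, so
# blank or 'nothing' responses cluster at the end. File names are invented.
from PIL import Image  # Pillow

def ink_density(path, threshold=128):
    """Fraction of pixels darker than the threshold in a greyscale scan."""
    img = Image.open(path).convert("L")
    pixels = list(img.getdata())
    return sum(1 for p in pixels if p < threshold) / len(pixels)

boxes = ["resp001_q5.png", "resp002_q5.png", "resp003_q5.png"]
for path in sorted(boxes, key=ink_density, reverse=True):
    print(path)   # densest (most written-in) boxes first
```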

Graphical displays and a reporting tool give you some lovely snapshots of all the work in progress and work still to be done – it is through having this kind of management information available that real productivity gains can be achieved.

The coder’s interface is very simple and obvious to use. Administrators can allow greater flexibility to more experienced coders to manage their own workflows and even to add to the codeframe as they go. The software also provides support for satellite workers using a standalone PC or laptop – on the move, or even on the kitchen table, by creating a ‘package’ of work which you despatch by CD or DVD to the remote coder. Email is not normally an option, due to the size of the bitmap images.

Mid-stream software like this needs to be versatile with its inputs and outputs. A variety of imports, exports and more tightly coupled ‘plug-ins’ are offered, best demonstrated with Readsoft Documents for Forms (formerly Eyes and Hands), where Coding-Modul communicates directly with the Forms database, avoiding the need for any intermediate file transfer. ODBC is a widely accepted open standard for exchanging data between databases, and the software offers an ODBC interface, which makes it easy to integrate with any other database-oriented data collection platform. Already, Streambase offers plug-ins to Confirmit, NIPO’s ODIN and even non-database Quancept, and the firm expects to add other interfaces as customers request them.
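
That ODBC route means pulling coded results into another system is typically only a few lines of code in any database-aware language. A minimal sketch in Python using the pyodbc library follows; the DSN, table and column names are all invented for illustration:

```python
# Minimal sketch of reading coded results over ODBC with pyodbc.
# The DSN, table and column names are invented, not Coding-Modul's schema.
import pyodbc

conn = pyodbc.connect("DSN=codingmodul_export")   # hypothetical DSN
cursor = conn.cursor()
cursor.execute(
    "SELECT respondent_id, question_id, code FROM coded_answers "
    "WHERE survey_id = ?", "S2007-03"             # hypothetical survey id
)
for respondent_id, question_id, code in cursor.fetchall():
    print(respondent_id, question_id, code)
conn.close()
```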

At this point, it might be worth waiting a month or two until version 3 is out. Alongside a much needed English translation of the Help system, Streambase is also promising to increase the support for handling electronic data, using word counts for sorting texts prior to classification and other automation techniques for the coding of texts. In my view, this is a must. Where verbatim texts are in a machine readable form, as they are from CATI or Web, the bottom-up approach followed by this software, which may be a necessity for scanned bitmaps, is not really good enough. Word searching and some mass aggregation techniques will be required to provide a measurable advantage over the built-in coding module you are likely to find in any existing data collection suite.

How far version 3 will take us up the escalator to coding heaven remains to be seen, though something that should make CATI centre managers sit up is the planned support for coding of audio snippets where verbatims have been digitally recorded.

Customer viewpoint: Ipsos Germany

Ipsos Germany has been using Coding Module for the bulk of its scanned paper-based surveys since 2003. It is used in combination with the company’s Readsoft Documents for Forms data capture system. Britta Dorn, manager of the coding and data entry department, says: “It is a lot faster using this – you save a lot of time with it. You don’t have to go through the questionnaire twice. For example, if you have a questionnaire with 75 pages, you do not have to waste time going through all these pages looking for the next question to code. Everything is there on the screen – it is much better. We use it also when we need verbatims to send directly to the client. Rather than go through the questionnaires, we can use the coding module and transcribe from there. So it is quicker here too.”

Often Ipsos will transcribe verbatims from the images on screen into actual text as well as coding them. But if the client only wants to have a snapshot of what people are saying, without classifying the answers, Britta will request a verbatim report which simply lists the images.

Britta continues: “That way, it does not take a lot of time. And it is also very quick getting the data out, for quality control. When you need to output data for evaluation purposes, you can do this in just a few minutes.

“We don’t use it for every survey. For example, if you have questionnaires with semi-open questions, it may be quicker to code these manually. I would say we use it for 90% of our paper-and-pencil interviewing – but we look at each job separately.

“It is very easy to operate. It does not take a long time to train coders – it takes about an hour and then they know everything they need for coding. And the administration is very easy to handle. Our coders are very experienced – many of them have worked here for 15 or 20 years – and they work well with the software.”

“We have also had very good experiences with the company. If you want some modifications to the software they will usually do this quickly, if it’s possible for them.”

A version of this review first appeared in Research, the magazine of the Market Research Society, March 2007, Issue 490

Instant Intelligence reviewed

In Brief

What it does

A delightfully simple online cross-tab and topline reporting tool which works with survey data from most standard MR data collection tools, and also offers an optional integrated web-based data capture service from paper.

Supplier

Data Liberation, UK

Our ratings

Ease of use: 4 out of 5

Compatibility with other software: 5 out of 5

Value for money: 4.5 out of 5

Cost

Entry level for analysis: £3,600 per year, includes 1 admin user, 4 report writers and 10 viewers. Extra report writers £50 per month; viewers £10 per month. Scanning from 2p per duplex page; free scanner for committed volumes.

Pros

  • Easy import via triple-S or SPSS for most MR web or CATI data collection tools
  • Very quick to learn – majors on simple, straightforward analysis
  • Very clever scanning solution, if you need to capture data from paper
  • Gives research buyers an independent alternative to using an agency’s embedded analysis tools

Cons

  • Filtering options rather too limited
  • Cannot create new variables, regroup variables or create a composite breakdown
  • Analysis is queued rather than done in real time, so there can be a wait for each table

In Depth

It has long been my belief that the greatest opportunities that the web provides for research are not in data collection, but in the downstream activities. One firm I have been watching for a while that takes this to heart is Data Liberation. They developed a web-based scanning or data capture tool for paper questionnaires two years ago, and last year added a simple online analytical tool. Now, the two have been integrated and launched at the 2006 Insight Show in November. If scanning is not your thing, this still provides a great web-enabled way to analyse data from any source.

Web-based scanning is not the oxymoron it might seem to be in Data Liberation’s hands. As a DIY user, you can design your own paper questionnaire very easily using Excel on your desktop. Excel provides the grid — you tend to work with very narrow columns to give you better layout control, and by adding texts of various sizes and hues, then selectively adding borders, you can create great-looking questionnaires. Tickboxes you create using the box character in the Wingdings font.
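
For anyone curious what that looks like when scripted rather than done by hand, the sketch below builds the same sort of layout grid with the openpyxl library. The Wingdings glyph used for the tickbox (‘q’) is an assumption, as are the file name and question text:

```python
# Sketch of the Excel-as-layout-grid idea using openpyxl. The Wingdings
# glyph 'q' is assumed to render as an open box; question text, answer
# labels and file name are invented.
from openpyxl import Workbook
from openpyxl.styles import Font

wb = Workbook()
ws = wb.active

# Many narrow columns make a fine-grained layout grid
for col in "ABCDEFGHIJ":
    ws.column_dimensions[col].width = 2.5

ws["B2"] = "Q1. How satisfied were you with your visit?"
ws["B2"].font = Font(size=12, bold=True)

for row, label in enumerate(["Very satisfied", "Satisfied", "Dissatisfied"],
                            start=4):
    ws.cell(row=row, column=2, value="q").font = Font(name="Wingdings")  # tickbox
    ws.cell(row=row, column=3, value=label)

wb.save("questionnaire.xlsx")
```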

It is not the most obvious way to design a paper questionnaire, but once you see how it is done, it is ingenious. In some ways, it is better than using Word as you always know exactly where everything will be, to the pixel – hence its attraction as a robust way to define scannable forms. It handles multi-page and double sided documents too, but if you decide to change your pagination, then it can involve a lot of manual editing as there is no way to flow the content as you can in Word.

The document can then be printed on your office laser and copied in bulk. What is usually the nasty part — defining the scanning template — is done by logging into your account on the website and uploading your Excel document. You then view a bitmapped image of your forms on screen, and use a smart mark-up tool to point out the regions where questions and answers appear. It makes intelligent guesses about the questions and answers, snapping to rows and columns of boxes, and letting you confirm or guide it to the right place by dragging on screen. Again, it is a painless process.

Existing or ‘legacy’ documents can be handled by sending them to Data Liberation who, as a service, will create a matching Excel template. You also send them the paper to scan — though if you are doing a lot of scanning, they will provide you with your own scanner connected over the Internet, to their site.

Speaking to several of their customers, it is clear that this very unconventional approach actually yields highly accurate results — as good as anything else on the market, it seems, and capable of handling very high volumes as well as small one-off jobs.

The part of Instant Intelligence you are likely to spend most time in, though, is the analysis tool. Unlike most analysis tools, there is very little setup involved in bringing data in for analysis. As an administrator, you can import all of the variables and texts when you start from a triple-s or an SPSS file, which many packages will output. If you only have an Excel or CSV file, you can still work with this, but will have more setting up to do. Variables can be grouped into a hierarchy of folders or sections, which makes navigating around very large projects much easier. You can also define users and, to some extent, configure permissions, such as allowing them to view reports or perform analysis, and select the projects they may view, either by group or individually.
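
The easy import is helped by triple-s being an open XML standard for survey metadata. A simplified sketch of what reading one involves, handling only a stripped-down subset of the standard and with an invented file name:

```python
# Simplified sketch of reading survey metadata from a triple-s XML file.
# Element and attribute names follow the triple-s standard, but this
# handles only a stripped-down subset; the file name is invented.
import xml.etree.ElementTree as ET

tree = ET.parse("survey.sss")
for variable in tree.getroot().iter("variable"):
    name = variable.findtext("name")
    label = variable.findtext("label")
    vtype = variable.get("type")    # e.g. single, multiple, quantity
    print(f"{name} ({vtype}): {label}")
```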

The analytical capabilities are basic: cross-tabs, frequencies and percentages. To me, it seemed to be a slightly hollow centre after all sorts of tantalising goodies on the outside, but it does most of the things that most end-users want. It does not go very far with filtering, and options to combine variables are too limited to be useful. It also incorporates a topline report which does the job but without much grace in its output style. These are all things that are likely to change in future versions.

The tool could struggle to live up to its instant credentials if you hit the server at a busy time: analyses are queued in the background and not done in real time, so there can be a wait before your output is displayed. This is not uncommon in ASP solutions, though to be fair, when I was testing it, the response was more than acceptable.
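
For anyone planning to automate against a queued service of this kind, the pattern is the same everywhere: submit the analysis as a job, then poll until it completes. The sketch below is generic and entirely hypothetical; it illustrates the pattern, not Instant Intelligence’s own interface.

```python
# Generic sketch of the queued-analysis pattern: submit a job, then poll
# until the server reports it is done. URLs and field names are invented.
import json
import time
import urllib.request

def submit_and_wait(base_url, spec, poll_seconds=2):
    req = urllib.request.Request(
        f"{base_url}/jobs", data=json.dumps(spec).encode(),
        headers={"Content-Type": "application/json"})
    job = json.load(urllib.request.urlopen(req))

    while True:                                  # poll until finished
        status = json.load(
            urllib.request.urlopen(f"{base_url}/jobs/{job['id']}"))
        if status["state"] == "done":
            return status["result_url"]
        time.sleep(poll_seconds)

# submit_and_wait("https://example.invalid/api",
#                 {"rows": "age", "cols": "gender"})
```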

But just as this product is versatile on the way in, it is also versatile on the way out and dovetails seamlessly into Excel and PowerPoint. Overall, this is one of the most imaginatively different software products I have seen in years. With very little extra development, it could be a stunner.

Customer Perspective: Global luxury goods company, London

A global luxury goods company uses Instant Intelligence for an annual in-store survey of customers — a massive global project that spans 30 countries. A major attraction was the system’s fusing of paper and web approaches, with automated data capture.

“The survey runs for a month in store and questionnaires are then sent back to London for scanning,” explains Laura Simmonds, global market researcher at the company’s head office in London. Scanning is carried out by Data Liberation, who also verify and clean the data, then post the results directly into Instant Intelligence, for Laura and her team to analyse.

“If we did not have the data entry scanned in this way, we would have no way of doing this. If we had to key all data in manually it could take all year!”

Some of the data originates from an online survey; this is easily imported into Instant Intelligence and merged with the other data, ready for analysis. It is really only the data interrogation and analysis part of the suite that is visible to users, and online access means the company can view the survey results around the world.

“The nice thing about Instant Intelligence is that it is so easy to use and train others on,” Laura observes. “For our purposes, it performs all the cross-tabs and you can safely let other people pull information off for themselves.

“Other databases that I have used are often more complex and usually have to be managed centrally because there are so many factors to consider to ensure that correct information is taken from the database. These databases take some time to be trained on to become a competent user, compared to Instant Intelligence, which can be taught in about an hour.”

She concludes: “Instant Intelligence is perfect for our needs: it provides a reliable, user-friendly database at a relatively low cost.”

Names have been changed in this article at the company’s request. A version of this review first appeared in Research, the magazine of the Market Research Society, January 2007, Issue 488