The latest news from the meaning blog

 

Vovici 4.0 reviewed

In Brief

Vovici Community Builder and Feedback Intelligence, version 4.0

Vovici, United States
Date of review: November 2008

What it does

Web-based suite for building online research communities and custom panels, for both quantitative and qualitative research. Allows you to create fully branded respondent community portals easily, using an online point-and-click interface. The Feedback Intelligence module offers sophisticated dashboard and drill-down reporting for individualised reporting to stakeholders across the enterprise from multiple data sources, and is integrated with Business Objects.

Our ratings

Ease of use: 4.5 out of 5

Compatibility with other software: 5 out of 5

Value for money: 3.5 out of 5

Cost

Annual fees in US dollars: Community Builder module starts at $24,995 for up to three named users; Enterprise Feedback Management module starts at $24,995 plus $1,500 for each portal user.

Pros

  • Standardise and co-ordinate surveys, questions and measurements across all of a company’s survey activities
  • Requires no web programming or HTML skills for the most part
  • Platform independent – Windows, Mac or Linux with any modern browser
  • Integrates with Oracle CRM and a range of industry standard CRM systems

Cons

  • No built-in incentive or reward management
  • Can only execute surveys created in the Vovici EFM survey module
  • Relatively expensive

In Depth

Today more and more companies are realising the benefit of building online panels of customers to involve in research. The idea is simple, but the reality can be complex and costly to deliver from a technical standpoint. Whether firms try to build them for themselves or park the problem with a research agency, it is an area crying out for an off-the-shelf solution like the new Vovici Community Builder, which was released last month and effectively relaunches the concept of the panel management tool for the demands of Web 2.0-style research.

The product is a completely web-based suite which sits astride a database of contacts or panellists, and allows you to interface directly with the Vovici EFM survey engine as well as other enterprise platforms or CRM systems – such as Siebel or Hyperion – so that sample selections can refer to real behavioural data from recent transactions for that customer. Configuring the interfaces with other enterprise data sources is, understandably, beyond the lay user, but once these have been set up, a customer’s purchase history can be used just like any other piece of panel profile data, such as age or location, or be used to drive sample selections for just-in-time research.
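
To make the idea concrete, here is a minimal sketch of what driving a just-in-time sample selection from a CRM-derived field might look like once purchase history has been mapped into the panel database. This is purely illustrative – the field names, dates and function are hypothetical, not Vovici’s interface.

```python
from datetime import date, timedelta

# Hypothetical panel records: profile data plus a CRM-derived purchase field.
panel = [
    {"id": 101, "age": 34, "region": "North", "last_purchase": date(2008, 10, 20)},
    {"id": 102, "age": 51, "region": "South", "last_purchase": date(2008, 6, 2)},
    {"id": 103, "age": 28, "region": "North", "last_purchase": date(2008, 11, 1)},
]

def select_sample(panel, region, purchased_within_days, today=date(2008, 11, 10)):
    """Draw a just-in-time sample: treat recent purchase behaviour
    exactly like any other profile variable (age, location, etc.)."""
    cutoff = today - timedelta(days=purchased_within_days)
    return [p["id"] for p in panel
            if p["region"] == region and p["last_purchase"] >= cutoff]

# Panellists in the North who bought something in the last 30 days.
print(select_sample(panel, region="North", purchased_within_days=30))  # [101, 103]
```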

The Portal Builder

At the heart of the software is the Portal Builder, in which you design the pages of the community site your target panel members will visit. It is effectively a content management system which allows you to lay out pages with placeholders for content that will be streamed in from other sources, and which you can arrange neatly in different columns and boxes in the way most websites are organised these days. So, in the centre you could choose to put a list of the surveys the respondent is invited to, with some introductory text above; headlines from the current community newsletter on the left; highlights of recent survey results on the right, and so on. The portal has built-in support for just about all of the objects you are likely to need to add when building a research community site: a profile editor so panellists can view and update their personal data; current survey invitations; past surveys taken; containers for welcome messages, help and links to more information or contacts. The list reaches far into the Web 2.0 milieu: you can add forums for collaborative discussion, blogs for respondents to view and react to, and data mash-ups.

There is also a wealth of collaborative tools, from a simple suggestion box to access-controlled forums that can be used for asynchronous focus groups, so that quantitative surveys can be backed up by some selective qual work or vice versa. The highly modular approach means that any tool can be access controlled and made available only to invited participants. And if you are concerned that this portal page is getting a bit busy, it is easy to spread it across a series of tabbed pages, which you can title and organise how you like. There is a large template library, and it is very simple to create an overall theme with your own imagery and branding.

You can also publish results through the portal and make these relevant to the respondent – you could present each member with a report showing their answers compared to the survey as a whole, for instance, show highlights and add commentaries. Vovici emphasise this as the means to build interest and engagement, and work on the assumption that the kind of interest a community member will derive from the experience as a whole will eliminate the need to offer financial inducements. As a consequence, there is no built-in incentive and reward management capability in the product – something that will not go down well with agencies wishing to build panels.

Though the Vovici name may be unfamiliar to many, what is now branded as Vovici EFM was originally developed as Perseus EFM. The main web survey capabilities and engine are an incremental development of the Perseus EFM software which Interface reviewed in Research July 2006 when it was already a mature and capable offering for online research. Vovici has recently established sales and support offices in London and Singapore alongside three existing locations in the United States.

The other major addition since Vovici took over is in reporting. There is now a dashboard reporting system largely in place, with some development ongoing. It follows a similar philosophy to the Community Builder by allowing you to arrange graphical and tabular reports across the screen in columns and rows – as designer, you choose what reports to show simply by pointing and clicking, selecting them from menus and so on. Again, the overall appearance is controlled by externally defined templates and stylesheets, so the entire reporting experience can be themed and branded to match a corporate intranet site.

Reporting in Business Objects

The reporting system is, in fact, built on Business Objects (using Crystal Reports), which is a widely used reporting tool in the mainstream corporate database and business intelligence sector. However, Business Objects is typically of limited use with survey data, because it does not understand common survey concepts such as multiple-response data, respondent bases that may differ from the number of responses to a given question, or one-off data formats for each short ad hoc survey. The breakthrough with Vovici is that the developers have created a data model and accompanying metadata to make research data comprehensible to Business Objects.
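
Vovici does not publish the details of this data model, but the usual way to make multiple-response survey data digestible to a relational reporting tool is to hold one row per respondent-answer and compute percentages against the respondent base rather than the count of answer rows. A minimal, hypothetical sketch of that convention:

```python
# One row per respondent-answer: respondent 1 gave two answers to Q5.
responses = [
    {"respondent": 1, "question": "Q5", "answer": "Price"},
    {"respondent": 1, "question": "Q5", "answer": "Service"},
    {"respondent": 2, "question": "Q5", "answer": "Price"},
]

def multi_response_table(responses, question):
    """Counts and percentages based on respondents (the survey convention),
    not on the raw number of answer rows."""
    rows = [r for r in responses if r["question"] == question]
    base = len({r["respondent"] for r in rows})  # respondents answering the question
    counts = {}
    for r in rows:
        counts[r["answer"]] = counts.get(r["answer"], 0) + 1
    return {answer: (n, 100.0 * n / base) for answer, n in counts.items()}

print(multi_response_table(responses, "Q5"))
# {'Price': (2, 100.0), 'Service': (1, 50.0)} – columns can sum to more than 100%
```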

The beauty of this is that any reporting can be a composite of hard commercial data alongside softer attitudinal and intentional survey data. Questions too can be analysed across different surveys. By smashing through the old silo approach, Vovici is also working towards delivering true benchmark capabilities. The idea is that any question can be reused across any survey, and once the same question has been reused, all responses to it can be used to provide a benchmark, or by filtering that benchmark to provide sector-specific comparisons.
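
Stripped to its essentials, the benchmarking idea is that a reused question carries the same identifier wherever it appears, so its responses can be pooled across surveys and filtered down to a sector. The sketch below illustrates that principle only; the data and function are invented, not Vovici’s implementation.

```python
# Hypothetical pooled responses to a reused question, tagged by survey sector.
scores = [
    {"survey": "S1", "sector": "retail",  "q": "overall_sat", "value": 8},
    {"survey": "S2", "sector": "banking", "q": "overall_sat", "value": 6},
    {"survey": "S3", "sector": "retail",  "q": "overall_sat", "value": 9},
]

def benchmark(scores, question, sector=None):
    """Mean of all responses to a reused question, optionally filtered to one sector."""
    vals = [s["value"] for s in scores
            if s["q"] == question and (sector is None or s["sector"] == sector)]
    return sum(vals) / len(vals)

print(benchmark(scores, "overall_sat"))                   # overall benchmark
print(benchmark(scores, "overall_sat", sector="retail"))  # sector-specific comparison
```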

Enterprise Feedback Management providers like Vovici are probably more aware than most MR software suppliers that their products will appeal to both the corporate user wishing to do their own research, and the research agency – and the platform lends itself to collaborative working between client and supplier. For example, the community portal and interfaces with corporate data sources could all be under the responsibility of the corporate client, while the creation of actual surveys and the preparation and publishing of results can be contracted out to one or more research suppliers, using the same platform.

This is a vast system with massive potential, which in its very design reveals some research-literate minds were behind it. Users I spoke to report that the community functionality is stable, reliable and relatively easy to learn, and has enabled them to standardise and systematise their research and harmonise measures across very large enterprises. Perhaps the product’s greatest strength is in its ability to integrate with CRM systems and other business intelligence sources, making research more relevant and mainstream within the corporate enterprise.

A version of this review first appeared in Research, the magazine of the Market Research Society, November 2008, Issue 509

Mopinion reviewed

In Brief

Mopinion

The 3rd Degree, UK
Date of review: October 2008

What it does

SMS and WAP interviewing software, provided as a hosted web-based service which allows you to design and deliver short surveys via SMS using a free shortcode so there is no cost of reply to the respondent (in the UK) or via WAP to mobile web-enabled handsets.

Our ratings

Ease of use: 4 out of 5

Compatibility with other software: 4 out of 5

Value for money: 5 out of 5

Cost

From £1,500 per month, which gives access to a zero-rated UK shortcode and international long code, capacity for 50,000 messages per month (more than enough for four 500-respondent surveys), authoring, admin tools, training and support. Message costs are charged in addition: typically 6p outbound and 5p inbound on the UK free shortcode. Price breaks for higher volumes are offered.

Pros

  • Analyse and cross-tab verbatim responses as you would any standard question
  • Platform-independent set-up: can use any modern browser on Windows, Mac, Linux
  • Clean, easy-to-use interface requires no technical skills
  • Good back-end support for integrating with other systems or data collection tools

Cons

  • Sampling options a bit fiddly
  • Limited reporting capabilities
  • No online help
  • Hosted solution only: no enterprise version available (yet)

In Depth

Mopinion has come a long way since we first looked at it in May 2005, when the first version had just been released. Back then, it was very much limited to asking a few simple closed questions in a straight line, for delivery as an SMS interview. Now the software has broadened to include a range of supporting tools and options and, to an extent, support for WAP surveys and even web surveys.

There are many other useful changes, such as support for uploading samples or panels and some rudimentary top-line reports to give you a snapshot of response and preliminary results. There is also a complete administrator interface, in which you can define the permissions you wish to give to different back office users – so you can define different roles for survey authors, survey testers, data editors and so on.

Surveys can be more ambitious now, with support for scale questions, open-ended questions and much better support for multiple-response questions. Unlike before, you can route questions based on prior responses and you can pipe text in from other answers or from the sample, such as the respondent’s name or the name of the brand they selected. You can also create ‘cyclic’ question groups, where the software will randomly select one or more questions from a pool of questions – a handy way to pose ten questions but keep the interview to only five for any one individual. The conventional wisdom is that half a dozen questions is just about the limit in an SMS interview. The trade-off can be fieldwork times of an hour from start to finish.
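
A cyclic question group is, at heart, a random sub-sample of a question pool drawn per respondent. The sketch below illustrates the principle (it is not Mopinion’s code):

```python
import random

question_pool = [f"Q{i}" for i in range(1, 11)]  # ten candidate questions

def cyclic_group(pool, per_respondent=5, seed=None):
    """Randomly pick a subset of the pool for one respondent, so ten questions
    can be fielded while keeping any individual SMS interview to five."""
    rng = random.Random(seed)
    return rng.sample(pool, per_respondent)

print(cyclic_group(question_pool, per_respondent=5, seed=42))
```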

Last time, we marked the software down for being unrealistically tolerant of errors in the data. This time, it is due for some well-deserved praise, as a range of strategies for weeding out and correcting errors has been added. The tightest control will put the interview into a hold state until someone given editing rights reviews the response and makes a data correction. There is also an auto-correct feature which will learn by example, so that common mistypes or systematic errors (e.g. people who type the response ‘yes’ to a question instead of using the numeric code indicated) can be substituted without manual intervention, so you should only get clean data out of the other end.
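
The auto-correct behaviour described above amounts to a learned mapping from commonly seen free-text replies to the intended codes, with anything unrecognised held for an editor. A hypothetical illustration of that logic – the mappings and ‘hold’ handling here are assumptions, not Mopinion’s actual rules:

```python
# Corrections learned from earlier manual edits: raw reply -> clean code.
learned_corrections = {"yes": "1", "y": "1", "no": "2", "n": "2"}

def clean_response(raw, valid_codes={"1", "2"}):
    """Return a valid code, auto-correcting known mistypes;
    otherwise flag the response for manual review (the 'hold' state)."""
    text = raw.strip().lower()
    if text in valid_codes:
        return text, "ok"
    if text in learned_corrections:
        return learned_corrections[text], "auto-corrected"
    return None, "held for editor review"

for reply in ["1", "yes", "maybe"]:
    print(reply, "->", clean_response(reply))
```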

The interface is clean and easy to use – the software does not try to do too much, so it is relatively quick to learn and navigate around, and works even on relatively slow internet connections. Mopinion does benefit from being a very standard HTML implementation – there is no Flash or Java involved, so it will run on just about any web browser and does not mind what operating system you are using. With patience, you could even write a survey on an iPhone. It offers a range of data outputs, including a hassle-free Triple-S export.

This almost ruthless simplicity means it is not always quite as friendly as it could be, though. Defining sample selections is rather clunky, as is editing existing questions – some changes are only achieved by deleting and adding again. The system would also benefit from having some online help or documentation, which was not in evidence – and as more features are added, this will become imperative.

We learned there are some very useful additions in the pipeline, such as support for diary surveys, where a survey segment can be repeated – the survey will be kept open for the respondent to submit each diary entry over an extended period. We also understand the rather rudimentary reports are about to get a makeover too.

A real strength of Mopinion’s SMS implementation, though, is the SMS gateway that The 3rd Degree (T3D) provides as an integral part of the service. Without this, SMS interviewing is a nightmare of multiple SIM cards and modems. In the UK you can use T3D’s own shortcode number, which is free to the respondent – you will be billed 5p per incoming text, which we consider to be a bargain. T3D say this is the wholesale rate and they add no margin to it. If you try to rent your own shortcode from a mobile network it will cost you tens of thousands of pounds annually, even before any call charges. The firm is now very experienced in the vagaries of the mobile networks, both in the UK and internationally, and can provide low-cost solutions in other markets too. You would need to be doing very high volumes indeed for it to be worth your while trying to run your own gateway.
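
Using the indicative rates quoted earlier (typically 6p outbound and 5p inbound on the UK free shortcode), a rough message-cost estimate for a 500-complete, six-question SMS survey might run as follows. The message-count model (one invite plus one outbound text and one inbound reply per question) is a simplifying assumption, not a published tariff:

```python
# Indicative per-message rates from the review (UK free shortcode), in pence.
OUTBOUND_P, INBOUND_P = 6, 5

def survey_message_cost(completes, questions, invites_per_complete=1):
    """Each complete needs one invite plus one outbound text per question,
    and one inbound reply per question (an assumed, simplified model)."""
    outbound = completes * (invites_per_complete + questions)
    inbound = completes * questions
    pence = outbound * OUTBOUND_P + inbound * INBOUND_P
    return pence / 100.0  # pounds

print(f"£{survey_message_cost(completes=500, questions=6):.2f}")  # approx £360.00
```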

Mopinion now supports three interviewing modes: SMS, WAP and web, though a survey can only be one of these, and you are committed to that mode once you have started on it. WAP surveys are rather more sophisticated in what you can do than SMS, and the software helpfully provides templates for defining each screen.

Don’t expect the web survey capabilities to match up to those of other specialist web survey tools, as options are effectively limited by the scope of WAP and SMS surveys. However, it does mean you can use the same tool to manage a simple web recruit as a screener to the SMS interview, without needing to move data between one tool and another. Indeed, you can recruit directly off the web, sign a respondent up and initiate the SMS interview without any manual intervention, as an alternative to mass-invites by SMS message from samples or panel data. All in all, Mopinion now provides a safe and research-savvy way to get into SMS interviewing with little fuss and for a very reasonable cost.

Client perspective: Ipsos Mori, London

Ipsos Mori launched the Orange Business Jury last year, which is a panel of over 1,000 small business owners and SME decision-makers. Where a very fast reaction to breaking news is sought, the panel is contacted by SMS. The SMS component is driven by Mopinion, as AJ Johnson at Ipsos Mori Online explains:

“What I like about The 3rd Degree is the way they are willing to integrate their technology with other systems, so that we can combine their niche area of data collection with more mainstream activities. We have been able to integrate this seamlessly with our main interviewing platform, and link it to a subsection of our panel, so we are not having to move data around between systems.

“It provides excellent PR-type feedback and research. With the Orange Business Jury, it has been very successful and the results are coming in as fast as everyone says they will with text messaging. It is very good for PR-type research and instant feedback.

“The software side is important, but with SMS, it is of even greater importance to have an understanding of the mobile networks, having relationships with the networks and having a gateway that is as stable as it can be. In our experience, The 3rd Degree is strong in all of this.

“Because the surveys are simple, with just three or four questions, we have been able to move the survey into the business areas, to people working on the research side, who are able to run their own surveys and call on the technical people only if they need to. They like it because it is easy to use and they can go right through all the steps of a project without it becoming too technical for them and get results back very quickly, often in just a few hours.

“It is also an advantage that it works with WAP too. I do think WAP is in a better place for research than it was a year ago, and with people switching to BlackBerries and iPhones, the market is swinging in that direction. But I am still a big fan of SMS, because representivity-wise, it is better even than online research.

“The 3rd Degree have also fully covered any worries about compensating our respondents or ensuring that they are not out of pocket, because in the UK they provide completely free text messaging.”

Looking to the future, Johnson is excited by the potential to add location data to each survey response, either from the mobile network or using GPS to pinpoint the position. He remarks: “SMS and WAP have huge potential for point-of-experience surveys and could answer some of our sampling issues for location-based interviewing. This would provide the big benefit of being able to interview people who are on the move, if you can know exactly where they are each time they respond.”

A version of this review first appeared in Research, the magazine of the Market Research Society, October 2008, Issue 508

SimStat, WordStat & QDA Miner reviewed

In Brief

WordStat with QDA Miner and SimStat

Provalis Research, Canada
Date of review: September 2008

What it does

Windows-based software for analysing textual data, such as answers to open-ended questions alongside other quantitative data, or for analysing qualitative transcripts. Uses a range of dictionary-based textual statistical analysis methods.

Our ratings

Ease of use: 3 out of 5

Compatibility with other software: 3.5 out of 5

Value for money: 4.5 out of 5

Cost

SimStat + QDA Miner + WordStat bundle – one-off purchase cost of $4,195, or $14,380 for 5 licences. 75% discount for academic users on most prices. Upgrades offered to licensed users at a discount.

Pros

  • Analyse and cross-tab verbatim responses as you would any standard question
  • Can be used to code data automatically using a machine learning method
  • Easy to create own subject-specific dictionaries, which can be reused on similar projects
  • Full versions available to download for a month’s free trial

Cons

  • Standalone system – no collaborative or multi-user capabilities
  • Steep learning curve – requires an expert user
  • Dictionary- and word-based analysis only: does not support natural language processing or learn-by-example methods
  • Does not support Triple-S data

In Depth

It’s often the answers to open-ended questions that offer the richest insights in a survey – especially in online surveys, if the questions are specific and well targeted. Yet this valuable resource remains almost unexplored in most quantitative surveys due to the sheer effort involved in analysing it. The principal method used – manual coding to a codeframe – is virtually unchanged since the 1940s. The only nod to the information age is the number of coding departments that use Excel as a surrogate electronic coding sheet – the IT equivalent of equipping your horse and cart with satnav.

WordStat is a very versatile bit of software that offers you countless ways to analyse openended textual data with virtually the same ease as analysing normal ‘closed’ questions in a cross-tab package. It comes as an add-on module to Simstat – which is a feature-rich desktop statistical package for analysing survey data with decent cross-tab and charting capabilities. Interestingly, WordStat also functions equally well as an add-on to another program: QDA Miner, a code-and-retrieve analysis suite for qualitative researchers. All three are developed by Provalis Research, based in Montreal.

WordStat, combined with the other two programs, offers a bewildering choice of ways to dig deeply into verbatim data and make sense of it. In this review, I will focus on two of the more interesting ones to market researchers, but this versatile program has many more tricks up its sleeve than I can cover here.

First, without doing any conventional coding at all, you can effectively cross-tab or slice your data according to any demographics or dependent variables in your data. For this, you would start out in SimStat.

SimStat will let you carry out normal quantitative analysis of data as simple cross-tabs or by applying a wide range of statistics – factor and cluster analysis, regression, correlation and so on. To do textual analysis, you need to start with the verbatim data in the same data file as the other numeric and coded questions. This is pretty much how most web interviewing packages provide the data these days. SimStat works around the concept of dependent and independent variables: pick any demographic you want as the independent variable, pick the verbatim question as the dependent variable and pick Content Analysis from the bottom of the Statistic menu, and SimStat will fire up the separate WordStat module to let you cross-tab and dig into your open-ended text.
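
The dependent/independent workflow can be pictured with a toy example: tally the words of the verbatim (dependent) variable within each category of the demographic (independent) variable. This is only an illustration of the analysis concept, not SimStat or WordStat code:

```python
from collections import Counter

# Toy data file: one demographic and one verbatim question per respondent.
data = [
    {"age_group": "18-34", "verbatim": "great value and fast delivery"},
    {"age_group": "18-34", "verbatim": "delivery was slow"},
    {"age_group": "35+",   "verbatim": "great service"},
]

def words_by_group(data, independent, dependent):
    """Word counts of the verbatim (dependent) variable, split by the
    categories of the demographic (independent) variable."""
    table = {}
    for row in data:
        counts = table.setdefault(row[independent], Counter())
        counts.update(row[dependent].lower().split())
    return table

for group, counts in words_by_group(data, "age_group", "verbatim").items():
    print(group, counts.most_common(3))
```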

Once within WordStat there is a wide array of reports and ways to look at the words that respondents have given, in aggregate or case by case, against your dependent variable or by the total sample. There are charts, including dendrograms and very informative heatmaps that show the relationship between words, and you can adjust the proximity factor used when looking at words used in the vicinity of other words. Indeed, there is more than you are ever likely to use.

However, if you would like to follow a more conventional coding model, you need to enter your data via QDA Miner, which is equally comfortable dealing with records that contain only unstructured text such as focus group transcripts, or quant data with a mix of closed and open fields. QDA Miner has the concept of codeframes at the heart of it, and you can create multiple codeframes and code directly into them. You can search for similar items and then code them all together, and you can extend the codeframe as you go too. Any coding you make can be exported back to the data, for analysis in SimStat or other tools. Nothing too unconventional there.

Step into WordStat, though, and you shift from one era to the next, for the capabilities of WordStat are now at your disposal to build automatic machine learning-based verbatim text classifiers. In other words, WordStat will take your coded examples, identify all the words and combinations of words that characterise those examples, and build the algorithms to perform automated text categorisation according to your examples and your codeframe. Again, there are reports and charts available to you, to understand the extent to which your classifiers are accurate. Accuracy depends on many factors, but suffice to say here, the automatic classifiers can be as good as human coders, and on large datasets, will be much more consistent.
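
For a flavour of what learn-by-example categorisation involves, the sketch below trains a simple classifier on manually coded verbatims and applies it to uncoded ones. It uses scikit-learn purely as an illustrative stand-in; WordStat’s own algorithms are proprietary and dictionary-based, as described below.

```python
# An illustrative learn-by-example classifier; not WordStat's algorithm.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Manually coded examples: verbatim text and its codeframe category.
examples = [
    ("the staff were rude on the phone", "customer service"),
    ("took three weeks to arrive", "delivery"),
    ("very helpful and polite adviser", "customer service"),
    ("parcel never turned up", "delivery"),
]
texts, codes = zip(*examples)

# Learn which words characterise each category, then classify new verbatims.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, codes)

print(model.predict(["the adviser was very rude", "arrived two weeks late"]))
```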

Unlike some other classification models, WordStat is a dictionary-based system, and it works principally on words and the relationship of words to others, rather than on actual phrases. There is a separate module for creating reusable subject-specific dictionaries and the system comes with general dictionaries in about 15 languages. It also contains a range of tools to clean up texts and to overlook elementary spelling mistakes, with its fuzzy matching logic.

There are extensive academic debates about whether this is the best method for coding, but as everything it does is transparent, and can be interrogated, changed and improved, it is likely to be as good as any other method – and it is certainly better than throwing away 10,000 verbatim responses because nobody has the time or energy to look at them.

This is, however, an expert’s system. Coders and coding supervisors would probably struggle with it in its present form. Coding is not the only function that WordStat handles, and because it has to be accessed via one of two other programs, that adds another layer of complexity.

The disjunction between these three different programs is a slightly awkward one, though it is something users report they get used to. Neither does the system offer anything in the way of collaborative tools – though of course, if you are able to code data automatically, it does mean the work done by ten people really can be done by one. Don’t expect to be able to use this simply by reading the manuals – you would need to have some education in text categorisation basics from Provalis Research first. However, with a little effort, this tool could save weeks of work, and even allow you to start including, rather than avoiding, open-ended questions in your questionnaire design.

Customer Viewpoint: US Merit Systems Protection Board, Washington DC

John Ford is a Research Psychologist at the US Merit Systems Protection Board in Washington DC, where he uses WordStat to process large surveys of Federal employees which often contain vast amounts of open-ended data.

“In the last two large surveys we have done, we have had around 40,000 responses. We have been able to move away from asking the open-end at the end which says ‘do you have any comments’ to doing more targeted open-ended questions that ask about specific things.

“We asked federal employees to identify their most crucial training need and describe it in a few sentences. We used a framework of 27 competencies to classify them. QDA Miner was very flexible in helping us to decide what the framework was, settle on it, and very quickly classify the competencies.

“With WordStat I was able to build a predictive model that would duplicate the manual coders’ performance at about 83% accuracy, and by tweaking a couple of other things we were able to gain another three to four per cent in predictive accuracy.

“We also observed that some of the technical competencies are much easier to classify than some of the soft skills – which is not a surprising result – but we were able to look at the differences and make some decisions around this.

“When you look at a fully automated method, there is always going to be variation according to what kind of question you are working with. Using QDA Miner and WordStat together helps you understand what those differences are.

“With Wordstat you can start out with some raw text, and you can do some mining of it, you can create dictionaries, you can expand them with synonyms and build yourself a really good dictionary in very little time. If you work in example mode, you can mine the examples you need for your dictionary. The software will tell you the words and phrases that characterise those answers.

“To make the most of automated coding, you have to focus the questions more, and move people away from asking questions that are very general. In educating people about these, I have started to call then the ‘what did you do on your summer vacation’ question’. You can never anticipate where everyone is going to go. I have noticed there is also a role that the length of the response plays. Ideally, the question should be answerable in a sentence or two. You cannot do much with automated classification if the answer goes on for a couple of pages.”

A version of this review first appeared in Research, the magazine of the Market Research Society, September 2008, Issue 507

mTABview previewed

In Brief

What it does

Report automation software to convert mTAB cross-tab analysis directly to PowerPoint presentations containing tables and charts, with push and pull technology to allow for easy update for trackers or new releases of data.

Supplier

PAI & Gamma Associates

Our ratings

Ease of use: 4.5 out of 5

Compatibility with other software: 4.5 out of 5

Value for money: 5 out of 5

Cost

Available as an add-on to mTABweb at £500 per required user.

Pros

  • All-in-one process for analysing data and creating PowerPoint slides
  • When adding new waves, overlay tool detects any differences and helps you to reconcile them

Cons

  • Charts limited to histograms – no pie charts
  • Cannot create your own templates yet
  • Windows only

In Depth

In the Autumn, Gamma will be releasing mTABview, an add-on to mTABweb that will allow users to build PowerPoint slides directly from their mTAB databases. We were given an unreleased ‘beta-test’ version to review.

mTABview is also a web-based program, but is Windows only. Rather than adding more features directly into the mTABweb interface, mTABview works in its own browser window. Logging in, you start by selecting the template you wish to use, as you would with PowerPoint. It works in tandem with mTABweb, pulling its data directly from the same database structures, and driving the mTAB analysis engine. This gives it an advantage over other PowerPoint creators like E-Tabs and Rosetta Studio, which can only work with static outputs, as you can refine and reanalyse your data as you go. Although the tool produces PowerPoint, you do not work in PowerPoint – it produces the PPT file when you are finished. However, as you build your deck within mTABview, you work with completely life-like previews of the actual PowerPoint slides. It offers a range of different slide types to work with: a title slide, a section header slide and information slides organised into one or two columns.

You might start by setting up the title slide. The template will provide defaults for font, size, or colour, to save time and achieve consistency. Any of these options can be overridden easily and intuitively, but regrettably in version 1, there is no way to save your modifications as a new template or to create your own – you would need to get Gamma to set one up for you.

Next, you define the slides you wish to see, and choose a one- or two-column layout into which you add your table or chart. As soon as you pick either, mTABview hands you over to mTABweb to build your table. You are then back in the familiar Filofax view to assemble your table and run the analysis. Hit the button to run the table, and it will post the results into the mTABview window.

Anyone preparing presentations will know that much of the time is spent in picking the right subset of data to show – combining categories or omitting them, suppressing either frequencies or percentages, switching off total columns and rows and hiding ‘Others’ and ‘Don’t know’ categories. All of this is made pretty effortless by mTABview. You pick the figures to display by clicking on the relevant column and row headers and the selected portions are clearly highlighted. It also lets you choose whether you work with frequencies or percentages. If you need to combine categories, then you can post the analysis back to mTABweb. It cleverly stores the original query you used to generate the analysis, so at any time – even months later – you can go back and recreate the table from the source data. From within mTAB, you can easily perform any recodes or combinations you need, and it will always post the results straight back into the slide you are working on.

Charts are just as simple. It does not use the PowerPoint chart engine to build the charts but, ingeniously, creates native PowerPoint charts in a two-stage process. When you export your finished deck to PowerPoint, you can use a small mTABview plug-in to pull in proper, editable PowerPoint objects from the charts and tables created in mTABview.

Sadly, the current range of charts is very limited – there are histograms in various forms and that’s it. No pie charts, no correspondence maps (even though mTAB will produce them), and no automated means to highlight significant data. These omissions are likely to disappoint early adopters when the program is released, though Gamma should be able to address them subsequently, now the building blocks are in place.

There is also excellent support for trackers, as there is in mTAB. Here, it uses the concept of an overlay. You take a previously saved presentation, and overlay a new data source on it. The program rattles through and reports all of the items where there are changes, not just in the labels, but in the underlying data structures. You can then click through these and decide how to resolve them there and then. Updating PowerPoint decks manually can take as long as creating them in the first place – but use this, and the prospect of a tedious day or two of work and careful checking vanishes into an hour or less.  If Gamma can grow mTABview to offer more flexibility in its outputs, this software is potential killer app material.
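
Conceptually, the overlay step is a structural diff between the saved presentation’s source definitions and the new wave’s data, reported item by item for the user to resolve. A simplified, hypothetical sketch of that reconciliation:

```python
# Hypothetical variable definitions for two tracker waves: code -> label.
wave1 = {"Q1": {1: "Very satisfied", 2: "Satisfied", 3: "Dissatisfied"}}
wave2 = {"Q1": {1: "Very satisfied", 2: "Fairly satisfied", 3: "Dissatisfied",
                4: "Don't know"}}

def overlay_report(old, new):
    """List the differences a user would be asked to resolve:
    changed labels and added or removed codes, question by question."""
    issues = []
    for q in set(old) | set(new):
        old_codes, new_codes = old.get(q, {}), new.get(q, {})
        for code in sorted(set(old_codes) | set(new_codes)):
            if code not in new_codes:
                issues.append((q, code, "removed in new wave"))
            elif code not in old_codes:
                issues.append((q, code, "new in this wave"))
            elif old_codes[code] != new_codes[code]:
                issues.append((q, code, f"label changed: {old_codes[code]!r} -> {new_codes[code]!r}"))
    return issues

for issue in overlay_report(wave1, wave2):
    print(issue)
```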

A version of this review first appeared in Research, the magazine of the Market Research Society, August 2008, Issue 506

mTABweb reviewed

In Brief

What it does

Web-based analysis software for end users with extensive capabilities for handling trackers and syndicated research.

Supplier

PAI & Gamma Associates

Our ratings

Ease of use: 4.5 out of 5

Compatibility with other software: 2.5 out of 5

Value for money: 3.5 out of 5

Cost

Entry level cost around £5,000 for 5 users. Data conversion costs from £200 per project typically apply.

Pros

  • Packs in a lot of functionality yet is extremely simple and highly intuitive to learn and use
  • Effortless support for time series and grouping real numbers and dates sensibly for analysis
  • Cross-platform Windows and Mac

Cons

  • Set-up of surveys via Gamma or an affiliated DP bureau only
  • Interface a little tired and dated in places
  • Severely limited access control: can only set usage permissions at a survey level

In Depth

The latest of a growing number of desktop data analysis tools to find itself reincarnated on the web is mTAB. mTABweb is a surprisingly faithful reproduction of what was, until now, a Windows-only analysis program, widely used in the specialist field of syndicated research. The online version is Java based, so it supports a wide range of browsers and platforms including Mac and Linux. Being web-based, it makes the task of distributing software and data to end-users very much simpler, as it is all controlled centrally.

Just as with the desktop version, mTABweb’s interface hinges – quite literally – on a simulated Filofax organiser, with two pages open in front of you and a series of tab-dividers on either side that let you choose which pages you wish to show side-by-side. You select the variables to tabulate from the Questions tab and, by opening the Row tab or the Column tab, drag and drop them to build the table you want to view. There are other tabs for choosing filters, switching datasets or adding in a third level beyond the columns and rows. Percentage and respondent base selections are easily made from dropdown menus.

When your table is assembled, you click a button to generate the table. In our tests, with some realistically large datasets, the table appeared within a second or two. A line of buttons at the top gives you access to other options, one of which takes you back to the Filofax view. The table looks disarmingly like an Excel spreadsheet, which gives the output window a very intuitive feel for users. Other buttons open up a wide range of options for finessing the output, from omitting columns or rows, to selectively adding shading or borders.

The simplicity of the interface (which is starting to look a bit dated now) and the ease with which you can move from data to tables belie this program’s actual level of sophistication as a serious survey analysis tool. Look at any of the features or options, and you will find an intelligent set of capabilities on offer. If you need statistics, there are means, standard deviations, medians, Chi-Square, t- and Z-test scores. It will also automatically create top-2 and bottom-2 box scores for any rating scale type question without requiring any recoding.
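
Top-2 and bottom-2 box scores are simple to state precisely: the percentage of respondents choosing the two highest (or two lowest) points of a rating scale. A quick worked sketch of the calculation (not mTAB code):

```python
# Responses to a 5-point rating scale (5 = best).
ratings = [5, 4, 4, 3, 2, 5, 1, 4, 3, 5]

def box_score(ratings, points):
    """Percentage of respondents giving any of the specified scale points."""
    return 100.0 * sum(r in points for r in ratings) / len(ratings)

print("Top-2 box:", box_score(ratings, {4, 5}))     # 60.0
print("Bottom-2 box:", box_score(ratings, {1, 2}))  # 20.0
```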

Filters and new variables are also easily created, using graphical editors. There is a range of built-in options for cutting numbers into categories or ranges, as well as intelligent handling of date fields. An interview date can be converted into a profiling variable based on calendar months, or fiscal quarters with surprising ease, which is particularly handy on trackers.
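
The date handling boils down to banding an interview date into a reporting period. A short sketch of the fiscal-quarter case; the April year start is an assumption for illustration only:

```python
from datetime import date

def fiscal_quarter(d, fy_start_month=4):
    """Band an interview date into a fiscal quarter (assumed April year start)."""
    shifted = (d.month - fy_start_month) % 12
    fy = d.year if d.month >= fy_start_month else d.year - 1
    return f"FY{fy} Q{shifted // 3 + 1}"

print(fiscal_quarter(date(2008, 5, 14)))  # FY2008 Q1
print(fiscal_quarter(date(2008, 2, 3)))   # FY2007 Q4
```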

The support for trackers goes much further. You can combine different datasets, and there are tools for managing the differences between the waves of a tracker within the software.

Back in the analysis view, once you have viewed a table, you can save it, give it a name and come back to it later. You can also select a portion of it and turn it into a chart. There are a dozen chart styles to choose from, though the output styles are limited, compared to Excel or PowerPoint. However, you can also run correspondence analysis in the charting module and display these as maps. These too can be saved or pasted into presentations and reports.

mTABweb and mTABview are programs which appeal directly to the consumers of research data. Virtually any form of survey data can be transformed into an mTAB database (both programs use the same database format). The drawback for those that like to be self-sufficient is that Gamma does not distribute the conversion program: you have to send your data to Gamma for conversion. This inevitably adds delay and cost, though both may be modest. For those buying research from different research providers, however, there can be real advantage in being able to use one tool regardless of the fieldwork provider, and the conversion stage can provide a valuable independent quality check on the data being provided.

A version of this review first appeared in Research, the magazine of the Market Research Society, August 2008, Issue 506

Optimus reviewed

In Brief

What it does

Web-based suite of interview fraud detection measures for online surveys which can be applied to any online panel source, including panel providers or your own samples.

Supplier

Peanut Labs

Our ratings

Ease of use: 4.5 out of 5

Compatibility with other software: 4 out of 5

Value for money: 3.5 out of 5

Cost

From $2,500 to scan 5,000 completes, with discounts for higher volumes

Pros

  • Highly accurate detection of the most common types of internet fraud
  • User can determine the level of policing
  • Interfaces directly with Confirmit, Market Tools Ztelligence and SPSS Dimensions
  • Works with most browsers, Windows or Mac

Cons

  • Some programming involved if using an unsupported interviewing package
  • Does not detect all kinds of fraud, such as straightlining and ‘satisficing’
  • Rules are system-wide: cannot vary them by project or client
  • Fraud not detected during scheduled or unscheduled downtime of the Optimus server

In Depth

Optimus is a standalone software-as-a-service or ASP solution for tackling fraudulent respondents that will work with any sample source and, effectively, any internet interviewing system. It comes from Peanut Labs, an online sample provider, though the service is not in any way tied to their samples.

If you happen to use Confirmit, SPSS Dimensions or Ztelligence, then it is easy to set a command at the beginning and end of your interview to link your survey to the Optimus service. If you use other software, you will need to do a small amount of ad hoc web programming to link it in each time. Essentially, the link is achieved using a ‘redirect’, where the survey momentarily hands control over to the Optimus server, which then probes the respondent’s browser, gathers some information and then hands back to the server running the survey. None of this to-and-fro is visible to the respondent. Neither is any personally identifiable data involved. All that Optimus holds on your behalf is your respondent ID, so you can later identify problem respondents. It does not use email addresses or cookies.

The real strength of the software, and the single reason you would wish to use it, is the firm’s proprietary digital fingerprinting technology, through which it is able to build up a database of every individual PC it has encountered – for your samples and for anyone else’s too. It relies on the fact that any web browser will reveal a large amount of information about the configuration and resources available on the PC – and there is enough variation for this to be as good as being able to get the manufacturer’s serial number. None of this information is personally identifiable. But once logged against a panellist ID, Optimus is able to start pointing the finger at some respondents for various reasons.
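
Peanut Labs’ fingerprinting technology is proprietary, but the general technique is to hash a bundle of browser-reported attributes into a stable, non-personal identifier. A hypothetical illustration of the idea:

```python
import hashlib

def device_fingerprint(attrs):
    """Hash browser-reported configuration into a stable identifier.
    None of the inputs is personally identifiable on its own."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

browser_attrs = {
    "user_agent": "Mozilla/5.0 (Windows NT 5.1) ...",
    "screen": "1280x1024x32",
    "timezone": "-60",
    "plugins": "flash9,acrobat8,quicktime7",
    "fonts": "Arial,Verdana,Tahoma",
}

# The same PC should produce the same hash on every survey, so repeat
# visits can be linked to a respondent ID without storing personal data.
print(device_fingerprint(browser_attrs))
```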

Optimus collects two other factual measures: interview completion times and IP location. Speed is detected as the time taken to complete against the anticipated time, set by the researcher, and short interviews are logged as potential speeding violations.

The IP address of the ISP or company network the respondent uses to access the internet contains some useful high-level geographical information, which will pin the respondent down to a country, if not to a city. This can then be used or ignored as you choose. A panellist on a consumer survey in France is unlikely to be using an ISP in the Philippines, for example, though a business executive could be, if using the wireless network in their hotel bedroom, which could as easily be in Manila as Manchester.

From this raw data, Peanut Labs deduces six measures of suspect behaviour: duplicates, Geo-IP violators, hyperactive respondents, respondents belonging to multiple panels, speeding and a sin-bin category of ‘repeat offenders’, where the respondent has repeatedly transgressed in the past.
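
Putting the factual measures together, each complete can then be tested against rules of the kind described above. The thresholds and record fields in this sketch are purely illustrative, not Optimus’s actual rules engine:

```python
def flag_violations(record, seen_fingerprints, expected_minutes=10,
                    min_ratio=0.33, expected_country="FR", max_surveys_30d=10):
    """Return the suspect-behaviour flags raised by one completed interview."""
    flags = []
    if record["fingerprint"] in seen_fingerprints:
        flags.append("duplicate")
    if record["minutes_taken"] < expected_minutes * min_ratio:
        flags.append("speeding")
    if record["ip_country"] != expected_country:
        flags.append("geo-ip violation")
    if record["surveys_last_30d"] > max_surveys_30d:
        flags.append("hyperactive")
    return flags

record = {"fingerprint": "ab12cd34", "minutes_taken": 2,
          "ip_country": "PH", "surveys_last_30d": 14}
print(flag_violations(record, seen_fingerprints={"ab12cd34"}))
# ['duplicate', 'speeding', 'geo-ip violation', 'hyperactive']
```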

When you log into the system, you have options to register new surveys and also the different panel sources or companies you wish to use. The ‘controls’ area is where you define your own rules of what constitutes suspect behaviour. You can switch on or off any of the rules for your own samples, and also you have considerable flexibility over adjusting the threshold for each one. For example, for hyperactive respondents, you can set an absolute limit on how much multiple participation is acceptable to you, set a period, and choose whether you restrict this just to your projects or across all projects by all users of the service. It is a pity that you can only have one set of rules for all your projects: the rules for a B2B survey could be very different to what you allow in consumer research, for example.

There are two principal outputs from the system: reports and files containing the IDs of violators, determined by your rules, together with the type of violation recorded, either to update your own panel database or to seek replacements and refunds from sample providers.

A range of largely graphical reports is well presented. The main ones chart each type of violation every day, which you can filter by project or sample source. But reporting choices are limited, and there really need to be more options available – for example, to allow comparisons between different surveys or between different sample sources.

It is also worth considering the effect of scheduled maintenance on the service, which, though minimal, tends to be scheduled for prime-time Monday morning in Europe, and when it is down, your interviewing will be unprotected.

Ultimately, the success of the solution will depend on the volume of traffic passing through it, so it achieves the critical mass of fingerprinted PCs to be able to differentiate clearly between the responsible and the abusive survey-taker.

Customer Viewpoint : Kristin Luck, Decipher Inc

Decipher started to use Optimus in April of this year, to control sample quality when using sample from multiple sources on client projects.

“The system is designed to track respondents from any sample source. Where it really comes in handy is where you are using a multiple source sample approach and you want to track people who are trying to enter the survey multiple times, either from a single source or from multiple sources.”

“Some of the other solutions on the market are tied to a particular sample provider. What was appealing to us about Optimus was that it was a technology we could use even if we were not working with Peanut Labs for sample on a particular study.”

Decipher uses Optimus with its own in-house web interviewing solution. Although this means Decipher does not benefit from a direct program interface, as with some mainstream packages, linking a new survey in takes very little time. “We currently have to use a programmer to connect into Optimus,” Kristin explains, “and the first time it was about an hour’s work, but it is a pretty short learning curve, and we now have it down to about 15 minutes on a new project. In the future we will be able to implement without the use of a programmer.”

Another attraction was that the web-based interface can provide controlled access to the data to their clients, so that the entire quality control process is transparent to everyone. “It is really easy to use” says Kristin. By using the service, Decipher has identified and removed around 11% of the sample from multiple sources.

“We have found some panel providers where 21% or more of their sample has a problem, and we have others where it is 8% or less,” Kristin states. “We tend to see lower percentages from the companies that have been making a lot of noise about panel quality, and higher percentages from those that have been largely silent about this.”

Being able to specify their own rules to determine fraud is another advantage for Kristin, as Decipher tend not to exclude hyperactive respondents. However, Kristin would like more granularity in how rules are applied, so that a client or a project can have its own particular rules applied – currently this is not possible without a manual programming process.

A version of this review first appeared in Research, the magazine of the Market Research Society, July 2008, Issue 505