The latest news from the meaning blog

 

Qi from Manthan Services reviewed

In Brief

Qi

Manthan Services, India
Date of review: August 2012

What it does

Online platform for creating advanced dashboards from survey data, delivering to the end user an online environment for data exploration, review and collaboration.

Our ratings

Ease of use: 3 out of 5

Compatibility with other software: 4 out of 5

Value for money: 4 out of 5

Cost

SaaS with annual subscription based on volumes. Example cost $8,000 for up to 5 projects (approx. 5,000 cases and 250 variables) with discounts available for higher volumes.

Pros

  • Very comprehensive offering
  • Understands the specifics of market research data
  • Focus on collaboration and knowledge sharing
  • Takes care of any complex web- and database programming

Cons

  • Works on IE8 and IE9 but some formatting issues experienced on other browsers
  • Online documentation/help is fairly basic
  • Set-up requires some skill

In Depth

Dashboards tend to be among the most advanced and also the most treacherous of deliverables for research companies to provide. Tucked away at the end of an RFP, an innocuous-sounding request for “dashboard-style reporting for managers and team leaders across the enterprise, with drill-down capabilities for self-service problem solving” will almost certainly mean something vastly more sprawling and costly to provide than anyone imagined.

Dashboard delivery can be a trap for the unwary. Many an online dashboard has become the constantly-leaking plughole in the project budget through which profits keep draining away.

What makes them difficult to control is that they are usually tackled as custom developments, built with tools designed for corporate database systems and business intelligence (BI). Any custom development is both costly and unpredictable, and research companies often don’t have the in-house skills to manage a software development project effectively. Worse than that, survey data is difficult to handle with these BI tools. They aren’t designed to cope smoothly with monthly waves of data, with new questions added, or with weighting or percentages that need to sum to a constant respondent base. It’s not just a matter of counting the records returned from a SQL query.

Manthan Services, an India-based developer, noticed the opportunity to build on the dashboard and business information systems it was providing corporate customers and developed a research-friendly package called Qi (as in “chi” or energy). An online platform for creating advanced dashboards based on survey data, Qi delivers an online environment for data exploration, review and collaboration. It is a tool for building dashboards and an environment in which end-users can then access those dashboards, share, collaborate and even, if allowed to, create their own analyses and dashboards.

It is very smart software that aims to find the middle ground between typical BI dashboard tools like SAP Crystal Dashboard Design (the new name for Xcelsius) and Tableau, where the possibilities are infinite, given enough time and money, and the fairly restrictive kinds of online dashboard creation capabilities found in some of the more up-to-date MR analysis tools. If you really want to produce any kind of dashboard, or have a client that is highly prescriptive about presentation, then you may find Qi is just not flexible enough.

On the other hand, you may be able to use those limits as a useful constraint on what you provide to your client, as Qi is likely to do 99 percent of what they need – just not necessarily in the way they first thought of it. The real advantage of using this product is that you really can produce portals packed with data with relatively little effort and no programming expertise required. Furthermore, when you add new waves of data, all of the derivative reports will be updated too.

There are also built-in modules within the Qi environment to set up different kinds of dashboards or portals for certain applications. There is one for employee research, for example, and another for mystery shopping, with reporting at an individual case level. In addition, there are models provided for performance management, scorecarding and benchmarking. There is also a tool for building an organization hierarchy and this can then ensure each user is given the relevant view of the data when they log in. These can be tied to “group filters” which reflect the organization’s hierarchical structure in the actual data that get displayed.

There is an integrated alerts publisher and a user’s portals can be configured with an alerts area or tab. You then define the exceptions or thresholds where alerts should be generated. These are then recalculated for each individual user’s view of the data so they are only alerted on what is relevant to them.
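
As a minimal sketch of that idea – not Qi’s actual alert engine, and with the KPI names, thresholds and figures invented purely for illustration – each rule can simply be evaluated against the slice of data a given user is entitled to see, so two users can receive different alerts from the same study:

    # Hypothetical alert rules: KPI name, comparison, threshold
    rules = [
        ("overall_satisfaction", "below", 7.0),
        ("complaint_rate", "above", 0.05),
    ]

    def alerts_for_user(user_slice):
        """Evaluate each rule against the KPI values in one user's view of the data."""
        fired = []
        for kpi, direction, threshold in rules:
            value = user_slice[kpi]
            if (direction == "below" and value < threshold) or \
               (direction == "above" and value > threshold):
                fired.append(f"{kpi} is {value} ({direction} threshold of {threshold})")
        return fired

    # Two users see different slices of the same study, so they get different alerts
    print(alerts_for_user({"overall_satisfaction": 6.4, "complaint_rate": 0.03}))
    print(alerts_for_user({"overall_satisfaction": 8.1, "complaint_rate": 0.07}))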

Elegant concepts

There are some very elegant concepts at the heart of Qi which help to give your work shape. Everything you create is based on one of three “assets” based on data: charts, dashboards and tables. Dashboards come in a variety of shapes with placeholders for you to populate with charts or tables. There is also the concept of a “portlet,” which can house a report, an alert, a chart, favorites or messages. You can then arrange your portlets into pages or publish them on their own.

There is a reasonable though not especially exotic selection of charts – pretty much what you might find in Excel. There are, however, some nice multidimensional bubble charts.

Behind the scenes is a SQL Server database. It can be loaded with survey data using the survey metadata provided by either SPSS or Triple-S. If you want to work with other kinds of data – which is possible – you may, however, need help from Manthan Services to set up an appropriate database schema and to manage the database load process.
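
As a rough illustration of what such a load involves – this is not Manthan’s actual process or schema, just a hedged sketch of the general idea of letting the SPSS metadata drive the load, assuming the pyreadstat and SQLAlchemy libraries and an invented file name and connection string:

    import pyreadstat
    from sqlalchemy import create_engine

    # Read the case data and the survey metadata (labels, value labels, types) from an SPSS file
    cases, meta = pyreadstat.read_sav("wave_2012_08.sav")  # file name is illustrative only

    # The metadata describes what each variable is, which drives how it is stored and reported
    variable_labels = dict(zip(meta.column_names, meta.column_labels))
    value_labels = meta.variable_value_labels   # e.g. {"q1": {1.0: "Very satisfied", ...}}

    # Push the case data into a SQL Server database (connection string is invented)
    engine = create_engine(
        "mssql+pyodbc://user:password@dashboard-server/qi_study?driver=ODBC+Driver+17+for+SQL+Server"
    )
    cases.to_sql("cases", engine, if_exists="replace", index=False)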

A particular snare to be found in many RFPs asking for dashboards is the request for drill-down capabilities. There is often an assumption that deciding what to drill down to is a trivial, automatic choice. It is not – there is often more than one level of detail a user is likely to want to see when a particular KPI turns red or a trend chart shows a worrying dip. In Qi, you have two tools to satisfy this: a drill-down tool that lets the user trace the antecedents or components of any item of data and a drill-across tool which lets you move up and across in your hierarchy of reporting.

End users are provided with a lot of options out of the box to personalize their dashboards – they can create favorites, apply sticky notes, customize the view of the data, create their own portlets (if you allow this) and republish or share these with others. It can make for a highly collaborative environment both within the enterprise, and equally, between enterprise and research agency.

Overall, this is an industrial-strength platform for research companies to use to create portals and dashboard systems, with a dizzying array of functionality to pick from. The documentation could be made a lot more comprehensive – it is cryptic in places and tends to gloss over some quite advanced capabilities. I also experienced some issues viewing the portals I was given access to in any browser other than IE8 or IE9, though Manthan claims it works with different browsers and tablets.

Same set of tools

Max Zeller is head of the retail insights division for a large global research company in Europe. (His name has been changed at the request of his employer.) His division introduced a white-label version of Qi last year, which it presents to its customers as one of its own branded services. “Many of our clients today require online reporting,” he says. “As a global company we wanted to offer the same set of tools to all clients and also leverage on the one investment across all our companies and for most of our studies. We also wanted something that you could implement quite quickly locally, to create portals and dashboards, which did not require any programming or special skills to run it. Also we wanted a tool that both researchers and users could modify and even create their own views or dashboards for themselves.

“We looked at many different products but eventually chose one from Manthan Services. On all criteria they were on top and they understood market research, which was very important.”

Though the software is very extensive, with quite a lot to learn, he says, in practice his firm’s research and DP teams have found it well within their capabilities to deploy it. “The people in contact with the client – the project managers supported by DP staff – do the technical and setup work. You need someone in the team that champions the product who can translate the requirements of the client in terms of how the software is going to work. Then it can be more junior DP people who do the implementation, because it is all menu-driven – which gives them a new opportunity as well.”

Zeller estimates that setting up a new portal for a client demonstration, comprising 25 different charts and allowing different levels of access, can be achieved in a day or so by his local teams – a pace that was new for the company. “Before this we had to go through IT and the process was not just longer but so much more expensive. It would have taken several days to a week with what we had before. We need to be as lean, as quick and as close to the client as possible – and that’s exactly what we have here. You can give the specs from the client directly to the team – you don’t really have to translate the requirements into a technical specification and that is what saves the time and delay.”

Zeller strongly advises allowing adequate time to learn to use the software, however. “This is not something you can jump into in an hour – it does take two intensive days of training. But overall, I think the trade-off between functionality and ease of use is good. Once you are accustomed to the software it is easy and productive to use.”

He also stresses that everyone, especially those setting client expectations, must be aware that this is a packaged solution. In other words, not all client requests may be achievable. “[When speaking with clients] you need to be aware of what you can and can’t do. Even though it is very flexible, it is working to a standardized framework. There are many things you find you have not thought of first and when you try, you discover there is a way to do it. But it is not fully customizable so there are some areas you cannot change.”

However, in these cost-conscious times, some imposed limits can be an advantage, as Zeller points out: “It is very difficult for research companies to earn money from these portals if what you are doing is fully customized.”

Overall, he concludes, “We are quite happy with this software – and I am working with people who have a lot of experience. We think it is a good solution.”

A version of this review first appeared in Quirk’s Marketing Research Review, January 2013 (p. 28).

SurveySwipe reviewed

In Brief

SurveySwipe

Survey Analytics, USA
Date of review: February 2012

What it does

Survey platform for creating and administering mobile survey apps that participants or panel members download to their mobile devices in order to take part in surveys. Works across a range of mobile devices and integrates with panels.

Supplier

Survey Analytics

Our ratings

Ease of use: 3.5 out of 5
Compatibility with other software: 4 out of 5
Value for money: 4.5 out of 5

Cost

Standard ‘co-branded’ package for SurveySwipe mobile apps, panel, up to 30,000 members and one admin user: $8,000 annually plus $2,000 one-off set-up fee. Premium package, including custom apps, custom panel, ideation module, unlimited members and three admin users, from $23,000 annually, plus $2,000 one-off.

Pros

  • App-based surveys on Android, BlackBerry, iPhone, and Windows Phone
  • Location-triggered surveys are easy to do
  • Tightly integrated with easily-defined custom panels

Cons

  • Location-based surveys may drain participants’ phone batteries
  • Limited set of community engagement tools
  • Documentation and help files are inadequate

In Depth

Mobile research – or more specifically, self-completion surveys on participants’ smartphones – remains something of a conundrum for many professional market researchers. The opportunities it offers are tantalizing, with respondent-centric benefits such as convenience, immediacy, intimacy (as it is a more personal device) and even fun, balanced by some great benefits for the researcher – better engagement, quicker response, sharper and fuller insights, greater candor and less distortion from delayed recall – all of which have been reported by practitioners. The problem is that these gains have to be paid for through a ruthless commitment to brevity. This is the brave new world of the five-to-10-question survey and it is one that calls for a fundamental rethink not just of survey design but of the technology required to support these surveys.

Seattle-based Survey Analytics is one technology provider that has embraced mobile research with gusto. Its mobile offer is styled as a solution for creating mobile communities, comprising four complementary modules: deployment to mobile devices; a mobile panel and community; a mobile quali-quant ideation tool; and, of course, survey management, design and analysis.

There is always the dilemma with mobile research as to whether the mobile survey should use the smartphone or tablet’s built-in browser or run as an app that the participant first needs to download. Survey Analytics lets you choose because its SurveySwipe will let you deploy your survey as an app that participants can download on to any of the four main smartphone platforms – Android, iPhone, BlackBerry or Windows Phone 7 or above, or to the device’s browser, or mix modes between handheld and desktop/laptop devices. If that is not enough, yet another program in the suite, SurveyPocket, is designed for iPads for offline data collection or where a network connection is intermittent.

Location-based services

A great strength of SurveySwipe is its use of location-based services, which means a survey can be triggered by the participant reaching a particular place – which could be a city, a retail outlet within the city or even a particular aisle within that outlet. All you need is the latitude and longitude of each trigger location and to then set the size of the active zone, which can be as little as a few tens of feet.
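
To make the idea concrete in general terms, here is a minimal sketch of how a geofence check might work – this is not SurveySwipe’s actual implementation, and the trigger coordinates, radii and survey names are invented for the example. The logic amounts to comparing the device’s reported position with each trigger location:

    import math

    def distance_ft(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in feet (haversine formula)."""
        r_ft = 20_902_231  # mean Earth radius in feet
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r_ft * math.asin(math.sqrt(a))

    # Hypothetical trigger locations: (latitude, longitude, active-zone radius in feet, survey id)
    triggers = [
        (40.7590, -73.9845, 80, "store-entrance-survey"),
        (40.7614, -73.9776, 300, "neighbourhood-survey"),
    ]

    def surveys_to_cue(device_lat, device_lon):
        """Return the surveys whose active zone contains the device's current position."""
        return [survey for lat, lon, radius, survey in triggers
                if distance_ft(device_lat, device_lon, lat, lon) <= radius]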

For location triggering to work, the participant needs to have the app on his or her phone and to have agreed to allow the app to use location services. Then, when the participant strays into the defined zone, SurveySwipe will ping an alert to the phone with a message to say there is a survey to take and cue the relevant survey within the app. It seems to be as seamless and foolproof as it can be, both for participant and for survey creator.

The drawback at present with all location-based services is they eat up the battery life of your participants’ devices when switched to the more accurate GPS mode. Cell-based location, which is kinder on the power consumption, can only help to pinpoint locations to within a mile or two.

Sophisticated tool

The survey editor is not specific to mobile surveys and can be used to design conventional online surveys too – it is a sophisticated tool with a wide range of question types and options. Routing logic, randomizations, dynamic answer list masking, text piping and most other advanced survey features seem to be well catered for. You can also start your survey off in Word and then import it. It allows you to designate various question options by putting keywords in braces within your text. This is rather fiddly in practice and the import seems to be most helpful when used simply to import very long questions or simple, unformatted text. However, this lack of focus on mobile at the editing stage means you need to plan your pocket-sized surveys very carefully and select judiciously from a range of options that do not all apply to mobile surveys. Neither does there appear to be a quick, simple way to preview the survey as you are writing it to see how it is likely to appear on the target devices.

Though the editor does not make skip logic explicit, as it is largely hidden within the question where the branch occurs, there is an extremely useful diagram you can call up which reveals the logical structure of your questionnaire as a flowchart.

Overall, the survey editor is reasonably intuitive, though being Web-based, it can feel somewhat hesitant and lumpy to use, even on a fast connection. Context-sensitive help is available but it tends to be rather verbose on the obvious points and less forthcoming when more obscure information is sought. For more general questions, an FAQ approach has been taken. For such a vast and sprawling application, this is inadequate and does not do it justice. Professional users need their infrequently-asked questions answered too.

Work with a panel

The developers have rightly anticipated that most survey designers will be creating mobile surveys to work with a panel or research community. MicroPanel is provided as another integrated Web-browser-based module with the aim of making it easy to create your own custom panels or communities. It is certainly very easy to create a one-off panel or, by adding facilities for user-contributed microblogs and polls, to run the panel as a community. In the next release, Survey Analytics will be adding a “Badge Farm” that will allow community members to earn kudos and recognition for their contributions as well as, or as an alternative to, points.

Panels/communities can be co-branded within the standard pricing, with some limited artwork modifications to reflect your own identity, but you can pay extra for the creation of a fully-branded custom panel. The same approach applies to the survey app itself, which can be a generic SurveySwipe app within the standard price – which may mean sharing surveys with other researchers – or Survey Analytics will create and register a custom app with the various download sites (e.g., Apple’s App Store and the Android Market).

Researchers are increasingly finding that mobile, as a research channel, sits at the junction of quant and qual – especially with the ease by which participants can upload pictures or videos taken with their smartphone and then provide commentary or captions for these. This capability is well-supported in SurveySwipe and there is even some support for analyzing unstructured text within the Survey Analytics analysis module. But this can be taken further with another add-on module, IdeaScale, which is a co-creative idea generator that allows panelists to contribute ideas and vote on others’ ideas. The module works both as an app on the same range of mobile devices and in a Web browser.

At this point, beyond IdeaScale, no other advanced engagement tools are offered on this platform, though more are surely bound to follow.

SurveySwipe in action

Dhaval Shah is project manager for business applications and a member of the innovation team at Ipsos Loyalty in Parsippany, N.J. He has recently guided the company through the process of implementing SurveySwipe and related technologies from Survey Analytics. “Our clients were coming out with their own mobile apps to reach their customers, especially for their retail brands,” he says. “We knew we wanted to do something similar with our own Ipsos Loyalty mobile app to help clients engage directly with customers. We also wanted to bring together the power of two-way communication and the potential of location services to create a mobile app with strong community capabilities.

“We looked to the market to identify existing technology solutions available for this kind of research. We found most companies were behind the times on location services and they did not share our vision for creating an app that would lend itself to building strong communities. But the team at Survey Analytics shared our enthusiasm for combining two-way communication and location services and they were excited about working with us to build a robust solution.”

To build this solution for Ipsos Loyalty, Survey Analytics integrated SurveySwipe with IdeaScale. It also customized back-end analysis tools to extract and process the results. “We are able to host IdeaScale within the community and trigger location-based surveys. For example, we can trigger surveys when panel members enter a particular store. Members can then take a short survey, take pictures and post them as part of their feedback. This information helps generate meaningful analytics and also helps us provide real-time feedback to the store manager.”

Asked about the reliability of the location-based triggers, Shah reports, “So far we are seeing good accuracy to within 50 to 300 feet. But we can also expand the zone to 500 feet based on the requirements of the study.”

The response from participants has also been encouraging. Panel members can choose how they want to participate – using mobile phones or via a PC. “So far, we are getting great responses on the Ipsos Loyalty mobile phone app – it’s a very useful mechanism for giving prompt feedback and people find it easy to respond immediately,” he says.

“Not only is the software performance up to standard, it is also easy to use when setting up surveys and administering them. Survey design and setup is largely done by the client service teams at Ipsos Loyalty, with occasional consultation from the firm’s technology group. Training new users to write surveys and administer them is an easy process and rarely takes more than a day. The solution has proved to be very productive for some of our research at Ipsos Loyalty.

“We can get a survey out to panel members within the hour and get insightful and timely results back to the client within 24-48 hours. It is a great example of what we mean by ‘point-in-time’ research,” Shah says.

Each survey engagement is carefully restricted to five minutes or less. “We can ask a couple of open-ended questions or maybe up to 10 closed questions in this time,” he says, admitting that this does restrict the kinds of research the channel is suitable for. “But with this you get immediate feedback on whatever is happening at that time. We think of it as a ‘flash mob’ survey. Combined with traditional research, this feedback can prove to be an invaluable tool to measure customer loyalty while creating a rich dialogue with customers.”

A version of this review was first published in Quirk’s Marketing Research Review in February 2012, p. 24. Copyright © 2012 meaning ltd. Reproduction prohibited. All rights reserved. 

Q from Numbers reviewed

In Brief

Q

Numbers, Australia
Date of review: August 2010

What it does

Survey data analysis software for in-depth exploration, with built-in expert features that select the most appropriate analysis depending on context.

Our ratings

Ease of use: 4.5 out of 5

Compatibility with other software: 4 out of 5

Value for money: 4.5 out of 5

Cost

A single-user annual license of Q Professional is $1,499. Q Basic, with a reduced feature set, is $849 annually. Multi-user and volume discounts are available.

Pros

  • Easy graphical way to recode variables, merge categories and create filters
  • Makes applying sig tests and statistical models to research data very easy
  • Excellent range of user guides, help and online tutorials

Cons

  • Output styling a little lackluster
  • Limited support for tracking studies
  • USA support currently comes from Australia

In Depth

Q is a new data analysis program from Australia-based Numbers International that is designed to allow researchers to reveal hidden depths in their survey data using the power of statistical testing and modeling but without expecting researchers to become advanced statisticians. It’s perhaps fitting for software from Down Under that this tool can turn the process of analyzing market research data on its head. If it borrows from any school of data analysis, it is probably from the SPSS Base statistics approach, but in a way that is much more market research-savvy than the more general-purpose SPSS.

Q deals intelligently with every kind of survey question – single-coded, multi-coded, numeric value, even grids – in a consistent and even-handed way. Unlike the majority of conventional market research tabulation tools, it is not afraid of letting researchers – the primary audience for this software – get eyeball-close to the data: All the case data is only a mouse click away, on a dedicated data tab.

Q is offered as a desktop tool that works under Windows. You start by opening your Q file (with the file type “.Q”, which is the study’s database containing both case data and survey metadata) just as you would open a Word document or PowerPoint deck. The expectation is that your data provider or data processing department would set this up for you and even create some reports ready to work on. If you want to do it yourself, you can also create your own Q database by importing directly from SPSS SAV or SPS files or from Triple-S files. These will load in all the variables from your study and give them the right designations (single-coded, numeric, etc.), which are important to ensure Q knows the most appropriate models or tests to apply to each question. CSV import is also there as a fallback, though to get the best from the program you will then need to spend some time setting up appropriate question and category labels and ensuring the right question types are set.

Easy to learn and use

This software is very easy to learn and to use, though it is not necessarily intuitive at first sight – probably because there’s some unlearning to do for most experienced researchers. To make the point, Numbers provides not only a quick-start guide to take you through basic tables to choice modeling and latent class analysis in 60 pages, but also an instant-start guide which distills the basics into a single sheet. There is also integrated help and online training with show-me features that take over the software, select the right menu options and then undo it again, ready for you to do the work yourself.

What really differentiates Q from other survey data analysis tools is that it offers the researcher a blended approach to data analysis, combining straight crosstabs for primary reporting with advanced multivariate approaches to reveal hidden trends and connections in the data. So often, these connections remain undetected in most survey datasets simply because the researcher lacks either the tools or the time and budget to dig any deeper. Q can help move the task of analysis from superficial reporting of the numbers to telling the client something he or she really hadn’t realized – based on evidence and backed up with confidence scores.

For the more involved operations, such as multivariate mapping or latent class analysis, you always start from basic tables and analysis. It helps keep you grounded, letting you approach more advanced and possibly less familiar analytical techniques in a stepwise process, building on what you have already seen and verified.

One of the design principles Numbers applied to Q was to put users in front of the actual numbers as early as possible in the process. You always start in table view, looking at some of the data, but this view is highly dynamic and many of the options that you find tucked away in menus, pick-lists and property sheets in other analysis tools are achieved simply and elegantly by dragging and dropping. For example, just clicking, dragging and dropping will let you merge categories; create nets; and rename, reorder or even remove categories. Most functions or options are no more than a single context-sensitive click away.

Another difference is that the program works out the best way to analyze the questions you have selected – you use the same table option whether your question is numeric, categorical or grid. There are a lot of different options that help Q understand the kind of data it is dealing with, and from this it will also select the most appropriate significance tests to apply to the question. It makes appropriate adjustments according to whether the data are weighted or not, and also takes into account the effect of applying multiple significance tests that can otherwise lead to false positives.

In the tables, arrows and color-coding show not only which values are statistically significant but highlight the direction of the difference. As you generate tables and other outputs, these appear in a tree on the left. You can keep them or discard them and you can also organize them into subfolders. From this, you can create a package of a subset of the tables or models you have created. This creates a small e-mailable file which others can then view by downloading the free Q Reader.

The Reader can provide a very simple and inexpensive route to distributing interactive tables to clients. Numbers limits the options in the Q Reader version, but clients and co-workers can still slice and dice the data in ways that are relevant to them, without having direct access to the raw data.

Another impressive feature is its handling of conjoint analysis. Q lets you roll up an entire multilevel choice model into a single composite question, which you can then crosstabulate and filter with the same ease as a simple yes/no question. And with all of the built-in significance tests and other analytical techniques at your disposal, you can very quickly determine the real drivers in any choice-based model.

A little lackluster

Where the software is perhaps a little lackluster is in the quality and range of the options to finesse the outputs it provides. It makes little attempt to represent data graphically in histograms or pie charts. Charts are restricted to those associated with correspondence mapping or other such models. There is no integrated support for Excel or PowerPoint, either. If your point of reference is SPSS, then you may find its outputs a step up, but if you are coming to it from other market research data analysis tools, you may well be disappointed.

The full version of Q will also let you import and refresh your data, which provides some rudimentary support for trackers. However, the current version, although it contains very good support for time-series analysis, is poor at version control and reconciling differences in data formats between waves of the study. Perhaps surprisingly in these days of data integration, you can only have one study open at once, though Windows will let you have more than one instance of Q open.

Overall, these are relatively minor weaknesses in a highly-intelligent software product. They are largely indicative of a developer whose priorities lay in simplifying the most challenging problems, and in doing so allowing substance to triumph perhaps a little too much over style.

Gradual introduction

One company making extensive use of Q is Sweeney Research, also in Australia. Erik Heller is general manager of Sweeney’s Sydney office and an experienced researcher with a background in advanced quantitative methods. He has overseen the gradual introduction of Q as an analysis tool for researchers to use on data collected largely in-house from a broad range of telephone, online and in-person surveys. “One of the advantages of Q is that it is very easy for someone who is not that involved in the data analysis to go into the data and run some additional crosstabs,” Heller says.

“There is a huge efficiency gain if someone works up a hypothesis when writing his report and does not have to stop what he is doing, run downstairs or write an e-mail to the data analyst. It may not sound like much, but it is really quite disruptive and therefore desirable to streamline this from the business perspective. That is not something that is especially novel to Q, but where Q differentiates itself from other tools like SPSS is the extent to which it is intuitive and easy for people to immerse themselves in the data.”

Asked how long it might take a novice user to become familiar with the software, Heller says, “That really depends on the individual, but most people seem quite capable of using Q within a couple of hours: checking the data make sense, interrogating the tables and producing some additional basic tables.”

However, as Heller points out, this is just the start for most users in understanding what Q is capable of. “The statistical abilities of this program are the biggest reason I’ve been driving this internally to get people to use it. What’s really good about it is the sophistication of the tests and the foolproof way that they are applied. If you think about the purpose for which most of the traditional tests were originally designed, it is very different from the ways we analyze commercial research studies today.

“In a way, at the 95 percent confidence level, every twentieth test will be a false positive. And on a typical study, we run hundreds of these tests. Q uses testing approaches that aim to account for this and are therefore more appropriate for the huge number of tests that are done on a typical study. It means it is easy to cut across big data sets and see the important, statistically-significant effects. To do that in a software like SPSS is infinitely more cumbersome.”
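
A rough illustration of the arithmetic behind that remark – not Q’s actual correction method, which the review does not detail – is that with many tests run at the 95 percent confidence level, the chance of at least one false positive climbs quickly, and a simple Bonferroni-style adjustment is one conventional way to hold it down:

    alpha = 0.05          # per-test significance level (95 percent confidence)
    m = 200               # number of tests run on a typical study (illustrative figure)

    # Expected false positives if every null hypothesis happens to be true
    expected_false_positives = alpha * m                        # 10.0

    # Probability of at least one false positive across m independent tests
    family_wise_error = 1 - (1 - alpha) ** m                    # ~0.99996

    # Bonferroni adjustment: test each comparison at alpha / m instead
    adjusted_alpha = alpha / m                                  # 0.00025
    adjusted_family_wise_error = 1 - (1 - adjusted_alpha) ** m  # ~0.049

    print(expected_false_positives, family_wise_error, adjusted_alpha)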

Heller had also used Q to analyze conjoint studies, both before and after the conjoint analysis module was added to Q, in which an entire choice-based experiment is simply treated as a single composite question. “I’ve used it on one project so far,” he says. “It is brilliant in its simplicity. Most people are probably used to having a bit more control over the data and this way you are putting a bit more trust into the software. Clearly, the approach they have taken is that they want to make it as simple as possible. It is ideal for someone who does not have the time to get involved with all the statistics behind it. I think it is a great feature and one I will use a lot more going forward.”

Another plus for Sweeney Research is the ability to export tables and charts in Q for clients to view in the free Q Reader. “For them to be able to look at the tables and merge categories without manipulating the original data is very beneficial and we find they are very keen to use it. It does not overload them because the free version has fewer options; it just allows them to check the little queries they might have. It is all about simplifying things. There is so much information out there these days and there is no shortage of data – the aim is to provide it in the most usable format you can.”

A version of this review first appeared in Quirk’s Marketing Research Review, August 2010, p. 20. Copyright © 2012, meaning ltd. Reproduction prohibited. All rights reserved.

Revelation reviewed

In Brief

What it does

Online qualitative research environment for asynchronous or bulletin-board style depth interviewing, discussions and auto-ethnography, allowing research to take place over several days or even weeks. No special software or plug-in is required to participate.

Supplier

Revelation Inc., Portland, OR

Our ratings

Ease of use: 5 out of 5

Compatibility with other software: 4 out of 5

Value for money: 4 out of 5

Cost

Single project licenses start at $1,500 US.  Discounts available for larger volumes, annual licenses and longer projects (3 months+).  Helping Hands project support and translations are costed on a per-project basis.

Pros

  • Daily activities can be set up in advance and launched automatically
  • Participants can blog, view stimulus material and upload content using any browser
  • Integrated translation service for discussion guides and transcripts
  • Full transcripts easily exported at any time

Cons

  • Discussion groups lack versatility
  • Can be difficult to analyse quali-quant type pre-coded questions
  • Can be overwhelmed by emails from a busy board
  • Currently cannot personalise the welcome email text

In Depth

Online qualitative research does not need to be a pale imitation of conventional face-to-face groups and depths, and it is better not to try. Revelation is a piece of web-based software that provides a rich environment for qualitative researchers to design Internet-age research projects that play to the strengths of the medium. Respondents tend to welcome the convenience of being able to participate whenever they choose from the comfort of their own home or office. Researchers and clients may enjoy the same, but more importantly, by moving beyond the temporal and spatial constraints of the single-point-in-time group, they may find they get richer and more considered insights.

A Revelation group can take place over several days, or even weeks, with new questions or exercises being presented on a daily basis. Participants can be encouraged to contribute much more in the way of content, taking the focus group into the realms of auto-ethnography and co-creation. Revelation allows you to decide, for each question you ask, whether the responses are to be visible to others – before they respond (“influenced”), only after they respond (“uninfluenced”) or even withheld from all but the moderator and any client observers (“private”).

The way the software works is that, as a researcher, you log into your account on the Revelation server and create a series of activities for your respondents to take, building different tasks from a toolbox of stimuli. You can cue exercises to start on different days, and if you are planning some kind of diary activity, an exercise can be made to repeat. Each exercise is built very simply from a toolbox of components.

There’s everything you would expect, from open questions and closed, pre-coded questions, through places to provide descriptions, welcome texts and explanations, to cues for presenting any multimedia stimulus material you may wish to display and questions where you request an upload of a photo, document or even a video. You could simply present a series of open questions each day, or you could lead your participants through creating a daily blog illustrated with photos they have taken of their actual experiences. Either way, you can also probe away to your heart’s content, and even change the direction of the research part way through.

The software also lets you manage your participants, send out initial invites and get them to fill out a short profile survey, which you can customise. You can also import participant lists from Excel. Projects are divided into segments, which you can use in multiple ways: to divide your participants into smaller subgroups, assign them to different moderators or to assign different tasks to different subgroups.

The respondent interface presents some very simple tabbed areas to view – things to do, things already done and direct messages to or from the moderator. This and all the interfaces have a very pleasing “Facebook era” design which makes them pretty much self-explanatory.

As a moderator, it is very easy to track participation, view all the new content, add probes and send email reminders or messages to participants who don’t seem to be logging in. You too will get emails whenever anyone completes one of your tasks – if you are running several large groups, there can be a tidal wave of emails coming your way.

Revelation also allows you to conduct discussion groups online. Here, I found the software to be a little less flexible. It forces you to divide each discussion topic into a series of different tasks. You can follow up any point with a probe to that individual, but the tool currently lacks the ability to ask follow-up questions of the group as a whole, or simply to open out a probe to everyone without setting up an entirely new discussion task.

In the current version, there are some other minor niggles, such as not being able to personalise the welcome email, and it not being very easy to output and analyse or present the answers to closed questions. However, in the piece of research I used this for (a group of six IT professionals), I was astonished by the quality and clarity of the responses I got. Going online clearly cuts out the waffle, as respondents draft their responses carefully and consider what they are saying. The result is data that is relatively easy to analyse, with very little padding to cut away.

It would be wrong to consider this method a replacement for all groups or depths, but it does provide a credible alternative, and this software certainly encourages creativity in the actual research design.

Client perspective: Claire Dally, GfK Automotive, London

Claire Dally is a Research Manager at GfK Automotive in London, and has recently completed a multi-country study of over 190 people across 15 participant groups using Revelation.

She describes her experiences: “Revelation is very intuitive, easy to use and has a visually appealing interface.  We took advantage of the Helping Hands package, where we were given a dedicated member of the Revelation team to guide us through this multi-market project and set up some of the scripts. They were extremely supportive throughout the whole process.  We also used their translation service and found the quality of the translation was excellent.

“A transcript created during a Revelation session will often be much longer than that of a focus group, because respondents generally have more time to consider their answers and to write down their opinions in detail.  With online qual I find you need to recruit double the number of respondents you actually need.  There are a number of reasons why you lose people: sometimes they are not available during the whole fieldwork session; others may lose interest along the way.

“You do lack some of the rapport you gain face-to-face.  Participants log on at different times, so you don’t necessarily get the chance to probe on things straight away, and therefore you can lose momentum.  However, there are ways of building rapport; you need to invest some time in the start of the fieldwork warming up respondents, and making sure you keep on top of each person’s response, so that they feel someone is reading their comments.  Participants can also upload photographs or videos to illustrate their ideas.

“Clients find the software easy to use too.  They are able to log on as observers, watch comments being made in real time and suggest probes via the moderator.   This means that clients feel really involved in the moderation process and that all of their questions are being addressed.  However, Revelation is not suitable for all types of projects.  If your client wants participants to view confidential stimuli, you have no guarantee that any material tested is not screen grabbed, copied down or viewed by others.

“Participants find the experience very positive too; they are able to log on at a time suitable for them, and many have commented on how much they have enjoyed taking part.  You can be slightly less structured in your approach, allowing the topic guide to evolve over the fieldwork session, rather than relying on a pre-determined list of questions.  New questions can be loaded up on a daily basis if required.

“It is a cost-effective way of running online qual.  You can bring together participants from different locations without needing them to be in one place at the same time.  For less than the price of a UK focus group of eight respondents you could run a Revelation session of around 20 participants.  Our clients are becoming increasingly interested in online qual and we believe that this interest will only become stronger in the future.  We’ve been very happy with this software and what you can get out of it.”

A version of this review first appeared in Research, the magazine of the Market Research Society, April 2010, Issue 527



Marsc.net 1.1 reviewed

In Brief

What it does

Web-based panel and sample management tool, based on a subset of the most useful features in the desktop/server version of the MARSC sampling software.

Supplier

MARSC Ltd

Our ratings

Ease of use: 4 out of 5

Compatibility with other software: 4.5 out of 5

Value for money: 4.5 out of 5

Cost

Prices start from around £175 per month for a small-scale operation, plus hosting fees if required; £250-£300 per month for a mid-scale operator with multiple panels. Price determined by volumes.

Pros

  • Allows precise targeting of samples with extremely accurate incidence calculations and estimates
  • No limit on size, demographics or history kept
  • Can reuse sample jobs to draw fresh samples, create ‘favourites’  and define default sample templates
  • Provides resources for creating multiple panel members’ sites

Cons

  • Does not support non-interlocked (margin defined) quotas yet
  • Some reports rather cryptic and confusing
  • Windows and IE only for admin interface
  • Panellist portal module needs programming skills to configure

In Depth

MARSC has always been the heavyweight among the sampling and panel platforms, but now a new software-as-a-service version of the tool has emerged with the aim of lowering the barrier to entry. It makes it easier to get up and running and offers something of a break on the price too, which should appeal to the smaller-scale operator. Just how useful this is will depend on how sophisticated your requirements are, but if you need something with all the bells and whistles, the desktop and server-based MARSC is still available and being developed.

These days, most users tend to look on MARSC as a panel management tool, but it wasn’t always that way: MARSC started out as a sophisticated sampling tool to allow corporate research clients to draw balanced, quota-controlled samples directly from their own CRM databases. It was a program that was in the right place at the right time when researchers realised that the most efficient way to do research online was to have a panel of pre-screened, actively managed respondents whose pedigree was known, in terms of both demographics and past participation.

MARSC maintains its own database of contacts and therefore compiling and revealing all of this history is second nature to it. It is not an interviewing system – to use MARSC you will also need a survey data collection platform, though it is agnostic about which one, supporting SPSS Data Collection and Confirmit directly, and many others via Triple-S.

The new .net version rationalises the process of sample selection by setting out all the options across six tabbed screens. You start by stating who you do want – referring to any of the profile variables in your panel, overall targets and incidence estimates. In the filters tab, you set exclusions, which can also be on demographics or past participation, such as people who have already received a survey invitation to another study in the last two days, or who have been interviewed on a previous wave of the same study in the last year.

In the third tab, you choose the variables you want to pass across to the interviewing system, and in the fourth one you define your quotas – the quota targets you are aiming for. While you may also need to reinforce these with quotas applied in the interviewing module, a great advantage of MARSC is that, over time, it builds up a very detailed response history and uses this, for each sample segment you are selecting, to predict just how much sample you need to meet each quota target without over-inviting respondents or wasting sample.
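
As a rough sketch of the arithmetic involved – the figures and variable names here are invented for illustration, and this is not MARSC’s actual model, which draws on the stored response history – the number of invitations needed for a segment follows from the target number of completes, the expected response rate and the expected incidence:

    # Hypothetical quota cell: females aged 25-34 in the north region
    target_completes = 100
    expected_response_rate = 0.22   # share of invitees who start the survey (from history)
    expected_incidence = 0.65       # share of starters who qualify for this study

    # Invitations needed so that, on average, the quota target is met
    invites_needed = target_completes / (expected_response_rate * expected_incidence)

    print(round(invites_needed))    # about 699 invitations for 100 completes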

The fifth screen handles notifications – who the reports of the sample jobs get emailed to – and the last one, ‘properties’ (misnamed, in my view), covers not job options but the metadata for the job: the type of project, its name, the client and the exec responsible, and where you specify the reward points for participation.

All of this set-up is saved as a job, which you can give a name to and save in a folder structure that is always visible on the left of the screen. Once a job is saved it can be queued for execution, when it will draw a sample and mark them in the database as having been selected for a survey. You can also do a trial run, when it will simply report on what it would draw – a useful prior step in ensuring you do have enough sample to run with.

Saved jobs can also be stored as “favourites”– a nice web-like touch. Indeed, generally, the program has ported well to the web environment. However, the report displays could be improved as they present a mass of data and tend to use rather cryptic two-letter codes as column headers taken straight from the desktop version, whereas so much more is possible using dynamic HTML on the web.

The respondent portal is a vital part of the panel management tool, and MARSC provides a versatile portal module and set of tools for configuring it. The module is common to the desktop and .net versions. In this, participants can update their profile, review what surveys are available for them to take, review and redeem incentive points and so on. The tool is designed for those with developer skills, however – sadly, there is no simple point-and-click interface for creating or customising panellist portals, which the SaaS user is likely to expect and which other systems now provide. Those without an in-house web programmer are likely to need to buy some consulting services from MARSC to get set up.

MARSC say that it is their intention in time to move all of the desktop functionality over to the .net interface – at the moment, it lacks a handful of the more advanced features, the most serious being support for non-interlocked (margin-defined) quotas, where an iterative model is used as a direct counterpart to rim weighting on tables. MARSC.net won’t appeal to everyone yet, but it does go a long way to democratising efficient panel management by making it available to smaller operators without the expense of dedicated servers and teams of specialist programmers.

Client perspective: Robert Favini, Research Results, Massachusetts, USA

Research Results, a research company in Massachusetts, uses desktop MARSC to host a number of custom panels for clients in a range of consumer sectors, including the entertainment industry. Robert Favini, VP Client Services, discusses some of the changes that MARSC has enabled.

“Originally, we had had our own internal systems with screener surveys attached, which we used in a rudimentary way to pull sample, but it was a bit of a patchwork of things and wasn’t very sophisticated. About three years ago we started to look at what other people were doing, and we came across MARSC. Shortly before that, we had also decided to use SPSS as our main survey authoring environment, and as the two tools fit together really seamlessly, that was a big draw for us. As a result, our level of sophistication in what we can offer to our customers for managed panels has jumped up a lot.

“Our clients need something very robust because often they are doing quite a lot of analysis on the data that we provide. With our home-grown tools, the problem we were having was compatibility. This has a nice agnostic format that talks to everything.”

The full implementation took place over a two-month period, including converting all of the data, though setup and training required only a few days.

“It was relatively easy to get it up and running: we had a couple of training sessions and someone from MARSC came over to work with our in-house developers. They are in the industry so they are aware of what we are trying to do and use the same terminology as us.

“About the time MARSC came along, I think the industry started using sample in a different way. Gone were the days when people would happily take surveys – we were having to use sample much more carefully. What we were looking for with MARSC was something that would use sample wisely, and let us treat it as a precious resource.”

“What we found was that as we used it, we appreciated it more and more. We like its ability to gain intelligence within the panel. We find we can target sample really precisely, and the incidence calculations it provides – the ‘guesstimates’ of how much sample to broadcast – have been phenomenally accurate. The bottom line is we are not over-using sample: we are basically being very efficient, which is where we were looking to be.”

Robert welcomes MARSC’s strategy to migrate the product to the web, although the lack of non-interlocked quotas, along with some other advanced features, makes it unviable for his firm yet. “We’d still be interested in using a web-based product because of the portability it brings. Sometimes we have staff scattered all over the place and at the moment we have to use VPN to give them remote access. It would be useful for client users, but we too would like that bit of greater flexibility, to be able to work out of the office.”

A version of this review first appeared in Research, the magazine of the Market Research Society, March 2010, Issue 526.

Rosetta Studio 3.3 reviewed

In Brief

What it does

Report automation platform which takes tabular output from almost any MR cross-tab program and transforms it into well-crafted PowerPoint presentations. Works with existing slide decks or will generate new ones selectively, directly from tables.

Supplier

ATP Canada

Our ratings

Ease of use: 4 out of 5

Compatibility with other software: 5 out of 5

Value for money: 4 out of 5

Cost

Annual subscriptions from £5,000 for a single user, £5,850 for 2 named users, £10,500 for 10. Concurrent pricing options also available. Prices include training, support and updates.

Pros

  • Saves hours when converting tables into presentations
  • Greatly reduces the scope for errors when creating presentations
  • Shared templates reduce work and allow for a custom look for each client
  • Presentations can be updated with new data even after they have been modified in PowerPoint

Cons

  • No full preview without exporting to PowerPoint
  • No undo when editing or making changes
  • Windows only

In Depth

It’s a program that few profess to love, but PowerPoint shows little sign of loosening its iron grip on the boardroom presentation yet. Researchers often feel they are slaves to the god PowerPoint twice over: not just when presenting, but when preparing too, due to the sheer monotonous drudgery of creating each slide by hand – slowly patting figures and graphs into shape, often against the pressure of the clock.

Rosetta Studio automates this process, and does it in a very efficient and comprehensive way. As a tool, it’s been around for over five years now. We reviewed version 1 back in 2005, when it was simple and straightforward to use, but fell short of doing all you needed it to. There wasn’t enough control over style and layout to create client-ready presentations, which inevitably meant a lot of finessing of the final output within PowerPoint to get the look right – and left you vulnerable to repeating all that work if you needed to re-run the data.

Improvements since then, culminating in version 3.3, have removed all these limitations. The range of capabilities has, quite simply, exploded. Pretty much any tabular input is now supported, either through built-in importers or by using an upstream mapping utility. Within the tool, there is now fine control over every aspect of output, and a lot of attention has gone into providing ways to prevent anyone from ever having to do anything more than once.

As an example, colouring, shading and chart options are all created and controlled within Rosetta and are not limited to what Excel or PowerPoint can produce. Colours can be linked with products or even applied automatically from the labels in your data for brand names, company names, countries and so on. It eliminates any fight you may have with PowerPoint showing different colours from the ones you had hoped for because of the clumsy way that PowerPoint templates and document themes interact. Instead, all of this is controlled safely and predictably from within Rosetta, yet it is still standard PowerPoint that comes out of it.

A very powerful feature of the tool is the template. These take a little time to master, but templates have the advantage that, once defined, they can be used across the whole organisation and shared easily among teams. Using templates, it takes just seconds to build charts from tables. Templates not only apply the styling, but work out what to pick out of the raw table – e.g. just the percentages or the frequencies, and not the total columns and rows.

Not everyone needs to become a template guru. It is not hard to modify them, or adapt them – but if you want to ensure a consistent look, and to control this, they can also be password protected against unwanted or unintentional changes.

There are now three modes of operation: generate, populate and update. In version 1 only generate was possible – this limited Rosetta to the “push” method of production, where you effectively created then exported a PowerPoint from scratch. Generate is ideal for ad hoc work, but not much help for any kind of continuous or large scale production.

Populate mode introduces the alternative “pull” method, where you take an existing PowerPoint and link it to tables within Rosetta Studio by going through the PowerPoint document, stripping out the variable content and replacing it with a tag that will pull in the relevant data from Rosetta. Tags can pull rows, columns, individual cells, tables or sections of tables, and are to some extent smart – e.g. you can pull the first row, the second-from-last cell, or the column headed ‘females’. ATP’s implementation is delightfully simple, though it does take some effort to get your mind around the process. But it is ideal for large-scale report production on trackers, where many similar reports are produced on different cuts of the data, and the suite provides some batching and automation modules for power users that go beyond the desktop interface.
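
The underlying idea is easier to see in a toy example. This is emphatically not ATP’s tag syntax or engine – just a minimal sketch, assuming the python-pptx library, an invented {{...}} placeholder convention and invented file and tag names, of how a “pull” pass over an existing deck could substitute tagged text with values taken from a crosstab:

    from pptx import Presentation

    # Values pulled from a crosstab, keyed by invented tag names
    table_values = {
        "{{Q5.females.pct}}": "47%",
        "{{Q5.males.pct}}": "53%",
        "{{Q5.base}}": "n=1,204",
    }

    prs = Presentation("tracker_wave_template.pptx")  # file name is illustrative only

    # Walk every text run in the deck and swap tags for the current wave's figures
    for slide in prs.slides:
        for shape in slide.shapes:
            if not shape.has_text_frame:
                continue
            for paragraph in shape.text_frame.paragraphs:
                for run in paragraph.runs:
                    for tag, value in table_values.items():
                        if tag in run.text:
                            run.text = run.text.replace(tag, value)

    prs.save("tracker_wave_report.pptx")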

Even more ingenious is the new “update mode”, which achieves a kind of magic with PowerPoint. There is nothing to stop you from going in and making extensive changes to your presentation and still being able to update the PowerPoint with new figures – for example, because you had to remove a case and re-weight the data. Rosetta invisibly watermarks each chart it produces and uses these hidden tags to identify each value correctly and substitute the updated value. It’s very clever.

All this increased sophistication does come at a cost, however, as the price has nudged upwards, and so too has the investment you need to make in learning it. This is not something you can pick up for the first time and guess your way through. ATP Canada encourages all new users to take its two-day training course before getting started, by including it ‘free’ within the licence fee. The program is reasonably intuitive once you have got to grips with the fundamentals, though it’s a pity that you cannot get a better preview of what you are doing within the Rosetta interface – you only see it properly when you generate the PowerPoint. If you find yourself going along the wrong track, it does not provide any real undo capability either, which is a shame.

Producing presentations is complex and, without something like this, very time-consuming. Speaking to actual users, it’s clear that they not only find learning it an investment worth making, but soon wonder how they ever managed without it.

Customer viewpoint: Leger Marketing, Canada

Leger Marketing is a full service research company with nine offices across Canada and the United States. Here Christian Bourque, VP Research and Patrick Ryan, a Research Analyst at Leger Marketing, speak of their experiences with Rosetta Studio.

CB: “At the time we were looking for something, we felt most automation software was aimed at large trackers. About eighty per cent of our work is ad hoc. We needed something where the up-front time would be quite small. Rosetta Studio seemed to be better-designed for ad-hoc research, certainly at that time. “

PR: “Now, nearly every one of our quantitative projects runs through Rosetta at some stage.   We’d even use it for a four question omnibus study, where it is still faster than creating a report slide-by-slide. It means the bulk of our time is no longer taken up with the work of creating a report.  The focus is now on analysing and interpreting the data.”

CB: “Once you have spent a little bit of time devising your own templates, you will save 60 to 75 per cent of your time analysing the data. “

PR: “Something that would take four days or a week to put together is now taking us one or two days.   “

CB: “It not only saves the analyst time, but you also need to consider the quality review perspective. We used to do a line-by-line review. Now, because it is automated, this is no longer necessary. It’s a load off our shoulders. It means we can spend more time improving the quality of the insights.  We also find we can include more complex cuts of data in the report that we would not have had time to do, beforehand, like that little bit of extra multivariate analysis.”

PR: “Something we like a lot is the flexibility it gives you to try different things. You might be creating a set of graphs and you realise it could be better presented another way.  Now the hassle of changing your graphs or charts isn’t such a big deal. It takes you two seconds.

“It takes two days to learn, though the basics can be covered in a morning. It is fairly intuitive. We have a couple of reports where the analysis uses the tagging. The interface is the same but the logic is different. You have to get your mind around how to use and place tags, but once you have done one it is fine. It’s actually very simple.”

CB: “We like the flexibility it provides from a look and feel point of view. We can have different templates for different companies. Many of our clients have a corporate library of anything they generate, so when it circulates on the client side, it needs to look as if it’s their document.

“This is something we introduced to add value, not to reduce staffing. It’s the nature of our business that you constantly have to be faster than the year before. The demand on time is extreme. This is one of the ways we’ve been able to meet that challenge, while improving quality. And the other major demand is for better insights – this is one of the tools that allows us to deliver that.”

A version of this review first appeared in Research, the magazine of the Market Research Society, February 2010, Issue 524