The latest news from the meaning blog


Qi from Manthan Services reviewed

In Brief

Qi

Manthan Services, India
Date of review: August 2012

What it does

Online platform for creating advanced dashboards from survey data, delivering to the end user an online environment for data exploration, review and collaboration.

Our ratings

Ease of use: 3 out of 5

Compatibility with other software: 4 out of 5

Value for money: 4 out of 5

Cost

SaaS with annual subscription based on volumes. Example cost $8,000 for up to 5 projects (approx. 5,000 cases and 250 variables) with discounts available for higher volumes.

Pros

  • Very comprehensive offering
  • Understands the specifics of market research data
  • Focus on collaboration and knowledge sharing
  • Takes care of any complex web- and database programming

Cons

  • Works on IE8 and IE9 but some formatting issues experienced on other browsers
  • Online documentation/help is fairly basic
  • Set-up requires some skill

In Depth

Dashboards tend to be among the most advanced and also the most treacherous of deliverables for research companies to provide. Tucked away at the end of an RFP, an innocuous-sounding request for “dashboard-style reporting for managers and team leaders across the enterprise, with drill-down capabilities for self-service problem solving” will almost certainly mean something vastly more sprawling and costly to provide than anyone imagined.

Dashboard delivery can be a trap for the unwary. Many an online dashboard has become the constantly-leaking plughole in the project budget through which profits keep draining away.

What makes them difficult to control is that they are usually tackled as custom developments, built using tools designed for corporate database systems and business intelligence (BI). Any custom development is both costly and unpredictable, and research companies often don’t have the skills in-house to manage a software development project effectively. Worse than that, survey data is difficult to handle with these BI tools. They aren’t designed to cope smoothly with monthly waves of data, with new questions being added, or with weighting and percentages that need to add up on a constant respondent base. It’s not just a matter of counting the records returned from a SQL query.
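
To see why, consider the arithmetic. The sketch below (our own illustration in Python, not any vendor’s code; the data and field names are invented) contrasts a raw record count with the weighted, constant-base percentage a researcher actually needs:

    # Illustrative only: why survey percentages are not plain record counts.
    # A BI-style count treats every row equally; survey reporting applies a
    # respondent weight and percentages on a constant weighted base.
    respondents = [
        # (answered_yes, weight) -- weights align the sample to the population
        (True, 0.8), (True, 1.3), (False, 1.1), (False, 0.9), (True, 1.0),
    ]

    # BI-style: unweighted count of matching records (what SQL COUNT(*) gives)
    raw_count = sum(1 for yes, _ in respondents if yes)

    # Survey-style: weighted percentage on the constant weighted base
    weighted_base = sum(w for _, w in respondents)
    weighted_yes = sum(w for yes, w in respondents if yes)
    pct = 100.0 * weighted_yes / weighted_base

    print(f"raw count: {raw_count} of {len(respondents)}")       # 3 of 5
    print(f"weighted:  {pct:.1f}% of base {weighted_base:.1f}")  # 60.8% of 5.1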

Manthan Services, an India-based developer, noticed the opportunity to build on the dashboard and business information systems it was providing corporate customers and developed a research-friendly package called Qi (as in “chi” or energy). An online platform for creating advanced dashboards based on survey data, Qi delivers an online environment for data exploration, review and collaboration. It is a tool for building dashboards and an environment in which end-users can then access those dashboards, share, collaborate and even, if allowed to, create their own analyses and dashboards.

It is very smart software that aims to find the middle ground between typical BI dashboard tools like SAP Crystal Dashboard Design (the new name for Xcelsius) and Tableau, where the possibilities are infinite given enough time and money, and the fairly restrictive online dashboard creation capabilities found in some of the more up-to-date MR analysis tools. If you need the freedom to produce absolutely any kind of dashboard, or have a client that is highly prescriptive about presentation, then you may find Qi is just not flexible enough.

On the other hand, you may be able to use the product’s limits as a useful way to scope what you provide to your client, as it is likely to do 99 percent of what they need – just not necessarily in the way they first thought of it. The real advantage of using this product is that you really can produce portals packed with data with relatively little effort and no programming expertise required. Furthermore, when you add new waves of data, all the reports derived from them will be updated too.

There are also built-in modules within the Qi environment to set up different kinds of dashboards or portals for certain applications. There is one for employee research, for example, and another for mystery shopping, with reporting at an individual case level. In addition, there are models provided for performance management, scorecarding and benchmarking. There is also a tool for building an organization hierarchy and this can then ensure each user is given the relevant view of the data when they log in. These can be tied to “group filters” which reflect the organization’s hierarchical structure in the actual data that get displayed.

There is an integrated alerts publisher and a user’s portals can be configured with an alerts area or tab. You then define the exceptions or thresholds where alerts should be generated. These are then recalculated for each individual user’s view of the data so they are only alerted on what is relevant to them.
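
The mechanics are easy to picture. Here is a minimal sketch of the general pattern – a hypothetical group filter restricting each user to their slice of the data, with alert thresholds evaluated against that slice only. It is purely illustrative and implies nothing about Qi’s internals:

    # Illustrative only: per-user group filters plus threshold alerts.
    records = [
        {"region": "North", "satisfaction": 6.0},
        {"region": "North", "satisfaction": 5.0},
        {"region": "South", "satisfaction": 8.4},
    ]

    users = {"alice": "North", "bob": "South"}  # hypothetical group filters

    def alerts_for(user, threshold=6.0):
        # Restrict the data to the user's own slice of the hierarchy...
        region = users[user]
        scores = [r["satisfaction"] for r in records if r["region"] == region]
        mean = sum(scores) / len(scores)
        # ...then evaluate the alert threshold against that slice only.
        if mean < threshold:
            return [f"{region} satisfaction {mean:.1f} is below {threshold}"]
        return []

    print(alerts_for("alice"))  # ['North satisfaction 5.5 is below 6.0']
    print(alerts_for("bob"))    # []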

Elegant concepts

There are some very elegant concepts at the heart of Qi which help to give your work shape. Everything you create is one of three data-driven “assets”: charts, dashboards and tables. Dashboards come in a variety of shapes with placeholders for you to populate with charts or tables. There is also the concept of a “portlet,” which can house a report, an alert, a chart, favorites or messages. You can then arrange your portlets into pages or publish them on their own.

There is a reasonable though not especially exotic selection of charts – pretty much what you might find in Excel. There are, however, some nice multidimensional bubble charts.

Behind the scenes is a SQL Server database. It can be loaded with survey data using the survey metadata provided by either SPSS or Triple-S. If you want to work with other kinds of data – which is possible – you may need help from Manthan Services in setting up an appropriate database schema and with the database load process.

A particular snare to be found in many RFPs asking for dashboards is the request for drill-down capabilities. There is often an assumption that deciding what to drill down to is a trivial, automatic choice. It is not – there is often more than one level of detail a user is likely to want to see when a particular KPI turns red or a trend chart shows a worrying dip. In Qi, you have two tools to satisfy this: a drill-down tool that lets the user trace the antecedents or components of any item of data and a drill-across tool which lets you move up and across in your hierarchy of reporting.

End users are provided with a lot of options out of the box to personalize their dashboards – they can create favorites, apply sticky notes, customize the view of the data, create their own portlets (if you allow this) and republish or share these with others. It can make for a highly collaborative environment both within the enterprise, and equally, between enterprise and research agency.

Overall, this is an industrial-strength platform for research companies to use to create portals and dashboard systems, with a dizzying array of functionality to pick from. The documentation could be made a lot more comprehensive – it is cryptic in places and tends to gloss over some quite advanced capabilities. I also experienced some formatting issues viewing the portals I was given access to on any browser other than IE8 or IE9, though Manthan claims it works with a range of browsers and tablets.

Same set of tools

Max Zeller is head of the retail insights division for a large global research company in Europe. (His name has been changed at the request of his employer.) His division introduced a white-label version of Qi last year, which it presents to its customers as one of its own branded services. “Many of our clients today require online reporting,” he says. “As a global company we wanted to offer the same set of tools to all clients and also leverage on the one investment across all our companies and for most of our studies. We also wanted something that you could implement quite quickly locally, to create portals and dashboards, which did not require any programming or special skills to run it. Also we wanted a tool that both researchers and users could modify and even create their own views or dashboards for themselves.

“We looked at many different products but eventually chose one from Manthan Services. On all criteria they were on top and they understood market research, which was very important.”

Though the software is very extensive, with quite a lot to learn, he says, in practice his firm’s research and DP teams have found it well within their capabilities to deploy it. “The people in contact with the client – the project managers supported by DP staff – do the technical and setup work. You need someone in the team that champions the product who can translate the requirements of the client in terms of how the software is going to work. Then it can be more junior DP people who do the implementation, because it is all menu-driven – which gives them a new opportunity as well.”

Zeller estimates that setting up a new portal for a client demonstration, comprising 25 different charts and allowing different levels of access, can be achieved in a day or so by his local teams – a pace that was new for the company. “Before this we had to go through IT and the process was not just longer but so much more expensive. It would have taken several days to a week with what we had before. We need to be as lean, as quick and as close to the client as possible – and that’s exactly what we have here. You can give the specs from the client directly to the team – you don’t really have to translate the requirements into a technical specification and that is what saves the time and delay.”

Zeller strongly advises allowing adequate time to learn to use the software, however. “This is not something you can jump into in an hour – it does take two intensive days of training. But overall, I think the trade-off between functionality and ease of use is good. Once you are accustomed to the software it is easy and productive to use.”

He also stresses that everyone, especially those setting client expectations, must be aware that this is a packaged solution. In other words, not all client requests may be achievable. “[When speaking with clients] you need to be aware of what you can and can’t do. Even though it is very flexible, it is working to a standardized framework. There are many things you find you have not thought of first and when you try, you discover there is a way to do it. But it is not fully customizable so there are some areas you cannot change.”

However, in these cost-conscious times, some imposed limits can be an advantage, as Zeller points out: “It is very difficult for research companies to earn money from these portals if what you are doing is fully customized.”

Overall, he concludes, “We are quite happy with this software – and I am working with people who have a lot of experience. We think it is a good solution.”

A version of this review first appeared in Quirk’s Marketing Research Review, January 2013 (p. 28).

Q from Numbers reviewed

In Brief

Q

Numbers, Australia
Date of review: August 2010

What it does

Survey data analysis software for in-depth analysis, with built-in expert analysis features which will select the best analysis depending on context.

Our ratings

Ease of use: 4.5 out of 5

Compatibility with other software: 4 out of 5

Value for money: 4.5 out of 5

Cost

Annual license of Q Professional is $1,499 for a single user. Q Basic, with a reduced feature set, is $849 annually. Multi-user and volume discounts available.

Pros

  • Easy graphical way to recode variables, merge categories and create filters
  • Makes applying sig tests and statistical models to research data very easy
  • Excellent range of use guides, help and online tutorials

Cons

  • Output styling a little lackluster
  • Limited support for tracking studies
  • U.S. support currently comes from Australia

In Depth

Q is a new data analysis program from Australia-based Numbers International that is designed to allow researchers to reveal hidden depths in their survey data using the power of statistical testing and modeling but without expecting researchers to become advanced statisticians. It’s perhaps fitting for software from Down Under that this tool can turn the process of analyzing market research data on its head. If it borrows from any school of data analysis, it is probably from the SPSS Base statistics approach, but in a way that is much more market research-savvy than the more general-purpose SPSS.

Q deals intelligently with every kind of survey question – single-coded, multi-coded, numeric value, even grids – in a consistent and even-handed way. Unlike the majority of conventional market research tabulation tools, it is not afraid of letting researchers – the primary audience for this software – get eyeball-close to the data: All the case data is only a mouse click away, on a dedicated data tab.

Q is offered as a desktop tool that works under Windows. You start by opening your Q file (with the file type “.Q”, which is the study’s database containing both case data and survey metadata) just as you would open a Word document or PowerPoint deck. The expectation is that your data provider or data processing department would set this up for you and even create some reports ready to work on. If you want to do it yourself, you can also create your own Q database by importing directly from SPSS SAV or SPS files or from Triple-S files. These will load in all the variables from your study and give them the right designations (single-coded, numeric, etc.), which are important to ensure Q knows the most appropriate models or tests to apply to each question. CSV import is also there as a fallback, though to get the best from the program you will then need to spend some time setting up appropriate question and category labels and ensuring the right question types are set.
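
For the curious, Triple-S is an open XML standard for exchanging survey metadata, and it is precisely this kind of type designation that travels with the data. The fragment below is a simplified sketch – the element names follow the spirit of the standard, but the document is trimmed for illustration, so don’t treat it as schema-valid Triple-S:

    # Sketch: reading a question's type designation from Triple-S-style
    # metadata. The XML is simplified and illustrative, not schema-valid.
    import xml.etree.ElementTree as ET

    metadata = """
    <sss>
      <survey>
        <record ident="A">
          <variable ident="1" type="single">
            <name>Q1</name>
            <label>Gender of respondent</label>
            <values>
              <value code="1">Male</value>
              <value code="2">Female</value>
            </values>
          </variable>
        </record>
      </survey>
    </sss>
    """

    root = ET.fromstring(metadata)
    for var in root.iter("variable"):
        # 'single' means single-coded; a tool like Q uses this designation
        # to choose appropriate tables and significance tests
        print(var.findtext("name"), var.get("type"), "-", var.findtext("label"))
    # Q1 single - Gender of respondent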

Easy to learn and use

This software is very easy to learn and to use, though it is not necessarily intuitive at first sight – probably because there’s some unlearning to do for most experienced researchers. To make the point, Numbers provides not only a quick-start guide that takes you from basic tables through to choice modeling and latent class analysis in 60 pages, but also an instant-start guide which distills the basics into a single sheet. There is also integrated help and online training with show-me features that take over the software, select the right menu options and then undo them again, ready for you to do the work yourself.

What really differentiates Q from other survey data analysis tools is that it offers the researcher a blended approach to data analysis, combining straight crosstabs for primary reporting with advanced multivariate approaches to reveal hidden trends and connections in the data. So often, these connections remain undetected in most survey datasets simply because the researcher lacks either the tools or the time and budget to dig any deeper. Q can help move the task of analysis from superficial reporting of the numbers to telling the client something he or she really hadn’t realized – based on evidence and backed up with confidence scores.

For the more involved operations, such as multivariate mapping or latent class analysis, you always start from basic tables and analysis. It helps keep you grounded, letting you approach more advanced and possibly less familiar analytical techniques in a stepwise process, building on what you have already seen and verified.

One of the design principles Numbers applied to Q was to put users in front of the actual numbers as early as possible in the process. You always start in table view, looking at some of the data, but this view is highly dynamic and many of the options that you find tucked away in menus, pick-lists and property sheets in other analysis tools are achieved simply and elegantly by dragging and dropping. For example, just clicking, dragging and dropping will let you merge categories; create nets; and rename, reorder or even remove categories. Most functions or options are no more than a single context-sensitive click away.

Another difference is that the program works out the best way to analyze the questions you have selected – you use the same table option whether your question is numeric, categorical or grid. There are a lot of different options that help Q understand the kind of data it is dealing with, and from this it will also select the most appropriate significance tests to apply to the question. It makes appropriate adjustments according to whether the data are weighted or not, and also takes into account the effect of applying multiple significance tests that can otherwise lead to false positives.
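
The false-positive arithmetic behind this is easy to demonstrate. In the short simulation below – which illustrates the statistics in general, not Q’s particular adjustments – every null hypothesis is true, yet uncorrected testing at the 95 percent confidence level still “finds” differences; a simple Bonferroni correction removes them:

    # Illustrative only: false positives from repeated significance tests.
    import random

    random.seed(1)
    n_tests, alpha = 200, 0.05

    # Simulate 200 tests where the null hypothesis is TRUE in every case:
    # under the null, p-values are uniformly distributed on [0, 1].
    p_values = [random.random() for _ in range(n_tests)]

    naive = sum(p < alpha for p in p_values)  # expect ~200 * 0.05 = 10
    corrected = sum(p < alpha / n_tests for p in p_values)  # expect ~0

    print(f"'significant' at alpha={alpha}: {naive} of {n_tests} (all spurious)")
    print(f"after Bonferroni correction: {corrected}")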

In the tables, arrows and color-coding show not only which values are statistically significant but highlight the direction of the difference. As you generate tables and other outputs, these appear in a tree on the left. You can keep them or discard them and you can also organize them into subfolders. From this, you can create a package of a subset of the tables or models you have created. This creates a small e-mailable file which others can then view by downloading the free Q Reader.

The Reader can provide a very simple and inexpensive route to distributing interactive tables to clients. Numbers limits the options in the Q Reader version, but clients and co-workers can still slice and dice the data in ways that are relevant to them, while having no direct recourse to the raw data.

Another impressive feature is its handling of conjoint analysis. Q lets you roll up an entire multilevel choice model into a single composite question, which you can then crosstabulate and filter with the same ease as a simple yes/no question. And with all of the built-in significance tests and other analytical techniques at your disposal, you can very quickly determine the real drivers in any choice-based model.

A little lackluster

Where the software is perhaps a little lackluster is in the quality and range of the options to finesse the outputs it provides. It makes little attempt to represent data graphically in histograms or pie charts. Charts are restricted to those associated with correspondence mapping or other such models. There is no integrated support for Excel or PowerPoint, either. If your point of reference is SPSS, then you may find its outputs a step up, but if you are coming to it from other market research data analysis tools, you may well be disappointed.

The full version of Q will also let you import and refresh your data, which provides some rudimentary support for trackers. However, the current version, although it contains very good support for time-series analysis, is poor at version control and reconciling differences in data formats between waves of the study. Perhaps surprisingly in these days of data integration, you can only have one study open at once, though Windows will let you have more than one instance of Q open.

Overall, these are relatively minor weaknesses in a highly-intelligent software product. They are largely indicative of a developer whose priorities lay in simplifying the most challenging problems, and in doing so allowing substance to triumph perhaps a little too much over style.

Gradual introduction

One company making extensive use of Q is Sweeney Research, also in Australia. Erik Heller is general manager of Sweeney’s Sydney office and an experienced researcher with a background in advanced quantitative methods. He has overseen the gradual introduction of Q as an analysis tool for researchers to use on data collected largely in-house from a broad range of telephone, online and in-person surveys. “One of the advantages of Q is that it is very easy for someone who is not that involved in the data analysis to go into the data and run some additional crosstabs,” Heller says.

“There is a huge efficiency gain if someone works up a hypothesis when writing his report and does not have to stop what he is doing, run downstairs or write an e-mail to the data analyst. It may not sound like much, but it is really quite disruptive and therefore desirable to streamline this from the business perspective. That is not something that is especially novel to Q, but where Q differentiates itself from other tools like SPSS is the extent to which it is intuitive and easy for people to immerse themselves in the data.”

Asked how long it might take a novice user to become familiar with the software, Heller says, “That really depends on the individual, but most people seem quite capable of using Q within a couple of hours: checking the data make sense, interrogating the tables and producing some additional basic tables.”

However, as Heller points out, this is just the start for most users in understanding what Q is capable of. “The statistical abilities of this program are the biggest reason I’ve been driving this internally to get people to use it. What’s really good about it is the sophistication of the tests and the foolproof way that they are applied. If you think about the purpose for which most of the traditional tests were originally designed, it is very different from the ways we analyze commercial research studies today.

“In a way, at the 95 percent confidence level, every twentieth test will be a false positive. And on a typical study, we run hundreds of these tests. Q uses testing approaches that aim to account for this and are therefore more appropriate for the huge number of tests that are done on a typical study. It means it is easy to cut across big data sets and see the important, statistically-significant effects. To do that in a software like SPSS is infinitely more cumbersome.”

Heller had also used Q to analyze conjoint studies, both before and after the conjoint analysis module was added to Q, in which an entire choice-based experiment is simply treated as a single composite question. “I’ve used it on one project so far,” he says. “It is brilliant in its simplicity. Most people are probably used to having a bit more control over the data and this way you are putting a bit more trust into the software. Clearly, the approach they have taken is that they want to make it as simple as possible. It is ideal for someone who does not have the time to get involved with all the statistics behind it. I think it is a great feature and one I will use a lot more going forward.”

Another plus for Sweeney Research is the ability to export tables and charts in Q for clients to view in the free Q Reader. “For them to be able to look at the tables and merge categories without manipulating the original data is very beneficial and we find they are very keen to use it. It does not overload them because the free version has fewer options; it just allows them to check the little queries they might have. It is all about simplifying things. There is so much information out there these days and there is no shortage of data – the aim is to provide it in the most usable format you can.”

A version of this review first appeared in Quirk’s Marketing Research Review, August 2010, p. 20. Copyright © 2012, meaning ltd. Reproduction prohibited. All rights reserved.

Ruby from Red Centre Software reviewed

In Brief

What it does

Modern, GUI-driven cross-tabulation, analysis and charting suite for market research data aimed at the tabulation specialist. Capable of handling large and complex data sets, trackers and other ‘difficult’ kinds of research project.

Supplier

Red Centre Software, Australia

Our ratings

Ease of use: 3 out of 5

Compatibility with other software: 5 out of 5

Value for money: 4.5 out of 5

Cost

Full version $4,800 (allows set-up); additional analyst versions $2,400. Annual costs; volume discounts available.

Pros

  • Cross-tabs and charts of every kind from large or complex datasets, and so much more
  • Quick and efficient for the DP specialist to use, with a choice of GUI access and scripting
  • Push-pull integration with Excel and PowerPoint for report preparation and automation
  • Superb proprietary charting to visualize MR data more effectively than in Excel or PowerPoint
  • Excellent support for managing trackers

Cons

  • Interface is bewildering to beginners: a steep learning curve
  • No simple web-browser interface for end users or to provide clients with portal access to studies

In Depth

We always try to present something new in these software reviews, but this time, we think we are onto something that could break the mold: a new tabulation software package from an Australian producer, Red Centre Software, that leaves most of the existing choices looking decidedly dated. It’s refreshing, because for a while, most efforts in market research software seem to have gone into improving data collection and making it work across an ever-broadening spectrum of research channels. Innovation at the back-end seems to have focused on presentation, and has often left research companies and data processing operations with a mish-mash of technology and a few lash-ups along the way to transform survey data into the range of deliverables that research clients expect today.

Ruby could easily be mistaken for yet another end-user tabulation tool like Confirmit’s Pulsar Web or SPSS’s Desktop Reporter, with its GUI interface and drag-and-drop menus. The reality is that it is a fully-fledged tabulation and reporting system aimed squarely at the data processing professional. If you are looking for a Quantum replacement, this program deserves a test-drive.

As far as I could see, there were no limits on the data you could use. It will import data from most MR data formats, including Quantum, Triple-S and SPSS. Internally, it works with flat ASCII files, but it is blisteringly fast, even when handling massive files. It will handle hierarchical data of any complexity, and offers the tools to analyse multi-level data throughout, which is something modern analysis tools often ignore.
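
Multi-level data deserves a concrete picture. In the toy example below – a generic illustration, not Ruby’s data model – households contain varying numbers of person records, and the same question has a different answer depending on the level at which you ask it:

    # Illustrative only: hierarchical survey data, households with a
    # variable number of person records, where the analysis level matters.
    households = [
        {"region": "North", "people": [{"age": 34}, {"age": 8}]},
        {"region": "North", "people": [{"age": 60}]},
        {"region": "South", "people": [{"age": 51}, {"age": 49}, {"age": 12}]},
    ]

    # "How many in the North?" has one answer per level of the hierarchy:
    north_households = sum(1 for h in households if h["region"] == "North")
    north_people = sum(len(h["people"]) for h in households
                       if h["region"] == "North")

    print(north_households, north_people)  # 2 3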

It is equally at home dealing with textual data. The producers provided me with a series of charts and tables they had produced from analyzing Emily Brontë’s Wuthering Heights by treating the text as a data file. The same could be done for blogs, RSS feeds and the mass of other Web 2.0 content that many researchers feel is still beyond their grasp.
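
The principle is nothing more exotic than tabulating words as though they were answer codes. A generic sketch (not Red Centre’s pipeline), using a line adapted from the novel:

    # Illustrative only: treating running text as a data file by
    # tabulating word frequencies.
    from collections import Counter
    import re

    text = ("It was not the thorn bending to the honeysuckles, "
            "but the honeysuckles embracing the thorn.")

    words = re.findall(r"[a-z']+", text.lower())
    for word, count in Counter(words).most_common(3):
        print(word, count)
    # the 4
    # thorn 2
    # honeysuckles 2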

More conventionally, Ruby contains a broad range of tools specifically for handling trackers, so that you are not left reconciling by hand the differences between waves caused by variations in the question set and answer lists.

Ruby is a very intelligent tool to use when it comes to processing the data. The data in the tables reported or charted in MR have often gone through a long chain of transformations, and in the old tools, there could be yards of ‘spaghetti code’ supporting these transformations. Trying to work out why a particular row on a table is showing zeroes when it shouldn’t do can take an age in the old tools, as you trace back through this tangle of code, but Ruby will help you track back through the chain of definitions in seconds, and even let you see the values as you go. It is the kind of diagnostic tool that DP professionals deserve but rarely get.

In Ruby, you will probably make most of these data combinations and transformations visually, though it does also allow you to write your own syntax, or export the syntax, fiddle with it, and import it again (the combination that DP experts often find gives them the best of both worlds). However, Ruby keeps track of the provenance of every variable, and at any point, you can click on a variable and see exactly where the data came from, and even see the values at each stage.
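
The underlying idea is simple to sketch: every derived variable records the inputs and operation it came from, so the chain can be walked back on demand. The toy code below illustrates the concept only and says nothing about Ruby’s actual internals:

    # Illustrative only: tracing a derived variable back to its sources.
    from dataclasses import dataclass, field

    @dataclass
    class Variable:
        name: str
        operation: str = "source"  # how this variable was created
        inputs: list = field(default_factory=list)  # what it was built from

        def trace(self, depth=0):
            # Walk the chain of definitions, printing each ancestor
            print("  " * depth + f"{self.name} ({self.operation})")
            for parent in self.inputs:
                parent.trace(depth + 1)

    q1 = Variable("Q1_raw")
    q2 = Variable("Q2_raw")
    net = Variable("Q1_net", "merge categories 1-3", [q1])
    index = Variable("SatIndex", "weighted mean", [net, q2])

    index.trace()
    # SatIndex (weighted mean)
    #   Q1_net (merge categories 1-3)
    #     Q1_raw (source)
    #   Q2_raw (source)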

The range of options for tabulation and data processing is immense, with a wide variety of expressions that can be used to manipulate your data or the columns and rows in tables. There is complete flexibility over percentaging and indexing values off other values, or basing one table on another, so it is great for producing all of those really difficult tables where every line seems to have a different definition.

With charting, Ruby gives you the choice of using its own proprietary charting engine, or pushing the data out to PowerPoint or Excel charts. The native Ruby charts are a treat to work with, as the developers seem to have gone out of their way to redress the inadequacies of Excel and PowerPoint charts. For time-series charts, concepts such as smoothing and rolling periods are built-in. You can add trend lines and arbitrary annotations very easily. Charts can be astonishingly complex and can contain thousands of data points or periods, if you have the data. Yet it will always present the data clearly and without labels or points clashing, as so often happens in Excel.
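
Rolling periods are easy to state precisely: each plotted point becomes an average over the trailing n waves. A quick generic illustration (not Ruby’s charting engine):

    # Illustrative only: a 3-period rolling mean for a time-series chart.
    def rolling_mean(series, window=3):
        """Average each point over the trailing `window` periods."""
        out = []
        for i in range(len(series)):
            chunk = series[max(0, i - window + 1) : i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    waves = [52, 48, 55, 61, 47, 58]  # e.g. monthly KPI scores
    print([round(x, 1) for x in rolling_mean(waves)])
    # [52.0, 50.0, 51.7, 54.7, 54.3, 55.3]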

Excel and PowerPoint charts are also dynamic, and the Ruby data source will be embedded in the chart, so that the charts can be refreshed and updated, if the underlying data changes.

Amy Lee is DP Manager at Inside Story, a market research and business insights consultancy based in Sydney, Australia, where she has been using Ruby for two years, alongside five other researchers and analysts. Ruby is used to analyze custom quantitative projects and a number of large-scale trackers.

Asked if the program really did allow a DP analyst to do everything they needed to, Amy responds: “We were able to move to Ruby a couple of years ago, and it is now the main program we use, because it can do everything we need to do. I find it is an extremely powerful and flexible tool. Whenever I need to do anything, I always feel I can do it with Ruby. Other tools can be quite restrictive, but Ruby is very powerful and completely flexible.”

Amy considered the program went beyond what more traditional DP cross-tab tools allowed her. She observes: “Compared with other programs I have used, Ruby allows me to filter and drill down into the data much more than I could with them. It’s especially good at exporting live charts and tables into documents.

“Once they are in PowerPoint or Word, trend charts can be opened up and adjusted as necessary. When it is a live chart, it means you can update the data, and instead of having to go back to Ruby, open it up, find the chart and then read the data, you can just double-click it inside PowerPoint and see all the figures change. And there is even an undo feature, which is good for any unintentional errors.”

Amy freely admits that this is not a program you can feel your way into using, without having some training, and allowing some time to get to understand it. “It is really designed for a technical DP person,” she explains. “If you have someone with several years’ experience of another program they will have no problem picking this up as everything will be very familiar to them. But we also had a client who wanted to use it, someone with a research rather than a DP background, and they found it a bit overwhelming, because it can do so much, and it is not that simple. It looks complex, but once you get the hang of it, you can do what you need very quickly.”

Among the other distinguishing features Amy points to are the speed of the software, which is very fast at processing large amounts of data and producing large numbers of tables and charts; its built-in handling of time-series, allowing you to combine or suppress periods very easily; and the range of charts offered, in particular the perceptual maps.

Some of the research companies I speak with are becoming uneasy that the legacy data processing tools they depend on have fallen so far behind and are, in some cases, dead products. They have endured because the GUI-based ‘replacements’ at the back of the more modern data collection tools just don’t cover the breadth of functionality that is needed. You get breadth and depth with Ruby – even if the sheer range of functionality it offers is bewildering to the newcomer.

A version of this review first appeared in Quirk’s Marketing Research Review, August 2009.

A frugal future is no bad thing

An interesting lunch with B, who is VP of a research software provider, visiting London. “So, what are the changes you see in research software?” he asks, and I find myself answering the question at some length on the changes I don’t see happening, and how unambitious research companies are when it comes to using technology to move the research process on. We both agree that too many research firms are timid with their research software decisions: perhaps too many vested interests in retaining the status quo.

We have both been in the industry a long time, but we are both still surprised by how uninterested many rank-and-file researchers are in the data. So many seem content to allow others to push the buttons, rather than get their hands dirty with the actual data. We swap stories of surveys we have seen designed for the web which are just paper forms, with no understanding of the whole context of doing research online. Again, it is the technicians who are left to bridge the gap between intention and action. We wonder whether this goes some way to explain the ongoing reluctance of research companies to automate through better use of technology – so many of the decision makers probably have only a hazy grasp of the actual wastefulness of many of the processes which are still commonplace. We think of the reality of coding, of cross-tab production, of chart preparation. I mention the reluctance we uncovered in many CATI centres still to introduce predictive dialling technology, where there can easily be a six-month ROI and a hike in profits thereafter (Confirmit MR Software survey).

I think back to the Online Research Conference the previous week, the subtitle of which was “cheaper, better, faster” in reference to what the research industry perceives as being the drivers from their clients (and the hope that the conference speakers might be able to provide some survival tips and thereby pull in an audience). The event was extremely well attended, yet speakers and questioners repeatedly challenged the placing of “cheaper” in the title. “Cheaper” should not be the goal, they asserted, even though there was constant pressure to bring down costs. “Better and more efficient” is the public ambition of the industry, according to the conference attendees.

But those at the conference are clearly not a representative sample of the research industry as a whole. Those fixed on cost don’t do conferences. Those fixed on cost seem content to keep cranking the same handle – squeezing out more product off the same tired production line. It was not a strategy that resulted in success for much of the automotive industry – it proved disastrous for GM, for instance.

It is not a perfect analogy. The automobile industry is not greatly threatened by customers going and building their own cars. Research is expensive and DIY survey tools are cheap, which makes professional research vulnerable at times like these. We do need to talk about cost, and we need to look to better technology to reduce cost by changing the process and making research inherently frugal. The problem is there are too many gas-guzzling SUVs being offered by the research industry at a time when customers are seeking more frugal hybrids. And what is a threat to some, is always an opportunity to others, especially those that get tricky with the technology.

A disappearing trick: Dimensions and SPSS

It’s ten years since SPSS announced its vision for the future of research software in 1999: its ‘Vision 2000’. Its dedicated MR division, SPSS MR, was tasked with turning this vision into reality and the product was named Dimensions. A stream of products eventually started to appear for customers using the firm’s legacy products like Quancept, Surveycraft and In2quest.

In acquiring this family of products, SPSS had become the undisputed global number one supplier of MR software. There is no doubt that Dimensions was among the most technically advanced for MR when it emerged and the platform has allowed customers to build ingenious software solutions of their own in a way they only dreamed before. But the project has been dogged with problems too, with customers criticising the software for being over-complex – increasing, not decreasing the skill level of those required to run and manage the software – and for being slow. Some IT managers have needed to cluster unprecedented numbers of servers in order to deliver performance, while several rival packages still seem to operate satisfactorily as single-box solutions.

No happy anniversary celebrations have been announced by SPSS. Instead, the Dimensions name is being dropped. SPSS Inc. wants to see its extensive product family of some 50 programs united under a new name: PASW, which stands for Predictive Analytics Software. The iconoclastic new product names seem to require exceptional powers of recall. mr Interview becomes ‘PASW Data Collection Interviewer Web’. mr Studio becomes ‘PASW Data Collection Base’. Even the venerable SPSS becomes ‘PASW Statistics Base’. SPSS will live on only as a company name.

The firm denies that this name change has any connection with the approach from SPSS founder Norman Nie last year, who has offered to sell the SPSS name to the firm for $20 million. It is also adamant that its commitment to the market research community is undiminished. However, ten years on, there is no longer a specialist MR division and the firm’s focus is clearly on predictive analytics and modelling based on business intelligence.

Let’s hope that the new SPSS remembers we do much more than that in market research. With all this focus on predictive analytics from corporate data, it’s worth remembering there’s only so much you can learn by looking in the rear-view mirror, no matter how cleverly.

A version of this article was also published in research magazine, issue 517, June 2009, under the title “What’s in a name”