What it does
Modern, GUI-driven cross-tabulation, analysis and charting suite for market research data aimed at the tabulation specialist. Capable of handling large and complex data sets, trackers and other ‘difficult’ kinds of research project.
Red Centre Software, Australia
Ease of use
Compatibility with other software
Value for money
Full version $4,800 (allows set-up); additional analyst versions $2,400. Annual costs; volume discounts available.
- Cross-tabs and charts of every kind from large or complex datasets, and so much more
- Quick and efficient for the DP specialist, with a choice of GUI access and scripting
- Push-pull integration with Excel and PowerPoint for report preparation and automation
- Superb proprietary charting to visualize MR data more effectively than in Excel or PowerPoint
- Excellent support for managing trackers
- Interface is bewildering to beginners: a steep learning curve
- No simple web-browser interface for end users or to provide clients with portal access to studies
We always try to present something new in these software reviews, but this time, we think we are onto something that could break the mold: a new tabulation software package from an Australian producer, Red Centre Software, that leaves most of the existing choices looking decidedly dated. It’s refreshing, because for a while, most efforts in market research software seem to have gone into improving data collection and making it work across an ever-broadening spectrum of research channels. Innovation at the back-end seems to have focused on presentation, and has often left research companies and data processing operations with a mish-mash of technology and a few lash-ups along the way to transform survey data into the range of deliverables that research clients expect today.
Ruby could easily be mistaken for yet another end-user tabulation tool like Confirmit’s Pulsar Web or SPSS’s Desktop Reporter, with its GUI interface and drag-and-drop menus. The reality is that it is a fully-fledged tabulation and reporting system aimed squarely at the data processing professional. If you are looking for a Quantum replacement, this program deserves a test-drive.
As far as I could see, there were no limits on the data you could use. It will import data from most MR data formats, including Quantum, Triple S and SPSS. Internally, it works with flat ASCII files, but it is blisteringly fast, even when handling massive files. It will handle hierarchical data of any complexity, and offers the tools to analyse multi-level data throughout, which is something modern analysis tools often ignore.
It is equally at home dealing with textual data. The producers provided me with a series of charts and tables they had produced from analyzing Emily Brontë’s Wuthering Heights by treating the text as a data file. The same could be done for blogs, RSS feeds and the mass of other Web 2.0 content that many researchers feel is still beyond their grasp.
More conventionally, Ruby contains a broad range of tools specifically for handling trackers, so that you are not left to reconcile by hand the differences between waves that arise from variations in the question set and answer lists.
Ruby is a very intelligent tool to use when it comes to processing the data. The data in the tables reported or charted in MR have often gone through a long chain of transformations, and in the old tools, there could be yards of ‘spaghetti code’ supporting these transformations. Trying to work out why a particular row on a table is showing zeroes when it shouldn’t do can take an age in the old tools, as you trace back through this tangle of code, but Ruby will help you track back through the chain of definitions in seconds, and even let you see the values as you go. It is the kind of diagnostic tool that DP professionals deserve but rarely get.
In Ruby, you will probably make most of these data combinations and transformations visually, though it also allows you to write your own syntax, or export the syntax, fiddle with it, and import it again (the combination that DP experts often find gives them the best of both worlds). Either way, Ruby keeps track of the provenance of every variable, and at any point, you can click on a variable and see exactly where the data came from, and even see the values at each stage.
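To give a flavour of the idea, a provenance chain of this sort can be sketched in a few lines of Python. This is a hypothetical illustration of the concept, not Ruby's implementation, and all the names are invented:

```python
# Illustrative sketch of variable provenance tracking, in the spirit of
# Ruby's diagnostics (not its actual implementation).

class Variable:
    def __init__(self, name, values, sources=(), rule=""):
        self.name = name
        self.values = values
        self.sources = list(sources)   # variables this one was derived from
        self.rule = rule               # human-readable derivation rule

    def provenance(self):
        """Walk back through the chain of definitions."""
        chain = [f"{self.name}: {self.rule or 'raw data'}"]
        for src in self.sources:
            chain.extend(src.provenance())
        return chain

# A raw variable and a derivation from it
q1 = Variable("q1", [1, 2, 5, 3])
top_box = Variable("q1_top", [v >= 4 for v in q1.values],
                   sources=[q1], rule="q1 >= 4")

# Shows where each derived value came from, stage by stage
print(top_box.provenance())
```

Because every derived variable carries its sources with it, the chain of definitions can be replayed in either direction, which is exactly the diagnostic ability described above.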
The range of options for tabulation and data processing is immense, with a broad range of expressions that can be used to manipulate your data or the columns and rows in tables. There is complete flexibility over percentaging and indexing values off other values, or basing one table on another, so it is great for producing all of those really difficult tables where every line seems to have a different definition.
With charting, Ruby gives you the choice of using its own proprietary charting engine, or pushing the data out to PowerPoint or Excel charts. The native Ruby charts are a treat to work with, as the developers seem to have gone out of their way to redress the inadequacies of Excel and PowerPoint charts. For time-series charts, concepts such as smoothing and rolling periods are built-in. You can add trend lines and arbitrary annotations very easily. Charts can be astonishingly complex and can contain thousands of data points or periods, if you have the data. Yet it will always present the data clearly and without labels or points clashing, as so often happens in Excel.
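The rolling-period concept these charts build in amounts to a trailing average over the last few waves. A minimal sketch (illustrative Python, not Ruby's code):

```python
def rolling(values, periods=3):
    """Rolling average over the trailing `periods` data points,
    as used to smooth a tracker time series."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - periods + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

# Six waves of a tracker score, smoothed over three periods
waves = [40, 44, 42, 50, 48, 52]
print(rolling(waves))
```

Early waves are averaged over whatever data exists so far, so the smoothed series has the same number of points as the raw one, which is what lets smoothed and unsmoothed lines share a chart.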
Excel and PowerPoint charts are also dynamic, and the Ruby data source will be embedded in the chart, so that the charts can be refreshed and updated, if the underlying data changes.
Amy Lee is DP Manager at Inside Story, a market research and business insights consultancy based in Sydney, Australia, where she has been using Ruby for two years, alongside five other researchers and analysts. Ruby is used to analyze custom quantitative projects and a number of large-scale trackers.
Asked if the program really did allow a DP analyst to do everything they needed to, Amy responds: “We were able to move to Ruby a couple of years ago, and it is now the main program we use, because it can do everything we need to do. I find it is an extremely powerful and flexible tool. Whenever I need to do anything, I always feel I can do it with Ruby. Other tools can be quite restrictive, but Ruby is very powerful and completely flexible.”
Amy considered the program went beyond what more traditional DP cross-tab tools allowed her. She observes: “Compared with other programs I have used, Ruby allows me to filter and drill down into the data much more than I could with them. It’s especially good at exporting live charts and tables into documents.
“Once they are in PowerPoint or Word, trend charts can be opened up and adjusted as necessary. When it is a live chart, it means you can update the data, and instead of having to go back to Ruby, open it up and try, find the chart and then read the data, you can just double click it inside PowerPoint, and you can see all the figures change. And there is even an undo feature, which is good for any unintentional errors.”
Amy freely admits that this is not a program you can feel your way into using, without having some training, and allowing some time to get to understand it. “It is really designed for a technical DP person,” she explains. “If you have someone with several years’ experience of another program they will have no problem picking this up as everything will be very familiar to them. But we also had a client who wanted to use it, someone with a research rather than a DP background, and they found it a bit overwhelming, because it can do so much, and it is not that simple. It looks complex, but once you get the hang of it, you can do what you need very quickly.”
Among the other distinguishing features Amy points to are the speed of the software, which processes large amounts of data and produces large runs of tables and charts very quickly; its in-built handling of time series, allowing you to combine or suppress periods very easily; and the range of charts offered, in particular the perceptual maps.
Some of the research companies I speak with are becoming uneasy that the legacy data processing tools they depend on have fallen so far behind, and are in some cases dead products. They have endured because the GUI-based ‘replacements’ at the back of the more modern data collection tools just don’t cover the breadth of functionality that is needed. You get breadth and depth with Ruby – even if the sheer range of functionality it offers is bewildering to the newcomer.
A version of this review first appeared in Quirk’s Marketing Research Review, August 2009.
Marketsight version 7.3
Marketsight Inc., USA
Date of review: January 2009
What it does
Web-based research data reporting environment offered as a hosted solution and aimed at research data consumers, either to browse existing tables and charts, or to produce their own analysis. Offers capabilities for research agencies to publish results to clients through the software.
Ease of use
Compatibility with other software
Value for money
Professional $995, Enterprise (includes portal features) $1495. Academic licences at 90% discount. Charges priced in US dollars, per user per annum and include training and support. Reduced fees for agencies providing licences to end-users.
- Very easy to upload your own projects as either SPSS SAV files or Triple-S data
- Excellent support for charting both within the tool and when exporting to Excel or PowerPoint
- Can use simply as a means to distribute reports, or to interrogate data, or do both
- Rich set of capabilities for recoding and transforming variables
- Though web-based, currently only works under Microsoft Windows with IE6 or IE7
- A little prescriptive in the kinds of reports it can produce – not necessarily for the power user
- Ignores variable names in data imported from Triple-S or SPSS: you have to work with the question text
The transformation that MarketSight has gone through since we last reviewed this web-based cross-tab tool two and a half years ago is a bit like getting a visit from the son of a friend who was a teenager the last time you saw him, and is now a confident and capable adult with a university degree who wants to come and work for you.
Back then, MarketSight was a simple end-user tab tool with a few nice touches, but quite a lot of limitations too. Though it was provided as a self-drive tool, it really relied on purchasing some consulting time in the background to get surveys set up, or to carry out the kinds of transformations you were likely to need on the data. Then, the product was developed and marketed by a division of the Monitor Group, a large business consulting firm based in Cambridge, Massachusetts. This provenance showed in the kinds of features the software had, or more importantly, did not have. It was very SPSS-like in its approach to tables and lacked support for filters and even multiple response data.
Since then, Monitor Group has spun off the original MarketSight team, which now owns and develops the software independently. Development is now strictly MR-focused and the result is a much more research-centric approach to data analysis. At heart it remains an easy to use cross-tabbing tool but with a new drag-and-drop interface. You can build reports and save them for re-use later, or if someone else has set up the report for you, you can simply open the tool and review the reports.
Gone are the irritations about not being able to define or apply filters, or create tabs with multiple response data: they are all in place now. You can also drag as many variables as you like into the rows and the columns of the table.
Charting has been integrated with the tables in a very practical way. Each one-by-one combination of variables in the cross tab is presented in the output display with its own small histogram icon. Click and a window opens to display the data graphically in a way that makes any interesting variations in the data immediately obvious to any lay user. A further button lets you tailor the chart, print it or export it.
There is better-than-typical support for ranking or sorting of answers in cross-tabs, and you can rank by any column, by the base or the mean. A simple arrow icon highlights which column has been used for ranking. Charts too are easily ranked.
You can also export whole groups of tables as charts, and post them directly to PowerPoint or Excel. Within Excel, the program will helpfully provide you with a tabbed worksheet containing the chart and another containing the table – and both look extremely presentable without any tweaking, which in my experience is an accomplishment in itself. The program will not produce a completely presentation-ready PowerPoint deck, but it will get you very close to it.
Other strong areas within the product are the creation of calculated variables and categories to combine variables or categories or break numbers into ranges, and a powerful way for end-users to create very similar transformations on a lot of variables, such as to add a top-two box to a rating scale. It means that researchers or end-users can be very self-sufficient, and avoid the need to keep going to their DP supplier. Whole sets of analyses can be copied from one dataset to another.
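The kind of repeated transformation described, such as adding a top-two box to every rating scale at once, boils down to something like this. The function and variable names here are invented for illustration; this is a sketch of the idea, not MarketSight's API:

```python
def add_top2(cases, rating_vars, top=(4, 5)):
    """Apply the same transformation to many variables at once:
    derive a top-two-box flag for each rating-scale question."""
    for case in cases:
        for var in rating_vars:
            case[var + "_t2b"] = 1 if case.get(var) in top else 0
    return cases

# Two respondents, two 5-point rating questions
data = [{"q1": 5, "q2": 3}, {"q1": 2, "q2": 4}]
add_top2(data, ["q1", "q2"])
print(data[0]["q1_t2b"], data[1]["q2_t2b"])  # 1 1
```

The point is that one definition of the transformation is applied across the whole set of variables, which is what lets a researcher stay self-sufficient rather than sending each recode back to DP.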
A big breakthrough is the importing: anyone can upload their own data and variables, provided they have either a SAV file or a Triple-S data and metadata file – which means you can load in data from a very wide range of survey data collection tools. My only grumble is that, while it imports all the text, it does not import the variable names, and that can make identifying questions difficult in many surveys.
MarketSight is still a bit prescriptive in what it will allow you to present in a table, which could frustrate the power-user. It also lacks the means to examine cases individually, to check outliers or view verbatims. It does not handle duplicated datasets or files and reports saved from multiple users as well as it should – unaided, your report libraries could descend into chaos. Plus, it currently only works in a Microsoft Windows environment, under IE6 or IE7, which is not everyone’s browser of choice – though this is planned to change.
If you pay a bit more and get the enterprise edition, you also get a portal environment in which you can upload other files relating to a project, and use it to start building your own research library. The system also contains a full permission control system, so that different users can be given different access rights to surveys, and have functionality turned on or off. It therefore makes the program an attractive proposition for research agencies wishing to provide a data portal to their clients.
MarketSight’s developers deserve praise for providing users with a wealth of online help, tips, tutorials and advice all through the product. It makes this web-based tool feel like a cross between a program and a website: and what could be more appropriate for a product focused on providing information?
Customer Viewpoint: Renée Zakoor at KB Home, Los Angeles, USA
Renée Zakoor is the Director of Market Research at KB Home, a new home building company that operates in 15 states in the USA.
MarketSight is used across the business to distribute market research information. Renée explains: “We do a specific survey in each of our nine divisions and that data becomes the basis for major decisions each division has to take about what to build, where to build it and so on. We upload each survey onto MarketSight. My team works with it to do analysis, but it is also put there for the people in the divisions to make use of.
“What I love about MarketSight is that non-market researchers can easily go in and answer their own questions. Then the ability to export it into Excel so they work with it that way, and do graphics to PowerPoint is just great as well. We tend to give staff members an hour’s worth of training and usually they can run with it. I also have senior managers who find they can go in and answer their own questions. It is very user-friendly, which I think is critical.
“We have now started to work with using MarketSight as a repository for all kinds of files we want everyone out in the divisions to have access to. Previously we were using an intranet, which meant using another internal resource. Using MarketSight, this is easy for me to do for myself. You do not need a lot of sophisticated computer skills to be able to upload files to it.”
Another improvement that Renée welcomes is the ability to replicate sets of analyses for different regions or users, where the project is essentially the same, but different users will each work with their own dataset. “We can set up analysis for one market and it is then easy for us to copy it over to all the other markets without having to recreate it – so there are a lot of efficiencies for us in that.”
Asked about any anxieties Renée might have about making research data so widely available for non-researchers to run their own analyses, she is unequivocal: “Over the years, I have become less concerned [about this]. I feel the more transparency there is in the data and the more people you get using data, the better. The first step is trying to get people to use research to make decisions and this is a tool that will help them do that. I find it frees up a lot of researchers’ time to be more consultants to the non-researchers. If people have bought into the methodology, it can prevent a lot of misinterpretation. Ultimately, the research is just another tool, and it is down to the researcher to be the partner that will help business people make the most of those tools: MarketSight just helps make those tools more accessible.”
Published in Research, the magazine of the Market Research Society, January 2009, Issue 511
What it does
Web-based analysis software for end users with extensive capabilities for handling trackers and syndicated research.
PAI & Gamma Associates
Ease of use
Compatibility with other software
Value for money
Entry level cost around £5,000 for 5 users. Data conversion costs from £200 per project typically apply.
- Packs in a lot of functionality yet is extremely simple and highly intuitive to learn and use
- Effortless support for time series and grouping real numbers and dates sensibly for analysis
- Cross-platform Windows and Mac
- Set-up of surveys via Gamma or an affiliated DP bureau only
- Interface a little tired and dated in places
- Severely limited access control: can only set usage permissions at a survey level
The latest of a growing number of desktop data analysis tools to find itself reincarnated on the web is mTAB. mTABweb is a surprisingly faithful reproduction of what was, until now, a Windows-only analysis program, widely used in the specialist field of syndicated research. The online version is Java based, so it supports a wide range of browsers and platforms including Mac and Linux. Being web-based, it makes the task of distributing software and data to end-users very much simpler, as it is all controlled centrally.
Just as with the desktop version, mTABview’s interface hinges – quite literally – on a simulated Filofax organiser, with two pages open in front of you and a series of tab-dividers on either side that let you choose which pages you wish to show side-by-side. You select the variables to tabulate from the Questions tab and, by opening the Row tab or the Column tab, drag and drop them to build the table you want to view. There are other tabs for choosing filters, switching datasets or adding in a third level beyond the columns and rows. Percentage and respondent base options are easily selected from dropdown menus.
When your table is assembled, you click a button to generate the table. In our tests, with some realistically large datasets, the table appeared within a second or two. A line of buttons at the top gives you access to other options, one of which takes you back to the Filofax view. The table looks disarmingly like an Excel spreadsheet, which gives the output window a very intuitive feel to users. Other buttons open up a wide range of options for finessing the output, from omitting columns or rows, to selectively adding shading or borders.
The simplicity of the interface (which is starting to look a bit dated now) and the ease by which you can move from data to tables belies this program’s actual level of sophistication as a serious survey analysis tool. Look at any of the features or options, and you will find an intelligent set of capabilities on offer. If you need statistics, there are means, standard deviations, medians, Chi-Square, t- and Z-test scores. It will also automatically create top-2 and bottom-2 box scores for any rating scale type question without requiring any recoding.
Filters and new variables are also easily created, using graphical editors. There is a range of built-in options for cutting numbers into categories or ranges, as well as intelligent handling of date fields. An interview date can be converted into a profiling variable based on calendar months, or fiscal quarters with surprising ease, which is particularly handy on trackers.
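The date handling described, turning an interview date into a fiscal-quarter profiling variable, can be illustrated like this. This is a hypothetical Python sketch assuming a July–June fiscal year (common in Australia), not mTAB's implementation:

```python
from datetime import date

def fiscal_quarter(d, fiscal_start_month=7):
    """Map an interview date to a fiscal quarter label,
    assuming the fiscal year starts in `fiscal_start_month`."""
    offset = (d.month - fiscal_start_month) % 12
    q = offset // 3 + 1
    # The fiscal year is labelled by the calendar year in which it ends
    fy = d.year + 1 if d.month >= fiscal_start_month else d.year
    return f"FY{fy} Q{q}"

print(fiscal_quarter(date(2008, 8, 15)))  # FY2009 Q1
print(fiscal_quarter(date(2009, 2, 1)))   # FY2009 Q3
```

Once every interview date is mapped to a label like this, the labels behave as an ordinary categorical banner variable, which is what makes the feature so handy on trackers.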
The support for trackers goes much further. You can combine different datasets, and there are tools for managing the differences between the waves of a tracker within the software.
Back in the analysis view, once you have viewed a table, you can save it, give it a name and come back to it later. You can also select a portion of it and turn it into a chart. There are a dozen chart styles to choose from, though the output styles are limited, compared to Excel or PowerPoint. However, you can also run correspondence analysis in the charting module and display these as maps. These too can be saved or pasted into presentations and reports.
mTABweb and mTABview are programs which appeal directly to the consumers of research data. Virtually any form of survey data can be transformed into an mTAB database (both programs use the same database format). The drawback for those who like to be self-sufficient is that Gamma does not distribute the conversion programme: you have to send your data to Gamma for conversion. This inevitably adds delay and cost, though both may be modest. For those buying research from different research providers, however, there can be real advantage in being able to use one tool regardless of the fieldwork provider, and the conversion stage can provide a valuable independent quality check on the data being provided.
A version of this review first appeared in Research, the magazine of the Market Research Society, August 2008, Issue 506
What it does
Comprehensive desktop analysis software for crosstabs, charts and statistics, with integrated data editing, data processing, presentation and publishing capabilities.
SPSS (An IBM Company)
Ease of use
Compatibility with other software
Value for money
Single user prices: Base SPSS system, £1072, standalone SPSS SmartViewer £132, add-on modules from £473. Annual maintenance and support from £214. Volume and educational discounts available.
- Now cross-platform – PC, Mac or Linux
- Clever data editing including anomaly detection
- Greatly improved charting
- Output directly to PDF
- Wide range of options can be confusing to novice users
- Output can look straggly and utilitarian
This year the statistical software SPSS is forty years old. While SPSS now heavily promotes this program in the so-called business and predictive analytics arena, MR users continue to be well served by the latest issue, SPSS 16. Indeed, there are several very handy new features for questionnaire-based data and the stuff market researchers tend to do.
The big change is that the software has now been re-written in Java. Going to Java has given the developers the opportunity to make a few changes to the dialogue windows – though (before any experienced users break out into a cold sweat) not to where things are or how they work, but in terms of being able to resize items dynamically, stretch windows and see more displayed as a result. It means, for example, that long labels no longer get truncated in selection menus, which has long been an irritation. However, practised users will probably be surprised just how similar SPSS 16 is to recent native Windows versions, considering the interface has effectively been rebuilt from scratch.
SPSS has always been strong in allowing you to edit and clean your data on a case-by-case basis. While there seems to be a recent trend among some researchers not to bother, especially online, those who take these matters seriously should be rather pleased to see this version introduces a heuristic anomaly detector in the data validation menu. Set it going on all the variables you think matter, and it will pull out any cases where the answers stick out from the rest. It uses a clustering, or rather an un-clustering algorithm, and looks for items that don’t cluster. More conventionally, there is also a complete rule-based validation routine, with several handy built-in rules to look for a large number of missing variables or repeated answers (mainlining through grids, for example) and the option to set up your own cross-variable checks too.
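The rule-based side of this validation, flagging heavy missing data or respondents who mainline through grids, reduces to simple checks of this kind. An illustrative Python sketch, not SPSS's implementation; the thresholds and names are invented:

```python
def flag_suspect(case, grid_vars, max_missing=5):
    """Rule-based validation checks of the kind described:
    flag a respondent with too many missing answers, or one who
    'mainlines' (gives the same answer) right through a grid."""
    answers = [case.get(v) for v in grid_vars]
    missing = sum(1 for a in answers if a is None)
    present = [a for a in answers if a is not None]
    straightlined = len(present) > 1 and len(set(present)) == 1
    return missing > max_missing or straightlined

# A respondent who gave '3' to every statement in a grid
resp = {"q1": 3, "q2": 3, "q3": 3, "q4": 3}
print(flag_suspect(resp, ["q1", "q2", "q3", "q4"]))  # True
```

The heuristic anomaly detector goes further than rules like these, since it flags cases that merely fail to cluster with the rest, but the rule-based routine covers the common, easily named sins.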
There are some handy new tools in the data prep area, such as easy recodes that take date and time values and chop them up into discrete time intervals such as months and quarters, or let you group according to day of week, mornings and afternoons and so on. There is ‘visual binning’ which lets you create categories from numeric variables by showing you a histogram of your new categories, and lets you even them out using sliders on screen. A new ‘optimal binning’ function lets you do the same to values, using another variable to determine the fine-tuning of the slices, such as to split income with respect to age.
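Behind the sliders, binning is simply a matter of counting values between cutpoints. A minimal sketch (illustrative Python, not SPSS code):

```python
def bin_values(values, cutpoints):
    """Cut a numeric variable into categories at the given cutpoints,
    in the spirit of visual binning: a value falls into the first bin
    whose cutpoint it is below, or into the final open-ended bin."""
    counts = [0] * (len(cutpoints) + 1)
    for v in values:
        for i, c in enumerate(cutpoints):
            if v < c:
                counts[i] += 1
                break
        else:
            counts[-1] += 1
    return counts

# Ages cut at 25, 35 and 50: under-25s, 25-34, 35-49, 50+
ages = [18, 22, 25, 31, 34, 40, 47, 55, 62]
print(bin_values(ages, [25, 35, 50]))  # [2, 3, 2, 2]
```

What the visual binning dialogue adds is the feedback loop: you see the histogram of these counts change as you drag the cutpoints, until the categories are evened out to your liking.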
Version 16 also makes it easier to edit and clean up the metadata – the text labels and names. There is a find and replace feature and a spell checker too, with dictionaries for both UK and US English and for other major languages. The move to Java has made possible other languages and writing systems too, as SPSS 16 now fully supports the Unicode standard.
On the output side, greatly improved charting came in with version 14, and the improvements continue. The visual method for defining charts is one of the most elegant I have seen. Where many tools, like Excel, simplify chart building with a wizard, here the workflow all takes place in the one chart-building window. It avoids the tunnel mentality of the wizard, where you emerge blinking on the other side with no idea of how you got there.
Two items are of particular interest to market researchers. Top marks to SPSS for the ‘panel’ chart option on all the charts, which lets you add a categorical variable such as demographics. It produces neat, side-by-side charts for each category, all the same size and sharing one legend. ‘Favourites’ make it easy to store the chart outline for any chart you have perfected in a gallery for you to use again, saving time and helping you achieve consistency in your reporting.
Behind the scenes, there is also a full chart scripting language, which can be used to automate repetitive chart production. Also of interest to MR users is the new built-in support for going straight to PDF from the output viewer. It offers a fantastic alternative to producing PowerPoint decks merely to communicate data. You can output everything or a selection. Best of all, the complete heading and folder structure of the output viewer is replicated in the PDF as bookmarks, to make navigation easy.
Much of the power and versatility of SPSS has always derived from the ability to write SPSS syntax directly. When you use the graphical interface, the syntax needed to drive the SPSS processor and create your outputs is created for you and can be saved and reused. Advanced users and programmers who use syntax directly will find many more commands and options at their disposal – so it is often possible to create highly customised outputs using syntax. The chart scripting options are just one recent syntax extension. Another intriguing one is a new ‘begin program’ command, which lets you run other external applications and scripts written in the open source language Python. So if the hundreds of statistical tests and models available within SPSS turn out still to be not enough, it is possible to spawn out to ‘R’ (see r-project.org), the open source statistical initiative, and apply any of the hundreds offered in R, using your SPSS data, and presenting the results in your SPSS output.
I was hoping that SPSS 16 would make the program and data structures less disdainful of multiple-response data. In science, and in business, this kind of data is rare, but in market research, multi-coded data abounds. Alas, even in version 16 it is still handled in the same arm’s-length way through multiple-response sets created from dichotomies. Rather confusingly, there are different multiple sets in the tables and in the special multiple-response frequencies and cross-tabs area. Once you have set them up, there is still that trap for the unwary that they do not get saved in the data, or saved at all without some effort.
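For readers unfamiliar with the dichotomy approach, a multiple-response set built from 0/1 variables tabulates like this. This is a Python sketch of the general technique, not of SPSS internals:

```python
def multi_response_freq(cases, dichotomies):
    """Tabulate a multiple-response set built from dichotomy
    variables (1 = mentioned). Percentages are of respondents,
    so across a set they can legitimately sum to more than 100."""
    base = len(cases)
    table = {}
    for var in dichotomies:
        n = sum(1 for c in cases if c.get(var) == 1)
        table[var] = (n, round(100 * n / base, 1))  # count, % of respondents
    return table

# Three respondents; 'brands mentioned' held as one 0/1 flag per brand
cases = [{"brand_a": 1, "brand_b": 1}, {"brand_a": 1}, {"brand_b": 0}]
print(multi_response_freq(cases, ["brand_a", "brand_b"]))
```

The awkwardness the review describes is that the grouping of those dichotomy variables into a set lives outside the data file, so the definition has to be rebuilt or deliberately saved rather than travelling with the data.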
My other grumble is that, despite the output improvements, the overall look of the reports that come out is still very utilitarian and is full of irrelevant set-up detail. Cross-tabs in particular are wilfully straggly and unfinished in appearance.
It surely cannot be an issue for the core SPSS users, otherwise you imagine it would have changed long ago, but it is another deterrent to market researchers, where effective communication of results has to be a core strength.
But for the sheer range of statistical tests and models available from one desktop application, SPSS deserves a place in every MR department, agency or consulting practice.
A version of this review first appeared in Research, the magazine of the Market Research Society, March 2008, Issue 501
What it does
Desktop or web-based tabulation and charting tools for researchers or end-users with an integrated script-based data-processing module for data specialists. It can also be used to build data portals and dashboard reporting systems.
Intellex Dynamic Reporting
Ease of use
Compatibility with other software
Value for money
In euro (€): Offline tool €1100 per user per annum. Online: €1500 set-up fee, €1800 annual fee, plus €225 charge per person and per project.
- Easy import from SPSS .sav files or Triple-S
- Extendable gallery of output styles for both tabs and graphs
- Powerful editing and data preparation workflow
- Advanced machine-learning-based coding module for verbatim responses
- Restricted filtering within online and desktop tools
- Very limited range of statistics
- Some limits on dynamic links to PowerPoint
- No specific support for multi-language studies; interface is English only
DataDynamic is a new arrival on the MR software stage. But is there space for yet another tab program? And is there anything significantly different about this one? Actually, there is on both fronts. While a majority of MR analysis tools effectively take their cue from the Quantum/Quanvert model, with the online tool existing as an add-on stage to a data processing activity, and the end-user working on a closed database of results, DataDynamic takes the more open SPSS as its muse.
It is often overlooked just how many researchers around the world use SPSS to do all their analysis on their quantitative surveys. On the whole, SPSS does a decent job for the market researcher wanting to analyse their own data, but it has its downsides. There is a steepish learning curve and the problem of picking your way through a host of options that are either rarely or never used. It is also a struggle to produce report-ready output for Word reports or PowerPoint briefings and summaries. But SPSS does make the raw data readily available to the researcher for them to work on and even edit. As a means of distributing data to other users this can also be a liability.
DataDynamic can work as a desktop tool, like SPSS, or as an online, web browser-based tool like SPSS MR Tables or Confirmit’s Reportal. The desktop tool (but not the online tool) also carries with it a complete interactive suite for coding and editing your data, which is aimed at the researcher as much as the data-processing specialist. It gives this product appeal to those researchers who are or like to be self-contained in their data processing capabilities. Better still, it provides them with the means to publish results to clients in a variety of ways appropriate to the needs of different audiences for research data.
For those that just want a dashboard with a few KPIs every month, there is a disarmingly simple process to create and publish these as web reports for clients to access securely in a data portal. These offer a single scrolling page of side-by-side tables, charts and commentaries, which can then be refreshed automatically as each subsequent wave of data is added. Users can view the data and copy it into their own reports, but cannot change it.
For those who want to dig into the data, their online log-on can be made to unlock access to the online cross-tab and charting capabilities. These are more or less identical to the capabilities of the desktop cross-tab and charting tool which is the core of DataDynamic. In either version it uses a familiar drag-and-drop technique to let you build cross-tabs from a structured list of questions. It is quick to put together tables, and there are all the options you would expect to vary percentages, rank answers in order and apply or remove filters, weights or presentation options. Strangely, it only offers one significance test at present – the greatly abused t-test, which risks being a safety blanket lined with asbestos.
On the other hand, it contains a marvellous tool for creating your own target groups or profiles, such as those derived from segmentation models (though you would need to use something else to produce the segmentations). It also scores two out of three for weighting: you can apply respondent weighting, and you can apply projections to a population total. However, you can only calculate simple arithmetic weights – there is no iterative model for creating so-called rim or target weights. Filtering, too, is an oddity. It is quick and easy to apply any answer as a filter and combine answers from the same question, but it assumes you would always want to ‘or’ answers from the same question and ‘and’ answers from different questions. And if you create a target group, you cannot apply it as a filter – which would be handy.
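For readers unfamiliar with the distinction, the iterative model DataDynamic lacks is usually implemented as "raking" (iterative proportional fitting), which repeatedly rescales respondent weights until the weighted sample matches target proportions on several variables at once. The sketch below is purely illustrative, in Python rather than anything the product itself uses, and all the names in it are invented for the example:

```python
def rake(respondents, targets, iterations=50):
    """Minimal rim-weighting sketch via iterative proportional fitting.

    respondents: list of dicts mapping variable name -> category answered.
    targets: dict mapping variable name -> {category: target proportion}.
    Returns one weight per respondent.
    """
    weights = [1.0] * len(respondents)
    for _ in range(iterations):
        # Adjust to each variable's targets in turn; repeated passes
        # converge when the targets are jointly achievable.
        for var, goal in targets.items():
            totals = {}
            for w, r in zip(weights, respondents):
                totals[r[var]] = totals.get(r[var], 0.0) + w
            grand = sum(totals.values())
            for i, r in enumerate(respondents):
                current_share = totals[r[var]] / grand
                weights[i] *= goal[r[var]] / current_share
    return weights


# Illustrative sample: three men, two women, skewed by age.
people = [
    {"gender": "m", "age": "18-34"},
    {"gender": "m", "age": "35+"},
    {"gender": "m", "age": "35+"},
    {"gender": "f", "age": "18-34"},
    {"gender": "f", "age": "35+"},
]
targets = {
    "gender": {"m": 0.5, "f": 0.5},
    "age": {"18-34": 0.4, "35+": 0.6},
}
weights = rake(people, targets)
# The weighted sample now matches both rims at once, which no single
# pass of simple arithmetic weighting on one variable can guarantee.
```

Simple arithmetic weighting, by contrast, corrects one variable and can throw the others out of balance; the iteration above is what reconciles them.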
Several of these restrictions can be overcome by using scripting. A powerful hidden feature of DataDynamic is the Visual Basic scripts that drive it. End users are unaware that these are being created as they compose their tables and charts, but they can be captured and edited, or folded into larger scripts to automate report production. It is akin to syntax in SPSS.
Other strong points include clear, attractive charting capabilities, based on the Microsoft Office charting engine, with user-definable template galleries; a surprisingly sophisticated suite for coding data, which even includes a trainable coding engine that will automatically code similar datasets on the basis of examples you have provided before; and a great range of selective and cumulative imports from either SPSS or Triple-S data.
There are several other areas where more depth of functionality is needed. There is currently no real support for presenting or publishing results in more than one language so that users can select their preferred language, for instance, and there are some difficulties in publishing dynamic reports with charts that will refresh automatically, due to oddities in the Microsoft charting engine.
What I find most tantalising is that Intellex have used this platform to build a number of bespoke enterprise dashboard and drill-down reporting systems. At the moment, DataDynamic as shipped does not have all the tools needed to create your own enterprise feedback system, particularly in the area of user and data permissions. But enterprise reporting is something Intellex are planning to develop further and, if so, they could possibly be the first to market with a dashboard or EFM product that will work with research data from any source.
Customer Viewpoint: Yumi Stamet, Intelligence Group, Rotterdam
Intelligence Group is a research and consultancy firm based in Rotterdam in the Netherlands, specialising in employment and recruitment research. Each quarter it publishes a rolling two-year survey of the Dutch labour market, comprising some 32,000 interviews, to a wide range of commercial and public sector clients. It is a substantial survey with a large number of variables, which it now distributes very effectively using the offline version of DataDynamic. Yumi Stamet, Operations Manager, explains: “We have been providing the data to the customers for a number of years using another software package, but we were not very happy with the way this software forced us to work, so we wanted to find a new way to process and deliver our data. The software had to be quicker and better – what we were using before was very slow – and of course, it had to be more user friendly”.
An important obstacle to overcome was the processing of the data, prior to it being ready for analysis and distribution, including coding a large amount of unstructured data and also applying weighting to balance the data.
“Intellex were very helpful in brainstorming on how it could be made easier and faster,” Yumi continues. “When we got the data into DataDynamic, they were able to help us to automate the process with scripts.”
As the new data was released using DataDynamic, Yumi was concerned whether clients familiar with the previous software would warm to it. “When they saw it they were very enthusiastic, particularly because they were now able to run a complete analysis of the dataset in a matter of minutes. It is a lot faster, and it looks a lot better too. Our clients also liked being able to make their own charts – and many other things are easier too, like sorting items, and above everything else the ability to create target groups very easily. Creating a target group just takes a few clicks, and it is easy to go back and refine it if it is not exactly what you want. A big advantage to me is that the software is very stable. Other software programs can tend to crash if you ask too much of them, but you seldom see that with DataDynamic.”
To any company considering using DataDynamic, she advises: “It is very important you have at least one person learn how to do scripting, as it will save you a lot of time. You don’t need to be a programmer, but it should be someone familiar with the dataset and somewhat IT-minded. If you look at all the time we have saved – with the preparation, I think we have gone from almost 24 hours to 12, and most of that time is because of the coding we have to do. With producing the reports, we have gone from a day’s work to just a couple of hours.”
A version of this review first appeared in Research, the magazine of the Market Research Society, February 2008, Issue 500