The latest news from the meaning blog

 

Instant Intelligence Archiving reviewed

In Brief

What it does

Secure document archiving of scanned images and other electronic documents, offered as a hosted solution via a simple web-browser interface.

Supplier

Data Liberation

Our ratings

Ease of use: 5 out of 5

Compatibility with other software: 4.5 out of 5

Value for money: 5 out of 5

Cost

Entry level £900 annually for 20GB storage and 5 named users. Other packages available.

Pros

  • Works with most browsers, Windows or Mac
  • Scan large volumes of paper documents very efficiently in batches
  • Scanned documents, Word or PDF documents are text searchable
  • All documents are encrypted and held at a highly secure UK data centre

Cons

  • Entry-level package assumes 5 users
  • Document retrieval is by batch process – retrieval may not be instantaneous
  • Does not provide a solution for email archiving

In Depth

A new and ingenious online document storage solution from Data Liberation could provide an easy way to get rid of all the paper cluttering up your filing cabinets or sitting in off-site secure warehousing, as soon as it has been processed, and at a price lower than most warehouse charges. There are plenty of document management and archiving systems on the market which allow you to scan in any paperwork – from contracts to invoices, job forms to manuscript notes. However, they tend to be very expensive to purchase, and they need a dedicated server. For security, this should be offsite, and that too adds to the cost.

Instant Intelligence Archiving is an inexpensive, self-service solution which you can sign up to with a credit card (or sign a contract and be invoiced). Your starter account gives you 20 gigabytes of storage and allows for up to 5 named users. The people at Data Liberation reckon that the scanned contents of a typical four-drawer filing cabinet will use up about half a gigabyte. What you don’t get is a full-blown document management system: but you can easily convert paper documents into electronic ones, store them safely on the IIA secure server, and enjoy near-instant access to anything you wish to retrieve.
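A quick back-of-the-envelope sum, using only the figures quoted above, shows how far the entry-level account stretches (and tallies with the client experience reported below):

```python
# Rough capacity sums based on Data Liberation's own rule of thumb
storage_gb = 20          # entry-level account
gb_per_cabinet = 0.5     # scanned contents of a typical four-drawer filing cabinet

print(storage_gb / gb_per_cabinet)  # => 40.0, i.e. roughly 40 cabinets per starter account
```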

Data Liberation are familiar with the issues of MR – they brought out a neat DIY Excel-based questionnaire design and scanning system in 2002 and they also offer bureau document capture services.

With IIA, there are just two parts to the system that matter – archiving and retrieval, and you use the same simple web interface for both.

The starting place is to plan out your filing structure, which is done online as a free-format file tree, with as many virtual filing cabinets at the top level as you choose. Below these, you can create folders and subfolders in any arrangement. In the slightly more expensive professional-grade service, different users can be given access to different filing cabinets, which could be useful for, say, personnel records. With the structure in place, you can now populate it with documents.

To scan from scratch, a duplex sheet-feeder scanner is essential. If you have a multi-function office copier/printer, this may offer duplex scanning too; in any case, a duplex scanner can be purchased for as little as £370 these days. You can scan whole bundles of documents into a single file. If you wish to separate them, a barcoded separator sheet can be used to allocate a sheaf of pages into different files without having to stop and start the scanner. Once scanned, you give each file a sensible name, and upload them.

If you currently scan your questionnaires for OCR data capture and TIFF images are available, you can upload these too – there is no need to rescan.

Once uploaded, files can be renamed or moved, but not be altered or deleted. This is an important security feature – if a document such as a contract was subject to dispute, your timestamped scanned image would be accepted by courts in the UK as being as good as the paper version on the day it was scanned.

This is a highly secure system. The archive server sits in one of the UK’s most secure data centres, run by BT at Cardiff and favoured by many of the big, security-conscious corporates. All documents uploaded are encrypted in transit and on the server, so nobody except the account holder can see what each document contains.

Retrieval can be done by navigating through the folder structure. Any document selected will be presented in a readable preview format on screen, which you can also print. If you want the document or an entire folder back on your PC, you can request a download. To balance load on the server, this is a batch process, and there could be a delay while it is prepared. When it is ready, a link is emailed to you, and you have to log in again to download it.

When documents are scanned, OCR conversion also takes place, so there is electronic text to back up the image, and this text is also available for you to run text searches. The converted text can be a bit hit-and-miss, especially if the original document was in poor condition, or used a hard-to-convert font.

If you need to convert space occupied by filing cabinets into extra desk-space, this technology is likely to pay for itself from day one, and it certainly makes getting documents out of the archive very easy.

The client view

Continental Research had already been using Instant Intelligence to scan survey questionnaires before adopting Instant Intelligence Archiving this year, in a bid to reduce the amount of paper it was storing.

“We have probably cleared 40 to 50 filing cabinets so far”, claims Greg Berry, Technology Director at Continental Research. “Like most research companies, we have masses of filing cabinets everywhere containing everything from job sheets to personnel records.”

The company started by looking at document management and archiving systems, but was deterred by the high cost of ownership, not only for the software but also for the servers and physical infrastructure needed. “They were expensive and contained functionality we did not need,” observes Berry. “Essentially we need good quality electronic images stored in a format that we can retrieve quickly.”

Moving from physical files to electronic images has been more straightforward than Berry first imagined, as he was able to follow closely the existing filing structure, which everyone already understood. He notes: “We are still using paper for the live job, but when it is finished, we scan it, and we can then send the paper for storage, or, more and more now, we can send it straight for destruction.”

A dedicated scanner with duplex capabilities is used, and three members of staff in the Quality Control department are tasked to look after the scanning. Unlike printers, scanners require more supervision to keep them busy, though the scanner can be left for several minutes while it processes each batch.

“Because we have mimicked the structure of our filing system, it also means retrieval is very easy. We also use it for disaster recovery purposes. Holding all that paper on site is not ideal, for if we had had a fire or a flood, and the paper was damaged, it would have been virtually impossible to recreate those records. Now, it is offsite, it is in a secure data centre, and the images, as they are scanned, are encrypted and logged so they cannot be altered. That also means if we did have to relocate in a disaster, we would still have access.”

Several different departments have warmed to the system quickly. “Coding use it a lot to look up old codeframes: a lot of the notes they have by hand. Field use it to check on old jobs, when they think something has come up which is similar. And scanning questionnaires is very useful, even if you have already entered the data. We keep all paper questionnaires for two years. We have already cut down on our external storage too, so it has already saved money there.”

A version of this review first appeared in Research, the magazine of the Market Research Society, June 2008, Issue 504

NVivo 8 reviewed

In Brief

What it does

Analysis software for qualitative research data now with multimedia support, allowing integrated analysis of textual transcripts, native audio files, video recordings and other source materials. Offers a wide range of analytical and visualisation methods to support both rapid and in-depth analysis.

Supplier

QSR International

Our ratings

Ease of use: 3.5 out of 5

Compatibility with other software: 4 out of 5

Value for money: 4.5 out of 5

Cost

Single user licence for commercial users £1155 plus optional annual support and maintenance for an extra £231. Upgrade from NVivo 7 £405. Volume discounts available. Substantial discounts for educational and public sector users.

Pros

  • Very flexible: offers many ways of analysing qualitative data in many formats
  • Excellent support for video and audio recordings of groups or depths
  • Offers visual as well as textual ways to handle and present data
  • Great help and tutorial material

Cons

  • Steep learning curve: not intuitive to the uninitiated
  • Limited multi-user capabilities

In Depth

NVivo, the stalwart of academic qualitative researchers, has suddenly embraced multimedia files with a passion, and in doing so, widened its appeal to a much broader spectrum of qual researchers. While video recordings of qualitative interviews and groups are now commonplace, handling video is often unwieldy and can force researchers to fall back on textual transcripts that fail to capture expression or nuance.

The breakthrough with NVivo 8 is its ability to import a wide range of source materials and make these easy for researchers to tag with comments and observations, including video and audio. You can also import Word files and even PDFs, and you can link them together if you have a full transcript and a video of your group.

At the heart of the tool is a multimedia player with a timeline of the video or audio. You can view and hear the recording, pause it or slow it down, or start and stop it from any point simply by dragging a cursor to any position along a time track at the top of the window. Dragging also previews the video, giving you an extremely efficient way to cue in on the part you are interested in. As you add coding or annotations, you can apply these to the timeline. Each is then colour-coded as a band running parallel to the timeline, giving you a very useful graphic representation of the data and where themes and overlaps occur, or even simply the parts you have not yet reached.
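Conceptually, each piece of coding is just a theme attached to a start and end point on the media timeline. The sketch below (plain Python, purely illustrative and not NVivo’s internal format) shows how such coded segments might be represented and how overlapping themes could be detected:

```python
from dataclasses import dataclass

# Illustrative only: a minimal model of timeline coding, not NVivo's own data structures
@dataclass
class CodedSegment:
    node: str          # the theme or code applied
    start_s: float     # segment start on the media timeline, in seconds
    end_s: float       # segment end
    colour: str        # band colour shown parallel to the timeline

segments = [
    CodedSegment("price perceptions", 62.0, 118.5, "blue"),
    CodedSegment("brand loyalty", 95.0, 140.0, "orange"),
]

# Overlapping bands reveal where themes coincide in the recording
overlaps = [(a.node, b.node)
            for i, a in enumerate(segments)
            for b in segments[i + 1:]
            if a.start_s < b.end_s and b.start_s < a.end_s]
print(overlaps)  # [('price perceptions', 'brand loyalty')]
```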

Importing any of these file formats could not be easier – NVivo 8 recognises all the main audio and video file formats and deals with them appropriately, including AVI, QuickTime, MPEG or WMV, or for audio, MP3 and simple WAV files. And you can also output video or audio – the software will enable you to create a collage or summary for your client to view.

The power of NVivo as an analysis tool lies in its concept of nodes. Nodes let you bring together strands of data, observations or comments however you wish – used creatively, they become the essence of the analysis, mapping out the concepts and the relationships between them. For example, if working from a topic guide, each topic could be represented as a node, and nodes can be stacked within nodes to form a hierarchy. Nodes can be used more freely too, to ‘mind map’ the data in a post hoc way.
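As a rough illustration of the idea (the names are hypothetical and this is not NVivo’s data model), a node hierarchy built from a topic guide might be represented like this:

```python
from dataclasses import dataclass, field

# Illustrative node hierarchy only, with invented topics
@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)
    examples: list = field(default_factory=list)  # coded references back to source material

topic_guide = Node("brand perceptions", children=[
    Node("price"),
    Node("quality", children=[Node("packaging"), Node("durability")]),
])
```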

As you work through the data, you assign as many examples as you can find to each node, or attach your own observations or interpretations as you go, so that, ultimately, when you examine any node, you have a rich and relevant collection of examples and ideas for each one, as the basis of your report. Those examples coded directly in the transcript or video timeline will allow you to jump straight back to the source, so you can see the context, and in the case of video, will cue you directly to the segment where the very words were being spoken. It is the moment that makes all the upstream preparation worthwhile.

NVivo 8 does support a degree of collaborative working, in that different members of a team can log into the project and any coding and annotations they make will be tagged by who did it. You can even analyse the variance in the use of coding between different users, to check for consistency. However, the software falls short of a true multi-user system, in that for users to work concurrently on, say, a large international project, you will have to provide users with their own separate versions of the project and merge the files together later.

The greatest obstacle in using the software, however, is likely to be its complexity. This is not a tool that you can easily figure out for yourself. It is one of the consequences of a design that offers a wide range of tools but does not seek to impose its own order by reducing the art of qualitative research to a series of wizards. Proper training and probably some coaching too is therefore essential.

To get the best out of many of the tools within NVivo, you really need to spend time coding and tagging your data first – which can easily take a day or two for a couple of groups.

If this level of rigour is considered overkill, or there just is not the time to achieve it, NVivo’s sister program, XSight, is more amenable to quick-turnaround jobs where analysis does not have to go into such depth. But at present, XSight cannot handle audio or video. However, NVivo does not force you to do coding, and the ease with which you can analyse audio and video material makes NVivo 8 much more appealing to researchers with short deadlines to meet, as it can actually save time over other methods.

The client view

Silvana di Gregorio, owner of SdG Associates, is an independent qualitative analysis consultant with a specialty in software support and integration. NVivo is one of several qualitative analysis packages she uses to analyse data for both social policy and commercial market research projects.

“I first used the software in 1995 when I was working as an academic. At that time, academics had similar reservations to market researchers today about using software for analysis of qualitative research – but this is based on a fundamental misunderstanding of how this software will support the analysis and a fear that it will reduce everything to numbers.

“NVivo 8 has made an extraordinary leap forward, with the ability to analyse videos, audio and graphics. I think it can revolutionise ways of analysis. For example, if you code the video, NVivo adds coding stripes along the top and suddenly you have an entirely different picture of the data. It offers new ways to analyse and also to present your data which may be more attractive to the commercial market researcher.

“With NVivo 8, you don’t have to transcribe everything, you can import the audio or video and then you can just write notes as you work through it. I have found coding directly onto the video timeline works well. With video it is quite easy to do as you can see where you want to stop: it is harder to do with audio. If there are parts that interest you, you can then do partial transcripts just on those parts.”

Silvana is also impressed with the data visualisation and charting that have been introduced with NVivo 8. “These are quite straightforward to use. With a focus group, for example, you can instantly get a visual picture to show you if anyone is dominant in the group. Recently, I created a matrix query between the different speakers in two focus groups. I had coded for the different types of statements made. I simply turned the matrix query into a radar chart, with each spoke representing one of the speakers. You could quickly see there were two people who made no balanced comparisons. It offers another quick visual feel for the data. NVivo supports a lot of ways of analysing data – and I am still discovering more.”

A new book, ‘Qualitative Research Design for Software Users’ by Silvana di Gregorio and Judith Davidson, will be published by Open University Press in October.

A version of this review first appeared in Research, the magazine of the Market Research Society, May 2008, Issue 503

DatStat Illume 4.5 reviewed

In Brief

What it does

Complete end-to-end web-based interviewing solution with a database and object-based approach to allow greater freedom in working across surveys and incorporating other applications or sources of data into the research process.

Supplier

DatStat

Our ratings

Ease of use: 4 out of 5

Compatibility with other software: 4 out of 5

Value for money: 4 out of 5

Cost

One-off licence fee starts at $25,000 for the entry-level user. All licensees pay a variable transaction charge per interview, e.g. 85 cents when purchased in volume. One-off hosted surveys from $5,000.

Pros

  • Intelligent, intuitive user interface
  • Database approach makes it easy to link responses across surveys
  • Repository provides a managed library of reusable questions and objects
  • Comprehensive SDK (programmer interface) for easy linkage to other applications or to create your own objects

Cons

  • User interface, though web-based, only operates under Windows
  • More advanced features require some direct HTML coding
  • Online reporting not as well developed as the data collection tools
  • Support and training from West Coast USA only

In Depth

Stand back from research, and you can see that the vehicle of the survey is as arbitrary in defining the boundaries of most research endeavours as is the nation state in defining language, culture and markets. Most of today’s research technology reinforces the silo effect of the survey, with the data collected and isolated from other surveys or other processes that may be taking place in parallel.

DatStat Illume, on the other hand, offers a way to break through these barriers by making it easy to build bridges between surveys, borrow questions or data from one to use in another, or link surveys to other activities taking place simultaneously online.

Not that Illume aggressively confronts you with this radical difference. The software presents itself very much like other online data collection and analysis products. It still has the concept of questionnaires or projects, and these you create online by building up questions in a pleasantly attractive point-and-click authoring environment. There is the typical tree structure on the left, and space on the right to write your questions and answers, select options and tweak their appearance. Furthermore, you can work online or offline, as you prefer. If you wish to work offline, you simply book the project out to yourself, so you receive a local copy to work on, which also prevents others from updating it online in the meantime.

You could happily switch to Illume and remain unaware of just how subversively permeable the software is underneath, with its rigorous and imaginative application of true database technology and object oriented architecture to the business of market research.

In Illume, each question is considered a self-contained object, as are folders or groups of questions called collections. Any object in the database is accessible, subject to permissions, from any other object. This brings tremendous flexibility. You can add your own objects too, such as a different kind of question, or a resource, such as a data feed from a CRM system or a gateway to a panel provider to request sample top-ups on demand and in real time.

An entire survey is also considered an object, and any survey can refer to data from any other survey during interview or analysis, either in aggregate or through respondent linkage, where this can be achieved. This can bring all sorts of benefits to longitudinal research or cohort studies, to panels or as a means of generating new samples.

Creating objects is a task for a programmer, but it does not require a software change from the software manufacturer. Once created, objects can be re-used in all your future work too.

The object-driven architecture means the developers have been able to incorporate much better logic diagnostics into the survey authoring process than is typical. For example, if you update a survey and move a question on which branch logic is dependent, you will get a warning if the logic check now occurs before the question has been asked. It will also check logic to ensure you have not accidentally created any orphaned questions in your survey that will never get asked.
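The kind of check this implies is easy to picture: after every edit, each piece of branch logic is validated against the question order. A minimal sketch of that principle (hypothetical question names, not DatStat’s code):

```python
# Sketch of a routing-dependency check: warn if branch logic refers to a
# question that has not yet been asked at that point in the survey.
questions = ["q1_age", "q3_brand", "q2_owns_car"]   # survey order after an edit
logic = {"q3_brand": ["q2_owns_car"]}               # q3 is only asked if q2 says yes

for target, depends_on in logic.items():
    for dep in depends_on:
        if questions.index(dep) > questions.index(target):
            print(f"Warning: {target} branches on {dep}, which is asked later")
```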

The object approach also helps explain how the software has one of the best library capabilities I have seen in a research package. This can contain questions, answer lists, questionnaire sections or entire model surveys. There is a permissions-based workflow covering who is allowed to submit new repository content, and who may approve it, and even diagnostics on the effects of changes. When you use something from the repository, it remains under the control of the repository. So, for example, a list of car makes can be updated once, and all surveys using that repository item will be refreshed automatically – unless you choose to sever the link and convert the instance in your survey into a local copy.
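The behaviour of repository items – shared and centrally updated until you deliberately sever the link – can be sketched in a few lines (illustrative only; the class and names are hypothetical, not DatStat’s API):

```python
# Hypothetical sketch of the linked-item idea: a question that reads the
# central repository copy until the link is severed into a local copy.
repository = {"car_makes": ["Audi", "BMW", "Citroën", "Ford"]}

class RepositoryQuestion:
    def __init__(self, repo, key):
        self.repo, self.key = repo, key

    @property
    def answers(self):          # always reads the central copy
        return self.repo[self.key]

    def sever_link(self):       # convert to a local copy that no longer updates
        return list(self.repo[self.key])

q = RepositoryQuestion(repository, "car_makes")
local = q.sever_link()
repository["car_makes"].append("Tesla")   # one central update...
assert "Tesla" in q.answers               # ...reaches every linked survey
assert "Tesla" not in local               # ...but not the severed local copy
```

The point is simply that a linked item always reads the central copy, while a severed copy stops tracking changes.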

This is a highly accomplished interviewing platform, with all the customary built-in options to support everything from the simplest to the most complex survey. If there is a criticism in this area it is that some of the more advanced stunts you might wish to play require some tricky HTML coding within your survey texts – though some might just view this as additional flexibility.

Survey deployment is a doddle, with a highly intuitive survey administration tool that handles samples, invitations and reminders, allows you to work with other survey modes such as offline mobile interviewing, and provides an intelligent set of real-time response reports.

On the back end is what looks as if it is going to develop into a very promising researcher-driven online reporting and data portal module with the ability to create multi-user enterprise dashboards. However, to build such dashboards or portals at present requires expert help from DatStat. As a result, the tabulation and reporting elements in the package appear not to be as well advanced as the data collection tools.

The software’s underlying dependence on Microsoft technologies also means that, though the software is largely web-based, survey authors and administrators need to be running Windows. And though all support comes from Seattle, there are vast online support resources, including an efficient and responsive system for logging and tracking support issues, and excellent documentation.

Customer Perspective: Mindwave Research, Austin, Texas, USA

Mike Skrapits is vice president of research at Mindwave Research, a full-service research company based in Austin, Texas. Mindwave uses DatStat Illume for the large volumes of online research it carries out, which includes several large and complex tracking studies and numerous international projects. The company switched over to Illume seven months ago, having found it had reached the limits of another web-browser-based survey package.

Mike Skrapits explains: “When we did our evaluation, DatStat offered a little something different and this really had to do with the architecture of the software. Scalability had become an issue for us. The architecture of DatStat Illume means it is highly efficient – and the performance gain we have experienced from that is enormous. We had needed five quad core servers with the previous system, and we still had performance issues. We are now running everything on a single server and enjoy better performance too. It is also very stable: we have experienced no unplanned downtime over the seven months we have been running it.

“We are very pleased with the flexibility of the software when designing questionnaires. I like the look and feel of the software and many of our clients have commented on this and how much they like it. From the perspective of the end user, it is visually appealing and from the perspective of the programmer, it is very flexible.”

The system’s modular, object-oriented architecture has also encouraged Mindwave to take advantage of the savings that can be achieved by re-using components, and also to develop some novel research methods of its own, as these in turn become components that can be used on any project whenever required. “The database architecture allows us to effectively create libraries that we can borrow from,” says Mike, “and that has absolutely been beneficial. We have also created some code of our own to create our own specialised question types. The flexibility is there in the architecture to let you do this.”

A version of this review first appeared in Research, the magazine of the Market Research Society, April 2008, Issue 502

SPSS 16 reviewed

In Brief

What it does

Comprehensive desktop analysis software for crosstabs, charts and statistics, with integrated data editing, data processing, presentation and publishing capabilities.

Supplier

SPSS (An IBM Company)

Our ratings

Ease of use: 4 out of 5

Compatibility with other software: 4 out of 5

Value for money: 3 out of 5

Cost

Single user prices: Base SPSS system, £1072, standalone SPSS SmartViewer £132, add-on modules from £473. Annual maintenance and support from £214. Volume and educational discounts available.

Pros

  • Now cross-platform – PC, Mac or Linux
  • Clever data editing including anomaly detection
  • Greatly improved charting
  • Output directly to PDF

Cons

  • Wide range of options can be confusing to novice users
  • Output can look straggly and utilitarian

In Depth

This year the statistical software SPSS is forty years old. While SPSS now heavily promotes the program in the so-called business and predictive analytics arena, MR users continue to be well served by the latest release, SPSS 16. Indeed, there are several very handy new features for questionnaire-based data and the stuff market researchers tend to do.

The big change is that the software has now been re-written in Java. Going to Java has given the developers the opportunity to make a few changes to the dialogue windows – though (before any experienced users break out into a cold sweat) not to where things are or how they work, but in terms of being able to resize items dynamically, stretch windows and see more displayed as a result. It means, for example, that long labels no longer get truncated in selection menus, which has long been an irritation. However, practised users will probably be surprised just how similar SPSS 16 is to recent native Windows versions, considering the interface has effectively been rebuilt from scratch.

SPSS has always been strong in allowing you to edit and clean your data on a case-by-case basis. While there seems to be a recent trend among some researchers not to bother, especially online, those who take these matters seriously should be rather pleased to see this version introduces a heuristic anomaly detector in the data validation menu. Set it going on all the variables you think matter, and it will pull out any cases where the answers stick out from the rest. It uses a clustering, or rather an un-clustering algorithm, and looks for items that don’t cluster. More conventionally, there is also a complete rule-based validation routine, with several handy built-in rules to look for large numbers of missing variables or repeated answers (mainlining through grids, for example) and the option to set up your own cross-variable checks too.
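The algorithm itself is not documented here, but the general “un-clustering” idea can be pictured with a short sketch using scikit-learn’s DBSCAN, which labels points that fail to join any cluster as noise (an analogy for the concept, not SPSS’s own implementation):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Invented data: 200 typical respondents plus 3 cases whose answers stick out
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 4)),
               rng.normal(8, 1, (3, 4))])

# Cases that fail to join any cluster are marked as noise (-1) and flagged as anomalies
labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(StandardScaler().fit_transform(X))
anomalous_cases = np.where(labels == -1)[0]
print(anomalous_cases)
```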

There are some handy new tools in the data prep area, such as easy recodes that take date and time values and chop them up into discrete time intervals such as months and quarters, or let you group according to day of week, mornings and afternoons and so on. There is ‘visual binning’ which lets you create categories from numeric variables by showing you a histogram of your new categories, and lets you even them out using sliders on screen. A new ‘optimal binning’ function lets you do the same to values, using another variable to determine the fine-tuning of the slices, such as to split income with respect to age.
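For readers happier in code than in menus, the two binning ideas can be mimicked roughly in pandas (an analogy only; SPSS’s dialogues are interactive, and the cut points and data below are invented):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({"age": rng.integers(18, 80, 500),
                   "income": rng.gamma(2.0, 12000, 500)})

# Roughly what visual binning achieves: evened-out categories from a numeric variable
df["age_band"] = pd.cut(df["age"], bins=[17, 34, 54, 80],
                        labels=["18-34", "35-54", "55+"])

# And the optimal-binning idea of slicing one variable with respect to another:
# here, income terciles computed separately within each age band
df["income_band"] = df.groupby("age_band", observed=True)["income"].transform(
    lambda s: pd.qcut(s, 3, labels=["low", "mid", "high"]))
```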

Version 16 also makes it easier to edit and clean up the metadata – the text labels and names. There is a find and replace feature and a spell checker too, with dictionaries for both UK and US English and for other major languages. The move to Java has made possible other languages and writing systems too, as SPSS 16 now fully supports the Unicode standard.

On the output side, greatly improved charting came in with version 14, and the improvements continue. The visual method for defining charts is one of the most elegant I have seen. Where many tools, like Excel, simplify chart building with a wizard, here the workflow all takes place in the one chart-building window. It avoids the tunnel mentality of the wizard, where you emerge blinking on the other side with no idea of how you got there.

Two items are of particular interest to market researchers. Top marks to SPSS for the ‘panel’ chart option on all the charts, which lets you add a categorical variable such as a demographic. It produces neat, side-by-side charts for each category, all the same size and sharing one legend. ‘Favourites’ make it easy to store the outline of any chart you have perfected in a gallery, for you to use again, saving time and helping you achieve consistency in your reporting.
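The panelling idea itself – one small, consistently sized chart per category – is easy to picture with a quick matplotlib sketch (toy data and invented categories; it mimics the concept rather than SPSS’s chart builder):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Toy data: one rating question, panelled by an invented demographic variable
df = pd.DataFrame({
    "age_band": ["18-34", "35-54", "55+"] * 40,
    "rating":   [1, 3, 5, 4, 2, 5] * 20,
})

bands = ["18-34", "35-54", "55+"]
fig, axes = plt.subplots(1, len(bands), sharey=True, figsize=(9, 3))
for ax, band in zip(axes, bands):
    counts = df.loc[df["age_band"] == band, "rating"].value_counts().sort_index()
    ax.bar(counts.index, counts.values)
    ax.set_title(band)            # one same-sized panel per category
fig.suptitle("Rating by age band")
fig.tight_layout()
plt.show()
```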

Behind the scenes, there is also a full chart scripting language, which can be used to automate repetitive chart production. Also of interest to MR users is the new built-in support for going straight to PDF from the output viewer. It offers a fantastic alternative to producing PowerPoint decks merely to communicate data. You can output everything or a selection. Best of all, the complete heading and folder structure of the output viewer is replicated in the PDF as bookmarks, to make navigation easy.

Much of the power and versatility of SPSS has always derived from the ability to write SPSS syntax directly. When you use the graphical interface, the syntax needed to drive the SPSS processor and create your outputs is generated for you, and can be saved and reused. Advanced users and programmers who use syntax directly will find many more commands and options at their disposal – so it is often possible to create highly customised outputs using syntax. The chart scripting options are just one recent syntax extension. Another intriguing one is a new ‘begin program’ command, which lets you run external applications and scripts written in the open source language Python. So if the hundreds of statistical tests and models available within SPSS turn out still to be not enough, it is possible to spawn out to ‘R’ (see r-project.org), the open source statistical initiative, and apply any of the hundreds more offered in R, using your SPSS data, and presenting the results in your SPSS output.
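As a flavour of what this programmability opens up, the sketch below shows the sort of thing a ‘begin program’ block can do: loop over the active dataset and submit ordinary syntax for each variable. It assumes the SPSS Python integration plug-in is installed, and the exact calls should be read as a sketch rather than a recipe.

```python
# Runs between BEGIN PROGRAM. and END PROGRAM. in an SPSS syntax window,
# with the Python integration plug-in installed (a sketch, not production code)
import spss

# Ask for a frequency table for every variable in the active dataset
for i in range(spss.GetVariableCount()):
    name = spss.GetVariableName(i)
    spss.Submit("FREQUENCIES VARIABLES=%s." % name)
```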

I was hoping that SPSS 16 would make the program and data structures less disdainful of multiple-response data. In science, and in business, this kind of data is rare, but in market research, multi-coded data abounds. Alas, even in version 16 it is still handled in the same arms-length way, through multiple-response sets created from dichotomies. Rather confusingly, the multiple-response sets defined for tables are different from those used in the special multiple-response frequencies and cross-tabs area. Once you have set them up, there is still that trap for the unwary that they do not get saved in the data, or saved at all without some effort.

My other grumble is that, despite the output improvements, the overall look of the reports that come out is still very utilitarian and is full of irrelevant set-up detail. Cross-tabs in particular are wilfully straggly and unfinished in appearance.

It surely cannot be an issue for the core SPSS users, otherwise you imagine it would have changed long ago, but it is another deterrent to market researchers, where effective communication of results has to be a core strength.

But for the sheer range of statistical tests and models available from one desktop application, SPSS deserves a place in every MR department, agency or consulting practice.

A version of this review first appeared in Research, the magazine of the Market Research Society, March 2008, Issue 501

DataDynamic reviewed

In Brief

What it does

Desktop or web-based tabulation and charting tools for researchers or end-users with an integrated script-based data-processing module for data specialists. It can also be used to build data portals and dashboard reporting systems.

Supplier

Intellex Dynamic Reporting

Our ratings

Ease of use: 3.5 out of 5

Compatibility with other software: 4 out of 5

Value for money: 4.5 out of 5

Cost

In euro (€): Offline tool €1100 per user per annum. Online: €1500 set-up fee, €1800 annual fee, plus €225 charge per person and per project.

Pros

  • Easy import from SPSS .sav files or Triple-S
  • Extendable gallery of output styles for both tabs and graphs
  • Powerful editing and data preparation workflow
  • Advanced machine-learning-based coding module for verbatim responses

Cons

  • Restricted filtering within online and desktop tools
  • Very limited range of statistics
  • Some limits on dynamic links to PowerPoint
  • No specific support for multi-language studies; interface is English only

In Depth

DataDynamic is a new arrival on the MR software stage. But is there space for yet another tab program? And is there anything significantly different about this one? Actually, the answer is yes on both fronts. While the majority of MR analysis tools effectively take their cue from the Quantum/Quanvert model, with the online tool existing as an add-on stage to a data processing activity, and the end-user working on a closed database of results, DataDynamic takes the more open SPSS as its muse.

It is often overlooked just how many researchers around the world use SPSS to do all the analysis on their quantitative surveys. On the whole, SPSS does a decent job for the market researcher wanting to analyse their own data, but it has its downsides. There is a steepish learning curve and the problem of picking your way through a host of options that are either rarely or never used. It is also a struggle to produce report-ready output for Word reports or PowerPoint briefings and summaries. But SPSS does make the raw data readily available to the researcher to work on and even edit – though as a means of distributing data to other users, this can also be a liability.

DataDynamic can work as a desktop tool, like SPSS, or as an online, web browser-based tool like SPSS MR Tables or Confirmit’s Reportal. The desktop tool (but not the online tool) also carries with it a complete interactive suite for coding and editing your data, which is aimed at the researcher as much as the data-processing specialist. It gives this product appeal to those researchers who are or like to be self-contained in their data processing capabilities. Better still, it provides them with the means to publish results to clients in a variety of ways appropriate to the needs of different audiences for research data.

For those who just want a dashboard with a few KPIs every month, there is a disarmingly simple process to create and publish these as web reports for clients to access securely in a data portal. These offer a single scrolling page of side-by-side tables, charts and commentaries, which can then be refreshed automatically as each subsequent wave of data is added. Users can view and copy data into their own reports, but cannot change it.

For those who want to dig into the data, their online log-on can be made to unlock access to the online cross-tab and charting capabilities. These are more or less identical to the capabilities of the desktop cross-tab and charting tool which is the core of DataDynamic. In either version it uses a familiar drag-and-drop technique to allow you to build cross-tabs from a structured list of questions. It is quick to put together tables, and there are all the options you would expect to vary percentages, rank answers in order and apply or remove filtering, weighting or presentation options. Strangely, it only offers one significance test at present – the greatly abused t-test, which can risk being a safety blanket lined with asbestos.

On the other hand, it contains a marvellous tool for creating your own target groups or profiles, such as those derived from segmentation models (though you would need to use something else to produce the segmentations). It also scores two out of three for weighting: you can apply respondent weighting, and you can apply projections to a population total. However, you can only calculate simple arithmetic weights – there is no iterative model for creating so-called rim or target weights. Filtering, too, is an oddity. It is quick and easy to apply any answer as a filter and combine answers from the same question, but it assumes you would always want to ‘or’ answers from the same question or ‘and’ answers from different questions. And if you create a target group, you cannot apply this as a filter – which would be handy.
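For anyone unfamiliar with the distinction, rim (or target) weighting adjusts the weights iteratively until several marginal targets are met simultaneously – the capability DataDynamic currently lacks. A bare-bones sketch of the iterative idea in Python (purely illustrative; the data and targets are invented and this is not the product’s code):

```python
import numpy as np
import pandas as pd

# Minimal rim-weighting (iterative proportional fitting) sketch.
# df holds respondent-level categories; targets give desired marginal proportions.
def rim_weight(df, targets, iterations=50):
    w = np.ones(len(df))
    for _ in range(iterations):
        for var, target in targets.items():
            for category, share in target.items():
                mask = (df[var] == category).to_numpy()
                current = w[mask].sum() / w.sum()
                if current > 0:
                    w[mask] *= share / current
    return w / w.mean()   # normalise so weights average 1

# Example: weight a toy sample to 50/50 gender and given age shares
df = pd.DataFrame({
    "gender": ["m", "m", "m", "f", "f"],
    "age":    ["<35", "35+", "35+", "<35", "35+"],
})
weights = rim_weight(df, {
    "gender": {"m": 0.5, "f": 0.5},
    "age":    {"<35": 0.4, "35+": 0.6},
})
print(weights)
```

Real implementations add convergence checks and weight capping, but the loop above is the essence of the technique.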

Several of these restrictions can be overcome by using scripting. A powerful hidden feature of DataDynamic is the Visual Basic scripts that drive it. End users are unaware that these are being created as they compose their tables and charts, but they can be captured and edited, or folded into larger scripts to automate report production. It is akin to syntax in SPSS.

Other strong points include clear, attractive charting capabilities, based on the Microsoft Office charting engine, with user-definable template galleries; a surprisingly sophisticated suite for coding data, which even includes a trainable coding engine that will automatically code similar datasets on the basis of examples you have provided before; and a great range of selective and cumulative imports from either SPSS or Triple-S data.
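In principle, the trainable coding engine is doing supervised text classification: hand-coded examples train a model, which then assigns codes to similar verbatims automatically. A tiny scikit-learn sketch of that general principle (not Intellex’s engine; the verbatims and codes are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-coded examples train the model...
verbatims = ["too expensive", "great value", "price is too high", "love the quality"]
codes     = ["price",         "value",       "price",             "quality"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(verbatims, codes)

# ...then similar open-ended answers from a new wave are coded automatically
print(model.predict(["far too pricey", "really good quality"]))
```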

There are several other areas where more depth of functionality is needed. There is currently no real support for presenting or publishing results in more than one language so that users can select their preferred language, for instance, and there are some difficulties in publishing dynamic reports with charts that will refresh automatically, due to oddities in the Microsoft charting engine.

What I find most tantalising is that Intellex have used this platform to build a number of bespoke enterprise dashboard and drill-down reporting systems. At the moment, DataDynamic as shipped does not have all the tools needed to create your own enterprise feedback system, particularly in the area of user and data permissions. But enterprise reporting is something Intellex are planning to develop further and, if so, they could possibly be the first to market with a dashboard or EFM product that will work with research data from any source.

Customer Viewpoint: Yumi Stamet, Intelligence Group, Rotterdam

Intelligence Group is a research and consultancy firm based in Rotterdam in the Netherlands, specialising in employment and recruitment research. Every quarter it publishes, to a wide range of commercial and public sector clients, a rolling two-year survey of the Dutch labour market comprising some 32,000 interviews. It is a substantial survey with a large number of variables, which it now distributes very effectively using the offline version of DataDynamic. Yumi Stamet, Operations Manager, explains: “We have been providing the data to the customers for a number of years using another software package, but we were not very happy with the way this software forced us to work, so we wanted to find a new way to process and deliver our data. The software had to be quicker and better – what we were using before was very slow – and of course, it had to be more user friendly.”

An important obstacle to overcome was the processing of the data, prior to it being ready for analysis and distribution, including coding a large amount of unstructured data and also applying weighting to balance the data.

“Intellex were very helpful in brainstorming on how it could be made easier and faster,” Yumi continues. “When we got the data into DataDynamic, they were able to help us to automate the process with scripts.”

As the new data was released using DataDynamic, Yumi was concerned whether clients familiar with the previous software would warm to it. “When they saw it they were very enthusiastic, particularly because they were now able to run a complete analysis of the dataset in a matter of minutes. It is a lot faster, and it looks a lot better too. Our clients also liked being able to make their own charts – and many other things are easier too, like sorting items, and above everything else the ability to create target groups very easily. Creating a target group just takes a few clicks, and it is easy to go back and refine it if it is not exactly what you want. A big advantage to me is that the software is very stable. With other software programs they can tend to crash if you ask too much of them, but you seldom see that with DataDynamic.”

To any company considering using DataDynamic, she advises: “It is very important you have at least one person learn how to do scripting as it will save you a lot of time. You don’t need to be a programmer, but it should be someone familiar with the dataset and somewhat IT-minded. If you look at all the time we have saved – with the preparation, I think we have gone from almost 24 hours to 12, and most of that time is because of the coding we have to do. With producing the reports, we have gone from a day’s work to just a couple of hours.”

A version of this review first appeared in Research, the magazine of the Market Research Society, February 2008, Issue 500