The latest news from the meaning blog


Optimus reviewed

In Brief

What it does

Web-based suite of interview fraud detection measures for online surveys which can be applied to any online panel source, including panel providers or your own samples.


Peanut Labs

Our ratings

Ease of use: 4.5 out of 5
Compatibility with other software: 4 out of 5
Value for money: 3.5 out of 5

From $2,500 to scan 5,000 completes, with discounts for higher volumes


Pros

  • Highly accurate detection of the most common types of internet fraud
  • User can determine the level of policing
  • Interfaces directly with Confirmit, Market Tools Ztelligence and SPSS Dimensions
  • Works with most browsers, Windows or Mac


Cons

  • Some programming involved if using an unsupported interviewing package
  • Does not detect all kinds of fraud, such as straightlining and ‘satisficing’
  • Rules are system-wide: cannot vary them by project or client
  • Fraud not detected during scheduled or unscheduled downtime of the Optimus server

In Depth

Optimus is a standalone software-as-a-service or ASP solution for tackling fraudulent respondents that will work with any sample source and, effectively, any internet interviewing system. It comes from Peanut Labs, an online sample provider, though the service is not in any way tied to its samples.

If you happen to use Confirmit, SPSS Dimensions or Ztelligence, it is easy to set a command at the beginning and end of your interview to link your survey to the Optimus service. If you use other software, you will need to do a small amount of ad hoc web programming to link it in each time. Essentially, the link is achieved using a ‘redirect’, where the survey momentarily hands control over to the Optimus server, which then probes the respondent’s browser, gathers some information and then hands back to the server running the survey. None of this to-and-fro is visible to the respondent. Neither is any personally identifiable data involved. All that Optimus holds on your behalf is your respondent ID, so you can later identify problem respondents. It does not use email addresses or cookies.
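To make the mechanics concrete, here is a minimal sketch in Python of the redirect handoff described above. The endpoint, parameter names and example hosts are placeholder assumptions for illustration, not Peanut Labs’ actual interface.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names -- the review does not
# document the real Optimus URLs, so these are placeholders.
OPTIMUS_URL = "https://optimus.example.com/scan"
RETURN_URL = "https://surveys.example.com/resume"

def optimus_redirect(respondent_id: str, survey_id: str) -> str:
    """Build the redirect that momentarily hands the respondent's
    browser over to the fraud-detection server. Note that only a
    respondent ID travels with the request: no email address, no
    cookie, no personally identifiable data."""
    params = {
        "rid": respondent_id,   # your own panel's respondent ID
        "survey": survey_id,
        "return": RETURN_URL,   # where Optimus hands control back
    }
    return f"{OPTIMUS_URL}?{urlencode(params)}"

print(optimus_redirect("R-10293", "S-441"))
```

The survey simply sends the browser to that URL; once the Optimus server has probed the browser, it redirects back to the `return` address and the interview continues.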

The real strength of the software, and the single reason you would use it, is the firm’s proprietary digital fingerprinting technology, through which it builds up a database of every individual PC it has ever encountered – for your samples and for anyone else’s too. It relies on the fact that any web browser will reveal a large amount of information about the configuration and resources available on the PC – and there is enough variation for this to be as good as obtaining the manufacturer’s serial number. None of this information is personally identifiable. But once logged against a panellist ID, Optimus is able to start pointing the finger at some respondents for various reasons.
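The principle can be sketched very simply: hash the configuration details a browser reveals into a stable device ID. The attribute names below are illustrative assumptions, not the proprietary signal set Optimus actually collects.

```python
import hashlib

def device_fingerprint(browser_attrs: dict) -> str:
    """Reduce the configuration details a browser reveals (user agent,
    screen size, timezone, fonts, plugins, ...) to a short stable ID.
    Nothing personally identifiable goes in; the same machine yields
    the same fingerprint on every visit."""
    canonical = "|".join(f"{k}={browser_attrs[k]}" for k in sorted(browser_attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

pc = {"user_agent": "Mozilla/5.0 ...", "screen": "1280x800",
      "timezone": "UTC+1", "fonts": "Arial;Verdana;..."}
fp = device_fingerprint(pc)
# Repeat encounters with the same PC can now be matched across
# surveys and sample sources by comparing fingerprints alone.
```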

Optimus collects two other factual measures: interview completion times and IP location. Speeding is detected by comparing the time taken to complete against the anticipated time set by the researcher, and short interviews are logged as potential speeding violations.
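The speeding check amounts to a one-line comparison; a sketch follows, in which the 50% cut-off is an assumed figure for illustration, not Optimus’s actual rule.

```python
def is_speeding(actual_minutes: float, anticipated_minutes: float,
                threshold: float = 0.5) -> bool:
    """Flag a complete as a potential speeding violation when it took
    less than `threshold` of the anticipated interview length set by
    the researcher. The 0.5 default is an assumption for illustration."""
    return actual_minutes < anticipated_minutes * threshold

print(is_speeding(4, 15))    # True: finished in under half the expected time
print(is_speeding(12, 15))   # False
```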

The IP address of the ISP or company network the respondent uses to access the internet contains some useful high-level geographical information, which will pin the respondent down to a country, if not to a city. This can then be used or ignored as you choose. A panellist on a consumer survey in France is unlikely to be using an ISP in the Philippines, for example, though a business executive could be, if using the wireless network in their hotel bedroom, which could as easily be in Manila as Manchester.
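A sketch of the Geo-IP rule, with the country lookup stubbed out: in practice the country codes would come from a Geo-IP database resolved from the respondent’s IP address. The function shape and `enforce` switch are assumptions for illustration.

```python
def geo_ip_violation(ip_country: str, survey_country: str,
                     enforce: bool = True) -> bool:
    """Flag a respondent whose IP address geolocates to a different
    country from the survey's target market. `enforce=False` lets you
    switch the rule off, e.g. for business studies where executives
    may legitimately be answering from a hotel network abroad."""
    return enforce and ip_country != survey_country

# A French consumer survey answered via a Philippine ISP looks suspect:
print(geo_ip_violation("PH", "FR"))                  # True
# ...unless you have chosen to ignore the Geo-IP rule:
print(geo_ip_violation("PH", "FR", enforce=False))   # False
```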

From this raw data, Peanut Labs deduces six measures of suspect behaviour: duplicates, Geo-IP violators, hyperactive respondents, respondents belonging to multiple panels, speeding and a sin-bin category of ‘repeat offenders’, where the respondent has repeatedly transgressed in the past.

When you log into the system, you have options to register new surveys and also the different panel sources or companies you wish to use. The ‘controls’ area is where you define your own rules for what constitutes suspect behaviour. You can switch any of the rules on or off for your own samples, and you also have considerable flexibility in adjusting the threshold for each one. For example, for hyperactive respondents, you can set an absolute limit on how much multiple participation is acceptable to you, set a period, and choose whether you restrict this just to your projects or across all projects by all users of the service. It is a pity that you can only have one set of rules for all your projects: the rules for a B2B survey could be very different from what you allow in consumer research, for example.
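A hyperactivity rule of the kind described above could be modelled like this; the field names, defaults and method are assumptions for illustration rather than the actual Optimus configuration.

```python
from dataclasses import dataclass

@dataclass
class HyperactiveRule:
    """One system-wide rule: a cap on participations within a rolling
    period, scoped either to your own projects or to all projects by
    all users of the service."""
    max_completes: int = 10      # absolute participation limit
    period_days: int = 30        # rolling window
    all_projects: bool = False   # False = just your own projects

    def violated(self, completes_in_period: int) -> bool:
        return completes_in_period > self.max_completes

rule = HyperactiveRule(max_completes=5, period_days=7)
print(rule.violated(8))   # True: too many completes this week
```

The review’s complaint maps directly onto this sketch: there is only one such rule object for your whole account, where a per-project or per-client rule set would be more useful.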

There are two principal outputs from the system: reports and files containing the IDs of violators, determined by your rules, together with the type of violation recorded, either to update your own panel database or to seek replacements and refunds from sample providers.

A range of largely graphical reports is well presented. The main ones chart each type of violation by day, and can be filtered by project or sample source. But reporting choices are limited, and there really need to be more options available – for example, to allow comparisons between different surveys or between different sample sources.

It is also worth considering the effect of scheduled maintenance on the service: though minimal, it tends to fall during prime time on Monday morning in Europe, and while the service is down, your interviewing is unprotected.

Ultimately, the success of the solution will depend on the volume of traffic passing through it, so that it achieves the critical mass of fingerprinted PCs needed to differentiate clearly between the responsible and the abusive survey-taker.

Customer Viewpoint : Kristin Luck, Decipher Inc

Decipher started to use Optimus in April of this year, to control sample quality when using sample from multiple sources on client projects.

“The system is designed to track respondents from any sample source. Where it really comes in handy is where you are using a multiple source sample approach and you want to track people who are trying to enter the survey multiple times, either from a single source or from multiple sources.”

“Some of the other solutions on the market are tied to a particular sample provider. What was appealing to us about Optimus was that it was a technology we could use even if we were not working with Peanut Labs for sample on a particular study.”

Decipher uses Optimus with its own in-house web interviewing solution. Although this means Decipher does not benefit from a direct program interface, as with some mainstream packages, linking a new survey in takes very little time. “We currently have to use a programmer to connect into Optimus,” Kristin explains, “and the first time it was about an hour’s work, but it is a pretty short learning curve, and we now have it down to about 15 minutes on a new project. In the future we will be able to implement without the use of a programmer.”

Another attraction was that the web-based interface can provide clients with controlled access to the data, so that the entire quality control process is transparent to everyone. “It is really easy to use,” says Kristin. By using the service, Decipher has identified and removed around 11% of the sample from multiple sources.

“We have found some panel providers where 21% or more of their sample has a problem, and others where it is 8% or less,” Kristin states. “We tend to see lower percentages from the companies that have been making a lot of noise about panel quality, and higher percentages from those that have been largely silent about this.”

Being able to specify their own rules to determine fraud is another advantage for Kristin, as Decipher tends not to exclude hyperactive respondents. However, Kristin would like more granularity in how rules are applied, so that a client or a project can have its own particular rules – currently this is not possible without a manual programming process.

A version of this review first appeared in Research, the magazine of the Market Research Society, July 2008, Issue 505

Instant Intelligence Archiving reviewed

In Brief

What it does

Secure document archiving of scanned images and any other electronic documents, offered as a hosted solution via a simple web-browser interface.


Data Liberation

Our ratings

Ease of use: 5 out of 5
Compatibility with other software: 4.5 out of 5
Value for money: 5 out of 5


Entry level £900 annually for 20GB storage and 5 named users. Other packages available.


Pros

  • Works with most browsers, Windows or Mac
  • Scan large volumes of paper documents very efficiently in batches
  • Scanned documents, Word or PDF documents are text searchable
  • All documents are encrypted and held at a highly secure UK data centre


Cons

  • Entry-level package assumes 5 users
  • Document retrieval is by batch process – retrieval may not be instantaneous
  • Does not provide a solution for email archiving

In Depth

A new and ingenious online document storage solution from Data Liberation could provide an easy way to get rid of all the paper cluttering up your filing cabinets or off-site secure warehousing as soon as it has been processed, and at a price that makes it cheaper than most warehouse charges. There are plenty of document management and archiving systems available on the market which allow you to scan in any paperwork – from contracts to invoices, job forms to manuscript notes. However, they tend to be very expensive to purchase, and they need a dedicated server. For security, this should be offsite, and that too adds to the cost.

Instant Intelligence Archiving is an inexpensive, self-service solution which you can sign up to with a credit card (or sign a contract and be invoiced). Your starter account will give you 20 gigabytes of storage and allow for up to 5 named users. The people at Data Liberation reckon that the scanned contents of a typical four-drawer filing cabinet will use up about half a gigabyte. What you don’t get is a full-blown document management system: but you can easily convert paper documents into electronic ones, store them safely on the IIA secure server, and enjoy near-instant access to anything you wish to retrieve.

Data Liberation are familiar with the issues of MR – they brought out a neat DIY Excel-based questionnaire design and scanning system in 2002 and they also offer bureau document capture services.
With IIA, there are just two parts to the system that matter – archiving and retrieval, and you use the same simple web interface for both.

The starting place is to plan out your filing structure, which is done online as a free-format file tree, with as many virtual filing cabinets at the top as you choose. Below these, you can create folders and subfolders in any structure you like. In the slightly more expensive professional-grade service, different users can be given access to different filing cabinets, which could be useful for, say, personnel records. With the structure in place, you can now populate it with documents.

To scan from scratch, a duplex sheet-feeder scanner is essential. If you have a multi-function office copier/printer, this may offer duplex scanning too; otherwise, a duplex scanner can be purchased for as little as £370 these days. You can scan whole bundles of documents into a single file. If you wish to separate them, a barcoded separator sheet can allocate a sheaf of pages to different files without having to stop and start the scanner. Once scanned, you give each file a sensible name, and upload them.

If you currently scan your questionnaires for OCR data capture, then provided TIFF images are available, you can upload these too – there is no need to rescan.

Once uploaded, files can be renamed or moved, but not altered or deleted. This is an important security feature – if a document such as a contract were subject to dispute, your timestamped scanned image would be accepted by courts in the UK as being as good as the paper version on the day it was scanned.
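The evidential value of a write-once, timestamped store can be illustrated in a few lines. The review does not document IIA’s actual mechanism; this sketch only demonstrates the general principle that a stored digest and timestamp vouch for each other.

```python
import datetime
import hashlib

def archive_record(document_bytes: bytes) -> dict:
    """Record a document's SHA-256 digest alongside the moment it was
    stored. Because files can never be altered afterwards, the digest
    proves the bytes are the same ones captured at that timestamp."""
    return {
        "sha256": hashlib.sha256(document_bytes).hexdigest(),
        "stored_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

rec = archive_record(b"signed contract, scanned 2008-05-01")
# Any later alteration to the document produces a different digest,
# so a tampered copy can always be told apart from the original scan.
```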

This is a highly secure system. The archive server sits in one of the UK’s most secure data centres, run by BT at Cardiff and favoured by many of the big, security-conscious corporates. All documents uploaded are encrypted in transit and on the server, so nobody except the account holder can see what a document contains.

Retrieval is done by navigating through the folder structure. Any document selected will be presented in a readable preview format on screen, which you can also print. If you want the document or an entire folder back on your PC, you can request a download. To balance load on the server, this is a batch process, and there could be a delay while it is prepared. When it is ready, a link is emailed to you, and you have to log in again to download it.

When documents are scanned, OCR conversion also takes place, so there is electronic text to back up each image, and this text is available for you to run text searches. The OCR text can be a bit hit-and-miss, especially if the original document was in poor condition, or used a hard-to-convert font.
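Searching the OCR text behind the images amounts to a simple full-text lookup, sketched below. The file names and document texts are invented examples, and results are only as good as the OCR conversion, as noted above.

```python
def search(ocr_texts: dict, query: str) -> list:
    """Case-insensitive search across the OCR text backing each scanned
    image, returning the names of matching files."""
    q = query.lower()
    return [name for name, text in ocr_texts.items() if q in text.lower()]

docs = {"contract_2007.tif": "Supply agreement between ...",
        "invoice_0415.tif": "Invoice for fieldwork services ..."}
print(search(docs, "invoice"))   # ['invoice_0415.tif']
```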

If you need to convert space occupied by filing cabinets into extra desk-space, this technology is likely to pay for itself from day one, and it certainly makes getting documents out of the archive very easy.
The client view

Continental Research had already been using Instant Intelligence to scan survey questionnaires, before adopting Instant Intelligence Archiving this year, in a bid to reduce the amount of paper it was storing.

“We have probably cleared 40 to 50 filing cabinets so far”, claims Greg Berry, Technology Director at Continental Research. “Like most research companies, we have masses of filing cabinets everywhere containing everything from job sheets to personnel records.”

The company started by looking at document management and archiving systems, but was deterred by the high cost of ownership, not only for the software but also the servers and physical infrastructure needed too. “They were expensive and contained functionality we did not need,” observes Berry. “Essentially we need good quality electronic images stored in a format that we can retrieve quickly. “

Moving from physical files to electronic images has been more straightforward than Berry first imagined, as he was able to follow closely the existing filing structure, which was a structure that everyone understood. He notes: “We are still using paper for the live job, but when it is finished, we scan it, and we can then send the paper for storage, or, more and more now, we can send it straight for destruction.”

A dedicated scanner with duplex capabilities is used, and three members of staff in the Quality Control department are tasked with looking after the scanning. Unlike printers, scanners require more supervision to keep them busy, though the scanner can be left for several minutes to process each batch.

“Because we have mimicked the structure of our filing system, it also means retrieval is very easy. We also use it for disaster recovery purposes. Holding all that paper on site is not ideal, for if we had had a fire or a flood, and the paper was damaged, it would have been virtually impossible to recreate those records. Now, it is offsite, it is in a secure data centre, and the images, as they are scanned, are encrypted and logged so they cannot be altered. That also means if we did have to relocate in a disaster, we would still have access.”

Several different departments have warmed to the system quickly. “Coding use it a lot to look up old codeframes: a lot of the notes they have are handwritten. Field use it to check on old jobs, when they think something has come up which is similar. And scanning questionnaires is very useful, even if you have already entered the data. We keep all paper questionnaires for two years. We have already cut down on our external storage too, so it has already saved money there.”

A version of this review first appeared in Research, the magazine of the Market Research Society, June 2008, Issue 504

Converso Enterprise reviewed

In Brief

What it does

Platform-independent Java-based multi-modal interviewing and analysis platform with an integrated portal-style front-end


Conversoft, France

Our ratings

Ease of use: 3.5 out of 5
Compatibility with other software: 4 out of 5
Value for money: 3 out of 5


In euro (€): Most modules €3,000 per user, plus €20,000 for Enterprise platform and €6,000 for web server module: all one-off costs. Maintenance: 18% of licence cost annually, or 25% for ‘gold’ support.


Pros

  • Extremely easy to use for moderators and participants
  • Can present a wide range of stimulus material
  • Offers several novel research techniques
  • Provides a complete transcript for analysis at the end


Cons

  • Only supports Windows for both moderator and participants
  • Not completely DIY yet: management module to be developed
  • Real time groups only – no support for asynchronous participation

In Depth

Converso Enterprise is an ambitious redevelopment project which deserves much praise for embracing head on what Web 2.0 technology has to offer. The portal-building and alert capabilities are excellent, and the main data collection platform is robust and sophisticated. But as an end-to-end solution it is still very much a work in progress. Substantial chunks are ready for production work now, but the gaps within and between these modules are just a bit too wide for comfort at this point. Given Conversoft’s recent rate of development, it is likely to look much more complete in as little as six months’ time, so for anyone planning to upscale their software platform next year, this is definitely one for the shortlist.

Converso Enterprise follows an entirely different architectural principle to most of the other new-generation research platforms on the market. Conversoft rejected developing in Microsoft’s .NET framework in favour of using Java – both J2EE, for desktops, laptops and servers, and J2ME, the flavour of Java for mobile devices. This approach does not on its own give the product a Web 2.0 pedigree, but it is a good start. It means the software is totally platform-independent, so all users – researchers, respondents, technicians or end-clients – can use the browser and operating system they want – Apple, Linux or any of the Windows varieties. This technical agnosticism extends to the relational database at the heart of the product, for survey data and panels, if used, which could be any of the modern database platforms – Microsoft, Oracle, or open-source databases like MySQL or Postgres.

Conversoft also intends to create an open-source development platform to allow customers to extend the capabilities of Converso for themselves, but this does not exist yet.

What does exist is a wonderful portal-building tool that lets you snap into place any of the components of the Enterprise toolkit. You can create your own portal just for you or for entire groups of users – and then you can selectively switch on controls that will allow them to tailor the portal you gave them, to add in their own favourite things.

It could be the survey editing tool, a summary report showing the latest set of KPIs, an RSS news feed from the BBC or a link to Google Maps. This is where it gets exciting, because, once the missing developer tools have been delivered, your technical people will be able to build whatever components you want, creating so-called ‘mash-ups’ of data from different sources on the internet alongside your survey data – for example, to present geodemographic data in map form. What is more, Converso Enterprise components can be used as applets in other portals – so you could broadcast your poll results to other sites, or even Facebook.

Already, there is a rich library of components to choose from, particularly in the reporting area – which was never a strength for Converso in the past. It is relatively straightforward to create client data portals and dashboards that will present data graphically or as cross-tabs, or use intelligent reporting methods to highlight exceptions and provide alerts. Alerts are defined as triggers – really rather like dynamic filters that operate against the data and present a message. It all works fine with published data, but at the moment you would struggle to show any real-time data from live surveys – such as to track response or get a live snapshot in a topline report. For these you still need to resort to some of the legacy modules.
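An alert of the kind described above – a dynamic filter run against the data that emits a message when it matches – can be sketched as follows. The rule shape and field names are assumptions for illustration, not Conversoft’s actual trigger syntax.

```python
def make_trigger(predicate, message):
    """Build an alert: a dynamic filter applied to published survey
    records that returns a message when any record matches, and
    nothing otherwise."""
    def trigger(records):
        hits = [r for r in records if predicate(r)]
        return message if hits else None
    return trigger

low_satisfaction = make_trigger(
    lambda r: r["satisfaction"] < 3,
    "Alert: satisfaction score below 3 detected")

data = [{"satisfaction": 4}, {"satisfaction": 2}]
print(low_satisfaction(data))   # Alert: satisfaction score below 3 detected
```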

Similarly, you can deploy new surveys through the portal, define your sample, and even use the very comprehensive access rights management tool from the portal – all of these are Java programs. But the survey authoring tool is still a Windows program, and uses the old and rather complicated Converso scripting interface. A replacement is promised for later this year, although nothing was available for Interface to preview.

These are not the only gaps waiting to be plugged. They are being addressed – and they need to be – though given Conversoft’s recent track record, the current feeling of driving on a new highway where the cones are still in place should have gone by the middle of 2008.

On the plus side, there is true multimodal interviewing: CATI, web, and an integrated and very versatile handheld interviewing capability that will work on a very broad range of smartphones and BlackBerrys. The mobile interviewing is a new development and is impressive. There seems to be complete backwards compatibility with the old Windows-based CATI too.

Panel management exists but is not fully developed yet – the main panel management and respondent selection capabilities are there, but the panellist recruitment and community part is still missing. When it comes, it will offer integration with CRM systems, to use customers as a sample source, or to create customer panels.

The analytical tools are starting to look impressive too. A range of tables and charts can be created and presented directly in Word, Excel or PowerPoint, and it will populate native Excel or PowerPoint objects with data, to permit dynamic linkage. But if you wish to move data out to other MR analysis tools, you are stuck until the planned Dimensions and Triple-S links are ready.


A version of this review first appeared in Research, the magazine of the Market Research Society, November 2007, Issue 498