MARSC 1.1 reviewed

In Brief

What it does

Web-based panel and sample management tool, based on a subset of the most useful features in the desktop/server version of the MARSC sampling software.



Our ratings

Ease of use: 4.0 out of 5

Compatibility with other software: 4.5 out of 5

Value for money: 4.5 out of 5


Prices start from around £175 per month for a small-scale operation, plus hosting fees if required; £250-£300 per month for a mid-scale operator with multiple panels. Price determined by volume.


Pros

  • Allows precise targeting of samples with extremely accurate incidence calculations and estimates
  • No limit on size, demographics or history kept
  • Can reuse sample jobs to draw fresh samples, create ‘favourites’ and define default sample templates
  • Provides resources for creating multiple panel members’ sites


Cons

  • Does not support non-interlocked (margin-defined) quotas yet
  • Some reports rather cryptic and confusing
  • Windows and IE only for the admin interface
  • Panellist portal module needs programming skills to configure

In Depth

MARSC has always been the heavyweight among sampling and panel platforms, but a new software-as-a-service version of the tool has now emerged with the aim of lowering the barrier to entry. It makes it easier to get up and running and offers something of a break on price too, which should appeal to the smaller-scale operator. Just how useful it is will depend on how sophisticated your requirements are; if you need something with all the bells and whistles, the desktop and server-based MARSC is still available and still being developed.

These days, most users tend to look on MARSC as a panel management tool, but it wasn’t always that way: MARSC started out as a sophisticated sampling tool that allowed corporate research clients to draw balanced, quota-controlled samples directly from their own CRM databases. It was a program in the right place at the right time when researchers realised that the most efficient way to do research online was to have a panel of pre-screened, actively managed respondents with a known pedigree, both in terms of demographics and past participation.

MARSC maintains its own database of contacts, so compiling and revealing all of this history is second nature to it. It is not an interviewing system – to use MARSC you will also need a survey data collection platform, though it is agnostic about which one: it supports SPSS Data Collection and Confirmit directly, and many others via Triple-S.

The new .net version rationalises the process of sample selection by setting out all the options across six tabbed screens. You start by stating whom you do want, referring to any of the profile variables in your panel, overall targets and incidence estimates. In the filters tab you set exclusions, which can also be based on demographics or past participation – for example, people who have already received an invitation to another study in the last two days, or who were interviewed on a previous wave of the same study in the last year.
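The kind of exclusion rules described here boil down to simple date arithmetic over each panellist’s participation history. The sketch below is purely illustrative – the field names and data model are hypothetical, not MARSC’s:

```python
from datetime import date, timedelta

# Hypothetical panellist records; field names are illustrative only,
# not MARSC's actual data model.
def eligible(panellist, today=None):
    """Apply the kind of exclusions described above: no invitation in
    the last 2 days, and no interview on a previous wave of the same
    study in the last year."""
    today = today or date.today()
    last_invited = panellist.get("last_invited")
    if last_invited and today - last_invited <= timedelta(days=2):
        return False
    last_wave = panellist.get("last_wave_interview")
    if last_wave and today - last_wave <= timedelta(days=365):
        return False
    return True

panel = [
    {"id": 1, "last_invited": date(2010, 3, 1), "last_wave_interview": None},
    {"id": 2, "last_invited": None, "last_wave_interview": None},
]
# Panellist 1 was invited yesterday, so only panellist 2 survives the filter.
sample_frame = [p for p in panel if eligible(p, today=date(2010, 3, 2))]
```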

In the third tab, you choose the variables you want to pass across to the interviewing system, and in the fourth you define the quota targets you are aiming for. While you may also need to reinforce these with quotas applied in the interviewing module, a great advantage of MARSC is that, over time, it builds up a very detailed response history and uses this, for each sample segment you are selecting, to predict just how much sample you need to meet each quota target without over-inviting respondents or wasting sample.
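In its simplest form, this prediction is the target number of completes divided by the expected yield per invitation. The sketch below shows the basic arithmetic only; MARSC’s actual model, built on detailed per-segment response history, is of course far more sophisticated:

```python
import math

# Illustrative only: predict invitations needed for one sample segment
# from a historical response rate and an estimated incidence.
def invites_needed(target_completes, response_rate, incidence):
    """response_rate: share of invitees who start the survey;
    incidence: share of starters who qualify.
    Returns the number of panellists to invite."""
    expected_yield = response_rate * incidence  # completes per invitation
    return math.ceil(target_completes / expected_yield)

# e.g. 200 completes at a 40% response rate and 50% incidence:
invites_needed(200, 0.40, 0.50)  # 1000 invitations
```

Over-estimating either rate means over-inviting and burning goodwill; this is exactly why an accurate, history-based estimate is worth so much.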

The fifth screen handles notifications – who the reports of the sample jobs get emailed to – and the last one, ‘properties’ (misnamed, in my view), holds not job options but the metadata for the job: the type of project, its name, who the client and the exec are, and where you specify the reward points for participation.

All of this set-up is saved as a job, which you can name and save in a folder structure that is always visible on the left of the screen. Once a job is saved it can be queued for execution, when it will draw a sample and mark the selected panellists in the database as having been chosen for a survey. You can also do a trial run, when it will simply report on what it would draw – a useful preliminary step to ensure you do have enough sample to run with.

Saved jobs can also be stored as “favourites” – a nice web-like touch. Indeed, the program has generally ported well to the web environment. However, the report displays could be improved: they present a mass of data and tend to use rather cryptic two-letter codes as column headers, taken straight from the desktop version, whereas so much more is possible with dynamic HTML on the web.

The respondent portal is a vital part of the panel management tool, and MARSC provides a versatile portal module and set of tools for configuring it. The module is common to the desktop and .net versions. In this, participants can update their profile, review what surveys are available for them to take, review and redeem incentive points and so on. The tool is designed for those with developer skills, however – sadly, there is no simple point-and-click interface for creating or customising panellist portals, which the SaaS user is likely to expect and which other systems now provide. Those without an in-house web programmer are likely to need to buy some consulting services from MARSC to get set up.

MARSC says it intends, in time, to move all of the desktop functionality over to the .net interface – at the moment it lacks a handful of the more advanced features, the most serious omission being support for non-interlocked (margin-defined) quotas, where an iterative model is used as a direct counterpart to rim weighting on tables. It won’t appeal to everyone yet, but it does go a long way to democratising efficient panel management by making it available to smaller operators without the expense of dedicated servers and teams of specialist programmers.

Client perspective: Robert Favini, Research Results, Massachusetts, USA

Research Results, a research company in Massachusetts, uses desktop MARSC to host a number of custom panels for clients in a number of consumer sectors including the entertainment industry. Robert Favini, VP Client Services, discusses some of the changes that MARSC has enabled.

“Originally, we had had our own internal systems with screener surveys attached which we used in a rudimentary way to pull sample, but it was a bit of a patchwork of things and wasn’t very sophisticated. About three years ago we started to look at what other people were doing, and we came across MARSC. Shortly before that, we had also decided to use SPSS as our main survey authoring environment, and as the two tools fit together really seamlessly, that was a big draw for us. As a result, our level of sophistication in what we can offer to our customers for managed panels has jumped up a lot.

“Our clients need something very robust because often they are doing quite a lot of analysis on the data that we provide. With our home-grown tools, the problem we were having was compatibility. MARSC has a nice agnostic format that talks to everything.”

The full implementation took place over a two-month period, including converting all of the data, though setup and training required only a few days.

“It was relatively easy to get it up and running: we had a couple of training sessions and someone from MARSC came over to work with our in-house developers. They are in the industry so they are aware of what we are trying to do and use the same terminology as us.

“About the time MARSC came along, I think the industry started using sample in a different way. Gone were the days when people would happily take surveys – we were having to use sample much more carefully. What we were looking for with MARSC was something that would use sample wisely, and let us treat it as a precious resource.”

“What we found was that as we used it, we appreciated it more and more. We like its ability to gain intelligence within the panel. We find we can target sample really precisely, and the incidence calculations it provides – the ‘guesstimates’ of how much sample to broadcast – have been phenomenally accurate. The bottom line is we are not over-using sample: we are basically being very efficient, which is where we were looking to be.”

Robert welcomes MARSC’s strategy to migrate the product to the web, although the absence of non-interlocked quotas along with some other advanced features makes it unviable for his firm yet. “We’d still be interested in using a web-based product because of the portability it brings. Sometimes we have staff scattered all over the place and at the moment we have to use VPN to give them remote access. It would be useful for client users, but we too would like to have that bit of greater flexibility to be able to work out of the office.”

A version of this review first appeared in Research, the magazine of the Market Research Society, March 2010, Issue 526.

Mobile fallout – to be ignored at your peril

From the Mobile Research Conference 2010, Globalpark, London, 8-9 March 2010.

Mobile research, as a method, may still be in its infancy, but researchers already need to be aware of the fallout from the growing phenomenon of mobile communications, both in telephony and in data communications and the mobile web. The effects cannot be avoided and need to be understood. It was clear from last week’s Mobile Research Conference, organised by software firm Globalpark, that respondents are already taking online surveys designed for conventional PCs and laptops on their web-enabled smartphones in small but significant numbers. Responding by simply excluding them – closing the survey when an iPhone is detected – is not a neutral decision from a sampling perspective.

But neither is it smart to exclude them inadvertently, because the survey behaves in a way that does not let them continue, or makes it difficult for them to select some responses. It is no longer safe to assume survey participants are using a conventional browser as their preferred means of accessing the Internet, and that trend will accelerate as other portable devices, such as Apple’s iPad and the imitators it will spawn, start to emerge.
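Detecting a smartphone in the first place is straightforward server-side user-agent sniffing; the harder question, as the speakers argued, is what to do next. A minimal, purely illustrative sketch (the token list is a hypothetical sample, not exhaustive) that adapts the layout rather than closing the survey:

```python
# Illustrative user-agent tokens for the handsets of the day;
# a real deployment would use a maintained detection library.
MOBILE_TOKENS = ("iphone", "ipod", "android", "blackberry", "symbian", "windows ce")

def is_mobile(user_agent):
    """Return True if the User-Agent string looks like a mobile browser."""
    ua = user_agent.lower()
    return any(token in ua for token in MOBILE_TOKENS)

def survey_layout(user_agent):
    """Branch to a small-screen layout instead of excluding the respondent."""
    return "mobile" if is_mobile(user_agent) else "desktop"
```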

The same is the case with mobiles replacing landlines – the figures I found were that 20% of households in the USA were mobile-only last year, and that is likely to be 25% now – so a quarter of the US population now falls outside any RDD sampling frame. Marek Fuchs from the Technical University of Darmstadt, in one of the sessions at the event that I was chairing, presented some astonishing figures on the extent to which people in many other European countries are giving up their landlines at an even faster rate. He presented a Europe-wide average from Eurobarometer data in late 2009 of 30%. It is even higher in some countries: notably, 66% of people in Finland and 75% in the Czech Republic have only a cellphone to answer.

The mobile web may not quite be eating holes in online sampling frames to the extent that mobile voice is for CATI, but the greater cause for concern here is just what those respondents who do participate online experience. Mick Couper, who knows more about interview effects than anyone, warned that the effect on completion is barely understood yet, but one thing is clear – making assumptions from web surveys would be very risky. Even if survey tools are set up to convert full-screen surveys gracefully to the smaller format, as Google’s Mario Callegaro said, a concern for him would be to know on what basis this conversion was being done and what lay behind the decision-making process adopted by web survey developers or software providers.

The uncomfortable truth is that we just don’t know the answers to many of these questions yet.