The latest news from the meaning blog

 

Where are the tools to enable Web 2.0 research?

Researchers cannot afford to ignore Web 2.0 approaches to research, as Forrester analyst Tamara Barber makes clear in a persuasive article on Research Live, in which she settles on market research online communities (MROCs) as the most effective way to do so. How to do Web 2.0 research, from a methodological point of view, is the subject of a great deal of discussion at MR events this year.

In her piece, Ms Barber focuses on the social or participatory characteristics of Web 2.0, where the value to research is obvious. But the other characteristics of Web 2.0 lie in the technological changes that have emerged from its 1.0 antecedents – the Internet becoming a platform for software, rather than merely a delivery channel for information. Indeed, the technology – Ajax, Web services, content integration and powerful server-side applications – is as much a hallmark of Web 2.0 as the outward manifestations of the social web. It is on the technology side that market research has a lot of catching up to do, and until this gets sorted out, Web 2.0 research will remain an activity for the few – for patient clients with deep pockets.
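To make that distinction concrete, here is a minimal sketch – my own illustration, not anything from Ms Barber's article – of the Ajax pattern that underpins this shift: the page asks the server for data asynchronously and updates itself in place, rather than fetching a whole new document. The endpoint and element names are invented for the example.

```typescript
// Minimal Ajax sketch: fetch the next survey question from the server and
// update the page in place, without a full page reload.
// The endpoint and element IDs are hypothetical, for illustration only.
async function loadNextQuestion(surveyId: string): Promise<void> {
  const response = await fetch(`/api/surveys/${surveyId}/next-question`, {
    headers: { Accept: "application/json" },
  });
  if (!response.ok) {
    throw new Error(`Server returned ${response.status}`);
  }
  const question: { id: string; text: string } = await response.json();

  // Swap the new question text into the existing page instead of navigating away.
  const container = document.getElementById("question-text");
  if (container) {
    container.textContent = question.text;
  }
}

// Example: wire the sketch to a hypothetical "Next" button.
document.getElementById("next-button")?.addEventListener("click", () => {
  loadNextQuestion("demo-survey").catch(console.error);
});
```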

The specialist tools we use in research are starting to incorporate some Web 2.0 features, but nowhere does this yet approach a fully integrated platform for Research 2.0 – far from it. Panel management software is morphing into community management software, but the Web survey tools it links to don't yet make it easy to create the kind of fluid and interactive surveys the Web 2.0 researcher dreams of. Nor are the tools for analysing the rich textual data that these new kinds of research produce truly optimised for the task. There are pockets of innovation, but multi-channel content integration – a key feature of Web 2.0 sites – is still difficult, so researchers are drowning in data and left running to catch up on the analytical side.

Another problem arises as more ambitious interactive activities and research methods emerge: the demands on both the respondent and the respondent's technology increase, and some are getting left behind. Participants find themselves excluded because their PC at home or at work won't run the Java or other components needed to complete the activity – whether it's a survey, a trip into virtual reality or a co-creation exercise – or won't let them upload what they are being asked to upload. Even relatively modest innovations, such as presenting an interactive sort board in the context of an online survey or focus group, will exclude some participants because their browser or their bandwidth won't handle it. Others simply get lost because they don't understand the exercise – a growing body of studies is emerging into the extent to which respondents fail to understand the research activities they are asked to engage in.
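One way a survey platform might catch this problem before it silently loses people is a quick capability check before an interactive exercise begins, falling back to a plainer version of the task. The sketch below is purely illustrative – the checks and the fallback are my own assumptions, not features of any of the tools mentioned here.

```typescript
// Illustrative capability check before offering an interactive sort-board
// exercise; participants whose browsers fall short get a plain-HTML fallback
// rather than a broken page. The checks and the fallback are assumptions.
interface CapabilityReport {
  dragAndDrop: boolean;
  fileUpload: boolean;
  online: boolean;
}

function checkCapabilities(): CapabilityReport {
  return {
    // Drag-and-drop support, needed for the interactive sort board.
    dragAndDrop: "draggable" in document.createElement("div"),
    // File upload support, for exercises that ask respondents to submit media.
    fileUpload: typeof FileReader !== "undefined",
    // A crude proxy for connectivity problems.
    online: navigator.onLine,
  };
}

function chooseExercise(report: CapabilityReport): "sort-board" | "plain-list" {
  return report.dragAndDrop && report.online ? "sort-board" : "plain-list";
}

console.log(chooseExercise(checkCapabilities()));
```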

New Scientist recently reported on innovations in gaming technology where the game learns from the player's demonstrated level of competence and adjusts its behaviour accordingly. It's the kind of approach that could help considerably in research. Unlike gamers, research participants can't be asked to spend more than a few seconds learning a new task, and we can't afford to lose respondents because of the obvious bias that introduces into our samples.
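As a rough, entirely hypothetical sketch of how that adaptive idea might carry over to a research task – nothing like this is described in the New Scientist piece – a survey could watch a respondent's first few attempts for signs of struggle and quietly switch to a simpler version of the exercise:

```typescript
// Hypothetical sketch of an adaptive exercise: if a respondent's recent
// answers take too long or fail validation too often, fall back to a simpler
// version of the task rather than losing them altogether.
interface AttemptRecord {
  secondsTaken: number;
  failedValidation: boolean;
}

type TaskVariant = "full-interactive" | "simplified";

function chooseVariant(attempts: AttemptRecord[]): TaskVariant {
  if (attempts.length < 3) {
    return "full-interactive"; // not enough evidence yet
  }
  const recent = attempts.slice(-3);
  const avgSeconds =
    recent.reduce((sum, a) => sum + a.secondsTaken, 0) / recent.length;
  const failures = recent.filter((a) => a.failedValidation).length;

  // Thresholds are illustrative only.
  return avgSeconds > 45 || failures >= 2 ? "simplified" : "full-interactive";
}

// Example: this respondent is clearly struggling, so offer the simpler task.
console.log(
  chooseVariant([
    { secondsTaken: 60, failedValidation: true },
    { secondsTaken: 55, failedValidation: false },
    { secondsTaken: 70, failedValidation: true },
  ])
); // "simplified"
```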

For Web 2.0 research to move beyond its current early-adopter phase, not only do researchers need to take on these new methods, but research software developers also need to be encouraged to take a Web 2.0-centric approach to their own development.

Translation on the fly (or on the sly)

World Wide Lexicon Toolbar is a new plug-in for the Firefox web browser that promises to take webpages in any unfamiliar language and, as you browse, simply present the pages in English (or, for non-English speakers, the language of their choice). My preparations for the trip I am about to make to Korea have focused my mind on the frustrations of being unable to read webpages. But I was also curious to see how useful this would be to Web 2.0 researchers who are analysing social media content and the like.

It is a very smart add-on: if you browse to a page that isn't in a language you understand, the page will be machine-translated and presented to you. If a human translation has been made, it will show that instead. It surpasses Google's option to machine-translate pages in a couple of other ways, too: more languages are covered, and the translated version is presented in the format and style of the original page. There is even an option to double up the text so you can see the original and the translation. Of course, the translated text may still disrupt the layout, but it gives you a much better idea of the context of the text, which aids understanding considerably.
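As a rough sketch of the preference order just described – use a human translation where the community has supplied one, otherwise fall back to the machine version, optionally doubling it up with the original – the names and data shapes below are invented for illustration and are not the toolbar's actual code:

```typescript
// Illustrative sketch of the fallback logic described above: prefer a human
// translation when one exists, otherwise use the machine translation, and
// optionally show the original text alongside. All names are invented.
interface PageText {
  original: string;
  humanTranslation?: string;
  machineTranslation: string;
}

function renderTranslation(page: PageText, showOriginal: boolean): string {
  const translated = page.humanTranslation ?? page.machineTranslation;
  return showOriginal ? `${page.original}\n${translated}` : translated;
}

// Example: no human translation exists, so the machine version is shown,
// doubled up with the original for context.
console.log(
  renderTranslation(
    { original: "원문 텍스트", machineTranslation: "Original text" },
    true
  )
);
```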

The software is currently in beta and can be installed free of charge from the Mozilla Firefox add-ons page. Reports from early adopters are that it is extremely useful, provided you are willing to put up with the limitations of machine translation. The human translations it shows are those entered by volunteer contributors to the World Wide Lexicon community. It's a fantastic idea and another example of the wisdom of the crowd at work on the Web. Yet the reality for any social Web researcher is that the blogs and community forums you are likely to visit will not have attracted the attention of a community-minded translator, and you will still need to endure the inadequacies of the machine translation.

Machine translations are not bad with well-constructed texts that have been written in a stylistically neutral way, but the more colloquial and idiomatic the text is, the more bizarre and worthless the translation becomes. I don’t have the means to try this out, but I suspect this tool may be more useful when doing Web-based desk research into more authoritative sources than the general Web 2.0 free-for-all. For that, we need machine translations to get smarter.

Why on the sly? You need to register and log in to use the service, and the server must, by definition, be aware of every page you visit – so you are giving the plug-in's owner a complete trail of your browsing activity. This is not made clear when you sign up. If it bothers you, you could use Firefox only when you wish to translate something, and another browser for anything you wish to keep private.

Tim is at the First International Workshop on the Internet Survey this week, organised by Kostat, the Korean National Statistics service, and will be posting highlights from the event.

Our new website launches

[Image: preview of the new meaning website]

Wait a long time for a bus and, when it arrives, how often is there another one right behind? So it seems with websites, too. Not only did Research magazine launch its splendid new website last week, but, following right on, so did we. And the two are not unconnected. What we launched this week is the first phase of a complete overhaul of our web presence. All the goodies and resources of old are there – the software reviews, the software directory, our research reports, papers, presentations and articles. There are four big changes, though.

  1. At the centre of our site is now our blog stream. We will fill this regularly with contributions from me and the others at meaning.
  2. We want to make the site part of a two-way dialogue. Yes, we're being very Web 2.0 and we are proud of it. So there is an opportunity for you to register on the site, after which you will be able to add comments and provide feedback on many of its pages as well as, we hope, react to and contribute to our discussions in the blog.
  3. We have provided an RSS feed, so you can be kept up to date as we add new content, if that interests you.
  4. We have also taken the opportunity to rationalise the order of the content on the site and, we hope, make it even easier to navigate.

Behind the scenes, we have moved from a static website, managed with increasing difficulty in Dreamweaver, to an up-to-date content management system. We are using WordPress and MySQL, having looked at and discarded Joomla and Drupal, and we are very happy with the result. While many think of WordPress as a blogging tool, we were very pleased to find that it is also a lean and highly efficient CMS, with a lot of extensions available for managing quite large, content-rich sites flexibly and relatively easily.

But this is only phase 1. Shortly we will be announcing a completely new and vastly extended replacement for Research Software Central, our software database, which we are producing jointly with Research magazine. At the same time, Research magazine will be featuring highlights from our blog. We hope you like the changes, and we hope you will tell us what you think – using our new comment form below.

CASRO Tech 09 and a field of poppies

It was great to be at CASRO's annual Technology conference again. They have changed the format since I was last there in 2005: instead of a roster of invited speakers, they now follow the call-for-presentations model, which has raised the standard considerably, as the organising committee has been able to pick from the best. CASRO Tech presentations have always tended to be pretty grounded, focusing on methodology as well as technology, with speakers unafraid of saying what does not work along with what does. You are often hearing from people speaking from a well of practical experience, which I find a tonic after the customary highly choreographed statements of the obvious that seem to dominate the showpiece research conferences. (more…)