The latest news from the meaning blog

8 common mistakes we avoid when writing about technology

Good marketing copy needs to work hard for you, increasing the conversion rate on your website, emails and newsletters. Here we have some pointers to help you get your text spot on:

  1. Technology product websites are often full of confusing jargon, such as API, SaaS and parallax design. Use it sparingly, unless you are deliberately writing for a technical audience. Even then, make sure you still reach out to non-technical readers with some helpful explainers.
  2. Be familiar with the industry jargon of your target audience and introduce some of it into your text. It will give your readership confidence that your company understands them.
  3. Technology marketers love clichés and hyperbole such as “world’s most powerful solution” and “cutting edge”. They fill the space but tell the reader nothing. Instead, try to differentiate your company by showing what’s different and how much it benefits the user. It’s harder to do, but worth the effort!
  4. To be more credible, incorporate metrics and customer quotes – ideally both at the same time! For example: “The software enabled us to deliver the project a week earlier than usual and we were so happy that we won an even larger contract on the back of it,” said John Doe, Research VP at XYZ Corp.
  5. Put yourself on the customer’s side of the table. Instead of writing “Our solution will save your company hours every week,” write “This will save your company hours every week,” or, even better, “Our customers tell us this feature saves them between 5 and 20 hours when setting up a new project.”
  6. Don’t overwhelm your readers. You want your words to get read. Respect that they are busy. Write short, and write just enough to get them interested. Provide links so they can find out more details when they need to.
  7. Get the tone right. For a B2B audience, your writing should be professional, but still warm and friendly. The tone is right when nobody notices it.
  8. Always have a call to action. Make your article work as a step on a journey. Think about what the next steps are and present them, making them visually obvious. When readers get to the end, what comes next?

Learn more about how to make technology marketing copy more effective: Copywriting that focuses on the facts


How to create convincing key messages

Developing a set of key messages helps you achieve clarity in your marketing by ensuring that a consistent thread runs through all your company’s communication. Here we discuss how to create compelling key messaging, which is particularly important and useful if you have a complex technology product.

It is important to decide on a few basic messages that all your staff and promotional materials consistently reiterate. Hearing the same points repeatedly from different sources is more memorable and credible. The main thing to remember about your key messages is that they must be true!

You should aim to create around eight to twelve key messages, or key differentiators, about your company or product. Any more and your message will get diluted; any fewer and you’ll sound lacking in substance.

Key differentiators are short factual statements, such as “business partner that truly understands clients’ needs”. These are backed up with qualifying information that supports your case and adds credibility. For example: you could explain that most of your staff used to work in the same industry as your clients, you have 25 people in Support who cover all global time zones 24/7, and you work with 15 of the 20 largest companies in the industry, and so on.

To come up with a set of key messages, set up a group of people – usually staff – who can brainstorm about your company. The group should represent a cross-section – different departments, levels of seniority as well as new and old employees. Each person should write down what they think is special about the company or product. Eventually, a marketing expert needs to arrange the notes into eight to twelve groups and summarise each group with a short key differentiator, such as the example given above. This stage is the tricky bit!

Having written your key differentiators, the next step is to review them, considering:

  • Do they fit in with your company strategy and goals?
  • Do they focus on what differentiates your company from the competition?
  • Are they broad enough in scope?
  • Are they compelling enough?

Bear in mind that the key differentiators should not be set in stone. Over the months and years, you should review them regularly. Things change!

Finally, remember that the key differentiator document is strictly for internal use. It is a crib sheet, primarily used by Marketing, that should inform every type of communication with the outside world. When creating marketing materials, the language and content should be benefit-led, consider the needs of the audience and fit in with your corporate style. Never regurgitate the exact words from the crib sheet!

Is it really worth writing a customer newsletter?

A newsletter for customers and prospects can form an effective and – if managed properly – an economical part of your marketing effort, especially if you have a complex or little-understood technology product.

A good newsletter should avoid empty boasting about the “best tool in the world” and instead provide information that is helpful, relevant, informative or even educational for both existing and potential new customers. The types of stories you should be considering are real-life but inspirational case studies of how existing customers are using your technology, expert opinion on trends or ideas in the industry, tips and tricks on using your product or even some company news. By focusing on these sorts of articles you will build credibility for your company and brand by showing there is substance and depth to what you do. Your aim should always be to demonstrate your understanding of the industry your customers are working in, and how your business provides solutions for them. A newsletter like this will build confidence in the marketplace of you as a supplier.

You can choose to distribute your newsletter by social media or email – or ideally both. The good thing about email marketing is that you are only sending your newsletter to people who have given you their email address, which is a sign they are receptive to your message. Although email is old hat compared with social media, most people check their email inboxes multiple times a day and, unlike social media, it is possible to personalise the message. For example, you can include recipients’ names in emails or send different content to different target groups.

Announcing a newsletter via email still has several plus points. We normally recommend the very briefest email containing short teasers that link to the full story on your website. This also has the advantage of driving traffic to your website and, if you use an email marketing tool to send the emails, you will have useful measures such as who has opened your email and which links they clicked. However, you obviously get more bang for your marketing buck if you also post your stories on social media or include them in a blog which you can Tweet, so it is worth ensuring that each story is capable of standing alone.

Writing a newsletter does take some effort, but we have always found having a monthly or quarterly deadline is a spur to creativity and the task becomes easier in practice. A newsletter done well can be one of the most powerful yet simple tools to help drive technology sales.

Copywriting that focuses on the facts

If you are writing marketing copy for market research technology, our advice is always to write in an evidence-based style. For a consumer product, humour and charm often work best, but in business, people also need to justify their buying decisions using facts and figures, especially for big-ticket items.

Evidence-based writing includes:

  • Facts
  • Customer quotes
  • Customer names
  • Examples
  • Metrics
  • Explanations of how your product works
  • Other information, such as research data

Evidence-based writing avoids:
Fluff, puff, waffle, clichés, hyperbole and unsubstantiated claims

Consider the type of evidence that will convince your reader. There’s no point in writing “the world’s leading online interviewing platform”. This is an unfounded claim. Try instead something more factual, such as “12 of the top 20 firms listed in the Honomichl 2020 Top 50 Report use our product”.

You also need to consider who your reader is – so the evidence you give to senior management is likely to be different from that which you aim at potential users of your product. Senior executives will be interested in top-level cost savings or efficiency gains, or how your product enables their company to provide a new service, whereas users will also be interested in the nuts and bolts of how the product works. So, a named customer in the same industry saying “we managed to deliver our project in a day instead of a week and to a higher standard than before” is very powerful. For the user, you might describe each feature with a benefit, a brief description of how it works and a screenshot. The screenshot provides powerful evidence for your claims.

Of course, you have to remember that adding some emotion or fun to your writing is valuable because, like all of us, business people are human beings and want to be inspired and have less stressful and more enjoyable days. A lighter touch can be a useful way of drawing your readers in – see below an example of the top section of the Asana home page, which is factual but low on details yet tugs on our emotions using an image of a diverse and happy group of people. Total focus on profit margins and efficiency gains can get a little too much for all of us eventually!

Screenshot of Asana homepage

The bump and grind of MR’s tectonic plates

Technology platforms are merging: will we see earthquakes, new mountain ranges or find familiar landscapes heading for the bottom of the ocean?

Is it a good thing or a bad thing when major players in the world of MR technology merge or get acquired? It really depends who’s asking the question, as the answer is invariably a bit of both. But it’s a question that is alive right now.

Last February, Anglo-French software provider Askia announced that IPSOS had bought a controlling interest in them. Days later, Confirmit announced it was merging with dashboard provider Dapresy. Yet before there were any real announcements about product directions, news broke this year that Confirmit was now in merger talks with FocusVision, a head-to-head competitor.

Mergers and acquisitions can be good if they mean that that development investment can be intensified by unlocking new capital or pooling resources. After all, it generally takes two continents to collide before a mountain range can be born! We need our technology to reach higher and further, and much of what is out there shows signs of stunted growth because of limited investment. This, in turn, creates a productivity issue for the research industry, because the technology it depends on has developed piecemeal. Fewer, more intensively developed products should boost productivity – but that relies on the focus of the acquisition being to deliver growth, not to achieve economies of scale and extract value. Only time can tell on that one.

But there are other more immediate downsides. The people who are already “no longer working for the company” can tell you about that. It also creates anxiety for customers. Where rival products come under the same ownership, it is clear that not all the products will survive. The developer will have to pick winners, and hard luck if it’s the tools you rely on that are destined for obsolescence.

Sometimes companies spend a year or two figuring out how to integrate the incompatible, and sometimes they reach the conclusion that they are better off starting again. This is what happened twenty years ago when SPSS went on an acquisition spree. That resulted in a development trajectory that saw existing products stand still for several years before their replacement ‘Dimensions’ product line started to emerge, piece by piece. But the replacements were slow to appear and when they did, they did not completely fill the gap. That hiatus probably encouraged rivals such as Confirmit (then FIRM), Askia and subsequently Decipher (later acquired by FocusVision) to get going.

Confirmit seem to have done a better job of folding the products it acquired into its own platform when it bought Pulse Train over a decade ago. It is still there, at least, and the lineage from Pulse Train, with its CATI and data processing capabilities, can still be seen in Confirmit. Perhaps that bodes well for their acquisition of Dapresy. Confirmit has long had its own dashboard tool. Named Reportal, it had a cute name but not much else to commend it. It was a beast that was hard to tame. MR clients appeared to be defecting to generalist business intelligence tools like Tableau and Power BI, which are a pain to use for an entirely different set of reasons.

But the path is much less clear for Confirmit versus FocusVision’s Decipher, both sitting in the centre of their own constellation of different – and inevitably incompatible products. Research firms using these tools can ill afford another lengthy development hiatus while these continents grind together.

At the moment, the merger has been referred to the competition regulators in the US on monopoly or ‘antitrust’ grounds. If it is eventually allowed, let’s hope the lessons of SPSS have been learned, and that they figure out a way for all their customers to scale the mountain range that will start to emerge in a year or two’s time.


Breaking the link – does MR have a problem with testing its surveys?

It’s not a good look: a grammatical error in the intro, a spelling mistake at Q2, no obvious place in the list of options for the answer you want to give for Q4.

Too many of the surveys I see from either side of the fence, as a pro or as a punter, contain the kinds of mistakes that should not have got through testing.

Some wording errors might just make you look stupid, but others introduce needless ambiguity and bias. All research firms have QA processes, so what is going wrong? My hunch is that it all comes down to a combination of factors in that twilight zone at the interface of technology and people.

First, it may not always be clear who is responsible for catching these errors. I’ve met researchers who think it’s the responsibility of the scripter to test the survey. I associate it with what I call the “link checking” mentality. The survey is ready to test. The technician scripting the survey provides the researcher with a test link or URL to access the survey and asks them to feed back any problems. The researcher clicks the link, looks at the first few questions and reports back that it seems to be working fine. The technician has deadlines and other projects to work on, so has rushed it through, anticipating the researcher will pick up anything really dreadful. The researcher is busy working on another presentation and has limited time to go all forensic. A few quick glances at the first two or three questions confirm everything seems to be up and running.

So everyone sees what they are looking for – that the survey is ready to go live on time, even though it isn’t. I hasten to add – this isn’t everyone, always, but it does happen.

I have also heard of researchers who delegate (or is it abdicate?) responsibility for taking a really good look to their panel provider. Of course, good panel providers have many reasons for not wanting surveys that are badly worded, or that take 20 minutes to complete rather than the ten that were agreed in advance. But they won’t necessarily notice the brand that is missing, or even the entire question that is missing, or sort out minor typos. That’s not the deal.

What this does illustrate, though, is that there isn’t one ideal person to do the testing. Testing is a team effort, and those involved need to be selected for their different perspectives and skills. The researcher who designed the survey must always view it to check that what has been created delivers on what they designed. Ideally, a research assistant should run several tests and, with the questionnaire in hand (or open in another window), check that all the wording is correct and complete. It also makes a lot of sense to involve an external tester not previously associated with the project, who will run through the survey a few times to check all the wording is complete and makes sense. In multi-language surveys this is vital, and the tester should ideally be a native speaker.

The technology can help too. “Dummy data” from randomly answered questions provides a very useful way to intercept logic and routing errors, and this should be a routine part of the testing process. The survey tool may incorporate spell checkers too, to allow scripters to find or even prevent minor typos.
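To make the idea concrete, here is a minimal sketch of how dummy-data testing works, using a hypothetical three-question survey (the question IDs, options and routing rules are invented for illustration, not taken from any real survey platform): each simulated respondent answers at random and follows the routing, and after many runs any question that was never reached points to a routing error.

```python
import random

# Hypothetical survey definition: each question lists its options and,
# optionally, a routing rule mapping a chosen option to the next question.
SURVEY = {
    "Q1": {"options": ["Yes", "No"], "route": {"No": "Q3"}},  # "No" skips Q2
    "Q2": {"options": ["A", "B", "C"]},
    "Q3": {"options": ["Often", "Rarely", "Never"]},
}
ORDER = ["Q1", "Q2", "Q3"]  # default question sequence

def simulate_respondent(rng):
    """Answer every question at random, following any routing rules."""
    answers, qid = {}, ORDER[0]
    while qid is not None:
        q = SURVEY[qid]
        choice = rng.choice(q["options"])
        answers[qid] = choice
        nxt = q.get("route", {}).get(choice)
        if nxt is None:  # no explicit route: fall through to the next question
            idx = ORDER.index(qid) + 1
            nxt = ORDER[idx] if idx < len(ORDER) else None
        qid = nxt
    return answers

def dummy_data_check(runs=1000, seed=42):
    """Run many random respondents; report questions that were never reached."""
    rng = random.Random(seed)
    seen = set()
    for _ in range(runs):
        seen.update(simulate_respondent(rng))
    return [q for q in ORDER if q not in seen]

print("Never reached:", dummy_data_check())
```

A real survey platform does this at much greater scale, but the principle is the same: if a question ID appears in the “never reached” list, the routing is broken, and inspecting the random answer sets helps you see why.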

Many survey platforms also provide a suite of tools to help with survey checking and flag errors that need correcting directly to the scripter. Yet it seems the tool most often used for feeding back corrections is still the humble email.

Researchers are used to thinking about motivation for survey participants. But in this situation, we need to think about motivation in the researcher. Repeatedly testing a survey is boring and takes up more time than researchers often feel they can afford. I suspect this explains more than anything why so many errors in surveys get through.

It comes as a surprise to many that in the field of book publishing there are actually professional proofreaders who love finding the errors in written copy. In software too, there are professional testers who get a kick out of finding what doesn’t work. We don’t yet have a professional association of survey testers, but maybe we should. I’ve never found it hard to line up external people to test surveys for a very modest reward. They invariably find things that no-one else has noticed. And they also invariably say thank you, that was interesting.

Typos are embarrassing and make us look unprofessional. But there are much more expensive mistakes that occur due to inadequate testing. Is your testing fit for purpose? If it relies on you and some hope-for-the-best link checking, the answer is very likely no.