The Big Three Challenges in Market Research Translation
Updated: Oct 9
Speed, formatting, and cultural relevance
We translate market research (MR) surveys every day for a wide range of languages, markets, and research companies. It’s a significant area of expertise for us, but that expertise doesn’t mean we underestimate the challenges involved, not only on our side of the fence but also for the survey designer. Technology is reshaping these workflows as we speak, and that technology must account for some unique aspects of MR creation, localization, and distribution.
Outdated data doesn’t cut it, but automation only helps so much
Perhaps the principal challenge in translating survey content is speed. Political polls and product research are extremely time sensitive: the clients ordering the research use it to drive their planning on a daily, even hourly, basis. As we’ve written previously, even with emerging technology, the ability to turn translations around in hours rather than days while maintaining quality isn’t here yet. Basic machine translation (MT) can help survey analysts quickly assess responses in open comment fields, but MT’s effectiveness falls off quickly with less common languages. We use it in our workflows, but human editing and review are still a requirement.
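One common way to combine MT speed with mandatory human review is a confidence gate: machine output above a threshold is accepted for quick triage, and everything else is queued for a human editor. The sketch below is purely illustrative — the `machine_translate` stub, the scores, and the 0.85 threshold are assumptions, not a real MT API.

```python
# Hypothetical sketch: route machine-translated survey responses to human
# review when MT confidence is low. All names and scores are illustrative.

HUMAN_REVIEW_THRESHOLD = 0.85  # assumed cutoff; tuned per language pair in practice

def machine_translate(text, target_lang):
    """Stand-in for a real MT call; returns (translation, confidence)."""
    # Less common language pairs tend to score lower in practice.
    stub_scores = {"es": 0.93, "is": 0.62}
    return f"[{target_lang}] {text}", stub_scores.get(target_lang, 0.5)

def route_response(text, target_lang):
    translation, confidence = machine_translate(text, target_lang)
    needs_human = confidence < HUMAN_REVIEW_THRESHOLD
    return {
        "translation": translation,
        "confidence": confidence,
        "queue": "human_review" if needs_human else "auto_accept",
    }
```

The key design point is that the gate only decides *priority*: low-confidence output goes straight to an editor, while high-confidence output is still spot-checked downstream.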
Survey development platforms and ‘automated’ translation workflows
As any market researcher knows, survey creation and formatting is an art, part of the effort to create surveys that don’t slant results or miss insights. Leading survey authoring and delivery platforms like Decipher and Confirmit make building surveys easier and faster, but you don’t want to extract survey content and responses out of context for translation and localization. That’s why we typically work within the client’s chosen platform and always aim to deliver properly formatted translations that can be quickly distributed and assessed.
The technology is here to automate the delivery of survey files to us with the push of a button, have them translated and reviewed, and then have the translated and formatted surveys returned to the MR company with another click. This may look like true automation, but the machine translation quality issues mentioned above still require humans doing the translations and/or editing the MT output. Anyone who says otherwise is overstating the case.
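That push-button round trip can be pictured as three steps: export from the survey platform, translate with human review in the loop, and reimport with formatting intact. The sketch below is a stand-in under stated assumptions — `export_survey`, `submit_for_translation`, and `return_to_platform` are hypothetical names, not any platform’s real API.

```python
# Illustrative round-trip sketch. Every function here is a hypothetical
# stand-in; real integrations would call the survey platform's own API.

def export_survey(survey_id):
    # In practice this would pull an XML or Excel export from the platform.
    return {"id": survey_id, "strings": ["How satisfied are you?"]}

def submit_for_translation(package, target_langs):
    # MT pre-translation plus mandatory human edit/review per language.
    return {lang: [f"[{lang}, human-reviewed] {s}" for s in package["strings"]]
            for lang in target_langs}

def return_to_platform(survey_id, translations):
    # Reimport keeps the original formatting and question logic intact.
    return {"id": survey_id, "translations": translations, "status": "ready"}

result = return_to_platform(
    "SURV-01",
    submit_for_translation(export_survey("SURV-01"), ["de", "ja"]),
)
```

The point of the sketch is where the human sits: the file transfers on either side are automated, but the middle step still routes through translators and reviewers.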
Quality also depends on localization, especially when it comes to delivering culturally respectful content.
As we’ve outlined in a series of country-specific articles, every country and society has its own conventions and sensitivities that must be taken into account when designing and translating surveys. Why is this critical? Because it materially affects responses and completion rates. If a question offends, commits a laughable faux pas, or simply sounds silly, you’re going to lose your audience quickly. That’s why there is so much emphasis on the localization side of the translation process. It’s where we bring native-speaking subject matter expertise to bear and work with your survey designers to align your content with local mores.
Again, this is where claims of end-to-end automation of MR translation workflows fall apart without human assessment.
The limits of workflow automation
Automation does help with certain kinds of workflow issues, including:
Handling some file formatting
Eliminating redundant translations with translation memory software
Transferring files between the MR company, the LSP (us), the translator(s), and the reviewers
Centralizing review processes in a tool like our cloud-based intellireview application to manage version control
These automated steps save money and time, but ultimately humans must push the buttons, do the translations and reviews, and quality-check the end results for accuracy and cultural relevance.
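The translation memory step above is worth a closer look, since it is where automation saves the most repeated effort: a segment that was already translated is reused rather than retranslated. Here is a minimal sketch using Python’s standard `difflib` for fuzzy matching; the 0.95 cutoff and the sample memory entry are assumptions for illustration.

```python
# Minimal translation-memory lookup sketch: reuse an existing translation
# when a new source segment matches one already translated. The threshold
# and sample entry are illustrative assumptions.
import difflib

translation_memory = {
    "How likely are you to recommend us?": "¿Qué tan probable es que nos recomiende?",
}

def tm_lookup(segment, threshold=0.95):
    if segment in translation_memory:
        # Exact match: reuse the stored translation as-is.
        return translation_memory[segment], 1.0
    close = difflib.get_close_matches(
        segment, translation_memory, n=1, cutoff=threshold)
    if close:
        # Fuzzy match: reuse the translation, but flag it for human editing.
        score = difflib.SequenceMatcher(None, segment, close[0]).ratio()
        return translation_memory[close[0]], score
    # No match: the segment goes to a translator from scratch.
    return None, 0.0
```

In real TM tools the fuzzy-match score also determines the editor’s effort (and often the pricing tier), which is why matches below a cutoff are simply treated as new text.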
We’re on the automation train but we’re pragmatists
We are fully committed to automating as many of these processes as is practical without compromising quality. We build software to fill functionality gaps where automation helps, including APIs and connectors that plug us directly into our MR clients’ workflows. We’re seeing many claims of full automation, and while MR surveys do lend themselves to a high degree of automation, you still can’t rule out the human factor; it isn’t going away in the foreseeable future. But turnaround times are getting faster, reformatting of content shouldn’t be a factor, and review processes combined with expertise help keep your survey quality high. It’s a constant refinement process.