Best Practices: Translation of Articulate Storyline Content
Updated: Sep 25, 2020
I imported my translated content into Storyline and everything is screwed up!
Anyone in the eLearning industry who’s used Storyline, and who’s had their courses localized into different languages, knows about Storyline’s translation feature:
Articulate Button > Translation > Export.
You send the .doc or .xliff file over to your translation provider and a couple days or weeks later you get back the translation file, which you then import directly into your course:
Articulate Button > Translation > Import.
Next thing you know, all the content in your course slides appears in a different language.
Simple, right? Not really.
It’s not just a matter of replacing text with its translated version. You may have audio files whose spoken words need to be synchronized with their corresponding animations; you might have interactive elements that users are prompted to engage with; and in some cases it might make more sense to forgo audio altogether and use subtitles instead.
These things, and others, are dictated by the requirements and parameters of your eLearning project, as much as they’re dictated by the target language and culture you need the course localized for. But one principle seems to ring true for all eLearning localization projects: the more complex your slide designs, the more complex, and thereby costly, the translation process will be.
We know that some of you instructional designers wind up facing tight deadlines and stringent budgets, so we’ve compiled some best practices for developing courses in Storyline 2 for translation so that you can be proactive in ensuring that once you send your online courses to a translation provider, you’ll get back the translated files without delay and at an affordable price.
Text expansion
As we’ve pointed out in our series on the software localization process—a process that shares some similarities with eLearning localization—written text tends to expand when you translate it into certain languages. Going from English into either German or French results in an average text expansion of 20% to 30%. The word “plane,” for example, becomes “Flugzeug” in German—an increase of three letters. “We eat cheese” becomes “nous mangeons du fromage” in French. Not only is there an increase in letter count, but an extra word gets added to the sentence due to how French grammar works.
When you design the visual layout of your slides, it’s important to make any fields, boxes, or buttons large enough to account for text expansion. That is, if you know from the beginning that you’re going to need your course localized. Storyline 2 won’t automatically adjust the sizes of such visual elements, so when you import the translated file with Storyline’s import feature, the now-expanded text will trail off the edges of your course’s visual elements and “break” the design. You’ll then have to go back to square one and rework the design to accommodate the expanded text, resulting in delays.
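As a rough illustration of the sizing math, here’s a minimal sketch that pads a text box’s width by an expansion factor. The helper and its per-language factors are our own assumptions based on the 20% to 30% figures above; nothing here is a Storyline API.

```python
# Rough sizing sketch: pad text boxes for expected translation expansion.
# The per-language factors below are illustrative assumptions, not
# anything built into Storyline.

EXPANSION_FACTORS = {
    "de": 1.30,  # German: assume up to ~30% longer than English
    "fr": 1.25,  # French: assume ~25% longer
}

def padded_width(source_width_px: int, target_lang: str) -> int:
    """Return a text-box width padded for the target language's expansion."""
    factor = EXPANSION_FACTORS.get(target_lang, 1.30)  # default to worst case
    return round(source_width_px * factor)

print(padded_width(400, "de"))  # a 400 px English box should be ~520 px
```

Sizing every box for the worst-case language up front is cheaper than reworking a broken layout per language after import.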
Timelines in Edit Mode and audio synchronization
When you edit slides in Storyline 2 that contain multiple elements appearing at different time intervals, edit mode reveals all of the slide’s elements at once, making it a challenge to sync audio files with the specific elements that will, by necessity, have to appear at different time intervals than in the original. This challenge, which is unique to Storyline 2, supports the argument that reducing the complexity of your slides will make the localization process a lot more straightforward: there will be less post-production required, thereby speeding up the project overall.
When post-production engineers work on your translated slides, they have to shift your slides’ elements on the timeline. That’s because, just like text expansion—where written words expand, breaking buttons, boxes, fields, etc.—audio translation leads to syllable expansion. “We eat cheese,” to use the example from above, is three syllables. “Nous mangeons du fromage,” in French, is six. That aside, foreign language voice talents may speak either faster or slower, depending on a number of different circumstances.
It may not always be possible to reduce the number of elements on a slide and/or add additional slides to a project. In these cases, another technique that can help with audio synchronization is to use slide layers triggered by cue points. Each slide layer shows only the elements contained in that layer’s timeline, which reduces the complexity of that portion of the slide. The base layer can be tagged with cue points that display each sequential slide layer in turn, all synced to a master audio narration track on the base layer’s timeline. Alternatively, each slide layer can have its own independent narration track, so that it truly functions as a sub-slide. While this technique takes more work to set up initially, it dramatically simplifies updates down the road in addition to easing the translation process.
Whereas with text expansion you can be proactive by designing elements and features to be large enough to accommodate longer words and sentences, adjusting elements on Storyline 2 timelines will mostly be up to your translation provider’s post-production team. So the more elements there are to sync on a given slide, the more time the project will take, because post-production engineers will have to manually comb the timeline for the corresponding elements, all of which appear in edit mode at the same time.
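To give a feel for the kind of adjustment involved, here’s a sketch of a first-pass resync: linearly stretching the original cue-point times by the ratio of translated to original narration length. This is purely illustrative of the math, not a Storyline feature; real post-production still fine-tunes each cue by ear.

```python
# First-pass resync sketch: scale original cue-point times by the ratio of
# translated-audio length to original-audio length. Illustrative only;
# engineers still adjust each cue manually afterward.

def rescale_cues(cue_times_s, original_len_s, translated_len_s):
    """Linearly stretch cue points to match the translated narration length."""
    ratio = translated_len_s / original_len_s
    return [round(t * ratio, 2) for t in cue_times_s]

# Say the English narration runs 30 s and the French recording runs 36 s.
print(rescale_cues([2.0, 10.0, 21.5], 30.0, 36.0))  # [2.4, 12.0, 25.8]
```

The linear stretch only gets you in the neighborhood; pauses and emphasis don’t scale uniformly, which is exactly why fewer elements per slide means less manual cleanup.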
Where subtitles may or may not be cost effective
Okay, so audio synchronization is one thing. What about just having subtitles translated instead? Subtitles are a great tool overall because they’re versatile and relatively easy to localize into other languages. Plus, they’re typically cheaper than foreign language voiceover recordings. But there’s one hitch: if you haven’t already created subtitles or captions in the original language version, going the subtitle route may wind up being more expensive for you than voiceover.
When there aren’t already subtitles available, your translation provider will have to first create English language subtitles—a master file, in other words—that can then be translated into the various languages you need your course localized into. But creating subtitles from scratch is far more time- and cost-intensive than translating subtitles that already exist.
That’s because, just like with text and syllable expansion, there are constraints a subtitle designer has to take into consideration: slide durations, appropriate sentence lengths for different target audiences, the convention that subtitles should consist of at most two lines, and the need to keep subtitles from bleeding over into other slide elements.
If you are only localizing your project into one or two languages and have not created captions from the start, you may find that the cost of retrofitting subtitles into your course is on par with, or even exceeds, the cost of recording localized narration. Translating into three or more languages, however, amortizes the one-time cost of creating the subtitle master file, making subtitles the more cost-effective solution.
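A back-of-the-envelope sketch of that amortization follows. All dollar figures are made-up placeholders; the point is only that the one-time master-file cost gets spread across every target language.

```python
# Amortization sketch with placeholder costs (not real pricing).

def per_language_subtitle_cost(master_cost, translate_cost, n_languages):
    """One-time master cost spread over N languages, plus per-language translation."""
    return master_cost / n_languages + translate_cost

voiceover_cost = 1500   # assumed cost to record narration in one language
master_cost = 2000      # assumed one-time cost to create English subtitles
translate_cost = 600    # assumed cost to translate subtitles per language

for n in (1, 2, 4):
    print(n, per_language_subtitle_cost(master_cost, translate_cost, n))
# With 1 language, subtitles cost 2600 vs. 1500 for voiceover;
# with 4 languages, they cost 1100 each, cheaper than voiceover.
```

Under these assumed numbers, the break-even lands somewhere around three languages, which matches the rule of thumb above.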
When planning a course or program with a significant amount of narration, and where subtitled translation is acceptable, it may be worth developing in Articulate 360, which offers a native closed captioning feature. With 360, you can import subtitles in the industry-standard SRT format (or other supported formats) for audio or video embedded in your course.
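For reference, an SRT file is just plain text: a numbered cue, a timecode range, and one or two lines of caption text, separated by blank lines. The timings and wording below are illustrative:

```
1
00:00:01,000 --> 00:00:04,200
Welcome to the course.

2
00:00:04,500 --> 00:00:08,000
In this module, we'll cover the basics.
```

Because the format is plain text, the master file is easy for a translation provider to process, and each language version keeps the same cue numbers and timings.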
Setting realistic expectations
While Storyline’s import/export feature is a very convenient tool for streamlining the handling of source and translation files, it doesn’t account for a number of aspects of the localization process that your translation provider—and their post-production engineers—will have to handle themselves. By being aware of things such as text expansion, syllable expansion, and subtitle constraints, however, you can be more proactive in designing your course for localization, saving yourself hassles down the line.