
From full review to selective review

Written by Ruben de la Fuente | May 9, 2016

I was recently involved in a project to clean up TM pollution, where the target content included a lot of translations from a different language variant. I had to work with the internal linguist to prepare a plan to do this in the most efficient way. No matter how many automation tricks I pulled from my hat, the linguist had reservations about them all, and it seemed that, quality-assurance-wise, nothing could beat running content through a pair of eyes. And yet the cost of manual review was prohibitive. One could also question the effectiveness of this approach, considering the number of errors that keep showing up at every stage of a typical translation workflow, even though manual review is present everywhere: translation-editing-proofing, client validation, QA/testing.

So we had to come up with a way of efficiently combining manual review for the most important content with automated methods to sample the rest. I believe some of the lessons learned in this project can offer useful insights for transitioning from a model where everything gets reviewed to one where manual review is applied selectively.

Step 1: leverage text analytics to identify frequent error patterns. Several tools can compare two versions of the same text and identify recurring edits. In my presentation at the next TAUS QE, I intend to show a very simple UNIX pipeline that serves this purpose. Other, more sophisticated toolkits from the realm of machine translation, such as Addicter, Hjerson and QuEst, might also be helpful here. The reports from these tools require some manual analysis and clean-up, but they make it possible to work with much larger collections of text than manual review alone.
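To give a flavor of the idea (not the pipeline itself), here is a minimal Python sketch: the file names, line-aligned format and word-level diffing granularity are all assumptions for illustration. It diffs each pair of aligned segments and counts the edits that recur across the whole collection:

```python
# Minimal sketch: mine recurring edits between the original and revised
# versions of the same segments. File names and one-segment-per-line
# alignment are assumptions for illustration.
from collections import Counter
from difflib import SequenceMatcher

def word_edits(original, revised):
    """Yield (before, after) word-level edits between two strings."""
    a, b = original.split(), revised.split()
    matcher = SequenceMatcher(None, a, b)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ("replace", "delete", "insert"):
            yield (" ".join(a[i1:i2]), " ".join(b[j1:j2]))

def frequent_edits(pairs, top_n=20):
    """Count edits across all segment pairs and return the most common."""
    counts = Counter()
    for original, revised in pairs:
        counts.update(word_edits(original, revised))
    return counts.most_common(top_n)

if __name__ == "__main__":
    with open("before.txt", encoding="utf-8") as f1, \
         open("after.txt", encoding="utf-8") as f2:
        pairs = [(l1.rstrip("\n"), l2.rstrip("\n")) for l1, l2 in zip(f1, f2)]
        for (before, after), count in frequent_edits(pairs):
            print(f"{count}\t{before!r} -> {after!r}")
```

The high-frequency pairs at the top of this list are exactly the candidates for the manual analysis and clean-up mentioned above, and they feed the checklist in step 2.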

Step 2: apply customized automated QA checklists based on the findings from step 1. Tools like CheckMate let you define lists of automated checks to run on bilingual files. These checks can be used to filter out offending segments and focus manual review on them, instead of reviewing the whole content.
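CheckMate has its own configuration, so the following is only a rough Python sketch of the general idea; every individual check below, including the wrong-variant pattern, is a hypothetical example of something step 1 might surface:

```python
# Rough sketch of a customized QA checklist; the real checks would be
# configured in a tool like CheckMate. All checks here are hypothetical.
import re

# Each check: (name, predicate over (source, target) that is True when
# the segment is offending and should go to manual review).
CHECKS = [
    ("untranslated", lambda src, tgt: src.strip() == tgt.strip()),
    ("number mismatch",
     lambda src, tgt: sorted(re.findall(r"\d+", src))
                      != sorted(re.findall(r"\d+", tgt))),
    ("double space", lambda src, tgt: "  " in tgt),
    # e.g. European Spanish forms polluting a Latin American Spanish TM
    ("wrong variant",
     lambda src, tgt: re.search(r"\bvosotros\b", tgt, re.IGNORECASE) is not None),
]

def flag_segments(segments):
    """Return only the segments that fail at least one check, with reasons."""
    flagged = []
    for seg_id, source, target in segments:
        reasons = [name for name, check in CHECKS if check(source, target)]
        if reasons:
            flagged.append((seg_id, source, target, reasons))
    return flagged

# Reviewers see only the flagged subset instead of the whole content.
segments = [
    (1, "You can configure it yourselves.", "Podéis configurarlo vosotros mismos."),
    (2, "Press OK.", "Pulse Aceptar."),
]
for seg_id, source, target, reasons in flag_segments(segments):
    print(seg_id, reasons)
```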

Step 3: leverage KPIs to target your selective review. There is a wealth of data available at the enterprise level; the challenge is connecting the dots. For example, you can mine and aggregate data from bug tracking systems or translation scorecards to see the number, nature and distribution of errors, and focus your review where it is most needed. Or you can take it one step further and leverage KPIs related to user experience or business performance, e.g. use web analytics to focus review on the pages with the most traffic or where the conversion rate needs to improve. You could also dig into sentiment analysis reports to see where customers are most dissatisfied with your product or service, and check whether anything can be improved in the translation. Ultimately, stepping away from purely linguistic KPIs and letting business ones guide you will make your QA efforts much more meaningful.
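As a small illustration of connecting those dots (all page names, metrics and the weighting heuristic below are invented), a few lines of Python are enough to rank pages for review by combining traffic, conversion rate and reported errors:

```python
# Hypothetical sketch: rank pages for selective review by combining a
# web-analytics export with error counts mined from a bug tracker.
# All field names and numbers are invented for illustration.
page_views = {"/pricing": 120000, "/docs/install": 45000, "/blog/news": 3000}
conversion_rate = {"/pricing": 0.8, "/docs/install": 2.1, "/blog/news": 1.5}
reported_errors = {"/pricing": 4, "/docs/install": 12, "/blog/news": 1}

def review_priority(page):
    """Simple heuristic: traffic x reported errors, boosted when the
    page's conversion rate lags behind the site-wide average."""
    avg_conversion = sum(conversion_rate.values()) / len(conversion_rate)
    boost = 2.0 if conversion_rate[page] < avg_conversion else 1.0
    return page_views[page] * reported_errors[page] * boost

# Review the highest-priority pages first.
for page in sorted(page_views, key=review_priority, reverse=True):
    print(f"{page}: priority {review_priority(page):,.0f}")
```

The exact weighting matters less than the principle: review effort flows to the content where errors would hurt the business most, rather than being spread evenly across everything.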

 
