How closely should the translation ecosystem pay attention to the wider orbit of government action and global power struggles in achieving its goals?
There are threats to thinking “global” on all sides. After a long period of post-Cold War expansion, which brought improved health, living standards, education and above all communication to most of the planet, enabled by the spread of mobile devices and the internet as a worldwide knowledge-sharing medium, there are distinct signs that the doors are closing on inclusive values and projects that we once held dear or were optimistic about.
As we begin to map out the future of the translation ecosystem, therefore, we need to pay attention to the kinds of forces that might work against positive advances for our industry. We suggest there are three particular policy-related issues that will impact us in the near future: data protection, innovation funding, and the destiny of the internet. Here are some initial pointers to these very complex and divisive subjects.
In the world we are creating, data dominance means dominance on the world political and economic stage and a strong competitive edge in the business arena. Policies and regulations around language data, therefore, can have a tremendous impact on how the global translation ecosystem takes shape. Will we have a ‘divide and conquer’ internet with a stronger role for China, or will we be able to create a globally level playing field?
Looking at the policies around language data, we see clear continental differences. China has a very proactive and productive policy, providing funding to companies to acquire as much data as they can. North America has been managing well with the fair-use doctrine, which essentially gives companies carte blanche to use language data for research purposes. And Europe (the EU) is embracing a much more upfront yet conservative policy with, for instance, the recent introduction of the GDPR on personal data privacy. We think this is a correct move for the world as a whole. But there is a high risk that it will cause confusion in people’s minds and make them overly concerned and cautious specifically about sharing translation data. If so, how can we improve things?
Ideally the United Nations could step in to provide governance, and endorse the concept of a global exchange for language data, specifically in order to safeguard open global communications in the emerging algorithmic age, where data is the fuel for learning power.
A second-best option would be for the European Commission to step up and endorse an open marketplace for the exchange of language data that plays by the rules, though not necessarily rules specified by GDPR clauses alone.
Fixing the translation ecosystem will inevitably require wide-ranging innovation. How should this best be funded and managed? Governments certainly fund innovation, but they are largely letting private enterprise take over much of the task in the “non-strategic” domains of international development, especially translation. This heightens the lottery-like nature of technology advance by letting venture capital and vested interests largely determine which innovations (and innovators) get precedence. This could mean, for example, that small-population languages do not get as much tech innovation support as their larger-population brethren, which in turn reduces the overall communication footprint for content and could arguably amputate a unique source of human knowledge.
Translation technology policy, however, is a rare bird on the international stage. The European Union and South Africa alone have explicit programs for ensuring language parity across their constituent communities, resulting for example in technology projects to develop MT solutions to address the inevitable communication bottleneck. India too is grappling with solutions to serve a highly multilingual population. But it is unclear how much the sharing of information, data and training from any of these initiatives is really helping to build a community that learns from innovation.
The EU has funded extensive translation research over the past 40 years, but has recently shown less inclination to address head-on the critical step of funding market-oriented innovation. Although a new EU bid is in circulation, hoping to secure a billion euros for ten years of research into “Deep Natural Language Understanding”, current innovation efforts in translation are focused on a scatter of smaller projects, often associated with the needs of EC institutions. As a result, Europe’s “multilingual digital single market” languishes in snooze mode.
Yet if we are to fix the ecosystem in the ways suggested in part 1 of this series, we shall eventually need to take research findings from the new disciplines of AI and transform them into better solutions to power up tomorrow’s translation industry. Some ideas work, others fail; so we need a vibrant and well-funded culture of trial and error to learn fast and move forward equitably. The more languages that are folded into our data markets, delivery systems, and ambient intelligence culture, the more likely we are to experience a world of open flows and shareable values.
In a similar vein, some of that innovative drive will need to address new technical standards for AI. Standards are already vital for the seamless sharing of text and term data, but in future we shall require a new vocabulary of tags for emerging categories of cognition management. These ontologies will address such cognitive domains as inference type, implicature, ambiguity, aspect, and other subtle dimensions of human mental and emotional experience expressible in language and soon to be negotiated by AI systems.
What is particularly interesting is that this future exploration of the fundamentals of cognition, communication and psychology as potential data categories will form a technological extension of the innate skills of translators, whose everyday job it is to negotiate the cross-cultural differences between languages as “ways of thinking.”
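To make this a little more concrete, here is a purely illustrative sketch in Python of what such a machine-readable tag vocabulary might begin to look like. Every name below is hypothetical: none of these categories or fields comes from an existing standard, and a real ontology would be negotiated by the industry rather than improvised like this.

```python
# Hypothetical sketch of a tag vocabulary for "cognition management".
# All names are illustrative assumptions, not part of any existing standard.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class InferenceType(Enum):
    DEDUCTIVE = "deductive"
    INDUCTIVE = "inductive"
    ABDUCTIVE = "abductive"


class AmbiguityKind(Enum):
    LEXICAL = "lexical"        # e.g. "bank" (river vs. finance)
    SYNTACTIC = "syntactic"    # e.g. attachment ambiguity
    SCOPAL = "scopal"          # e.g. quantifier scope


@dataclass
class CognitiveAnnotation:
    """Tags attached to a source segment so that an AI system (or a human
    translator) can see which subtle dimensions must survive translation."""
    segment_id: str
    inference: Optional[InferenceType] = None
    implicature: Optional[str] = None      # what is implied but not said
    ambiguities: List[AmbiguityKind] = field(default_factory=list)
    aspect: Optional[str] = None           # e.g. "perfective", "habitual"


# Example: annotating a segment whose implicature must be preserved.
note = CognitiveAnnotation(
    segment_id="seg-042",
    inference=InferenceType.ABDUCTIVE,
    implicature="speaker politely declines the offer",
    ambiguities=[AmbiguityKind.LEXICAL],
    aspect="habitual",
)
print(note)
```

However such a vocabulary is eventually specified, the point is that categories like implicature or aspect would become shareable data, exchanged between tools in the same way that text and term data are exchanged today.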
The internet has been under threat of massive attack ever since it began life as a gift of government. Once hailed as an anarchic “temporary autonomous zone” and then celebrated as a new “electronic frontier” where information could finally be free, it has evolved into a complex but single patchwork of territorial claims, commercial playgrounds and political jurisdictions in which innovation, communication, knowledge and fun coexist alongside their deep, dark cyber-opposites. Some governments and lobbies doubt this can continue.
The worst-case scenario is that this internet fractures into subnets serving different stakeholder communities. On the one hand, the relentless march of cyber-threats, fake news and security breaches of all sorts is causing certain governments to have second thoughts about the first two ‘Ws’ in the WWW. On the other, the idea that richer users might have privileged access to a faster, better internet, while second-class citizens labor under slower, third-class conditions, is frequently floated by the companies that control the networks. The internet as a battlefield where soft- and hard-power activists wage war for minds and money may be more than just a metaphor.
From the point of view of communication, the multiple-net hypothesis conjures up a radical loss of shareable knowledge and a spate of increasingly exclusive dystopian communities ready to fight their way either to supremacy or to total isolation. It would also probably remove any cogent business case for translation as we know it. Leviathan devours Eden.
The only worthwhile reaction to such an alien vision of the future must be a commitment to enabling more exchanges between languages and minds for more of the world’s communities. Translation could lead to transformation. Another good reason to fix the ecosystem!
Read Part 1: Fixing the Translation Ecosystem