When I attended translation courses, I was assigned to write a commentary on George F. Will's column "Reading, Writing and Rationality" in the Newsweek issue of March 17, 1986.
That day, green activists were demonstrating solar energy applications in a public park near the school, and our professor opened his lesson with a witty comment about the experiment he had witnessed during his lunch break. He said he would never be in favor of solar energy because he liked his eggs well cooked...
The history of innovation is full of inventors and manufacturers unable to understand the impact and actual use of their own work. At the same time, most innovations do not necessarily use the most recent and sophisticated technology; their makers rather show an outstanding capacity for interpreting and accelerating transformations that are already underway. Therefore, the introduction and possibly widespread use of a tool does not necessarily involve a deminutio capitis.
Now imagine you are driving on a highway when a big-rig truck closes in behind you, until you see in your rearview mirror that there is no driver in the cab: The truck is driving itself.
This scenario is nearer than you may think, since a group of former Google self-driving car, maps, and robotics engineers have launched Otto, a startup aimed at making driverless trucks.
Automotive singularity is now envisaged by 2018, and it is eagerly awaited for its expected huge benefits: first of all, a sharp reduction in road accidents and, consequently, in insurance, health, and welfare costs. Side effects are expected too, starting with a sharp decrease in jobs for traumatologists, nurses, loss adjusters, surveyors, etc., not to mention taxi, bus, and truck drivers.
Another teacher of mine used to say that translation is the second-oldest job in the world. Maybe it is as old as transportation. Indeed, in its original fourteenth-century sense, to translate meant to bring across. This is to say that certain basic activities will continue to exist, and only the tools and methods used to perform them will change.
Most people think they know what AI is, but most computer scientists would find the question a complicated one. As Jesse Emspak plainly explains, to qualify as such, an AI must act and think both rationally and the way a human being does. In this respect, neither AlphaGo nor Watson is an AI.
In the last few months, I have run a little experiment. I configured Google Alerts to receive daily updates about artificial intelligence, machine learning, deep learning, and machine translation. The number of updates has followed that order, confirming great popular interest in a fascinating and fashionable topic like artificial intelligence, and much less interest in anything apparently highly specific and challenging or, as with MT, now seen as a utility. Indeed, the updates on machine learning pertain mostly to commercial applications, and those on deep learning to research.
The event that recently linked machine learning and deep learning, and triggered new interest in artificial intelligence, was AlphaGo's triumph over Korean go legend Lee Sedol.
Google deployed a great deal of engineering work and a huge hardware infrastructure, which could suggest the achievement was easier and smaller than expected, but the general consensus had been that it would be years before a computer could defeat a human at go. In fact, the saying goes that chess is a battle while go is war, because in go each player must keep the whole goban under control. Thus, AlphaGo's victory is much more important than Deep Blue's over Kasparov, and when some version of the Turing test eventually marks the singularity between humans and machines, March 15, 2016 will be remembered as a milestone.
Nevertheless, Yale computer science professor David Gelernter recently reminded us that artificial intelligence is still in its infancy. A number of cognitive tasks that people perform almost unconsciously, like vision and natural language understanding (what generally falls under common sense), are still extremely hard to model.
Still, the time when a computer can contemplate dreaming seems to be drawing closer and closer.
At the EmTech Digital conference in May, one thing emerged clearly: No one in any industry can afford to ignore artificial intelligence. Yet despite the many extraordinary new technologies, true mastery of language remains very much out of reach for software, and there is still much room for improvement in machine translation as artificial intelligence techniques are applied.
Therefore, even if Ray Kurzweil were right and machines reached human levels of translation quality by the year 2029, this would not mean that translation will vanish. Anyway, singularity in translation is eagerly awaited too, possibly for the same reasons as in transportation: When deemed necessary, any activity is expected to be helpful, inexpensive, harmless, prompt, and easy.
Almost two years ago, a Google team reported that a simpler approach powered by deep learning and long short-term memory (LSTM) yielded better translations than current statistical machine translation engines. Later on, Google confirmed its plan to improve Google Translate's accuracy through deep learning and applied for a patent on neural machine translation.
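For readers curious about what such an approach looks like, here is a minimal sketch of an LSTM encoder-decoder (sequence-to-sequence) network of the kind used in neural machine translation; the model sizes, vocabularies, and toy data are illustrative assumptions, not Google's actual system.

```python
# A minimal LSTM sequence-to-sequence sketch (illustrative, not Google's system).
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        # The encoder reads the source sentence into a fixed-size state...
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        # ...and the decoder generates the target sentence from that state.
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))    # summarize the source
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)                          # scores per target word

# Toy usage: a batch of 2 sentences, 7 source tokens, 5 target tokens.
model = Seq2Seq(src_vocab=10000, tgt_vocab=10000)
src = torch.randint(0, 10000, (2, 7))
tgt = torch.randint(0, 10000, (2, 5))
logits = model(src, tgt)  # shape: (2, 5, 10000)
```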
Last February, Microsoft revealed that it is leveraging deep learning for the Translator Hub, and a month earlier, Chinese giant Baidu had revealed its interest in further investments in machine translation and deep learning. Facebook, too, plans to roll out a new translation system based on artificial neural networks in its quest for more natural-sounding translations.
Not surprisingly, DARPA is running a research program involving 33 universities and organizations worldwide to build a universal speech translator based on machine learning, primarily to help the military communicate with locals on foreign soil.
On a much smaller scale, researchers at the University of Liverpool have developed algorithms that are held to give computers a human-like touch by referencing lexical databases like WordNet and then weighing the correlation of words when building a sentence, based on a scoring mechanism. This could be the first real application of the knowledge-based approach to machine translation conceived in the late 1980s.
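To give an idea of the technique, here is a minimal sketch of scoring word pairs against WordNet; the scoring scheme (Wu-Palmer similarity averaged over adjacent words) is an illustrative assumption, not the Liverpool researchers' actual algorithm.

```python
# A minimal sketch of WordNet-based word-correlation scoring (illustrative).
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def word_correlation(word_a, word_b):
    """Return the best Wu-Palmer similarity (0..1) between two words."""
    best = 0.0
    for syn_a in wn.synsets(word_a):
        for syn_b in wn.synsets(word_b):
            score = syn_a.wup_similarity(syn_b) or 0.0  # None for cross-POS pairs
            best = max(best, score)
    return best

def sentence_score(words):
    """Average pairwise correlation of adjacent words in a candidate sentence."""
    scores = [word_correlation(a, b) for a, b in zip(words, words[1:])]
    return sum(scores) / len(scores) if scores else 0.0

print(sentence_score(["cat", "dog", "animal"]))  # adjacent-word coherence
```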
Other applications are seeing the light, like Writefull, a lightweight app that uses natural language processing and the language databases of Google Books, Google Web, Google Scholar, and Google News to provide feedback on a user's writing by checking the text against databases of correct language and telling the user how often a chunk appears in the selected database.
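The underlying idea is simple corpus statistics. Here is a minimal sketch of such chunk checking, with a hypothetical local text file standing in for the Google databases Writefull actually queries.

```python
# A minimal sketch of Writefull-style chunk-frequency checking (illustrative).
from collections import Counter

def chunk_counts(corpus_text, n=3):
    """Index every n-word chunk in the reference corpus."""
    words = corpus_text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def chunk_frequency(index, chunk):
    """How often does the user's chunk appear in the corpus?"""
    return index[tuple(chunk.lower().split())]

# "reference_corpus.txt" is a hypothetical stand-in for Google's databases.
corpus = open("reference_corpus.txt", encoding="utf-8").read()
index = chunk_counts(corpus, n=3)
print(chunk_frequency(index, "in terms of"))  # occurrence count for the chunk
```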
Other interesting developments are ABBYY's InfoExtractor SDK and Smart Classifier. InfoExtractor SDK uses facts and events to reconstruct the story lines in documents, thus providing insights to support business decisions. Smart Classifier combines text-based and semantic-based classification algorithms with a graphical model builder to classify documents based on their content.
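As a rough illustration of the text-based side of such classification, here is a minimal sketch using scikit-learn; the labels and training snippets are invented for the example, and nothing here reflects ABBYY's actual SDK.

```python
# A minimal sketch of content-based document classification (illustrative).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training documents and labels.
docs = ["invoice payment due net 30", "patent claim embodiment apparatus",
        "quarterly revenue earnings forecast", "claim priority application filed"]
labels = ["finance", "legal", "finance", "legal"]

# TF-IDF features feed a simple linear classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(docs, labels)
print(classifier.predict(["earnings guidance for the quarter"]))  # expected: ['finance']
```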
The most interesting innovation in the realm of translation seems to be Dave Landan's StyleScorer, developed for one of the top 10 LSPs in CSA's special ranking to address the chronic red-pen syndrome of the translation world. Its most interesting feature is the weighted linear combination used to generate a final score between zero and four:
α (PPL score) + β (dissimilarity score) + γ (OSVM score) + δ (NN score)
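In code, the combination is a one-liner. The sketch below uses placeholder weights and component scores; in StyleScorer the four components come from perplexity (PPL), dissimilarity, one-class SVM (OSVM), and neural network (NN) models, and the actual weights are not public.

```python
# A minimal sketch of StyleScorer's weighted linear combination (illustrative).
def style_score(ppl, dissimilarity, osvm, nn,
                alpha=1.0, beta=1.0, gamma=1.0, delta=1.0):
    """Combine the four component scores into a final score from 0 to 4."""
    raw = alpha * ppl + beta * dissimilarity + gamma * osvm + delta * nn
    return max(0.0, min(4.0, raw))  # clamp to the 0..4 range

# Hypothetical component scores for a new document, each in 0..1:
print(style_score(ppl=0.8, dissimilarity=0.6, osvm=0.9, nn=0.7))  # 3.0
```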
Applications like StyleScorer can only benefit from better and cheaper machine learning and deep learning platforms. At the moment, StyleScorer could be used to score new documents against the style of established ones, thus helping customers reasonably predict the expected quality of translations.
On the academic side, BabelNet could have been one of the few projects to reverse the discouraging trend of the EU's 7th Framework Programme, which, despite an unstoppable flood of funding, has rarely produced any result that citizens could actually benefit from. The project started (as usual) from the very ambitious goal of enabling multilingual text understanding through the integration of existing large-scale lexical resources. Now it seems to have been downsized to the acquisition of multilingual concept lexicalizations through machine translation, and, after more than a lustrum, 'would,' 'could,' and 'potential' still abound. A by-product of BabelNet should have been an algorithm comparing the matching scores and edit distances of source and target texts to tell how good a translation is.
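The edit-distance half of such an algorithm is easy to sketch. Below, a candidate translation is scored against a human reference by word-level Levenshtein distance; this normalization is an illustrative assumption, not BabelNet's actual method.

```python
# A minimal sketch of edit-distance translation scoring (illustrative).
def edit_distance(a, b):
    """Word-level Levenshtein distance between two token lists."""
    prev = list(range(len(b) + 1))
    for i, tok_a in enumerate(a, 1):
        curr = [i]
        for j, tok_b in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                      # deletion
                            curr[j - 1] + 1,                  # insertion
                            prev[j - 1] + (tok_a != tok_b)))  # substitution
        prev = curr
    return prev[-1]

def translation_score(candidate, reference):
    """Normalize distance to a 0..1 quality score (1 = identical)."""
    cand, ref = candidate.split(), reference.split()
    return 1 - edit_distance(cand, ref) / max(len(cand), len(ref), 1)

print(translation_score("the cat sits on the mat",
                        "the cat sat on the mat"))  # about 0.83
```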
In the meantime, judging from the innumerable industry events, a broad consensus seems to have been reached on a few topics that could lead to an impressive breakthrough:
Wow!
Even academics seem increasingly inclined to embrace entrepreneurship and work with industry, as well as collaborative translation and new tools.
With eyes wide open, though, in case it proves too much, all at once...
In the Zhuangzi, one of the greatest literary works in all of Chinese history, the Chinese philosopher Zhuang Zhou told the famous story of how he "did not know whether he was Zhou, who had dreamed of being a butterfly, or a butterfly dreaming that he was Zhou."
In a recent essay, George Musser reminded us that human consciousness involves specific faculties and that we ascribe consciousness to other people because they look like us, act like us, and talk like us. From this premise, he wondered whether machines already have minds, possibly taking unexpected forms, such as networks. After all, although widely discussed, artificial intelligence is so unfamiliar that we could have a hard time recognizing it, and machines could have become self-aware without our even knowing. Maybe, as Elon Musk told the crowd at the 2016 Code Conference, it is entirely possible, if not likely, that our existence is really a simulation run by a highly advanced civilization and that we could all be living in a video game.
Once again, uncertainty, unpredictability, and errors can only be human.