
Saturday, December 30, 2017

Artificial Intelligence: And You, How Will You Raise Your AI?

This is the final post of the year, a guest post by Jean Senellart, who has been a serious MT practitioner for around 40 years, with deep expertise in all the technology paradigms that have been used for machine translation. SYSTRAN has recently been running tests that build MT systems with different datasets and parameters to evaluate how data and parameter variation affect MT output quality. As Jean said:

" We are continuously feeding data to a collection of models with different parameters – and at each iteration, we change the parameters. We have systems that are being evaluated in this setup for about 2 months and we see that they continue to learn."

This is more of a vision statement about the future evolution of MT technology, in which systems continue to learn and improve, than a direct report of experimental results, and I think it is a fitting way to end the year on this blog.

It is very clear to most of us that deep learning based approaches are the way forward for continued MT technology evolution. However, skill with this technology will come from experimentation and from an understanding of data quality and control parameters. Babies learn by exploration and experimentation, and maybe we need to approach our own continued learning the same way, learning through purposeful play. Is this not the way that intelligence evolves? Many experts say that AI is going to drive learning and evolution in business practices across almost every industry.




 ===================


Artificial Intelligence is the subject of every conversation nowadays. But do we really know what we are talking about? Instead of looking at AI as a kind of ready-to-use software that potentially threatens our jobs, what if we thought of it as an evolving digital entity with an exceptional faculty for learning? That would break with the current industrial model of traditional software, in which code is frozen until the next update of the system. AI could then disrupt not only technology applications but also economic models.

Artificial Intelligence has no difficulty handling exponentially growing volumes of data quickly, with exceptional precision and quality. It thus frees up valuable time for employees to communicate internally and with customers, and to invest in innovative projects. By making all available information usable for rapid decision-making, AI is truly the corollary of the Internet era, which also brings with it all kinds of threats, whether virtual or physical.




Deep Learning and Artificial Neural Networks: an AI that is constantly evolving


Deep learning and artificial neural networks offer vast potential and a unique ability to keep evolving as they learn. By breaking with earlier approaches, such as statistical data analysis, which demonstrate a formidable but ultimately trivial capacity for memorization and calculation, much like databases and computers, the neural approach gives a new dimension to artificial intelligence. In the field of machine translation, for example, artificial neural networks allow the "machine" to learn languages the way we do in an immersion program abroad. These neural networks are never finished learning: after their initial training, they can continue to evolve on their own.
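To make the idea of "never finished learning" concrete, here is a minimal sketch, not SYSTRAN's actual pipeline, of a toy model that receives its initial training and is then updated later with a few extra gradient steps on newly arrived data; the model, data, and dimensions are all invented for illustration.

```python
# A minimal sketch of continual learning: train once, then keep updating the
# same model as new data arrives, instead of retraining from scratch.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a translation model: maps a source vector to target-vocab logits.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(src, tgt):
    """One supervised update; src: (batch, 16) floats, tgt: (batch,) class ids."""
    optimizer.zero_grad()
    loss = loss_fn(model(src), tgt)
    loss.backward()
    optimizer.step()
    return loss.item()

# Initial training on the original corpus (random stand-in data here).
for _ in range(100):
    train_step(torch.randn(8, 16), torch.randint(0, 32, (8,)))

# Later, "in production": the same model keeps learning from newly arrived data.
for _ in range(10):
    new_src, new_tgt = torch.randn(8, 16), torch.randint(0, 32, (8,))
    print("incremental loss:", round(train_step(new_src, new_tgt), 4))
```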


A Quasi "Genetic Selection"


Training a neural model is therefore more akin to a mechanism of genetic selection, such as those practiced in the agro-food industry, than to a deterministic programming process: in every sector where neural networks are used, models are selected so that only those that learn best, progress fastest, or adapt most readily to a given task are kept. The AI models used for machine translation are even customized according to customer needs: business domain, industry, and specific vocabulary. Over time, some models will grow in use, and others will disappear because they do not learn enough or are not sufficiently efficient. DeepMind illustrates this ability to the extreme. It was at the origin of AlphaGo, the first algorithm to beat the human. AlphaGo had learned from thousands of games played by human experts. The company then announced the birth of an even more powerful generation of AI, which managed to learn the game of Go without any games played by humans, "simply" by discovering, on its own and through practice, the strategies and subtleties of the game in a fraction of the time the original process took. Surprising, isn't it?
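As a loose illustration of this selection mechanism (not SYSTRAN's actual training code), the idea can be reduced to a loop that trains a small population of candidate models with different hyperparameters, scores them, and keeps only the fastest improvers; every name and number below is a toy stand-in.

```python
# A sketch of "genetic selection" over candidate models: train a population with
# different hyperparameters, score them, keep the best, and breed perturbed copies.
import random

random.seed(0)

def make_candidate(learning_rate):
    """Stand-in for a model; here just a dict tracking its settings and score."""
    return {"lr": learning_rate, "score": 0.0}

def train_and_evaluate(candidate):
    """Stand-in for one training round followed by held-out evaluation."""
    # Pretend candidates with mid-range learning rates improve the most.
    gain = 1.0 - abs(candidate["lr"] - 0.01) * 50 + random.gauss(0, 0.05)
    candidate["score"] += max(gain, 0.0)
    return candidate

population = [make_candidate(lr) for lr in (0.1, 0.05, 0.01, 0.005, 0.001)]

for generation in range(5):
    population = [train_and_evaluate(c) for c in population]
    # Selection: keep the top half, discard the rest, and refill with slightly
    # perturbed copies ("offspring") of the survivors.
    population.sort(key=lambda c: c["score"], reverse=True)
    survivors = population[: len(population) // 2 + 1]
    offspring = [make_candidate(c["lr"] * random.uniform(0.8, 1.2)) for c in survivors[:-1]]
    population = survivors + offspring
    print(f"generation {generation}: best lr={population[0]['lr']:.4f} "
          f"score={population[0]['score']:.2f}")
```

This is also, roughly, the setup described in the opening quote: the parameters change at each iteration, and only the configurations that keep learning survive.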



Machines and Self-Learning Software


The next generation of neural translation engines will exploit this intrinsic ability of neural models to learn continuously. It will also build on the fact that two different networks can follow distinct paths of progress: from the same data, like two students, these models can improve by working together or by competing against each other. This second generation of AI is very different because its models are not only taught from existing repositories (existing translations); just like newborns, they also learn to… learn over time, which places them in a long-term perspective. This is lifelong learning: once installed in production, for example in a customer's information system, the AI continues to learn and improve.
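Here is a minimal sketch, under assumed toy conditions rather than any real translation engine, of two networks learning from the same data while "working together": each is trained on the labels and also softly nudged toward the other's predictions, a setup often called mutual learning; the architectures and weighting factor are invented for illustration.

```python
# Two "students" trained on the same data: each matches the labels and, more
# softly, the other's output distribution (mutual learning).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_net():
    return nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))

net_a, net_b = make_net(), make_net()
opt_a = torch.optim.Adam(net_a.parameters(), lr=1e-3)
opt_b = torch.optim.Adam(net_b.parameters(), lr=1e-3)

def mutual_step(src, tgt):
    logits_a, logits_b = net_a(src), net_b(src)
    # Each network learns from the labels and from the other's (detached) predictions.
    loss_a = F.cross_entropy(logits_a, tgt) + 0.5 * F.kl_div(
        F.log_softmax(logits_a, dim=-1), F.softmax(logits_b.detach(), dim=-1),
        reduction="batchmean")
    loss_b = F.cross_entropy(logits_b, tgt) + 0.5 * F.kl_div(
        F.log_softmax(logits_b, dim=-1), F.softmax(logits_a.detach(), dim=-1),
        reduction="batchmean")
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()

for step in range(5):
    src, tgt = torch.randn(8, 16), torch.randint(0, 32, (8,))
    print("losses:", mutual_step(src, tgt))
```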



To each his own AI tomorrow?


Tomorrow's computer systems may be built like a seed planted on your computer, or in everyday objects, and they will evolve to better meet your needs. Even if current technologies, especially software, are customized, they remain 90% identical from one user to another because they are delivered as bundled, standardized products. They are very expensive because their development cycles are long. The new AI, which tends toward tailored solutions, is the opposite of this. In the end, it is the technology that will adapt to people, and not the other way around as it is at present. Each company will have its own "hand-sewn" technical means, and you may have an AI at home that you can raise yourself!

Towards a new industrial revolution?


What will be the impact of this evolution on software vendors and on IT services companies? And, beyond them, on the entire industry? Will businesses need to reinvent themselves to bring value to one of the stages of the AI process, whether it be adaptation or quality of service? Will we see the emergence of a new profession, the "AI breeder"?


In any case, until AI is recognized as a total paradigm shift, it will continue to be seen as software 2.0 or 3.0, a vision that hinders innovation and could make us miss all of its promises, especially the chance to free ourselves from repetitive, tedious tasks and restore meaning and pleasure to work.


Jean Senellart, CTO, SYSTRAN SA


4 comments:

  1. I beg your pardon, Monsieur Senellart, but AFAIK, the first algorithm that beat humans was the one developed for Deep Blue that beat Kasparov, probably the greatest chess master of all time. Then came that of Watson, who beat two human champions at Jeopardy!
    Maybe you make a distinction between ML and DL, but they're both AI applications.
    Anyway, I agree with Kirti that, as neural networks improve, technology requires more and more expertise, and in the end MT will be even harder to grasp than it is now. Right now, we cannot say whether a DNN has been learning, what it possibly learns, how and from what, and we can hardly alter the learning pattern exactly because we cannot tell what happens in hidden layers.
    We are in a grey area, in a dense mist, following what seems to be a light. What if it is not?

  2. Machines need to learn from our errors, not from our references. What Jean formalises with the different incremental training cycles is, I believe, a way to trigger knowledge by massively projecting errors and at the same time excluding them in understanding, writing, recognizing and translating. Not only does the machine learn, it also knows how to identify and avoid errors under different input circumstances. The linguistic backbone of the 50-year-old translation system SYSTRAN allows the formalization of linguistic knowledge in parallel, making different types of transplants possible. A statistical backbone cannot reach that level. That is why the approach is unique.
    Elsa

    Replies
    1. Excellent. A big bravo to the Systran teams for believing in this and reinventing it since 1968. From the moon to the earth, and from the earth to Mars!

    2. That should all be right.
      Recent recognition of 50 years in the industry: the Harvard NLP group's collaboration with Systran on their open-source machine learning toolkit OpenNMT, ranked 26th among the most amazing machine learning projects in the world:
      https://medium.mybridge.co/30-amazing-machine-learning-projects-for-the-past-year-v-2018-b853b8621ac7
      Congrats to the teams for believing in it: being exposed to errors and inconsistencies for 50 years, working on them, and excelling by offering to all, for free, a historically patented know-how.
      That's a high-tech partner who keeps on writing history.
      Can't say anything but thanks.
      ES
