12/13/17 · Institutional

"Artificial intelligence professionals are in high demand but very few are properly trained"

Photo: UOC

Ramon López de Mántaras, Director of the Spanish National Research Council's (CSIC) Artificial Intelligence Research Institute (IIIA)

 

His name is an international reference in the field of artificial intelligence. In fact, last summer he became the first researcher to have received awards from the three most important international organizations in this field. With a doctoral degree in Physics and Computer Science, Ramon López de Mántaras directs the Spanish National Research Council's (CSIC) Artificial Intelligence Research Institute (IIIA). In early November he was one of the guests at the seminar on "Competent professionals for smart organizations", organized by UOC Corporate.

What companies or organizations are already using the most sophisticated artificial intelligence in their everyday activities?

Thousands and thousands of companies, in everything related to decision-making. For example, when you want to buy from Amazon, artificial intelligence recommends products. The financial sector uses it to recommend financial products. There are also systems to speed up responses to clients, for example when people complain to an airline because they have lost their luggage. Or the entertainment world: there is a lot of artificial intelligence behind computer games, where it controls the characters you don't play. In manufacturing companies, it helps make purchasing decisions based on your needs and diversify suppliers. Or in transport: it helps route trucks all over Europe to deliver or collect products, trying to reduce distances and fuel consumption and to ensure that the truck returns full rather than empty, while bearing in mind that drivers need rest. And also in medicine, or in quality control with computer vision. There are many!
And in manufacturing?
It has been used for many years in industrial robotics to assemble or paint cars, but these are robots that would paint the air if the car wasn't there [laughs]. These assembly-line robots are one of the few examples of artificial intelligence currently replacing people; in most cases it helps people. But look at what Mercedes has done: it has put people back on the assembly lines because it no longer makes a thousand cars that are exactly the same. Now there are so many variations of personalized options on the cars that they have decided that "a robot is more expensive than a person", because the machine does not have our capacity to adapt and respond immediately.
So, at the moment, what are the main limitations of artificial intelligence?
It lacks the capacity to generalize or understand the world, and we don't know how to give it that. A very interesting example: imagine that in a hypothetical future, in about ten years, you have a domestic robot that helps you with the housework. It does many things; it can even prepare meals. Before you leave in the morning, you give it orders, telling it that when you come back in the evening you want a dinner rich in proteins. You return home and are greeted by the lovely smell of stewed meat coming from the oven. Wonderful, but where is the cat? The robot is very intelligent but, if it lacks the common sense humans have and there is no meat in the fridge, it may cook the cat for you. If the robot is unaware that the sentimental value of a cat is more important than its nutritional value, at least in our culture, it might cook it, because it will do almost anything to carry out your orders.
As in Asimov's laws of robotics, that a robot must obey the orders of human beings without harming them?
These laws are not enough.
In any case, is progress in artificial intelligence currently driven by economic profit?
Not in the academic world but in the business world, yes, because it seeks to optimize the decision-making process: better and faster, if possible.
What risks does it involve?
The problem is that the companies that are most active in this field are attracting hundreds of people from the academic world. But we must acknowledge that these people are hired to carry out research. Microsoft, which has around 3,000 people researching artificial intelligence, Google, Uber, Amazon and so on are employing people to invent the future. They are developing the future. To do this research they need full professors, very brilliant people, who are offered stratospheric salaries.
But doesn't the fact that this is being done by big corporations involve risks?
Of course, because as businesses they are motivated by profit. In the academic world, in public research and also at private universities, it is assumed that artificial intelligence researchers do what they really want, although this is also determined by available funds. For example, when public financing agencies prioritize certain areas, they are influencing research; they are pushing the academic world to submit projects in the areas that are more likely to be funded. We do not have absolute freedom to research what we like. Even so, the academic research world hopefully has more freedom than the business world.
Scientists from Australia and Canada this month urged their governments to ask the UN for a stance on autonomous weapons based on artificial intelligence similar to that on nuclear non-proliferation.
Because of its nature, this type of research is not made public. You attend the main artificial intelligence congress and there is no one working for an American defence R&D centre who tells you: "We are making this autonomous tank, or this autonomous drone capable of carrying enough payload to launch missiles, and this is how we are doing it." This exists and is not published; it is confidential. So we do not know exactly what point they have reached. I don't think they have the magic formula either, and therefore the problems are the same as in the academic world for non-military applications. But the risk exists. An autonomous tank will, one day, be made. And autonomous drones that decide when to shoot and what to shoot at will exist in the end. In the USA they speak of the Terminator Dilemma.
Can you explain?
They say: "Let's imagine that we, as a democratic and socially aware country governed by the rule of law, want the good of humanity and decide not to develop them, to abandon the production of autonomous weapons. Our enemy won't do this. Will we be left unprotected?" That is the dilemma, and I agree it is one. But it is important to have regulations and prohibitions so that whoever does use such weapons can be held responsible.
What can governments and philanthropists do to foster the use of artificial intelligence to reduce inequalities?
Invest in research in positive and socially responsible applications. But, again, it is difficult because of the Terminator Dilemma, for example. It would be ideal to prioritize research in artificial intelligence for issues such as ageing, energy and the environment. This is already done but perhaps it should be funded more, to compensate.
In terms of education, in the roundtable you mentioned that ethical and humanistic values can be key factors.
Perhaps it is a little naive, but the more humanistic a technologist is, the better. Perhaps it takes time away from technological training, but if throughout your training you have learned to tackle complex problems critically, and you are curious and willing to work, then that is what is needed. So I think a more humanistic training would be a very interesting added value for future technologists.
If you were from the UOC and wished to play an active role in this revolution and not play catch-up, in what direction would you develop your students' training?
I don't have the formula. But lifelong education must be the key to everything. At the very least, students should be made to realize that they won't do the same job for the whole of their working life: not only will they not stay in the same company, they won't even keep doing the same kind of work.
A consultancy has recently forecast that by 2020, 1.8 million jobs will be destroyed by artificial intelligence but 2.3 million will be created.
Some people believe this and others don't. It seems likely to me. At a world level, it is not an enormous number of people. Just as the car or telephone sectors not only give work to their employees but also to millions of suppliers, the same may happen in other physical systems, such as robots, and also in non-physical systems, like software. In fact, we now have a significant lack of experts in computer science, not only in Spain but in general.
What kind of experts?
Computer or telecommunications engineers, for instance. All over the world, people highly trained in artificial intelligence number only in the tens of thousands, not hundreds of thousands. There is very high demand at the highest levels. The ideal profile is a computer scientist with a master's degree and a doctorate in artificial intelligence.
In fact, if we are to understand the human brain, we still have a lot to do.
But artificial intelligence does not try to imitate how the human brain works at a microscopic level. If there is progress in neuroscience in understanding how the brain works, this can be useful, but in artificial intelligence it is not the most important thing, just as planes don't flap their wings to fly. The important thing is to enable machines to do the things a person does, even if this is achieved using completely different means. It is not about imitating in the smallest detail how neurons fire in the brain, but about being inspired by this.
What excites you most right now?
The systems that can exploit what they have learnt by doing a simple task, so they can learn more quickly to do other related tasks that are more complex. This is called transfer learning, and humans and animals do it continuously.
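
As a rough illustration of this idea (the sketch below is not from the interview), transfer learning in practice often means reusing a model trained on one task and retraining only a small part of it for a new, related task. The pre-trained network, the hypothetical 10-class target task and the dummy data here are illustrative assumptions, written in Python with PyTorch.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a network whose features were already learnt on a large, generic task.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pre-trained layers so what was learnt on the first task is kept.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final classifier with a new head for the new, related task
    # (a hypothetical 10-class problem used purely for illustration).
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Only the new head is trained, so the new task needs far less data and time
    # than training the whole network from scratch.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on dummy data standing in for the new task.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 10, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

In this sketch the knowledge acquired on the first, simpler task (general visual features) is reused, and only the small new layer has to be learnt for the more specific task.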
