Josep Curto, director of the UOC's master's degree programme in Business Intelligence and Big Data Analytics
Josep Curto is a professor at the UOC's Faculty of Computer Science, Multimedia and Telecommunications and an expert in big data, business intelligence and business analytics. He is also the director of the UOC's master's degree programme in Business Intelligence and Big Data Analytics. He believes that artificial intelligence (AI) is set to profoundly transform society and redefine existing occupations and professions, and that this will require a change of mentality in order to meet the ethical challenges involved in this technological revolution. We are being presented with technologies and techniques of immense social and productive potential that open the door to multiple applications. And Curto is clear on this point: the responsibility for ensuring these applications are positive for society falls on the people who create and use them. Algorithms only do what they are programmed to do.
To what extent do artificial intelligence technologies improve people's lives? Can they be turned into tools for eliminating discrimination and inequality between social groups, or do they risk imposing uniform habits and disregarding differences?
Artificial intelligence, and machine learning in particular, along with many other technologies, can have positive as well as negative impacts. On the one hand, these technologies can be used by doctors, managers and other professionals to perform their duties better. Take medical care, for example. Today there are people of many different nationalities living in Europe (where citizens move freely between countries), and these people are not always able to express themselves clearly when they need medical attention. In this field, having a system capable of translating any language into Spanish would undoubtedly be a great help.
On the other hand, some applications present important biases, in both data capture and design, which can lead to scenarios of discrimination and inequality. For this reason, it is extremely important for every professional involved with these techniques to be aware of the existing biases, understand the risks we are introducing, and include monitoring capabilities alongside the deployment of these techniques, so that any biases are detected and their impact measured and reduced.
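One way to monitor a deployed model for the kind of bias Curto describes is to compare outcome rates across groups. The sketch below is purely illustrative (the group labels, predictions and the loan-approval framing are hypothetical, not from the interview); it computes a simple demographic-parity gap, one of several fairness metrics a monitoring pipeline might track.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates
    across groups; a value near 0 suggests similar treatment."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decisions from a loan-approval model (1 = approved)
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 -> a large gap worth investigating
```

In a real deployment this check would run continuously on production predictions, alongside other metrics such as equalized odds, so that a drift towards unequal treatment is detected rather than discovered after the fact.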
At the same time, we must endeavour to imagine and create applications that genuinely have a positive impact on society.
Robotics is one of the main allies of companies committed to innovation. To what extent is robotics becoming a driver of the future rather than an instrument for the destruction of jobs?
Robotics is not a new technology. We have been transforming production lines for many years. The benefits are obvious: reduced risk, for example, or greater precision in the execution of repetitive tasks. As we continue to improve machine learning techniques and explore other, broader applications, fear of the mass destruction of jobs has returned. What is happening is that, once again, we are faced with a transformation of the employment scenario. For example, when the motor car replaced the horse as a means of transport, the occupation of blacksmith was replaced with the occupation of mechanic.
Today, many occupations will be transformed and boosted by the use of these technologies; at the same time, complementary and new professions will appear. Robots will need maintenance, for example, and the "robot mechanic" will be the natural response to that. What is clear, and this is a challenge faced by the country as a whole, is that the competences we are going to need in this new age will be different, and people will need to train for these new professions which, in many cases, we find difficult to imagine.
In this respect, how would you define social robotics at this stage of the 21st century?
Although we have prototypes such as Hanson Robotics' Sophia, social robotics is still at an early stage. We live under the illusion that the technology is much more advanced than it actually is.
In this application, a much more sophisticated combination of elements is required, especially in the case of anthropomorphic robots. We're talking about facial expressions, verbal communication, specific domain knowledge and even movement coordination. Part of the improvement in social robotics is due to its interaction in real-life contexts, where emotional intelligence is one aspect that is frequently lacking. This area of research is of singular importance for improving social robotics.
Artificial intelligence is changing the way we shop and purchase goods. But could it also change our consumer habits and bring down the quality of the goods we purchase?
We are already in a scenario in which these techniques are affecting consumer habits. Just look at how, after using a streaming service with a recommendation system for a while, the results presented become encapsulated in predetermined profiles (a data bubble): we are shown a series of specific contents, and recommendations even start to repeat. We are subjected to a continuous barrage of displays by companies competing for our attention and seeking to influence our shopping and consumer habits.
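The narrowing effect Curto describes can be reproduced with a deliberately naive recommender. The sketch below (catalogue, titles and genre tags are invented for illustration) ranks unseen items purely by overlap with past viewing, so the more of one genre a user watches, the more of that genre floats to the top, which is precisely how a data bubble forms.

```python
from collections import Counter

def recommend(history, catalogue, k=3):
    """Naive 'more of the same' recommender: rank unseen items by
    how many of their genres the user has already watched."""
    genre_counts = Counter(g for item in history for g in catalogue[item])
    unseen = [i for i in catalogue if i not in history]
    return sorted(unseen,
                  key=lambda i: -sum(genre_counts[g] for g in catalogue[i]))[:k]

catalogue = {
    "Drama A": {"drama"}, "Drama B": {"drama"}, "Drama C": {"drama"},
    "Comedy A": {"comedy"}, "Sci-fi A": {"scifi"},
}
# After two dramas, more drama is always ranked first
print(recommend(["Drama A", "Drama B"], catalogue))
```

Production systems are far more sophisticated, but without an explicit diversity or exploration term they exhibit the same feedback loop: past choices shape recommendations, which shape future choices.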
You might think this phenomenon is beneficial for the company rather than the consumer, but in fact it is counterproductive. We are creatures of habit, but we like novelty – albeit without any risk – and we soon get bored if we always receive the same recommendations. We can cease to show interest in what is recommended to us and even systematically ignore notifications, as if they were mere noise. Think about, for example, the number of adverts that we, as consumers, systematically ignore, whether consciously, using ad blocking software, or simply by paying only limited attention.
To what extent could the application of new technologies in health care lead to public health care playing second fiddle to private health care and, at the same time, reduce the quality of medical care, given that the costs in this area can only be met by the state?
There is significant interest in the application of all data- and analysis-related technologies in both public and private branches of the health care system. Technology can play a positive role in improving health care, hospital management, screening, forecasting demand, etc.
We must be realistic, however: technology is only one of the factors that condition the evolution of this sector. The main effects we are observing in the market's evolution (such as the public-private balance) are increasingly associated with social policies and the interests of the government of the day. Just look at how austerity policies have affected public health care: they have limited resources and boosted the growth of private health care. In the UK, Brexit is accentuating the crisis in the National Health Service (NHS), a crisis created by cost cutting combined with a talent drain from the sector.
In the area of security, does artificial intelligence mean a gradual reduction of privacy and increased control, to the benefit of the state and the companies who use our data to raise profits?
We will have to see where the lines are drawn. This is an issue that must, of course, be discussed at national level and tied to a set of regulations and responsibilities. If we wish to live in a society in which the values of freedom and equality are enshrined in national law, we must once again review the social contract. We should perhaps include a code of conduct, similar to that used in medicine, in which we define the values that must govern the use and exploitation of data.
In recent months, journalists have once again been posing an old but persistent question: Will software be created that is capable of replacing journalists and producing moderately complex digital content, beyond finance and sports data journalism? Will AI change the depth at which we read the news?
As in other professions, journalists can use these technologies to improve the way they work: for example, in editorial style validation, in creating news articles that combine data and a specific knowledge base, such as sports, or in detecting fake news. This also allows them to concentrate on more complex work, such as investigative journalism. The first task of these technologies is to automate repetitive, low-value tasks: reporting the latest result of a football match, for example. In this respect, all professionals should reflect on the following question: do we really want our work to continue to be little more than a series of repetitive tasks?
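The kind of automated match report Curto mentions is, at its simplest, structured data poured into a template. The sketch below is a minimal illustration of that idea; the field names, team names and wording are invented, not any news agency's actual system.

```python
def match_report(match):
    """Turn structured match data into a short news item via a
    template. Field names here are hypothetical."""
    home, away = match["home"], match["away"]
    hg, ag = match["home_goals"], match["away_goals"]
    if hg > ag:
        headline = f"{home} beat {away} {hg}-{ag}"
    elif ag > hg:
        headline = f"{away} win {ag}-{hg} away at {home}"
    else:
        headline = f"{home} and {away} draw {hg}-{ag}"
    return f"{headline}. Top scorer: {match['top_scorer']}."

print(match_report({"home": "Barcelona", "away": "Girona",
                    "home_goals": 2, "away_goals": 1,
                    "top_scorer": "Player X"}))
# -> Barcelona beat Girona 2-1. Top scorer: Player X.
```

Real systems in finance and sports journalism work on the same principle at scale, often adding natural-language generation models on top of the templates to vary phrasing.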
As for whether AI will change how we read, that question arrives a little late. The younger generations live in environments of fast, brief, often image-centred communication, which means they consume content in a different way. These technologies help to determine relevant content with respect to our preferences, eventually generating data bubbles.
The use of artificial intelligence in our day-to-day lives and in social relationships themselves involves ethical conflicts. What action should society take? What major ethical challenges are associated with the use of this technological intelligence?
We must educate everybody about the ethical conflicts generated by new technologies, about awareness of our own biases and how these are magnified, and about detection and reduction mechanisms. Unfortunately, a change of mentality is required by numerous actors, from governments and the business community down to individuals. In this respect, there are similarities with the issue of climate change. In spite of the evidence that has existed for a long time, the problem has been minimized and underestimated, and no hard-hitting decisions have been taken.
In the field of education, to what extent can artificial intelligence help to reduce academic failure? Will AI move us in the direction of more personalized learning, or will it impose more uniform, homogeneous content, irrespective of differences?
In the context of education, the challenge is twofold: to educate professionals in the skills that will be relevant in the coming years, and to incorporate these techniques into our own processes. Part of our academic research at the UOC is focused on what we call Learning Analytics: understanding and improving the learning process in different environments.
One of the most salient issues is, without a doubt, academic failure in its various manifestations: habitual, circumstantial, etc. As we use more digital channels in the education process – and this is very clear at the UOC – we have the opportunity to capture more signals for understanding our students' learning processes. This is not just about understanding the reasons for failure, which is extremely important in its own right; it is also about helping teachers to identify when a student is at risk of failure and to take action to prevent it.
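An at-risk early-warning system of the kind described above can start from very simple engagement signals. The sketch below is a hypothetical rule-based illustration; the signal names and thresholds are invented and do not reflect the UOC's actual Learning Analytics models, which would typically use trained statistical models rather than fixed rules.

```python
def at_risk_signals(student):
    """Return the engagement warnings that fire for a student;
    an empty list means no flag is raised. Thresholds are
    illustrative only."""
    signals = []
    if student["logins_last_30d"] < 4:
        signals.append("low platform activity")
    if student["assignments_submitted"] < student["assignments_due"]:
        signals.append("missing assignments")
    if student["forum_posts"] == 0:
        signals.append("no forum participation")
    return signals

flags = at_risk_signals({"logins_last_30d": 2,
                         "assignments_submitted": 1,
                         "assignments_due": 3,
                         "forum_posts": 0})
print(flags)  # all three warnings fire for this profile
```

The value of such a system is not the flag itself but the action it enables: routing the student to a tutor early enough for intervention to make a difference.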
With the personalization of learning, we have a golden opportunity. Many of the professions of the future will combine existing and new skills, creating unique study paths. Here we have the opportunity to convert the contents into digital assets and create assessment systems based on the knowledge, interests and learning mechanisms of the student. On this point, however, we have to redefine the formats in which these skills are marketed.
How can artificial intelligence help human beings to make better use of their time without their participation in production processes being reduced to that of a mere observer?
Existing occupations need to be redefined. Some jobs will cease to exist as they are automated or become obsolete. The challenge is that we are wholly immersed in this change, making it more difficult for us to reflect and identify the next steps. Without a doubt, the education system can and must play a fundamental role in this transformation.
What will be the advantages of AI as a driver of development in the coming decades, from the economic, social and educational point of view?
AI is set to profoundly transform society and be a pillar of numerous sectors, so it would be logical to design a nationwide strategy and turn it into a transversal technology. The United Arab Emirates, for example, has become the first country in the world to appoint a minister for artificial intelligence.
If the objective is to transform a country into an AI power, having a top-level profile in government will help to set in motion collaboration initiatives between civil society, academia and the business community, with the aim of identifying areas in which AI will be fundamental to innovation. It will also make it possible to design coherent and coordinated open data policies at national level, create strategies for attracting and generating talent, develop policies based on evidence and data (what we call "evidence-based policies", which we already know to be critically needed), promote an appropriate research agenda, and enable investment in research centres, new businesses, centres of excellence, etc.