
"We are researching the social and emotional communication that can take place between a robot and a person"

Agata Lapedriza


27/04/2023
Teresa Bau
Agata Lapedriza, a leading international researcher in the field of artificial intelligence and head of the AIWELL group attached to the eHealth Center

Agata Lapedriza is a leading international researcher in the field of artificial intelligence, head of the Artificial Intelligence for Human Wellbeing (AIWELL) group – attached to the eHealth Center at the Universitat Oberta de Catalunya (UOC) – and also a researcher at the prestigious Massachusetts Institute of Technology (MIT). She is also a member of the UOC's Faculty of Computer Science, Multimedia and Telecommunications. Her research focuses on computer vision and social robotics, and her articles have been cited thousands of times in recent years.

 

How do you rate the latest developments in artificial intelligence, particularly ChatGPT? Why has it generated so much buzz?

The progress of artificial intelligence in the last 10 years has been spectacular, and the pace at which it's advancing means that this will continue. What ChatGPT does is incredible. Task-oriented chatbots, such as those that help us book a table at a restaurant or find a product on a website, have been around for a long time. They work relatively well, but they're very limited. The difference is that ChatGPT is an open dialogue system, so, in theory, you can talk to it about any topic. Creating open dialogue systems is much more complicated than building task-oriented ones. Five years ago, I was working on a project related to open dialogue models, and the whole thing worked very badly.

Does ChatGPT have a lot of room for improvement?

These kinds of technologies still need a lot of iterations before they can be used in practical cases, because it's necessary to fully understand how they behave (e.g. what biases they have, what information they are not able to represent correctly, etc.). But, without a doubt, what's being achieved today is fantastic and could be very useful once it's been further developed.

Part of your work focuses on explaining how artificial intelligence works, so that it's not a 'black box'.

The most recent article we published in this area put forward an explainability method to better understand how deep learning models that perform face classification work, such as the face recognition systems that many mobile phones use for secure unlocking. We're working on designing computational models that can provide explanations for their responses, making them more transparent. These explanations help us to know whether we can trust the response of the artificial intelligence system.
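To make the idea concrete, here is a minimal sketch of one common explainability technique, a gradient-based saliency map, applied to an off-the-shelf image classifier. The pretrained model (a torchvision ResNet-18) and the input file name are illustrative assumptions; this is not the specific method proposed in the article.

```python
# Minimal sketch of gradient-based saliency, one common explainability technique.
# The pretrained model and the input file are illustrative, not the article's method.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("face.jpg").convert("RGB")   # hypothetical input image
x = preprocess(image).unsqueeze(0)
x.requires_grad_(True)

scores = model(x)
top_class = int(scores.argmax(dim=1))
scores[0, top_class].backward()   # gradient of the winning score w.r.t. the pixels

# Large gradient magnitudes mark pixels whose change would most affect the decision,
# giving a rough visual explanation of what the network is "looking at".
saliency = x.grad.abs().max(dim=1)[0].squeeze(0)   # shape: 224 x 224
```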

How can we detect the biases that are often present in artificial intelligence?

We can discover biased behaviour of the system through explainability mechanisms. Both transparency and bias detection are important to ensure that artificial intelligence works fairly and reliably. This is particularly essential when this technology is used to make decisions that can affect people, such as assisting in the diagnosis of diseases. The transparency and unbiased behaviour of artificial intelligence systems are necessary to develop responsible artificial intelligence that has a positive impact on society.
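As a complement to explainability, one very simple bias check is to compare a model's accuracy across demographic groups on a labelled evaluation set. The sketch below uses toy data and illustrative group labels; it is only meant to show the idea of a disaggregated evaluation, not any specific audit used by the group.

```python
# Hedged sketch: compare error rates across groups; large gaps suggest biased behaviour.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return accuracy per group on a labelled evaluation set."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Toy evaluation data: predicted class, true class, and a group attribute per sample.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(preds, labels, groups))  # {'A': 0.75, 'B': 0.5} -> a gap to investigate
```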

Another focus of your research is interactions between robots and humans.

This year we want to concentrate part of our efforts on social robotics, to understand the shortcomings of verbal and non-verbal communication perception systems when processing information in real time. We're researching the social and emotional communication that can take place between a robot and a person. For example, in an interaction between the two, the robot should be able to perceive if the person is paying attention to it, if they're expressing a specific emotion, or if they're indicating that they expect a response from the robot.

You also have projects in computer vision systems, which is one of your specialities.

Yes, we want to make advances in the design of devices that capture images of the retina and the creation of computer vision systems that analyse these images for the early detection of diseases. These projects are being led by faculty members David Merino and David Masip, in collaboration with universities such as UC Berkeley (California, USA) and hospitals such as Vall d'Hebron in Barcelona. Also in relation to the diagnosis of diseases, faculty member Carlos Ventura is starting a very interesting project to detect mouth cancer using images.

What is the current stage of development of computer vision technologies?

In recent years, huge progress has been made in pattern detection: for example, object detection, facial recognition, detection of facial points of interest (such as the corners of the eyes or mouth), 3D reconstruction from one or several 2D images, and tools for diagnosis based on automatic analysis of medical images, such as mammograms or X-rays. All this progress has been possible thanks to deep learning, the large amounts of data available, and hardware that can perform these computations efficiently (e.g. GPUs).
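As an illustration of the kind of pattern detection mentioned above, the sketch below runs a generic pretrained object detector from torchvision on a single image. The input file name is a placeholder and the model is a standard public one, not a system from the interviewee's group.

```python
# Illustrative use of a pretrained object detector (Faster R-CNN from torchvision).
import torch
from torchvision import transforms
from torchvision.models import detection
from PIL import Image

weights = detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = detection.fasterrcnn_resnet50_fpn(weights=weights)
detector.eval()

image = Image.open("street_scene.jpg").convert("RGB")   # hypothetical input image
x = transforms.ToTensor()(image)

with torch.no_grad():
    prediction = detector([x])[0]   # bounding boxes, class labels and confidence scores

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.8:   # keep only confident detections
        print(weights.meta["categories"][int(label)],
              [round(v, 1) for v in box.tolist()],
              float(score))
```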

What are the current weaknesses of these systems?

Some of the weaknesses include biases and poor generalization to data that are not well represented in the training sets. Understanding how to overcome these weaknesses is a challenge, and a lot of effort is being put into it. In addition, there are many general challenges in artificial intelligence that also affect computer vision, such as being able to learn from smaller data sets. Deep learning algorithms need huge amounts of labelled data. For example, to learn to detect pneumonia in chest X-rays you need many X-ray images of both healthy people and people with pneumonia, and getting these images is expensive. It would be fantastic if artificial intelligence could learn robustly from just a few examples, but this isn't yet possible.
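One standard way to reduce, though not eliminate, the need for labelled data is transfer learning: reuse a network pretrained on a large generic dataset and retrain only its final layer on the small specialised set. The sketch below uses dummy tensors in place of real X-ray images; it illustrates that common workaround, not the robust few-example learning the answer says is still out of reach.

```python
# Minimal transfer-learning sketch: freeze a pretrained backbone, train only a new head.
import torch
from torch import nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so only the new classification head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a 2-class head (e.g. healthy vs. pneumonia).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for a small labelled dataset of chest X-rays.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(float(loss))
```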

You are also working on solutions that predict health problems by analysing our personal data.

Faculty member Xavier Baró and I are exploring possible solutions for the early detection of mental illnesses based on data collected by personal electronic devices, such as mobile phones or smartwatches. Our hypothesis is that the data collected by the sensors in these devices (such as the accelerometer, the light sensor, or physiological signals such as heart rate) contain information about our habits, for example how many hours we sleep, or whether we move a lot or a little. This information is of clinical interest, because changes in habits can be a cause (or a consequence) of a mental health problem. For example, it's known that a disturbed sleep pattern can be related to increased stress, and prolonged stress can lead to serious mental illness. We're studying whether the information these sensors capture can be used for the early detection of mental illness, at the point when treatment is most effective. In this project, we're working in collaboration with researchers at the Massachusetts Institute of Technology (MIT, USA), Albizu University (Florida, USA), and the Instituto Politécnico Nacional (Mexico). We expect to make exciting progress during 2023.
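To make the hypothesis more tangible, the sketch below computes a few habit-style summaries (estimated sleep hours, activity level, resting heart rate) from one day of simulated accelerometer and heart-rate samples. The feature names and thresholds are illustrative assumptions, not the project's actual pipeline.

```python
# Hedged sketch of habit features derived from raw sensor streams; thresholds are illustrative.
import numpy as np

def daily_habit_features(accel_magnitude, heart_rate, sample_rate_hz=1.0):
    """Summarise one day of 1 Hz accelerometer-magnitude and heart-rate samples."""
    accel = np.asarray(accel_magnitude, dtype=float)
    hr = np.asarray(heart_rate, dtype=float)

    # Crude proxy for sleep: samples with almost no movement.
    still = accel < 0.05
    estimated_sleep_hours = still.sum() / (sample_rate_hz * 3600)

    return {
        "estimated_sleep_hours": float(estimated_sleep_hours),
        "mean_activity": float(accel.mean()),
        "activity_variability": float(accel.std()),
        "resting_heart_rate": float(np.percentile(hr, 5)),
    }

# Toy day of data: 24 hours of 1 Hz samples (~8 h nearly still, ~16 h active).
rng = np.random.default_rng(0)
accel = np.concatenate([rng.uniform(0.0, 0.04, 8 * 3600),
                        rng.uniform(0.1, 1.0, 16 * 3600)])
hr = rng.normal(70, 10, accel.size)
print(daily_habit_features(accel, hr))
```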

Is there any other research you would like to highlight?

Over the last three years, we've been working on detecting natural disasters and incidents (e.g. forest fires, earthquakes, or car accidents) in images from social networks. One of the goals is to be able to detect images of events that may require humanitarian aid and to facilitate a quick response from NGOs, which usually struggle to get information about what's happening. Our latest article features experiments on how to detect images of incidents on social media, and the results are very promising. We've also developed a web app in order to be able to do this monitoring in real time. For this research project, we're collaborating with researchers from MIT and the Qatar Computing Research Institute.
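A much-simplified sketch of how such real-time monitoring could be wired up: each incoming image is scored by an incident classifier and flagged if the most likely incident class is confident enough. The class list and the untrained stand-in classifier below are purely illustrative; this is not the system or the model described in the article.

```python
# Sketch of flagging candidate incident images; the classifier here is an untrained
# stand-in (in practice it would be a model trained on labelled incident images).
import torch
from torch import nn
from torchvision import transforms
from PIL import Image

INCIDENT_CLASSES = ["fire", "flood", "earthquake", "car accident", "no incident"]  # illustrative

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, len(INCIDENT_CLASSES)))
classifier.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def flag_incident(path, threshold=0.7):
    """Return (label, confidence) if the image looks like an incident, otherwise None."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = classifier(x).softmax(dim=1)[0]
    confidence, idx = probs.max(dim=0)
    label = INCIDENT_CLASSES[int(idx)]
    if label != "no incident" and float(confidence) >= threshold:
        return label, float(confidence)
    return None

# In a monitoring loop, newly posted images would be passed through flag_incident
# and the flagged ones surfaced to aid organisations.
```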

You've been a visiting researcher at Google. What was it like working there?

I spent a year at Google Research with the Visiting Faculty programme. It was the first time I'd done research in a company, outside academia. It was a very interesting and positive experience, which is helping me to better mentor students who want to go into industry after finishing their doctoral programme.

Finally, what message would you give to motivate girls to go into mathematics and technology?

I'd tell them that mathematics and technology can be used to find solutions to a huge variety of problems, in fields as diverse as health, education, entertainment, urban planning, and design. They can also be used to create visual art, music and literature. And I'd tell them not to believe the stereotypes that are sold to us in films or series, where the people working in tech are weird or antisocial. It's all false. There are wonderful, open and sociable people working in tech, and it's fun and exciting to be able to participate in the development of all these fields. We need people with different backgrounds, sensitivities and concerns to contribute to the creation of inclusive technology.

 

The eHealth Center investigates how digital technologies contribute to improving the health and well-being of individuals and groups; how digital health helps us to have more knowledge and a better capacity to make decisions about our health; and how it allows us to strengthen health systems and health professionals through technology, communication and data.
