1/24/24 · News

eHealth Center participates in British government programme for ethical AI in healthcare

The UOC, King's College London and the Catalan government work together on ethical AI in healthcare

The emergence of artificial intelligence (AI) as a clinical decision support tool has raised a number of ethical questions. AI can improve professionals' capabilities in certain medical tasks, such as interpreting medical images and making diagnoses based on them, optimizing care processes and personalizing treatments. However, if applied without taking ethical aspects into account, it carries risks such as the dehumanization of medical practice, a lack of transparency and unequal access. The Universitat Oberta de Catalunya (UOC) has been chosen to participate in The human role to guarantee an ethical AI for healthcare project, which is led by King's College London and is part of the British government's prestigious Responsible AI UK programme. The Government of Catalonia's delegation in Brussels is also involved in the project.

The aim of the project is twofold: firstly, to validate an ethical model with five concepts that can be adopted in a practical way and applied to artificial intelligence in healthcare; and, secondly, to generate reflection on the role of medical professionals, AI developers and patients, with the goal of achieving ethical AI. The model has been developed by the researcher Dr Raquel Iniesta, leader of the Fair Modelling and TDA group of the Department of Biostatistics and Health Informatics at King's College London, and her team. As Iniesta explained, "Although there's a lot of medical literature on AI applied to healthcare, there are still relatively few AI models used in clinical practice, as there are ethical factors that hinder their implementation. The integration of AI must ensure patients have the necessary space to explain their health problems. The medical community, in turn, must still be able to continue developing their clinical criteria without interference."


Five basic concepts for ethical AI in healthcare

The model developed by Iniesta, who holds a degree in Mathematics from the Universitat Autònoma de Barcelona (UAB) and a PhD in Public Health, sets out five concepts that must be built into AI applied in clinical practice: adherence to the ethical pillars of the medical profession; the principle that technology cannot replace the knowledge of clinical professionals, only supplement it; doctors' responsibility for clinical decisions, even when they use AI to support them; the essential empowerment and education of patients; and developers' responsibility and accountability for automated medical decisions made by AI.

Iniesta explained that "collaboration is needed. Human supervision of medical decisions based on AI algorithms reduces the potential dangers of this technology. The model we've defined must be qualitatively validated and opened up to debate and reflection by the different stakeholders: clinical professionals, patient representatives, academia, governments and society at large." To this end, meetings will be held in 2024 in three cities: London, Brussels and Barcelona. The project will officially start in February 2024 and run until mid-2025.

On behalf of the UOC, the project will be coordinated by Dr Manuel Armayones, director of the eHealth Center's Behavioural Design Lab and member of the Faculty of Psychology and Education Sciences. According to Armayones, "the fact that AI exceeds human capabilities in certain areas, such as health sciences, raises important ethical dilemmas, such as the risk of replacing humans and the possible dehumanization of medicine. It's essential to integrate an ethical component that ensures the benefits of AI are compatible with respect and human dignity."

Specifically, the Behavioural Design Lab will explore how persuasive design strategies can be integrated into AI-based systems. "It remains to be seen whether the persuasive techniques included in the design of applications, combined with the personalization of interactions that AI allows, fall inside or outside ethical boundaries. The new forms of generative AI will lead to unimaginable situations. We must therefore carry out a forward-looking analysis to ensure we use it within the boundaries of ethics," said Armayones.

The issue of who should be responsible for the ethical application of AI, both in medicine and in other fields, is a hotly debated topic. The social impact of the launch of ChatGPT just over a year ago has brought the debate about AI to the general public. In Europe, a law regulating the use of this technology is about to be passed. As Iniesta explained, "current laws rely on vague concepts. For example, they say that we must ensure AI is fair or that it respects human dignity, but they're not specific enough. The aim of this project is to generate debate on the ethical aspects of AI because, if we do it well, we'll be able to enjoy its benefits without dehumanizing medical practice."
