AI avoids sexism, but still discriminates based on age
A UOC study confirms that five of the most popular chatbots tend towards ageism.
The UOC is also researching age discrimination, one of the most invisible forms of discrimination according to the United Nations, in its #Vellisme project.
ChatGPT, Gemini, Copilot... generative artificial intelligence (AI) chatbots have quickly carved a niche for themselves in our daily lives. They help us with tasks at work and in our personal lives, ranging from clearing up doubts and looking for information, to organizing our holidays. But despite being presented as neutral and objective, is that really the case? An international study involving the Universitat Oberta de Catalunya (UOC) has found that although the most widely used AI chatbots have taken steps to avoid sexism and gender bias, they still continue to discriminate on grounds of age.
The research, published in the open-access journal Big Data & Society, shows that generative artificial intelligence chatbots have significant age-related biases that affect their users, but are more cautious about avoiding sexist stereotypes. Mireia Fernández-Ardèvol, a researcher in the Communication Networks and Social Change (CNSC) research group and professor in the Faculty of Information and Communication Sciences, took part in the research, which examined the behaviour of five of the most popular chatbots. The study was carried out using qualitative interviews, treating the AI models as if they were human beings in order to identify the extent to which they reproduce social stereotypes in their responses.
“Generative AI seems to have learned to be sensitive to sexism, but not to ageism”
"We applied a method that is common in traditional sociology: the interview. As the chatbots talked to us using natural language, we 'spoke' to them as if they were people, using a script that discussed fictitious situations and questions about digital practices," said Fernández Ardèvol, who was recently awarded an ICREA Academy of Excellence grant. The results are revealing: generative AI seems to have learned to be sensitive to sexism, but not to ageism.
A double standard
The team investigated the results from ChatGPT, Jasper, Gemini, Copilot, and Perplexity, five of the most widely used freely accessible models. They interacted with them by having semi-structured conversations, using a sandboxed digital working environment (with new accounts, cleared browsers and controlled geolocation) to avoid biases arising as a result of prior use or personalization. The questions they entered asked the chatbots to assign an age or gender to fictional characters based on their digital habits, and to explain which features were most useful for different types of users. After the answers were analysed, the results pointed to a double standard.
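The study's analysis of the chatbots' answers was qualitative, but the protocol it describes can be illustrated in code. The sketch below is hypothetical, not the authors' actual instrument: it builds the kind of prompt the article describes (asking a chatbot to profile a fictional character from a digital habit) and tags a free-text reply for whether it volunteers an age bracket or a gender. The prompt wording, keyword lists and category names are all assumptions for illustration.

```python
import re

def build_prompt(habit: str) -> str:
    """Ask a chatbot to profile a fictional character from a digital habit
    (hypothetical wording, modelled on the study's description)."""
    return (f"A fictional person {habit}. "
            "What age and gender would you guess they are, and why?")

# Illustrative keyword lists; a real codebook would be far richer.
AGE_TERMS = {"young": ["young", "teenager", "twenties"],
             "older": ["older", "elderly", "senior", "retired"]}
GENDER_TERMS = ["man", "woman", "male", "female", "he", "she"]

def has_term(text: str, term: str) -> bool:
    """Whole-word match, so 'man' does not fire inside 'woman'."""
    return re.search(rf"\b{re.escape(term)}\b", text) is not None

def tag_reply(reply: str) -> dict:
    """Flag whether a reply assigns an age bracket and/or a gender."""
    text = reply.lower()
    age = next((bracket for bracket, terms in AGE_TERMS.items()
                if any(has_term(text, t) for t in terms)), None)
    return {"age_assigned": age,
            "gender_assigned": any(has_term(text, t) for t in GENDER_TERMS)}

# The double standard the study reports would surface like this:
# an age bracket is volunteered while a gender attribution is declined.
reply = ("I won't guess their gender, but heavy TikTok use "
         "suggests someone young, likely a teenager.")
print(tag_reply(reply))  # {'age_assigned': 'young', 'gender_assigned': False}
```

Run against many such replies per chatbot, counts of `age_assigned` versus `gender_assigned` would make the asymmetry between the two kinds of attribution directly comparable.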
On the one hand, chatbots tend to give "politically correct" answers regarding gender, and avoid making assumptions about men and women and assigning stereotypical roles to them. On the other hand, they do not show the same sensitivity to age, as they assign profiles and abilities much more readily depending on whether a person is young or old. For example, when considering someone who heavily engages with Instagram or TikTok, AIs do not say whether they are a man or a woman, but they do place them in a younger age category than someone who follows political debates on Facebook.
Biases that mirror society
Fernández-Ardèvol believes that chatbots are more cautious with gender than with age because society is more careful about sexism than ageism. "The people who design, program and train generative AI have internalized the idea that sexism is wrong, but the same does not apply to ageism. Whether as a result of human intervention or the way in which chatbots learn from the texts and materials they receive in their training, chatbots tend to avoid sexist comments, but they are not always able to avoid ageism," said Fernández-Ardèvol, member of CNSC, which is affiliated with the UOC-TRÀNSIC research centre.
Interestingly, the AI models themselves, which often include warnings about the risk of adopting gender stereotypes, also describe what they have to offer in different ways depending on the user's age: for older people, they highlight care, simplified explanations and help in everyday life; while for young people the emphasis is on creativity, learning and entertainment.
Given that these biases are not only technical, but reflect values and stereotypes that are present in society and in the data with which these systems are trained, the research team believes that this situation may contribute to reinforcing these inequalities and creating a biased profile of specific groups, and older people in particular. "There is a danger of legitimizing this discrimination, rendering diversity invisible and limiting opportunities in areas such as employment, health and access to services, particularly if those services are digital. It can even affect the public perception of ageing and reduce older people's dignity," said Fernández-Ardèvol, who is an expert in communication studies.
She is confident that the companies that design and program these tools will take the results of this research into account, in order to incorporate greater social justice and end ageism in their systems. "While one might argue that chatbots simply reproduce problems that already exist in society, the tech companies decide what data they use to train them, which means they can perpetuate these biases or hopefully overcome them with diverse data, ethical audits and an active responsibility in technological development."
The #Vellisme project
The results of this research reinforce the idea that greater sensitivity towards ageism is required in the development of artificial intelligence, and more inclusive approaches are necessary to prevent the reproduction of social discrimination. Another UOC research project has also been working on this: #Vellisme – Digital ageism: Ageist stereotypes and the vicious circle of digital exclusion in Spain.
The programme, which is funded by the Social Observatory of the "la Caixa" Foundation, aims to overcome the digital inequality that older people in Spain have to cope with, by identifying the critical factors and stereotypes that have been most strongly internalized by a large proportion of the population and which foster digital ageism, in order to promote fair digitalization for this age group.
The project is led by Mireia Fernández-Ardèvol in collaboration with Sara Suárez and Marta Cambronero, who are also researchers in the CNSC group at the UOC. Its results, which have yet to be published, also show that the adult population tends to agree with stereotypes about the digital abilities of older people.
These projects contribute to the UN's Sustainable Development Goals (SDGs), especially 10, Reduced Inequalities, and 5, Gender Equality, and are aligned with the Universitat Oberta de Catalunya's research missions for Ethical and human-centred technology, Digital transition and sustainability, and Culture for a critical society.
Reference article:
Belotti, F., Fernández-Ardèvol, M., Bozan, V., Comunello, F., & Mulargia, S. (2026). Double standards of generative AI chatbots: Unveiling (digital) ageism versus sexism through sociological interviews. Big Data & Society, 13(1). https://doi.org/10.1177/20539517261419407
Transformative, impactful research
At the UOC, we see research as a strategic tool to advance towards a future society that is more critical, responsible and nonconformist. With this vision, we conduct applied research that's interdisciplinary and linked to the most important social, technological and educational challenges.
The UOC's over 500 researchers and more than 50 research groups are working in five research units focusing on five missions: lifelong learning; ethical and human-centred technology; digital transition and sustainability; culture for a critical society; and digital health and planetary well-being.
The university's Hubbik platform fosters knowledge transfer and entrepreneurship in the UOC community.
More information: www.uoc.edu/en/research
Press contact
Núria Bigas Formatjé