"AI should not replace scientific thinking, but it can strengthen research if used well"
Marc Romero, new editor-in-chief of ETHE, a global leader in education and technology research
Marc Romero is a member of the UOC's Faculty of Psychology and Education Sciences and the Edul@ab research group, affiliated with the Futures of Education in the Digital Age Research Centre (UOC-FuturEd). He has recently taken over as editor-in-chief of the International Journal of Educational Technology in Higher Education (ETHE), which had been led for more than 20 years by Prof. Josep Maria Duart. Having previously served as deputy editor, Romero now ushers in a new phase for research and academic publishing as they grapple with the multiple challenges posed by generative artificial intelligence (GenAI), a field in which he is an expert. In this interview, Romero shares his insights on the far-reaching changes that this technology is bringing to higher education, particularly in the field of research.
How are you approaching the challenge of leading ETHE?
I'm approaching this challenge with a combination of enthusiasm, a sense of responsibility and a forward-looking vision. After more than two years as deputy editor, I'm well acquainted with the journal's internal dynamics, its editorial processes and, most importantly, the people who make it possible. Thanks to this experience, I can take on this new chapter with a realistic perspective and level of ambition.
My goal is to strengthen ETHE's position as a global leader in educational technology in higher education, while upholding the scientific rigour that has earned it its reputation. I also want to usher in a more open, diverse and sustainable phase by further professionalizing the editorial team, expanding the journal's global dimension and strengthening the links between research, educational practice and social impact.
This comes out of a commitment to open science, collaborative work and the desire for ETHE to continue serving as an international showcase for the UOC's talent and values.
“Excessive or uncritical AI use can weaken fundamental processes such as questioning, carrying out in-depth analyses, synthesizing ideas and building one's own arguments”
The journal is a benchmark for education and technology research. How are technological disruptions such as AI affecting education?
The AI boom has far-reaching implications for education, extending beyond new tools to challenge how we think about learning assessment, academic authorship and even skills development. GenAI in particular is forcing us to reconsider what it means to learn, how we support students and the role of teachers in this new landscape.
This transformation is already clearly reflected in ETHE, with special issues and articles focusing on topics such as assessment in the age of AI, the educational use of GenAI, ethical implications and critical thinking regarding its use. The journal understands AI as a complex phenomenon that comes with both opportunities and risks. Our goal is to provide scientific evidence and critical insights to help universities integrate it in an educationally meaningful, responsible way, aligned with the values of higher education.
Where do you think the use of GenAI currently stands in the university sector?
We're in a period of transition and uncertainty. Generative AI has emerged at breakneck speed, disrupting many traditional university practices, particularly those related to authorship, originality and learning assessment. Universities are adapting to a new landscape in which it's not always easy to distinguish between students' own work and content generated with the assistance of AI tools.
Vastly different approaches coexist: from uncritical or instrumental uses where GenAI is blindly trusted, to more reflective efforts to integrate it as a support for teaching and learning. It's clear that the main challenge facing universities is not technical, but educational and institutional: redefining what it means to learn, how we assess learning and how we uphold academic integrity while still taking advantage of the opportunities offered by this technology.
How can we make it easier for students to use GenAI more critically and responsibly?
Promoting the critical and responsible use of generative AI starts with enhancing students' digital competence. This should be viewed not merely as a technical skill, but as a competence that encompasses civic, ethical and other dimensions, too. In fact, major digital competence frameworks, such as the recent DigComp 3.0 and initiatives in Catalonia, already embed GenAI throughout this competence. This means educating students about how these tools work and about their potential uses and risks: biased responses, hallucinations, lack of source traceability, overreliance on technology and so-called "cognitive atrophy".
The second step relates to teaching: it is necessary to design activities that make reflection inevitable. These include tasks where students have to justify their decisions, verify information, compare GenAI responses with reliable, field-specific sources, and take a stance when confronted with dilemmas they may encounter in their future careers. Instead of focusing solely on prohibiting or regulating it, the key is for students to learn to use GenAI with discernment, transparency and responsibility, and to ensure it becomes a support for their own thinking, not a substitute.
Critical thinking is becoming an increasingly essential skill in today's world. What is the best way to sharpen it at the university level?
This question has a lot to do with the last one, because we can't discuss the responsible use of GenAI or any other technology without a foundation in critical thinking. These two factors are inseparable and must be developed together in university education.
Honing students' critical thinking means helping them to embrace an active attitude towards knowledge. In this context, it's essential that they learn not only to seek answers, but also to ask meaningful questions and evaluate the information they receive, whether it comes from academic, technological or social sources. Knowing how to ask good questions is a core skill for deeper, more meaningful learning.
In teaching, this takes the form of activities that encourage reflection, comparison of ideas and argumentation when tackling relevant problems in a specific field of study. In this way, students not only learn, but also develop their own understanding of their context and discipline, with discernment, autonomy and responsibility.
ETHE is a scientific journal. What are the risks of using AI in research?
GenAI poses significant risks in research if it's not used judiciously. One of the main concerns is that the boundaries of authorship and scientific responsibility are becoming increasingly blurred. Even when such tools are used, the knowledge produced must always remain the responsibility of researchers: any mistakes generated by the tools are also the researchers' errors.
There are also risks related to the reliability of knowledge and the false assumption that technology is neutral. Believing that AI is inherently objective is naïve, since these tools are developed by specific groups and reflect certain values, interests and biases. Moreover, they can produce content that appears highly credible, but this doesn't mean it is free from errors or hallucinations. These can go unnoticed if outputs aren't properly verified, reinforcing dominant perspectives while marginalizing viewpoints that would bring greater plurality and representativeness to the situation or phenomenon being researched.
Finally, there's the risk of cognitive atrophy that I mentioned earlier, which also affects research. Excessive or uncritical AI use can weaken fundamental processes such as questioning, carrying out in-depth analyses, synthesizing ideas and building one's own arguments. If technology systematically replaces intellectual effort, the research process itself can become impoverished. In research, as in learning or indeed any task a human may perform, AI must serve as a support for scientific thinking, not as a substitute for it.
Apart from the risks, what advantages does GenAI bring to researchers' work?
GenAI can be a useful support tool in research when used responsibly and without replacing scientific judgement. Overall, it can streamline mechanical and repetitive tasks and free up researchers to focus on the more thoughtful and analytical aspects of their work. It's especially useful in the early stages of a research project, as it can be used to identify key concepts, lines of debate and relevant terminology, as well as assisting in outlining ideas or guiding study design, always with verification against reliable academic sources. It can also aid in repetitive tasks or data analysis, and can stimulate creativity by helping to clarify or reformulate ideas.
In fact, it's already used in many research projects to support systematic literature reviews and improve text drafting, although the direct generation of content using GenAI is not recommended. It's particularly useful for researchers who are not native English speakers. In any case, a key aspect is transparency: an increasing number of articles explain how GenAI has been used in the methodology or in text revision, but in no case can it be listed as the author. These are just a few examples of its potential, which must be explored cautiously to avoid compromising scientific rigour.
Could GenAI create a generational gap between junior and senior researchers?
At first glance, it might seem that GenAI could open a generational gap, but my experience in research, educational technology and digital skills reveals a more complex reality. It's true that younger researchers, such as students, often demonstrate more agile and efficient use of technology from an instrumental point of view, but this doesn't always translate into critical and appropriate use in academic or research activities.
By contrast, while senior researchers may initially be less familiar with these tools, they often show stronger scientific judgement, methodological experience and the ability to contextualize their use. For this reason, rather than speaking of a gap, I believe we should envisage a scenario of collaboration and mutual learning, where the technological agility of junior researchers complements the experience of their senior counterparts. GenAI can present an opportunity to learn and move forward together.
What should a scientific journal like ETHE do to face the technological challenges that lie ahead?
A scientific journal like ETHE cannot simply react to technological change; it must also strive to anticipate it. Given the constant pace of new developments, its role is to help guide academic debate and identify emerging directions in educational technology, positioning itself at the forefront of the field.
This means serving as a reference point for critical debate, where technological innovations are examined with scientific rigour through educational, ethical and social lenses. ETHE must avoid uncritical views and alarmist discourse, and instead promote evidence-based research that explains the real impact of technology on higher education.
At the same time, a journal like ETHE must play an active role in identifying and promoting emerging trends by publishing dossiers and synthesis articles that help the scientific community to anticipate the major debates of the future. All this must be underpinned by a firm commitment to quality, and by defining standards and good practices that guide research in times of uncertainty and reinforce the journal's position as an international benchmark.
Should the regulation of GenAI in research prioritize both ethical principles and transparency in the use of tools?
Yes, unequivocally. Regulation of GenAI in research needs to establish sound ethical principles and promote transparency in the use of these tools, with this second point being especially crucial. This process is already under way: international guidelines, such as those from UNESCO and the OECD, as well as specific European research guidelines, highlight human responsibility, the need to declare the use of AI and the preservation of scientific integrity.
Many universities already have their own policies on the use of GenAI in research, or are actively developing them, and continuously adapt them to technological advances. The same applies to academic publishing, where universities and scientific publishers have defined clear criteria on the acceptable use of AI and update them regularly. All this shows that the goal is not to prohibit AI, but to create flexible, transparent and shared frameworks that can keep pace with a rapidly evolving technology.
Transformative, impactful research
At the UOC, we see research as a strategic tool to advance towards a future society that is more critical, responsible and nonconformist. With this vision, we conduct applied research that's interdisciplinary and linked to the most important social, technological and educational challenges.
The UOC's over 500 researchers and more than 50 research groups are working in five research units focusing on five missions: lifelong learning; ethical and human-centred technology; digital transition and sustainability; culture for a critical society; and digital health and planetary well-being.
The university's Hubbik platform fosters knowledge transfer and entrepreneurship in the UOC community.
More information: www.uoc.edu/en/research
Press contact
Leyre Artiz