Learning Technologies

Thesis proposals


Research group

Visual learning analytics for virtual learning environments

Virtual learning environments generate huge amounts of interaction data that can be analysed and visualized in order to better understand both the teaching/learning process and users' behaviour. This analysis can be done at different levels of detail, combining data from multiple sources (services, learners' profiles, etc.) coming from one or more educational scenarios (a virtual classroom, blog, repository, etc.). Research on this topic is meant to build robust models that can be used to help learners, teachers and managers fulfil their goals, to detect and resolve bottlenecks in virtual learning environments, and to identify and explain their most relevant causes, by means of visual learning analytics (both methodologies and tools).

Dr Julià Minguillón


Algorithms and methods for educational data mining and learning analytics

Analysing what is going on in a virtual learning environment means dealing with huge data sets, which are gathered with different levels of detail. Traditional statistical analysis and data mining techniques need to be adapted in order to cope with partial records, categorical data, evolving models and, in general, large amounts of noisy data. Research on this topic includes the extension of state-of-the-art machine learning algorithms and techniques, in order for them to be used for building robust and interpretable models.

Dr Julià Minguillón

LAIKA

ICT education through formative assessment, learning analytics and gamification

ICT degrees include very practical skills, which can only be acquired through experience: performing exercises, designs, projects, etc. In addition to the challenge of motivating students to solve activities, lecturers face the problem of assessing and providing suitable feedback for each submission. Receiving immediate and continuous feedback can facilitate the acquisition of these skills, although this requires support in the form of automatic tools. The automation of the assessment process may be simple in some activities (e.g. practical programming activities) but complex in activities about design or modelling. Monitoring the use of these tools can reveal very valuable information for the tracking, management and continuous improvement of the course by the teaching team. However, in order to leverage its full potential, this information should be complemented with data from other sources (e.g. the student's academic record) and historical information from previous editions of the course.

The main goal of this line of research is to design and build a set of e-learning tools and services to provide support to the learning process in university degrees in the field of ICT (Information and Communication Technologies). The expected benefits will have a repercussion on the students (improvement of the educational experience, greater participation and performance, lower drop-out rate) and on the lecturers, managers and academic coordinators (resources for monitoring a course, making decisions and predictions).

Taking into account these elements, the contributions will focus on three axes:

  • Tools for formative assessment, which can provide immediate feedback by means of automatic assessment. In particular, the research activity will focus on knowledge areas with high cognitive or modelling demands, such as the design or modelling of software and hardware.
  • Learning analytics that monitor students' activity and progress in the use of the aforementioned tools and allow for analysis of the learning results, identifying critical points and defining improvement actions. These analytics will also incorporate other sources of academic and historical information to facilitate the course tracking and decision-making processes for the teaching team.
  • Gamification, as an incentive scheme to motivate students to perform new activities and increase their engagement without sacrificing academic rigour.

A relevant aspect to be considered by the e-learning tools developed in this line of research is their modularity and independence from particular technologies or virtual campuses, with the aim of facilitating their application to different courses and contexts. To this end, the functionalities of these tools will be offered as a set of services, using appropriate standards. The tools will be evaluated in courses on mathematics, computer engineering and telecommunications, and it is expected that their use will become feasible as part of self-taught education (lifelong learning) and traditional formal education, as well as massive open online courses (MOOCs).

Dr Robert Clarisó

Dr Santi Caballé




Multi-modal emotion awareness e-learning tools

Emotions and affective factors, such as confusion, frustration, shame and pride, are acknowledged as major influences on education in LMSs (learning management systems). However, despite major advancements in fields such as artificial intelligence, human-computer interaction and sensor technologies, e-learning environments are still struggling to incorporate emotion awareness tools. The limited or non-existent adoption of emotional analysis tools and affective feedback prevents both learners and teachers from reaping the benefits of emotion-aware LMSs.

This line of research aims at enhancing existing e-learning platforms by developing tools and services that support the detection and representation of learners’ emotions, as well as emotion-based learning adaptation and affective feedback. To this end, the research will apply novel emotion detection models to rich multimodal data collected using state-of-the-art channels, advanced sensors and novel adaptive interfaces. Moreover, via multiple small-scale pilots in formal, informal and workplace learning environments, the research will intend to demonstrate a positive impact of emotion-aware e-learning on decreasing learner drop-out rates, increasing satisfaction and improving learning performance, thus making learning as a whole a better experience.

The ultimate goal of the research conducted here is to understand the underlying mechanisms of socio-affective processes as well as how best to build multi-modal emotion awareness e-learning tools that are adaptive not only to learners’ cognitive performances but also to their affective states and social interactions with peers and teachers. This goal is thus two-fold:

  • To embed non-intrusive, module-based emotion awareness tools into LMSs that allow for socio-affective learning and assessment of individuals and groups in different environments: formal (university, primary/secondary school, and special education), informal (open education e-learning for adults), and the workplace.
  • To validate and measure improvements in knowledge gain, drop-out rate, learning analytics capacity and affective profiling, as measured by changes in socio-cognitive performance, motivation, and collaborative and social interactions, together with the cost-effectiveness of the platform, including the rate of adoption of these technologies for the modernization of education and training, while also examining gender differences.

Dr Santi Caballé

Dr Atanasi Daradoumis



Cloud, cluster and distributed computing for e-learning

This research line will leverage the intensive computational capabilities of Cloud, cluster and distributed computing for eLearning in order to integrate adaptive and personalised approaches capable of identifying learners’ requirements (using artificial intelligence and data mining techniques), building user models based on navigation patterns in the virtual campus, and intelligently monitoring progress to provide purposeful and meaningful advice to both learners and teachers, among others. In particular:

Cloud computing technologies are increasingly popular in eLearning: most computing platforms and standalone eLearning applications are being deployed on Cloud platforms and offered as a service (SaaS), with many benefits. For instance, by porting eLearning applications to the Cloud, it is possible to offer online learning as a Cloud service, which relieves the final user of the burden of installing and configuring software on a local computer or local networking infrastructure. Moreover, porting to the Cloud makes it possible to tackle the mining of very large data sets, i.e. Big Data for eLearning.

User modelling in eLearning implies constant processing and analysis of user interaction data during long-term learning activities, which produces huge amounts of valuable data, typically stored in server log files. Due to the large or very large size of the log files generated daily in virtual campuses, massive processing is a foremost step in extracting useful information. Cluster computing is commonly used for this purpose, through distributed frameworks such as Hadoop MapReduce.
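
As an illustration, a MapReduce job over such log files boils down to a map step that emits a key-value pair per log line and a reduce step that aggregates pairs per key. The sketch below simulates this locally in plain Python, assuming a hypothetical whitespace-separated log format (timestamp, user id, action, resource); the real virtual-campus log schema and the Hadoop wiring would of course differ.

```python
from collections import defaultdict

# Hypothetical log format: "timestamp user_id action resource"
SAMPLE_LOG = [
    "2024-01-10T09:00 u1 view forum",
    "2024-01-10T09:05 u2 post forum",
    "2024-01-10T09:07 u1 view wiki",
]

def map_phase(line):
    # Emit (user_id, 1) for each interaction record
    fields = line.split()
    yield fields[1], 1

def reduce_phase(pairs):
    # Sum the counts per user, as a MapReduce reducer would
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

def run(log_lines):
    # Locally chain map and reduce; Hadoop would shuffle pairs across nodes
    intermediate = [pair for line in log_lines for pair in map_phase(line)]
    return reduce_phase(intermediate)

print(run(SAMPLE_LOG))  # {'u1': 2, 'u2': 1}
```

In Hadoop Streaming, the two functions would become separate stdin/stdout scripts and the framework would handle partitioning the daily logs across the cluster.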

Non-functional requirements of eLearning systems, such as maintenance cost, scalability and fault tolerance, are important aspects to consider. Distributed technologies such as peer-to-peer (P2P) networks are an important alternative for developing decentralized online learning systems in which students can be more than mere clients and can use their own computational resources for task accomplishment during the online learning process.

This research line will implement and evaluate eLearning approaches using the above computing paradigms in order to explore their real complexities and challenges, such as the time performance of the massive processing of daily log files implemented following the master-slave paradigm, and the actual time efficiency of porting data mining frameworks to the Cloud for mining Big Data for eLearning.

Dr Santi Caballé

SMARTLEARN

Authorship and authentication through activities in e-assessment

Even though online education is a key factor in lifelong learning, institutions are still reluctant to commit to a fully online educational model. In the end, they keep relying on on-site assessment systems, mainly because fully virtual alternatives do not enjoy the social recognition or credibility they deserve. Thus, the design of virtual assessment systems able to provide effective proof of student authenticity, authentication and authorship, and of the integrity of the activities, in a scalable and cost-efficient manner would be very helpful.

This research line proposes to analyse how online assessment in distance learning environments is performed (using tools and resources through continuous e-assessment activities), based on the trustworthiness of authentication and authorship and on the systems used. The activities and their evaluation are core pieces for obtaining evidence about user authentication and authorship. The e-assessment approach will then be based on a continuous trust level between students and the institution across curricula.

This line of research proposes to analyse a virtual assessment approach and systems based on a continuous evaluation of the trust level between students and the institution, also analysing current online certification processes.

Dr Ana Elena Guerrero

Dr M. Elena Rodríguez

Dr David Bañeres





Conversational Agents and Learning Analytics for MOOCs

Higher education massive open online courses (MOOCs) introduce a way of transcending formal higher education by realizing technology-enhanced formats of learning and instruction and by granting access to an audience far beyond the students enrolled in any one higher education institution. However, although MOOCs have been reported to be an efficient and important educational tool, there are a number of issues and problems related to their educational aspects. More specifically, there is a significant number of dropouts during a course, little participation, and an overall lack of student motivation and engagement. This may be due to one-size-fits-all instructional approaches and very limited commitment to student-student and teacher-student collaboration.

This thesis aims to enhance the MOOCs experience by integrating:

• Collaborative settings based on Conversational Agents (CA) both in synchronous and asynchronous collaboration conditions

• Screening methods based on Learning Analytics (LA) to support both students and teachers during a MOOC course

CAs guide and support student dialogue using natural language in both individual and collaborative settings. Moreover, LA techniques can support teachers’ orchestration and students’ learning during MOOCs by evaluating students' interaction and participation. Integrating CAs and LA into MOOCs can both trigger peer interaction in discussion groups and considerably increase the engagement and commitment of online students (and, consequently, reduce the MOOC dropout rate).

Dr Jordi Conesa

Dr Santi Caballé


Intelligent tutoring systems for learning digital systems

The synthesis of digital circuits is a basic skill in all bachelor's degrees in the ICT area of knowledge, such as Computer Science, Telecommunication Engineering or Electrical Engineering. An important hindrance in a virtual learning environment is that students do not have the face-to-face support of the instructor during their learning process.

This research deals with the design of a unified automated framework that provides a set of self-assessment services for learning digital systems. In addition to design tools, where personalized feedback is crucial, the research also focuses on the instructor's point of view, providing specific information related to the analysis of students' learning progress.

Dr David Bañeres

Dr Robert Clarisó


Learning analytics for automated feedback generation and self-regulated learning when assessing programming assignments

Feedback gives students information about how their learning achievements and performance relate to the expected goals. Self-regulated learning helps students set goals for their learning and monitor, direct and regulate the actions that can best lead to the accomplishment of these goals. Moreover, self-regulation can reveal differences in how we see ourselves compared to our peers, which can spur changes in awareness, motivation and behaviour. Learning analytics is the discovery, interpretation and identification of patterns in data that help improve performance and behaviour.

The aim of this PhD proposal is to apply learning analytics to data generated by a tool that is used for automatic assessment of programming assignments and which aims at providing automated formative feedback to students so that they improve self-regulation and performance.

More specifically, we take DSLab (http://sd.uoc.edu:8080/dslab) as a base: a tool currently used to evaluate distributed systems assignments in a realistic environment. This tool allows students to upload the classes that implement the distributed algorithm or protocol to be evaluated, run it in a realistic distributed environment and, finally, get a grade for the practical assignment.

The thesis will take an interdisciplinary approach and will, among other things, identify patterns of behaviour that lead to good or bad results, propose models and mechanisms to improve self-regulation, and identify feedback opportunities that will help learners achieve their learning goals and help instructors gain a clearer idea of how their students progress.

Dr Atanasi Daradoumis

Dr Joan Manuel Marquès


Tools for automatic assessment of technical exercises

This research line aims to develop new solutions to automatically correct technical exercises from elementary computer science subjects such as programming and databases. This set of tools will open up the possibility of providing better feedback and will reduce tutors' assessment workload in these large courses.

Automatic assessment of algorithmic exercises is a vibrant research line. From the beginning, research efforts have focused on assessing exercises by testing a set of inputs and comparing the obtained outputs with the expected values (Boada et al., Prados et al.). The workflow has two steps. First, we compile the code and, if compilation is successful, the obtained executable is tested against a set of expected input and output datasets. The input can come from text or formatted data files, and the output is saved in files or printed on the screen. Using this paradigm, we can assess the response to a huge variety of exercises while giving instant feedback to the student. However, in some situations the output is correct even though the underlying algorithm is wrong, and in others students spend more time trying to format their output to match the expected one than learning to program.
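
A minimal sketch of this test-based workflow is shown below, using Python submissions so that the compilation step disappears (for a compiled language, a compiler call would precede the loop). The `assess` helper, the normalisation rule and the sample submission are illustrative assumptions, not the actual tools of Boada et al. or Prados et al.

```python
import os
import subprocess
import sys
import tempfile

def assess(submission_path, test_cases):
    """Run a student's Python script against (input, expected_output) pairs.

    Returns the fraction of test cases passed. Trailing whitespace is
    normalised so that pure formatting quirks do not fail a run.
    """
    passed = 0
    for stdin_data, expected in test_cases:
        result = subprocess.run(
            [sys.executable, submission_path],
            input=stdin_data, capture_output=True, text=True, timeout=5,
        )
        if result.stdout.strip() == expected.strip():
            passed += 1
    return passed / len(test_cases)

# Example submission: a script that doubles the number read from stdin
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("print(2 * int(input()))")
    path = f.name

score = assess(path, [("3", "6"), ("10", "20"), ("0", "1")])
print(score)  # 2 of 3 cases pass
os.remove(path)
```

Note that the last test case fails even though the submission is internally consistent, which illustrates the limitation discussed above: input/output testing judges behaviour, not the algorithm itself.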

Little effort has been devoted to comparing algorithm structure for automatic assessment. Our final goal is to create a tool able to compare the internal structure of two algorithms based on graph matching techniques. From this comparison, we expect to generate more precise feedback that helps reinforce the student’s learning experience. Structural algorithm comparison will prevent the assimilation of wrong concepts at early stages of the learning process. Furthermore, this automatic tool will help focus teachers' daily work, since it will be able to provide individual and group reports.
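
A toy illustration of the idea, comparing the node-type skeletons of two Python abstract syntax trees: the sketch uses exact tree equality, whereas the proposed research would use more tolerant (sub)graph matching, and the function names are ours.

```python
import ast

def shape(node):
    """Reduce an AST to its node-type skeleton, ignoring names and literals."""
    return (type(node).__name__,
            tuple(shape(child) for child in ast.iter_child_nodes(node)))

def same_structure(src_a, src_b):
    """True when two programs share the same structural skeleton.

    Renamed variables do not affect the result, but a genuinely different
    algorithm (e.g. a closed-form formula instead of a loop) does.
    """
    return shape(ast.parse(src_a)) == shape(ast.parse(src_b))

# Same algorithm with different identifiers -> structurally equal
iterative_a = "def f(n):\n    s = 0\n    for i in range(n):\n        s += i\n    return s"
iterative_b = "def g(m):\n    t = 0\n    for j in range(m):\n        t += j\n    return t"
# Different algorithm computing the same values -> structurally different
closed_form = "def h(n):\n    return n * (n - 1) // 2"

print(same_structure(iterative_a, iterative_b))  # True
print(same_structure(iterative_a, closed_form))  # False
```

The last two calls show how structural comparison separates submissions that plain input/output testing would treat as identical.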

Prados, F. et al. (2005). Automatic generation and correction of technical exercises. In Proceedings of the International Conference on Engineering and Computer Education (ICECE 2005).

Boada, I. et al. (2004). A teaching/learning support tool for introductory programming courses. In Proceedings of the Fifth International Conference on Information Technology Based Higher Education and Training (ITHET 2004).

Dr Maria Jesús Marco Galindo

Dr Ferran Prados Carrasco




Instructional design and technologies for teaching programming in a fully online environment

Learning introductory programming is considered difficult for novice students, mostly because it requires a new way of thinking. As a consequence, drop-out rates in programming courses are typically high. This problem is worse when the learning environment is fully online. Therefore, teaching programming online is a great challenge.

The main goal of this research is to design, develop and test e-learning tools that support students and instructors throughout the teaching-learning process in fully online programming courses. This research also includes the creation of new instructional designs that help students acquire programming skills.

Dr David García-Solórzano

LAIKA

Enhancing educational support through an adaptive virtual educational advisor

Nowadays, there are many systems that help students learn. Some of them help students find learning resources or recommend exercises during the learning phase. Others aim to help students during the assessment phase by giving feedback. And others monitor the student’s progress during the instructional process to recommend the best learning path for succeeding in the course. Depending on the objectives/competences of the subject, some features are more suitable than others.

The Universitat Oberta de Catalunya (UOC) has a heterogeneous campus where different specialties (knowledge areas) are taught. The UOC has started a three-year multidisciplinary project (the LIS project) whose main objective is to develop an adaptive system, globally applicable across the UOC campus, to help students succeed in their learning process.

The developed system should be widely applicable to all types of courses, independently of their learning resources and contents. Also, some features might not be adequate for some courses; hence the system should be configurable, with features enabled according to the teacher's criteria.

This research line proposes to work on this challenging institutional project, focusing on the following topics:

  • predictive analytics
  • early warning systems
  • automatic feedback and nudging
  • data visualization and dashboards
  • gamification
  • virtual educational advisor (chatbots)

Dr David Bañeres


Dr Ana Elena Guerrero

Dr M. Elena Rodríguez


Dr Isabel Guitart







Interactive Tools for Learning to Code

This research proposal focuses on user-centered interactive tools for learning to code, and specifically on the CodeLab learning tool. CodeLab is an ongoing project that provides a learning environment for non-STEM students to learn to code following a laboratory approach. Learning to code in a laboratory lets students build knowledge through practice and social interaction. CodeLab offers an online laboratory with a learning-by-doing approach: learners can explore their learning itinerary, solve problems, complete activities, and discuss solutions and problems with peers or teachers.

This research takes an interdisciplinary approach and focuses on:

  • the evaluation of CodeLab under educational, instructional and TEL (technology-enhanced learning) approaches;
  • the extension of CodeLab based on the design and evaluation of assessment and feedback modules, the integration of block-based programming languages and the design of strategies and tools for teachers to better support learners.

Dr Enric Mor

DARTS

AI and Ethics in eLearning: from Ethical Design to Artificial Morality

Artificial Intelligence (AI) driven technologies can be used to automate pedagogical behaviours within online education environments in order to provide support to large cohorts of online students and human instructors. However, different studies have shown how, in some cases, AI-driven systems have made unfair and biased decisions with unexpected and detrimental effects. In response, the field of AI and Ethics (AIE) explores how to take ethical considerations into account in the design and integration of AI technologies, in order to ensure that their use does not bear unexpected outcomes. These considerations are particularly important in sectors that are meant to provide universal services to our society, such as public healthcare and education.

Even though ethical design can help foresee and prevent unwanted outcomes, the huge number of potential situations makes it practically impossible to foresee, by design and in advance, every possible effect. Furthermore, the more autonomous AI systems become, the more we need to ensure that they have a way of taking into account the potential moral consequences of their choices. Common examples of systems that exhibit a high degree of autonomy and which may need to face complex ethical decisions include self-driving vehicles, lethal autonomous weapon systems and healthcare robots. The field of Artificial Morality (AM) explores precisely how to integrate ethical awareness and moral reasoning within an AI's decision-making procedures, just as other markers, such as utility or performance, are used to guide the system's behaviour.

This research line will delve into the fields of AI and Ethics (AIE) and Artificial Morality (AM) to explore, respectively, the effects of integrating AI in education and the design of AI technologies that can take moral considerations into account when making decisions in learning environments; this is paramount to ensure that online education is fair, accessible and provides a quality learning experience to everyone. Addressing the challenge of AM is highly interdisciplinary in nature; it requires a combination of technical engineering skills, knowledge representation and reasoning about complex scenarios, and a holistic understanding of the ethical and social challenges behind online learning and the application of AI tools in it.

[1] Casas-Roma, J. and Conesa, J. (2020) Towards the Design of Ethically-Aware Pedagogical Conversational Agents. Lecture Notes in Networks and Systems, vol 158, pp. 188-198. Springer. https://doi.org/10.1007/978-3-030-61105-7_19

[2] Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S. and Floridi, L. (2016) The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679

[3] Misselhorn, C. (2018) Artificial morality. Concepts, issues and challenges. Society, 55(2), pp.161-169. https://doi.org/10.1007/s12115-018-0229-y

Dr Joan Casas-Roma

Dr Jordi Conesa

Dr Santi Caballé