Roberto Martinez-Maldonado is a data science researcher at Monash University in Australia. Roberto studies technologies that use artificial intelligence (AI) to support teachers. He is also exploring ethical issues related to the use of AI in classrooms. Annie Brookman-Byrne talks to Roberto about how AI can be designed to protect learner and teacher agency and safety, and the many challenges that need to be overcome.

Annie Brookman-Byrne: What technologies are you integrating into classrooms, and what are the ethical concerns?

Roberto Martinez-Maldonado: Through my research I aim to deepen society’s understanding of the social and technical issues surrounding the use of AI in education. I collaborate with teachers, students, and other educational stakeholders to design learning analytics dashboards that can be used in physical learning environments.

With my team, I have deployed various types of sensors and AI algorithms in classrooms and teacher training settings. For example, we have used positioning sensors that track how teachers move around the classroom and detect the most effective uses of that space. In our effort to find ways to teach students how to cope with stress, we have used wristbands that measure students' physiological stress levels during learning tasks. We've also used personal audio devices paired with state-of-the-art AI services to automatically transcribe students' conversations and determine whether those conversations show the development of effective teamwork practices. Our goal is to use AI to gather evidence that inspires teachers to reflect on their teaching practices and provide more useful feedback to students.


Naturally, integrating such technologies can raise ethical concerns, especially if the technologies are perceived or used as tools for surveilling or assessing teachers. I am profoundly committed to designing interfaces with educators and students, ensuring that AI-enabled educational tools reflect their values and practices. I hope this will ensure that these rapidly advancing technologies remain supportive tools, preserving student and teacher agency, rather than posing threats or attempting to substitute for the irreplaceable human dimension of education.

ABB: What are the potential benefits and concerns surrounding generative AI, such as ChatGPT?

RMM: AI is entering its second wave of innovation. Generative AI, especially large language models, is unquestionably paving the way for novel methods of analyzing valuable student data. For example, emerging AI technologies can automatically transcribe and analyze spoken dialogue in learning settings, tasks that once took researchers hours. Now, we're streamlining this process to give students and teachers insights that were once available only to researchers. Providing a summary of students' conversations during phases of a learning activity might prompt reflection and facilitate formative assessment. Teachers might discern patterns in their classes, and use this information to offer informed feedback or adjust their practices to help students achieve the desired learning outcomes.


However, this new wave of AI isn’t without its drawbacks. Generative AI tools are already influencing assessment practices. Numerous tools are openly available, and learners often use them without a clear understanding of their constraints or the potential impact on their learning trajectories. While AI can offer super-tools that enhance our capabilities, these technologies also have the potential to act as instruments of control and surveillance, disempowering both learners and educators.

“While AI can offer super-tools that enhance our capabilities, these technologies also have the potential to act as instruments of control and surveillance, disempowering both learners and educators.”

ABB: What are the other key challenges when integrating AI into education?

RMM: A primary challenge is interpreting AI outputs. While AI can suggest or predict learning actions, the algorithms’ rationale for certain choices often remains opaque, making many AI systems seem like mysterious black boxes.

Another challenge is to effectively balance personalized learning with maintaining a degree of standardization so that everyone has access to the same learning opportunities.

There are also concerns around using data collected by AI. How can we ensure the integrity of the data when using it for assessment purposes? Moreover, as educational platforms continue to gather and manage vast amounts of learner data, how can we make sure this information is used ethically, and protect teacher and learner privacy and security?

AI contains inherent biases, in terms of culture, gender, and socioeconomic status. This issue needs to be addressed to uphold equity and justice in our education systems.

Furthermore, achieving synergy between teachers and AI tools is vitally important. It’s crucial that AI technologies support educators, rather than supplanting them, especially as economic pressures may prompt the automation of various educational processes, potentially undermining the role of teachers.

“It’s crucial that AI technologies support educators, rather than supplanting them.”

ABB: How are you responding to these challenges with your human-centered approach, and what else needs to happen?

RMM: Today’s young learners are facing an ever-evolving and uncertain future. As AI continues to reshape the workforce landscape, it becomes imperative to design with this uncertain future in mind. I firmly believe in adopting a human-centered approach when integrating, designing, and researching AI in education. To that end, I have been forging robust partnerships with educational stakeholders. Together we aim to ensure that future AI applications are co-developed in harmony with authentic educational needs and aspirations. I make sure that educator and learner voices are taken into account when developing human-AI interfaces.

In the aviation and healthcare sectors, for example, external checks that guarantee the trustworthiness and reliability of systems are commonplace. These mechanisms have not yet been developed for AI. The first essential step for the sector as a whole is to draft ethical codes of conduct for AI, but we must also establish mechanisms that prevent AI systems deployed in schools from compromising the agency of teachers and students. 

“Predicting the future is challenging, and it’s crucial that those affected by AI in education play an active role in influencing its course.”

ABB: Are you hopeful about the future of this field?

RMM: Given my non-exhaustive but already long list of challenges the field faces, it may seem that I'm not very optimistic about the future. You might say that I'm on the fence. I acknowledge that we exist in a hyper-connected society, which is currently becoming increasingly fragmented. Expecting governments to reach universal agreement on optimal AI development to minimize societal harm might be overly naïve. Strong partnerships among the industrial sector, educators, families, and researchers are essential to proactively shape our desired future, rather than merely reacting to each new AI innovation. Predicting the future is challenging, and it's crucial that those affected by AI in education play an active role in influencing its course.


Roberto Martinez-Maldonado is a Senior Lecturer in Learning Analytics and Human-Computer Interaction in the Faculty of Information Technology at Monash University, Melbourne. His research focuses on working with educational stakeholders to create strategies and promote digital technologies designed to support teaching and learning across physical and digital spaces. Roberto Martinez-Maldonado is a Jacobs Foundation Research Fellow 2021-2023.


This interview has been edited for clarity.
