Idea
Use of AI in education: Deciding on the future we want
How are schools, colleges and universities around the world using AI?
Students and teachers are certainly using AI for a variety of purposes: for ideation, for writing, for programming, and much more. It provides new avenues to explore topics and seek assistance, but it also provides shortcuts.
New generative AI systems built on large language models score higher than the average student on most standardized tests, often placing in the top ten percent or even the top one percent. This is forcing school systems to reconsider standard modes of assessment and will prompt innovations in how learning is measured. In other words, it’s about rethinking the way we learn and teach and, as a consequence, how students are taught and what they are encouraged to prioritize.
Yet even with all these uses, the benefits of the technology remain largely in the realm of hope and expectation. There is not yet conclusive evidence that generative AI applications like ChatGPT will improve learning outcomes.
AI is commonly billed as a tool to personalize learning experiences. We are confident about this potential, but we also believe that education is a collective and social endeavour, and schools are where children socialize and learn to live together.
In addition to supporting teaching and learning, AI is being employed to automate various administrative tasks such as grading and monitoring attendance and performance. This development might alleviate administrative burdens on educators and, if carefully managed by well-trained and skilled operators, it can mark a positive step forward. At the same time, the IMF is sounding the alarm on the risk that 60% of jobs will be replaced and/or heavily affected by AI in the near future. This is why our mantra is ‘driving technology, not being driven by it’. Whatever the domain, we need to be open to innovation and well prepared, instead of remaining stuck in a defensive posture against the future.
We need to openly ask hard questions such as: Should AI be used to determine university admissions? To read and respond to student essays? To tell students their areas of strength and weakness? To support students while they are being tested (as is now common with calculators and word processors)? The mother of all questions is who decides, and for what purpose – and that’s where UNESCO’s vision comes in: technology is not neutral, and it must be steered by our agency. Technology on our terms is our message, as highlighted in the latest edition of our Global Education Monitoring Report, which focuses on technology in education.
Efficiencies offered by AI are not always worth the trade-offs they entail. As an example: there is no doubt that AI systems can read and respond to student work faster and with greater efficiency than human teachers. But what about the quality of these responses? This is the point we regularly raise with governments and partners at UNESCO.
We also see bright spots and positive AI applications. This is particularly true in research involving large data sets, where there have already been exciting breakthroughs. AI tools have, to cite just one example, facilitated work to model nearly every protein known to science. This is really good news! It shows us what is possible when humans deploy AI for good and maintain careful oversight of the technology.
What are the major differences between countries in adopting AI in education?
The application of AI in education varies enormously across countries, usually reflecting existing disparities in technological infrastructure, funding, policy support, and digital literacy levels. Developed and wealthy countries can rely on more robust technology infrastructure as well as on an ecosystem for innovation that includes the private sector. This ecosystem supports schools and universities in leading experiments with AI in education. But this is not the case in the Global South and, broadly speaking, in developing countries, which are grappling with fundamental challenges related to the basic prerequisites for making technology serve quality education, from infrastructure to electricity.
Against this backdrop, I see two main priorities for making technology deliver on its long-standing promise of ‘leapfrogging’ for all. First, we must ensure that investments actually close the existing digital divide in terms of connectivity, content and capacity. More than half of the world is still offline, while the other half is developing the next generation of AI tools through unprecedented investment from the public and private sectors. At the same time, human capacities will be decisive in steering the technological revolution. That is why digital skills for teachers and learners should be prioritized in curriculum development, and digital literacy must be part of the core competencies all citizens should have in the future, regardless of age, level of education and social position.
Second, we must focus on inclusion. At UNESCO we are working to ensure that AI technology will improve educational opportunities for all and help close rather than widen existing divides.
Are you more optimistic or pessimistic about the impact AI will have on education?
While technology is not neutral, as already mentioned, decision-making is and will remain our responsibility as humankind. We can decide what kind of future we want, and this requires a radical change in our relationships with nature, technology, and each other. When it comes to technology, including generative AI, we can decide to unlock its potential by getting more serious about ethics, safety and inclusion, or we can try to protect ourselves and our future from technology by banning it and trying to buy time. Without a doubt, AI presents innovative opportunities to enrich and transform educational experiences. But as we pursue them, it will be crucial to prioritize ethical considerations and the preservation of education as a social and human-centred endeavour. In other words, it is a question of finding the right balance between blind fanaticism and absolute inaction, and I am cautiously optimistic that we can. At UNESCO, we strongly advocate that human teachers should largely steer the uses of AI in classrooms, ensuring that it aligns with pedagogical goals and ethical standards and is appropriate for contexts and cultures that vary enormously within as well as across countries.
What do you see as the red lines regarding the use of AI in the education sector?
Red lines concern the protection of privacy and personal data, the non-manipulation of student users, and an unwavering focus on safety, especially for children completing compulsory education. Schools must be safe in both digital and physical settings – and must recognize that students now move across these thresholds constantly.
UNESCO's recent guidance on AI in education and its more expansive Recommendation on the Ethics of AI revolve around the need to ensure AI's ethical use and prevent biases, especially in interactions with minors. Accordingly, the guidance recommends an age limit of 13 for the use of AI tools in the classroom and calls for teacher training on this subject.
Finally, our recent publication ‘An Ed-Tech Tragedy’ highlighted the hazards of unregulated uses of technology in education in its critique of ed-tech modes of learning during the COVID-19 pandemic.
AI will join a wide range of technologies that will change the way learners learn. How are we preparing young generations for the future?
A fundamental goal of education is helping young people develop the knowledge, awareness and, consequently, the behaviours needed to live in harmony with each other, with the planet and with technology. Realizing these objectives requires human guidance and instruction – which technology can make more effective.
This is best accomplished by using AI as a tool to complement, rather than replace, the human elements of teaching. Good human teachers and mentors play a powerful role in helping learners discover their personal strengths and realize their potential. They offer guidance and support that is sensitive to individual students and alert to the particularities of the contexts in which they learn and live.
It is often said that AI is contextually aware, but outside of narrow tasks it usually pales in comparison with the awareness of the local community and local culture that teachers bring to classrooms.
Going forward, we need balanced approaches where AI supports educational processes while keeping humans in the driver’s seat.
At UNESCO we are putting forward ideas to direct AI in ways that will augment learners' autonomy and expand the pedagogical options available to them, better accommodating their learning needs.
What would you say to parents who hear conflicting news about the use of AI in education?
It is understandable that parents have concerns, especially given the regulatory and policy vacuums that commonly surround the use of AI in education and other domains.
That said, I would encourage parents not to passively observe AI’s rapid evolution from a defensive stance, but to engage with their children, when feasible, in using AI applications for learning and other purposes. This ensures adult oversight and can help children build a more nuanced understanding of the strengths and limitations of new technology – a valuable life skill in its own right. Fear alone is not the right outlook.
What are your views on the regulation of AI such as the one that was approved by the European Union?
The AI regulations taking form in the European Union represent a promising start towards governing the use of AI, including in the education sector. These regulations are a step forward in maximizing the opportunities and minimizing the challenges and dangers posed by AI technologies, and the EU Act lays foundations for more sector-specific AI regulations, which are clearly needed for education. We note that the EU has rightly categorized education as an area at “high risk” from AI.
I think it is prudent to view these regulations as iterative, much in the same way that innovators see technology. Regulators need to be as bold as the creators of technology. The once-and-done regulations of the past will not suffice in our era of accelerating technological development.
Overall, I am encouraged that many governments are moving more quickly to establish guardrails for AI than they did for previous digital technologies, not least building on the UNESCO Recommendation on the Ethics of AI, adopted in 2021 as the first-ever global framework on this matter, which anticipated the EU Act.
AI is shaping the future of the world and should be considered and regulated as a global issue.