Idea
Generation AI: Navigating the opportunities and risks of artificial intelligence in education
Artificial intelligence is a technology whose rise has rightly been compared to the discovery of fire. Being here in Athens, this city of ancient wisdom, I am reminded of the myth of Prometheus, who is said to have brought humanity the gift of fire from the gods. Like fire, AI holds disruptive potential to transform our lives in remarkable ways, at both individual and societal levels—from schools to healthcare, work, and transportation. But, as with fire, AI has two sides: it offers warmth and progress, but also great danger if mishandled.
This morning, I will outline some of the challenges, risks, and opportunities that AI, and generative AI in particular, presents for education. I’ll then describe UNESCO’s recent contributions in research, pedagogical tools, and normative instruments. These tools aim to answer some of the many open questions and envision future scenarios for today’s students and tomorrow’s adult citizens—whom I propose to call "Generation AI."
We are all "Generation AI"
Over the course of my career, I have witnessed at least four digital revolutions:
- From the advent of personal computers
- To the expansion of the Internet
- To the emergence of mobile devices and social media
- To the rapid and unexpected arrival of generative artificial intelligence like ChatGPT.
Each technological revolution has had broad social and educational implications, radically changing the way we live and how we learn. Although not all people and not all countries have felt these technological revolutions in the same way, everywhere the new worlds they create have been sources of both hope and concern.
To understand the risks and opportunities of generative AI systems and tools, we can look back to a chapter in our recent history: COVID-19 and its dramatic impact on education. Within three weeks of the pandemic’s onset, in March and April 2020, 1.6 billion students were deprived of regular school attendance and had to rely on technology to access formal education. In a recent UNESCO publication, we documented and analyzed the role of technology during this period of school closures. This unprecedented global disruption revealed the unintended consequences of shifting to technology-first solutions.
The lessons learned from the pandemic show that our choices for integrating technology must be guided by the four principles of inclusion, equity, quality, and accessibility. This is the main message of our latest report on technology in education, which emphasizes that technology is never ideologically neutral; new AI models and applications are no exception. Applications like ChatGPT generate new data—both linguistic and otherwise—from the vast amount of input available online. These applications raise fundamental questions for human knowledge, education, and learning.
To summarize, here are some distinctive features of the AI universe that are not always known to the user:
- Most AI applications come from one of the two leading countries in AI research and investment. This means certain worldviews are favored in content processing and production.
- The entire executive team of OpenAI, the company behind ChatGPT, is under 40 years old. This means that its language models reflect the specific ways of thinking and knowing of one generation.
- Major chatbots have been trained on only about 100 of the world’s 7,000 natural languages, with English as the primary source, given its predominance online. This means that 99% of the world's languages are currently excluded from the giant virtual library that underlies the most popular generative AI applications.
- 90% of online higher education materials come from the European Union or North America. This means that content production is concentrated in just two regions, both made up of Western countries.
Clearly, these problems are linguistic, cultural, generational, and geopolitical. It is equally clear that, despite AI’s promise to diversify and enrich our knowledge commons, the universal encyclopedia may instead become homogenized and thus drastically impoverished. As the philosopher Ludwig Wittgenstein noted: “the limits of my language mean the limits of my world.” We might extend this to say that the limits of large language models mean the limits of the worlds of knowledge they produce.
The epistemological and existential challenges
The first challenge posed by generative AI is epistemological, based on an entirely new relationship between human intelligence and intelligent machines. For the first time in human history, technology has made us both "consumers" and "producers," users and creators. We are both recipients of the information and new data we demand, and, crucially, the unique source and matrix of processing and production. In other words, generative AI applications work because they have devoured centuries of human knowledge stored in the digital encyclopedia we call the Web.
The scale of training data is hard to fathom: GPT-4 was trained on 10 trillion words. To put that in perspective, 10 trillion is roughly the number of seconds separating us from our earliest Homo sapiens ancestors. It is a number that can be stored only on computers and manipulated only with cutting-edge technology.
These systems are built from our collective intelligence, which continues to feed and shape them. Every time we add new content to the web, we give AI more information, greatly expanding its potential. Even as AI continues to mine our collective intelligence and, as some experts now argue, may soon outpace human capabilities, we will still have the tools to control and steer technology to benefit humanity. Some of the risks are obvious: the spread of hate speech, Holocaust distortion, disinformation about climate change, and election interference are just some of the challenges AI poses. Generation AI’s mission will be twofold: to protect ourselves and our planet from AI’s potential dangers, and to develop its potential to serve the public good.
To accomplish this mission, we need immediate action on three levels:
- First, we need strong normative frameworks developed by governments and international bodies. These frameworks must protect transparency, fairness, and ethics in fields such as governance, data protection, research, education, health, and the environment. In 2021, UNESCO led the way by adopting the first global normative instrument of its kind on the ethics of artificial intelligence. Today, seven Member States are using it to shape their national AI strategies and policies, and more are following their lead.
- Second, our educational systems need to be transformed. They should be inspired by these principles and reimagined to train a new generation of digital citizens.
- Third, the private sector must step up. It needs to invest in the safety of AI systems and in technical skills for teachers, as well as the physical and social infrastructure for students. As societies, we must ensure that investments in smarter AI do not come at the expense of investments in educational systems and the people within them.
The impact of artificial intelligence on learning and teaching
At UNESCO, we are considering the many implications that AI presents for the future of education, with a clear universal goal: to develop AI technologies in a way that protects and expands our diverse knowledge systems and that equips learners with the skills and competencies they need to thrive in a digital age. To do this, we must address some difficult questions, which touch upon all dimensions of education, and which don’t always have clear answers.
First, what content and curriculum are appropriate for the digital age? For example, will we still need to learn foreign languages, and will we still need to train interpreters and translators? Intuitively, we may think not. Why invest in human translation skills when a machine can do the job faster and cheaper, with quality that is already comparable and quickly improving? However, we know that learning and translating a language involves more than finding the right words and syntax. The cultural richness each language conveys is worth the educational investment and cannot be replaced by any chatbot. We must also reflect on how we develop and validate curricula.
When I was the Minister of Education in Italy, textbooks and other educational resources were usually validated on four main criteria: (1) accuracy of content, (2) age appropriateness, (3) pedagogical relevance, and (4) cultural and social appropriateness. This typically required a year, or longer.
Do you know how long it takes to validate AI tools? Currently, and in most contexts, they require no validation at all. They have been released into the public space without discussion or review. Should we continue to tolerate this asymmetry?
A second pressing issue concerns assessment systems. How will we assess learning outcomes? Exams that were once "unhackable" are now easily hackable with AI applications. One example is the qualification of lawyers. In March 2023, GPT-4 passed the bar exam in the United States, scoring in the top 10% of all test takers. Earlier this year, the same programme passed the legal ethics exam. This raises tricky ethical questions and has fueled heated debate about the future of assessment, as students worldwide use AI for assignments. Should schools and universities try to block its use? Or should we instead transform assessments to focus on presenting and supporting ideas and arguments with evidence?
Finally, the mother of all questions: will we still need teachers in the school of the future? Or, more cautiously and plausibly: how will AI tutors change the work of teachers? Of course, we do not have a crystal ball, so we must base our projections on objective data, not just desires. Let us start with teachers. I know we have many teachers in the room today. The most recent global data on teachers are alarming: a shortage of 44 million teachers worldwide stands between us and Sustainable Development Goal 4 of the 2030 Agenda. We know there is a clear positive correlation between teachers’ qualifications and the quality of students’ learning outcomes, which calls for investment in transforming and improving the teaching profession, starting with digital skills training.
Some argue that generative AI can fill this gap, especially in disadvantaged settings where teachers are in short supply or working conditions are so extreme that teachers do not regularly show up for work. In fact, from observing diverse geographic and geopolitical contexts, we know what is needed: well-run, well-equipped schools with well-trained, adequately paid teachers who are motivated in their mission. These are the key to finding the right balance between humans and machines, and between emotional intelligence and technology, which must characterize the school of the future.
Steering technology for the improvement of educational systems
So far, we have asked how AI will change educational systems. But the opposite question is equally relevant: how will educational systems shape AI and its role in society? At UNESCO, we are helping Member States chart an ethical and responsible course.
In 2023, we published the first global guidance on generative AI in education, proposing key actions for government agencies to regulate GenAI based on safety and appropriateness for teaching and learning. These actions include:
- The obligation to protect data privacy, especially for children. Young children need the same protections in the digital and online spheres as they have in the analog world.
- Updating copyright laws for the age of artificial intelligence. We have seen many lawsuits against AI companies from writers, newspaper publishers, musicians, filmmakers, and voice actors. Countries need to determine how copyright applies to the datasets used to develop AI.
- Setting age limits for the use of generative AI. We argue that untested and under-researched tools are not appropriate for primary education and should not be available in the classroom to children under 13. Similarly, there is strong evidence for banning smartphones in schools, given their clear negative impacts on mental health, well-being, and learning outcomes. A single phone notification can leave children needing as long as 20 minutes to refocus on their lesson, affecting retention and memory.
Governments and regulators are certainly important for defining the system of rules. But the real agents of transformation are students and teachers. It is for them that UNESCO is developing competency frameworks for using generative AI, which will be presented at the next Digital Learning Week in September at UNESCO Headquarters in Paris. The framework outlines the system of knowledge, skills, and attitudes needed to understand AI’s role in education and to use it ethically, effectively, safely, and meaningfully. It fits into the broader programme on global citizenship education and is now an integral component of it.
This leads to the question: What competencies are needed in the digital age? At first glance, the digital revolution seems to require widespread technical skills in computer programming, data science, and software engineering. Indeed, such skills are and will remain relevant. However, paradoxically, as artificial intelligence becomes more sophisticated and easier to use, the need for specialized technical skills may instead diminish. With generative AI, anyone can write a Shakespearean sonnet, program software, compose a violin concerto, or edit a photo. The quality of the result will depend on one’s ability to interact with technology. In this context, technical skills may no longer be essential, while the cognitive and socio-emotional abilities to interrogate the machine by asking the right questions will be critical.
This brings us to the crux of the educational transformation needed in the age of generative AI. To ask the right questions, one needs independent judgment, critical thinking, and emotional intelligence. In this sense, being part of Generation AI does not necessarily mean being a digital native, but rather becoming a digital citizen.
Choosing which technologies to adopt and which to reject
I would like to close by returning to our friend Prometheus, whose story reminds us of both the promise and perils of artificial intelligence—the fire of our times. As members of Generation AI, it is our responsibility to illuminate the path forward with this new fire. In doing so, we must embody the name Prometheus, which is to say, we need to exercise foresight. We must also exercise our agency and our ability to choose which technologies to adopt and which to reject – at individual, community, and government levels – through what author and computer scientist Cal Newport calls techno-selectionism. Our task as Generation AI is to make that selection. To steer AI so that it lights the way to more peaceful, just, and sustainable futures – and does not burn us along the way.