Unit 2: MIL Competencies in the Age of AI and Social Media

Last update: 2 April 2024

Key Topics

  • Why are MIL competencies relevant to AI and social media?
  • What are proposed competencies needed for AI and social media?
  • Are MIL competencies the same as the competencies needed for AI and social media?
  • Applying MIL competencies to the AI and social media environment.

Learning Objectives

At the end of this module, educators and learners should be able to:

  • Describe how by being media and information literate, learners can better understand the social context of AI and how to critically engage with AI systems.
  • Identify and describe competencies needed for AI and social media and how these relate to MIL.
  • Understand how to apply MIL competencies in AI and social media environments and identify tools and resources that can help in this context.

Level of Competencies Targeted in this Unit

  • Basic

Concerns and Linkages

When the concepts of media literacy and information literacy were coined in the 1930s and 1960s respectively, the social media and AI systems that to a large extent dominate our means of communication today did not yet exist. In fact, computers as we know them today did not exist. The concerns then were about information verification and political propaganda messages transmitted through traditional media, such as radio and television. It was, however, equally necessary to understand issues of media representation (see Module 6), how to engage with advertising and news, and how media messages were constructed to represent reality (see Module 10). With the advent of the Internet, social media and AI systems, these fundamental concerns remain. However, they have been magnified and made more complicated by the way new technologies, used within particular business models, have transformed how people connect, interact socially, and learn about and understand the world around them.

Social media, for example, encode social interactions as symbols (written messages, images, audio, video, art, emojis, likes, shares, etc.). Such symbols are not entirely new, but social media offer them alongside something resembling classic face-to-face communication: people in different physical locations can now meet in common virtual locations. These interactions involve strangers as well as mutually known participants, saying and sharing things that are constantly changing, influenced by culture and experience*. Adding AI as another layer to social media and to technological devices and platforms further expands the concerns mentioned above and raises new ones. This is because AI systems make it easier to gather vast amounts of data, which they process and learn from, and which in turn enables or determines decision-making that can have positive or negative outcomes for ordinary citizens.


* Livingstone, S. (2014). Developing social media literacy: How children learn to interpret risky opportunities on social network sites. Communications: The European Journal of Communication Research, 39(3): 283–303.

Figure 11.1

Basic Characteristics of AI Systems

Source: Content adapted from Long, B. and Magerko, D. (2020).
CHI ‘20, April 25–30, 2020, Honolulu, HI, USA © 2020 Copyright is held by the owner/author(s). Publication rights licensed to ACM. ACM 978-1-4503-6708-0/20/04...$15.00


Table 11.1

Using MIL to Address Concerns Raised by Use of AI and Social Media

Table 11.1 below describes some of the concerns raised by social media and AI systems and how MIL helps people to better mitigate these concerns. The next section will address some of the many benefits of AI.

Pedagogical Approaches and Activities

As discussed earlier in this Curriculum (Part 1), various pedagogical approaches are possible. Please review the list in Part 1 and decide which approach to apply to the suggested activities below and others that you may formulate.

  • Organize discussion, debates, other group activities, games, and use of social media in connection with the 13 points in Table 11.1. Be sure to draw on relevant activities in the various modules referenced in the table and translate these activities to focus on AI.
  • Bots are computer programmes driven by AI systems; Internet users interact with them online through written or spoken language. Watch this video about bots with your learners. Ask the learners to indicate which of the popular bots mentioned in the video they use frequently, and ask them to share their experiences and concerns.
    • Focus on the positive uses of bots, then discuss potential negative uses.
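For educators comfortable with a little code, the sketch below (Python, with invented example rules) illustrates the simplest possible bot: a program that maps keywords in a user's message to canned replies. It is purely illustrative; modern AI-driven bots replace such fixed rules with statistical models learned from large amounts of data, which is precisely why they raise the data-gathering concerns discussed in this unit.

```python
# A toy rule-based bot: illustrates how a bot maps user input to replies.
# The keywords and answers here are invented for demonstration only.
RULES = {
    "hello": "Hi there! How can I help you?",
    "weather": "I cannot check live weather, but I can explain how bots work.",
    "bye": "Goodbye!",
}

def reply(message: str) -> str:
    """Return the answer for the first keyword found in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I did not understand that."
```

Learners without programming experience can still follow the logic: the bot does not "understand" anything; it matches patterns, and AI bots simply match far more sophisticated patterns.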
  • While AI can be used to tackle misinformation and disinformation, it can also be used to spread them. A study by the European Parliament, entitled “Automated Tackling of Disinformation” (add year) (see more on misinformation and disinformation in Module 4), points to fake accounts and bots being widely used in social media manipulation strategies: to carry out attacks on opposition parties, post distracting messages, or engage in trolling and harassment. Educators should guide learners to research similar studies in their region or country. Do such studies exist? What are some of the findings? Are policy-related actions being taken to address these findings? What concrete national and community-level actions are being implemented in connection with some of the findings?
    • Plan a visit, where possible, to the relevant authority (such as the government ministry or ministries responsible for these issues).
    • Alternatively, plan a series of visits to the learning environment by experts in this area, who can give talks to educators and learners.
  • Educators should highlight the importance of engaging in advocacy for women’s involvement in AI in particular, and in science and technology in general. Read more about organizations such as Women in AI (WAI), an NGO “do-tank” with a mandate to increase women’s representation and participation in AI. Search for others.
  • Read and discuss the Forbes blog of Kim Nelsson, an entrepreneur and CEO of Europe’s largest data science hub, Pivigo. Search for other related blogs or articles from local experts or respected authorities in your country or region. Deconstruct the selected pieces. Are they opinionated or factual? Are they fair, or one-sidedly optimistic or pessimistic about AI? Are the arguments supported by evidence? What do you agree or disagree with? Why? How can you engage for action and change? Are there local entities that you can contact to motivate action?
  • There are cases of persons who have been attacked and killed because of misinformation and disinformation shared about them on social media.
    • Participants read and discuss a local or international story, selected by the trainer, about AI systems and how they relate to misinformation and disinformation. Ask learners to share what went through their minds as they read. How do they feel? Do they believe the story? Why or why not? What makes it credible? What happens when AI systems are used to create and distribute misinformation and disinformation? What can be done to stop the spread of misinformation and disinformation through AI and algorithms? For the last question, learners should reflect on personal/individual actions they can take, as well as possible actions of other stakeholder groups (governments, digital communications companies, etc.). For each proposed action, discuss the potential implications from different stakeholder perspectives.
    • Research two cases where false or misleading content created by AI systems or bots led to psychological or physical harm to persons. Discuss them with learners along similar lines.
  • Researchers have carried out experiments on the effect of misinformation on people. In one study, a research team (Loftus, E. F., Miller, D. G., & Burns, H. J. (1978). Semantic integration of verbal information into a visual memory. Journal of Experimental Psychology: Human Learning and Memory, 4(1), 19–31) “showed participants slides of a car accident, and then later had the participants read inaccurate or misleading information about the accident. The experiment showed that participants easily assimilated this flawed information, making mistakes when later asked what had happened in the accident” (Reboot Foundation). Discuss with learners what factors lead to the assimilation of false information. These include memory, past experience, emotions (fear, anxiety, apprehension, doubt), biases, expectations, etc. Ask learners to reflect on and share their experiences.
  • Divide learners into groups. Ask them to create a piece of disinformation in whatever form they choose (news story, fabricated eyewitness account, image, video). Then ask each group to present the information to the others. Discuss whether the information is credible or not. What makes it believable or not? What are the potential effects of disseminating such false information? Which algorithms would catch this content, and which would amplify it?
  • Study the 13 selected concerns with AI in Table 11.1 above. Plan activities around these issues based on their relevance to your country and interest to the learner group that you are working with.
  • Organize learners in groups and have them do desk research to gather more information and examples where the following tips for identifying ‘deepfakes’ are applicable. See the tips below, and see more on misinformation and disinformation in Module 4.
    • Pay attention to facial transformations or deformations. Does facial hair look real, or does it appear in places where it should not and seem absent where it should be? Do facial moles and marks look real? Do their size and colour match the rest of the person’s face?
    • Check whether the skin is too smooth or too wrinkled on the cheeks and forehead. Does the ageing of the hair match that of the skin and eyes?
    • Are there shadows on the face, eyes, and eyebrows where shadows would not be expected? Are there glares on the glasses a person wears, and do these glares change as the person moves? Is the person blinking too much or too little? “DeepFakes often fail to fully represent the natural physics of a scene… natural physics of lighting.”
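To demystify the blink tip for learners, the toy sketch below (Python, entirely illustrative and not a real detector) shows the idea behind one such heuristic: given per-frame eye-openness scores from a hypothetical face-tracking tool, it counts blinks and flags clips whose blink rate falls outside a plausible human range (assumed here, for illustration only, as roughly 5–30 blinks per minute).

```python
def count_blinks(eye_openness, threshold=0.2):
    """Count transitions from open (score above threshold) to closed."""
    blinks = 0
    was_open = True
    for value in eye_openness:
        is_open = value > threshold
        if was_open and not is_open:  # eye just closed: one blink starts
            blinks += 1
        was_open = is_open
    return blinks

def blink_rate_suspicious(eye_openness, fps=25, lo=5, hi=30):
    """Flag a clip whose blinks-per-minute fall outside [lo, hi]."""
    minutes = len(eye_openness) / fps / 60
    if minutes == 0:
        return True  # no footage to judge
    rate = count_blinks(eye_openness) / minutes
    return not (lo <= rate <= hi)
```

The point for discussion is that each visual tip above can, in principle, be turned into a measurable signal, which is how automated detectors work, and also why forgers keep adapting to defeat them.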

As mentioned above, detecting deepfakes sometimes requires expertise and particular competencies similar to those used in forensic science. Therefore, educators and learners should engage in discussion and practice to become proficient over time. It is equally important to be aware of the option of advocating that companies deploy resources to identify such content and subject it to moderation, such as applying labels, explaining the conditions under which such labels are applied, and offering possibilities to appeal their application.

  • In Table 11.1, a series of AI- and social media-related competencies is captured from various sources. Plan various activities around each of these competencies. In each case, specify how MIL competencies are related or can be applied. Have learners offer arguments in relation to other issues, as in the third column of Table 11.1 above. Share your completed table or parts thereof on social media and tag @MILCLICKS, or send an email with your resources to MIL CLICKS.

Table 11.2

Media and Information Literacy Competencies for AI Engagement