UNESCO kicks off sessions on the monitoring of harmful content on social media in times of elections
![UNESCO kicks off sessions on the monitoring of harmful content on social media in times of elections](/sites/default/files/styles/paragraph_medium_desktop/article/2023-04/Alice%20training%20for%20Colombia_roundtable%20on%20monitoring%20in%20elections%20%202023-04-17%20at%2015.55.31.png.jpg?itok=OUr0O_Ln)
This workshop was the first in a series of trainings on analysing harmful online content during election periods, with subsequent workshops to be held in Kenya on 24 April, Indonesia on 4 May, and Bosnia and Herzegovina on 5 May 2023.
“Social media are increasingly used as an information source in electoral processes. This means that, while they enable greater access to information, they can also be used to distort the information ecosystem in a divisive manner and influence voters with manipulative or deceptive messages,” said Alice Colombi, UNESCO digital communications consultant and an expert in election observation, notably on social media, who led the roundtable.
Sharing experiences about the dynamics and trends that civil society organisations in different regions of the world detect during election periods is an important exercise for finding ways to better protect users from undue interference in their voting choices.
With this in mind, the roundtable was joined by Allan Cheboi, Senior Investigations Manager at Code for Africa, the continent’s largest network of civic technology and data journalism organisations. He shared experiences in the identification of harmful content on social media during the general elections in Kenya in August 2022 and in Nigeria in February 2023.
A contribution was also made by Daniel Suarez Perez, Research Associate for Latin America at the Atlantic Council Digital Forensic Research Lab, who shared lessons learned from detecting harmful content during the Colombian presidential elections in 2022.
The roundtable was also an opportunity to discuss the results of the monitoring exercise supported by the project “Social Media 4 Peace” and conducted by Linterna Verde in Colombia. “The monitoring exercise highlighted some issues regarding the detection of harmful content by platforms. In particular, platforms do not provide sufficient protection against revictimization, discrimination based on social class, and negative comments about the appearances of public figures,” said Alejandro Moreno, a researcher at Linterna Verde. “Platforms do not recognize these as content that can be harmful.”
The discussions put into stark relief how the approaches and methodologies for detecting harmful content on social media vary between civil society organisations, not only across different regions but also at the national level in Colombia.
The representatives of civil society organisations participating in the discussions all highlighted the importance of integrating aspects of local context and specific social, linguistic, cultural, and political nuances into content moderation practices. “Certain terms only become malicious when used in specific contexts,” said Allan Cheboi, highlighting the increased use of hidden or coded language during recent elections in Kenya and Nigeria. “One of the strategies for spreading harmful content during elections relies on the use of vernacular terms or slang to avoid triggering built-in safeguards on platforms.”