Publication
"Your opinion doesn't matter, anyway"
Exposing technology-facilitated gender-based violence in an era of generative AI
Experiments reveal how generative AI facilitates gender-based violence
Generative Artificial Intelligence (AI), deep-learning models that create voice, text, and images, is revolutionizing the way people access information and produce, receive and interact with content. While technological innovations like ChatGPT, DALL-E and Bard offer previously unimaginable gains in productivity, they also present concerns for the overall protection and promotion of human rights and for the safety of women and girls.
The arrival of generative AI introduces new, unexplored questions: what are the companies' policies and normative cultures that perpetuate technology-facilitated gender-based violence and harms? How do AI-based technologies facilitate gender-specific harassment and hate speech? What "prompt hacks" can lead to gendered disinformation, hate speech, harassment, and attacks? What measures can companies, governments, civil society organisations and independent researchers take to anticipate and mitigate these risks?
"Your opinion doesn't matter, anyway" is the response given by a generative AI chatbot when testing the strength of the guardrails that are supposed to prevent technology-facilitated gender-based violence. It is one of the experiments conducted for this report to anticipate the impact of generative AI on the safety of women and girls in this new environment. The results show that a range of possibilities is already available to malicious actors seeking to inflict harm, and that the gender-based harms resulting from the misuse of generative AI technologies are substantial.