
Multilevel and Meaningful Transparency in Algorithmic Systems: Developing Concrete Criteria to Guide Institutional and Legal Reforms


Side Event of UNESCO Internet for Trust Global Conference

Tuesday 21 February 2023, 11:00-12:30 (CET)

UNESCO Headquarters, Paris, France, Room IV and online

Organizer: UNESCO, Social & Human Sciences Sector and Goethe-University Frankfurt

This session aims to identify the key elements of meaningful transparency while exploring them across different contexts of AI system use, looking at all stages of the AI system life cycle.

Stakeholders typically agree on the need to ensure transparency in order to increase accountability and trust in AI systems and platforms. However, divergence emerges when it comes to the practical implementation of transparency in AI systems. 

In the case of algorithmic moderation tools, for instance, transparency is considered essential for improving platform governance, for understanding and addressing digital harms, and for ensuring users' rights where AI systems are used in content moderation.

As noted in the UNESCO Draft Guidance for regulating digital platforms, the Windhoek+30 Declaration on Information as a Public Good calls for promoting increased transparency of relevant technology companies and media viability, principles unanimously endorsed by UNESCO's Member States. Yet transparency remains a remarkably poorly defined concept, and there is a risk that poorly formulated regulations, even when centered on transparency, may contribute to further internet fragmentation or fail to address harms effectively. As noted in the IGF Coalition on Platform Responsibility 2022 Outcome Document, to make transparency meaningful, any regulatory policy should clarify the object of transparency, its audience, why disclosing the information is essential, and its goals. In addition, more information is not always better: transparency should be measured not just in terms of quantity, but in terms of quality.

The UNESCO Recommendation on the Ethics of Artificial Intelligence defines procedural and transparency obligations as a multilevel system involving the entire AI life cycle: from decision-making processes to transparent ethical impact assessment, and from explainability to the possible need to share code or datasets in cases of serious threats of adverse human rights impacts.

Speakers

  • Moderator: Clara Iglesias Keller, Research Group Leader, Weizenbaum Institute
  • Alexandria Walden, Global Head of Human Rights, Google      
  • Gabriela Ramos, Assistant Director-General, Social and Human Sciences Sector, UNESCO
  • Jason Pielemeier, Executive Director at Global Network Initiative 
  • Laura Schertel Mendes, Professor at IDP,  Senior Visiting Researcher at the Goethe-Universität Frankfurt am Main and Rapporteur of the Brazilian Senate Commission on AI Framework           
  • Yasmin Curzi, Researcher at CTS-FGV, Coordinator of the IGF Coalition on Platform Responsibility           
Reference: UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2022 (UNESCO document 0000381137).