Ethical Impact Assessment
The Ethical Impact Assessment (EIA) covers the entire process of designing, developing and deploying an AI system, allowing risks to be assessed both before and after the system is released to the public.
This is a vital component of ensuring the ethical design and use of AI. AI systems and tools have often been released to users without a clear and transparent analysis of the potential risks or of how those risks might be mitigated, even when they were foreseeable.
The dangers of this approach have been exacerbated by the arrival of powerful new generative AI tools branded “experimental” by their developers, such as large language models that have routinely generated inaccurate, misleading or discriminatory content.
The EIA is initially intended to help public sector officials involved in the procurement of AI systems, furnishing them with a set of questions to ask in order to ensure that the AI systems they are purchasing are aligned with the ethical standards set out in the Recommendation. However, it can also be used by developers and others in the private sector and elsewhere to facilitate the ethical design, development and deployment of an AI system.