Insights from practice: building a practical AI ethics process for data engineering at Rolls-Royce

Ethical AI frameworks can be a catalyst in improving the safety and well-being of people impacted by AI systems, but they need to go beyond aspirational principles and offer practical, deployable guidance for developers.
AI Ethics and Governance Lab

Author: Caroline Gorski, CEO, Capital Enterprise (Rolls-Royce 2017-2023)

Originating contributors: Lee Glazier, Rebecca Hallows, Ben Todd  

The pace at which AI is developing almost always outstrips that of policy and regulatory development. This is a problem because AI-based systems have been shown to reproduce existing biases or even significantly strengthen them. While several frameworks provide theoretical guidance for these and other challenges, methods to help developers address them practically are rare. 

To contribute to solving these challenges, in 2020 aerospace and power engineering firm Rolls-Royce released its ethics and trustworthiness toolkit, The Aletheia Framework™. It guides organisations as they consider the ethics of using AI, and provides a checklist against which AI developers can audit themselves. In 2021, an update was released to help teams consider data bias when preparing for AI development. As part of Rolls-Royce's outreach, it has since been used by organisations in sectors as diverse as pharmaceuticals, healthcare, education, music, manufacturing, and engineering. It has also now formed part of a full governance system for fully autonomous, safety-critical AI applications, which is being discussed with regulators for approval in the near term.

The Aletheia Framework moves beyond theory and into the practical application of responsible and trustworthy AI: going from the 'what?' to the 'how?'. It also helps AI developers tackle prevailing issues in automated systems, including preventing the reinforcement of social biases, sustaining people's jobs and skills, and ensuring accountability for, and trust in, the outputs of an algorithm.

Freely available under a Creative Commons licence, the Aletheia Framework was two years in development and draws on more than 30 years of advanced data analytics and AI in Rolls-Royce's engine health monitoring capability. Engine health monitoring ensures the availability of Rolls-Royce jet engines (thousands of which are continuously in operation on aircraft around the world), allowing them to be maintained effectively by using predictive self-learning algorithms to continuously monitor and respond to near-real-time events.
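The kind of continuous monitoring described above can be illustrated, in highly simplified form, by a rolling-baseline anomaly check on a telemetry stream. The sketch below is purely illustrative: the class name, window size, and threshold are assumptions, and nothing here reflects Rolls-Royce's actual engine health monitoring algorithms.

```python
from collections import deque


class RollingAnomalyMonitor:
    """Illustrative sketch of streaming health monitoring: flag readings
    that deviate sharply from a rolling baseline of recent healthy data.
    Not Rolls-Royce's implementation; parameters are assumptions."""

    def __init__(self, window: int = 50, threshold: float = 3.0, min_history: int = 10):
        self.window = deque(maxlen=window)   # recent healthy readings
        self.threshold = threshold           # allowed standard deviations from the mean
        self.min_history = min_history       # readings needed before checking begins

    def update(self, reading: float) -> bool:
        """Ingest one reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.window) >= self.min_history:
            mean = sum(self.window) / len(self.window)
            std = (sum((x - mean) ** 2 for x in self.window) / len(self.window)) ** 0.5
            anomalous = std > 0 and abs(reading - mean) > self.threshold * std
        if not anomalous:
            # Only healthy readings update the baseline, so a fault
            # does not gradually teach the monitor to accept itself.
            self.window.append(reading)
        return anomalous
```

A real system would of course fuse many sensors and learned models; the point of the sketch is only the shape of the loop: maintain a baseline, compare each near-real-time event against it, respond when it deviates.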

In thinking about developing AI for further customer services, as well as for internal processes to improve efficiency (such as the automated visual inspection of components as they come out of manufacturing or servicing environments prior to assembly), Rolls-Royce recognised that these deployments of AI posed potential ethical challenges. The implementation team needed a process for fully considering those challenges and finding solutions to apply in model scoping and development so that work could continue. No such process could be found, so they created their own.

The starting point was to assimilate authoritative high-level guidance (the "what"), including the EU ethics guidelines and the Asilomar principles. Then, applying their product safety mindset, Rolls-Royce assurance specialists created a 32-step process for applying and evidencing AI trustworthiness in daily industrial contexts (the "how") for what was previously highly abstract guidance. During the development of the framework, the implementation team discussed it with workers' representatives.

The process starts prior to the deployment of an AI system, with full, transparent, and documented consideration of the ethical implications of the proposed activities, particularly from a social-impact perspective but also from safety and bias viewpoints, before moving to a five-layer, high-frequency checking system to ensure that an AI system's decisions are not wrong and can be trusted.

The Aletheia Framework assures the trustworthiness of the AI by focusing on the inputs and outputs on either side of the algorithms, not the encoding of the algorithms themselves (which are subject to quality assurance during development). It uses expectation-bounding, synthetic data exercising, independence, comprehensiveness, and data corruption assurance. This allows continued assurance of the algorithmic outputs throughout the system's life, independent of the 'black box problem'. It is therefore relatively fast to implement and, although originally created to solve an internal Rolls-Royce challenge, applicable well beyond its industrial manufacturing origins.
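The input/output focus can be pictured as a thin assurance wrapper around an otherwise opaque model. The sketch below is a generic illustration of expectation-bounding under assumed names and envelope values; it is not the framework's actual mechanism or any Rolls-Royce code.

```python
def bounded_inference(model, features, expected_range, input_bounds):
    """Illustrative sketch of output-side assurance: validate the data
    entering and leaving an opaque model rather than inspecting the
    model itself. All names and thresholds here are assumptions."""
    # Input-side check: reject corrupted or out-of-envelope data
    # before it ever reaches the algorithm.
    for name, value in features.items():
        lo, hi = input_bounds[name]
        if not (lo <= value <= hi):
            raise ValueError(f"input {name}={value} outside trusted envelope [{lo}, {hi}]")

    prediction = model(features)  # the algorithm itself stays a black box

    # Output-side check: flag predictions outside the expected range
    # instead of acting on them blindly.
    lo, hi = expected_range
    if not (lo <= prediction <= hi):
        raise ValueError(f"prediction {prediction} outside expected range [{lo}, {hi}]")
    return prediction
```

Because the checks sit entirely outside the model, they keep working as the model is retrained over its life, which is the property the paragraph above describes.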

Having developed it internally, the team embarked on a comprehensive peer review process involving other safety-critical industries, technology companies, and academics. Rather to our surprise, almost everybody said: "This is the first time we have ever seen something like this". Even now, most other frameworks are guidance-led documents that do not provide a method to operationalise ethical practice the way The Aletheia Framework does.

And it has not stood still. A second, updated version was published in December 2021, including a module with a step-by-step process for identifying, assessing, and mitigating bias risk in the requirements, algorithms, and datasets used in the development and use of AI. While the internal version has developed into a full assurance process for fully automated, safety-critical AI, which is expected to enter regulatory assessment in the near term, The Aletheia Framework tools, case studies, and explainers remain freely available from the Rolls-Royce website for any developer or data science team who would like to explore the practical deployment of AI ethics in machine learning.
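One kind of dataset-level check that a bias-assessment step might include is comparing the rate of a favourable outcome across groups (the standard demographic parity difference). The function below is offered only as an illustration of that generic fairness metric; it is not taken from The Aletheia Framework's bias module.

```python
from collections import defaultdict


def selection_rate_gap(records, group_key, outcome_key, positive):
    """Illustrative bias check: per-group rate of a favourable outcome
    and the largest gap between groups (demographic parity difference).
    A generic metric, not The Aletheia Framework's own procedure."""
    totals, positives = defaultdict(int), defaultdict(int)
    for rec in records:
        group = rec[group_key]
        totals[group] += 1
        if rec[outcome_key] == positive:
            positives[group] += 1
    # Favourable-outcome rate per group; a large gap signals a
    # dataset or model that merits closer bias assessment.
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates
```

Checks like this only surface a risk; deciding whether a gap reflects genuine bias in requirements, algorithms, or data is exactly the step-by-step assessment the module described above is for.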


The ideas and opinions expressed in this article are those of the author and do not necessarily represent the views of UNESCO. The designations employed and the presentation of material throughout the publication do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, city or area or of its authorities, or concerning its frontiers or boundaries.