Defining a generalized rule to ensure an AI system complies with ethical AI standards is challenging. A single, fixed set of practices for choosing the parameters a system learns from is not enough. Every problem must be evaluated objectively, using logical and contextual reasoning. For example, let’s assume you are building an AI system that recommends people for a specific job.
You train your model on all the relevant data available and find an existing bias: a particular race already holds most of the filled jobs. The model could learn this racial bias, making resumes from that cohort more likely to be selected. In this case, it is clear that including racial and geographical features as part of the training engine is problematic.
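One common mitigation is to exclude sensitive attributes before the data reaches the model. The sketch below is illustrative only; the feature names and the `strip_sensitive` helper are assumptions, not part of any real hiring system.

```python
# Hypothetical sketch: excluding sensitive attributes before training.
# Feature names here are illustrative, not from a real dataset.
SENSITIVE_FEATURES = {"race", "zip_code"}

def strip_sensitive(record: dict) -> dict:
    """Return a copy of the record without sensitive attributes."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FEATURES}

applicant = {"years_experience": 7, "race": "X", "zip_code": "12345", "skill_rating": 4}
print(strip_sensitive(applicant))  # {'years_experience': 7, 'skill_rating': 4}
```

Note that dropping the columns is only a first step; sensitive attributes can still leak through correlated proxies, which is why each problem needs its own objective evaluation.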
Conversely, suppose you are building a system to recommend clothing types and skin-care products. In that case, geographic and ethnic parameters might genuinely help. There could be a specific fashion trend emerging from a particular geography or ethnicity that people may like to follow.
The same parameter that made the first engine ethically problematic is a relevant factor in the second. In short, every problem must be addressed objectively, as every situation has unique ethical considerations.
When constructing an ethical AI system, the following must be considered:

Privacy / Consent

Data about the user, from their history of decisions to personal characteristics, should be leveraged only with express permission.
Transparency / Explainability
The user should know what data is used to predict an outcome that could influence them.
Security / Robustness
The AI system should be protected against cyber threats, and user data should be fully safeguarded.
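For the transparency principle in particular, a system can surface which inputs drove a given prediction. The sketch below uses a toy linear scoring model with assumed integer weights; the feature names and weights are illustrative, not from any real recommender.

```python
# Illustrative transparency sketch: return each feature's contribution
# alongside the overall score, so the user can see what influenced it.
# Weights and feature names are assumptions for this example.
WEIGHTS = {"years_experience": 3, "skill_rating": 2}

def score_with_explanation(features: dict) -> tuple[int, dict]:
    """Score a record and report the per-feature contributions."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions

total, breakdown = score_with_explanation({"years_experience": 5, "skill_rating": 3})
print(total)      # 21
print(breakdown)  # {'years_experience': 15, 'skill_rating': 6}
```

Exposing the breakdown rather than only the score lets the user see exactly what data was used to predict an outcome that could influence them.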
Below is an example of how Ellicium has tackled such a case.
Ellicium was tasked with building an AI system that categorizes media articles by their reputational significance to a corporation. Most of the training data came from US media outlets and carried a bias against Russian companies and organizations; the AI system consequently over-selected articles containing Russian references.
We then carefully masked demographic references to make the system ethically compliant. This reduced accuracy to an extent; however, such a system is more beneficial, and more ethical, in the longer run.
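A masking step like the one described can be sketched with a simple pattern substitution. The term list and placeholder token below are assumptions for illustration, not Ellicium's actual pipeline, which would need a far more thorough vocabulary and entity-recognition approach.

```python
import re

# Minimal masking sketch: replace demographic references with a neutral
# placeholder before the text reaches the classifier. The term list and
# the "[MASKED]" token are illustrative assumptions.
DEMOGRAPHIC_TERMS = ["Russia", "Russian", "Moscow"]
PATTERN = re.compile(r"\b(" + "|".join(DEMOGRAPHIC_TERMS) + r")\b", re.IGNORECASE)

def mask_demographics(text: str) -> str:
    """Return the text with demographic references masked out."""
    return PATTERN.sub("[MASKED]", text)

print(mask_demographics("The Russian firm opened an office in Moscow."))
# The [MASKED] firm opened an office in [MASKED].
```

Because the classifier never sees the masked terms, it cannot condition its output on them, which is the trade-off noted above: a small loss in accuracy in exchange for removing the demographic signal.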