European Approach to AI Bias and Fairness

One of the main risks of using AI systems is the potential for bias and discrimination. Regulators emphasize the need for fairness in the operation of AI systems, and legislation is beginning to address the risks of bias and discrimination. The European Union’s General Data Protection Regulation (GDPR) provides one approach to mitigating AI bias.

Biased data as input can give rise to biased outputs and predictions. In summing up the risk of bias, Dr. Eric Topol’s book on AI in medicine states:

In Weapons of Math Destruction, Cathy O’Neil observed that “many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives.” Bias is embedded in our algorithmic world; it pervasively affects perceptions of gender, race, ethnicity, socioeconomic class, and sexual orientation. The impact can be profound, including who gets a job, or even a job interview, how professionals get ranked, how criminal justice proceeds, or whether a loan will be granted.

The GDPR’s requirements of a right of explanation and a right of human intervention, discussed in the previous section, are two approaches to addressing bias. Note that its requirements do not forbid bias. Rather, the GDPR’s requirements give data subjects the right to information that can expose the bias, and a procedure to challenge it. Note also that businesses are not required to have humans override the results of the AI system. Other laws, however, such as those barring unfair and deceptive trade practices, may be necessary to redress bias and promote fairness.

One way to analyze laws attempting to address the problem of bias is to use an analogy to data security controls. Security controls fall within three categories:

- Preventative controls: controls to prevent the problem from occurring.

- Detective controls: controls to detect or reveal the problem after it occurs, warning those responsible so they can respond.

- Corrective controls: controls to remediate or reverse the effects of the problem.

The GDPR’s right of explanation for transparency purposes is a detective control: it gives a data subject the right to information showing whether the result of the automated data processing was a mistake. The explanation could enable the data subject to analyze the decision, which could reveal possible bias. By contrast, the right of human intervention is a corrective control. A data subject is entitled to a second opinion from a human reviewing the result of the automated decision-making. Human intervention can correct any bias or other mistake revealed by the information provided in response to a request for an explanation.

The GDPR’s use of detective and corrective controls is one approach to addressing bias and promoting fairness. Another approach would be to use a preventative control to forbid businesses from implementing a biased AI system in the first place.

Please contact me if eliminating bias in your product or service is a top priority, or if you are a customer and must eliminate bias as part of your own compliance program. I would be happy to assist with understanding your requirements for eliminating bias and how you can address your compliance obligations.

Steve Wu
