Compliance with AI Transparency Requirements

You may have heard about the requirement of “transparency” when it comes to compliance with AI-related legislation. Many of the requirements for transparency come from data protection legislation. Transparency can be difficult to achieve, however, because even the AI professionals who create AI systems may not fully know how their own systems operate. Their inability to explain how the AI arrives at its results means the system operates like a “black box.” Work and research on transparency are underway to improve the explainability of AI systems.

How does the law require companies offering AI products and services to promote transparency of their AI systems? The European Union’s General Data Protection Regulation (GDPR) [1] takes one approach to addressing the problem of transparency in AI. The GDPR speaks in terms of “automated data processing.” Automated data processing is not the same as AI, but it is broad enough to encompass AI. Article 15 of the GDPR gives individuals a right of access to information about personal data collected about them. Paragraph 1(h) of article 15 includes the right of the data subject to know about the existence of automated decision-making and to receive “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”[2] Recital 71 refers to the data subject having a right to an explanation of a decision reached by automated means.[3] Thus, the GDPR gives data subjects a “right of explanation,” requiring transparency from businesses using AI to make decisions about them.

In addition, under article 22 of the GDPR, a “data subject shall have the right not to be subject to a decision based solely on automated processing” that “produces legal effects concerning him or her or similarly significantly affects him or her.”[4] (Data subjects are identifiable living individuals.) In other words, a data subject can opt out of automated data processing, with the implication that a human must make the decision manually. When the lawful basis for processing such personal data is consent or the performance of a contract, the data controller must still provide safeguards for data subjects, including at a minimum “the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”[5] Accordingly, the GDPR provides data subjects a “right of human intervention” to obtain manual review of the results of AI systems processing personal data.[6]

The combination of the right of explanation and the right of human intervention creates a transparency mechanism that shines light on the results of automated data processing, allowing data subjects to ferret out mistakes, corrupted data, and bias. The GDPR’s right of explanation and right of human intervention are currently the most prominent examples of laws intended to address AI transparency. Because machine learning systems often operate as black boxes, businesses may face difficulty explaining AI results to data subjects. Accordingly, building transparency into systems within the scope of the GDPR is critical.

Please contact me if you think your business is facing a “transparency” requirement. I would be happy to help you understand what your compliance requirements are and how you can meet them.

Steve Wu


[1] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) 1.

[2] Id. art. 15, para. 1(h).

[3] Id. recital 71.

[4] Id. art. 22, para. 1.

[5] Id. art. 22, para. 3; id. recital 71.

[6] For additional guidance on this application of the GDPR to AI, see U.K. Information Commissioner’s Office, Explaining Decisions Made with AI, Part 1 (Dec. 2, 2019).
