2022 AI Legislative Year in Review Part 2

This post continues my review of 2022 AI legislative developments. In Part 1 last month, I covered generally applicable state privacy laws that will affect AI vendors and their customers. This month, I tackle two pieces of AI-specific legislation.

First, on January 1, 2022, amendments to the Illinois Artificial Intelligence Video Interview Act took effect, adding a new section, 820 ILCS 42/20. The new section creates a reporting requirement for Illinois employers that rely solely on AI analysis of a video interview to determine whether a job applicant will be selected for an in-person interview. Such employers must collect demographic data on the race and ethnicity of applicants who are and are not offered in-person interviews and of applicants who are hired, and must report that data to the Illinois Department of Commerce and Economic Opportunity annually by December 31. The Department will then analyze the data and report to the Governor and General Assembly whether it discloses racial bias in employers’ use of video interview AI systems.
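For readers curious what that data collection might look like in practice, here is a minimal sketch of the kind of tally an employer could compile before filing the annual report. The record structure and field names are my own illustration; the statute does not prescribe a format.

```python
from collections import Counter

# Hypothetical applicant records; field names are illustrative, not prescribed by the statute.
applicants = [
    {"race_ethnicity": "Hispanic or Latino", "offered_interview": True, "hired": False},
    {"race_ethnicity": "White", "offered_interview": True, "hired": True},
    {"race_ethnicity": "Black or African American", "offered_interview": False, "hired": False},
]

# Tally the categories the amendment asks employers to report: applicants offered an
# in-person interview, applicants not offered one, and applicants hired.
offered = Counter(a["race_ethnicity"] for a in applicants if a["offered_interview"])
not_offered = Counter(a["race_ethnicity"] for a in applicants if not a["offered_interview"])
hired = Counter(a["race_ethnicity"] for a in applicants if a["hired"])

print("Offered interview:", dict(offered))
print("Not offered:", dict(not_offered))
print("Hired:", dict(hired))
```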

Second, a law passed by the New York City Council in November 2021 to prevent AI bias in hiring, effective January 1, 2023, covers “automated employment decision tools” and requires employers using AI applicant-screening tools to take steps to mitigate hiring bias.[1] An automated employment decision tool is defined as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”[2]

The law precludes employers in the city from using automated employment decision tools unless an independent auditor has audited the tool and tested its potential disparate impact on persons in certain protected classes. Employers and employment agencies must post a summary of the audit results on their websites before using the audited tools. They must also notify job applicants about the use of the automated employment decision tool and the qualifications or characteristics it assesses, and applicants have a right to opt out of its use. Applicants also have a right to learn, via the employer’s website or upon request, the type of data collected for the tool, the source of that data, and the employer’s data retention practices. Employers that violate the law are subject to civil penalties of $500 for a first violation and each additional violation occurring on the same day, and $500 to $1,500 for each subsequent violation.
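The law does not spell out how an auditor should test for disparate impact. One common approach, shown purely for illustration below, is the EEOC’s “four-fifths” rule: compare each group’s selection rate to the highest group’s rate and flag ratios below 0.8. The group names and counts here are hypothetical, and nothing in the statute (as of this writing) mandates this particular methodology.

```python
# Minimal sketch of a four-fifths-rule style disparate-impact check, assuming hypothetical
# counts of applicants screened by the tool and those it recommended for advancement.
results = {
    "Group A": {"screened": 200, "selected": 60},
    "Group B": {"screened": 150, "selected": 30},
}

# Selection rate per group, then each group's impact ratio relative to the top rate.
rates = {group: v["selected"] / v["screened"] for group, v in results.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / top_rate
    flag = "potential disparate impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```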

New York City employers using these AI tools will want to work with tool vendors to make sure that an independent audit has been completed. While it isn’t clear what kind of independent audit would suffice, I believe an attestation report from a CPA firm proceeding under Statement on Standards for Attestation Engagements No. 18 (SSAE 18) would be an ideal audit document. Until criteria for such an attestation are established, however, other certification and testing companies may be able to audit these tools. Covered employers will also need to publish the required information on their websites and be prepared to handle requests for information from job applicants.


[1] N.Y.C. Admin. Code §§ 20-870 to 20-874.

[2]  Id. § 20-870.
