Update on FTC Activity on AI

The FTC has been acting as the de facto national regulator of AI activities, issuing guidance and bringing enforcement actions under FTC Act Section 5, which prohibits unfair and deceptive trade practices.[1]  For instance, in March 2022, the FTC published a press release announcing its February 2022 action against Kurbo, Inc. and WW International, Inc. (formerly, Weight Watchers).[2]  Kurbo, which is wholly owned by WW, offers a weight management service for children and teens in the U.S.  The complaint alleged that the defendants failed to provide clear notice of their privacy practices in collecting and disclosing personal information of children under thirteen years of age, failed to ensure that parents received notification, failed to obtain verifiable parental consent, and retained personal information longer than necessary, in violation of regulations under the Children’s Online Privacy Protection Act (COPPA).

In a stipulated order to settle the case in March 2022, the defendants agreed to destroy the wrongfully collected personal information and also had to destroy any models or algorithms derived from personal information collected from children.  This order is similar to the FTC’s May 2021 Everalbum decision requiring destruction of models or algorithms derived from facial recognition data.  The destruction remedy is a relatively new one and helps to mitigate the effects of the wrongful collection of personal information.

In June 2022, the FTC issued a report to Congress on the subject of using AI to combat harmful online content.[3]  Harmful content includes scams, deepfakes, fake reviews, fake accounts, manipulative interfaces, offers of contraband, incitements to violence, harassment, terrorist content, and disinformation.  The report stated that AI is not a cure-all for such content and that a larger societal effort will be necessary to combat it.  Given the technology’s limits, its potential for bias, and its tendency to incentivize commercial surveillance, the report cautioned policymakers against promoting AI tools as a policy solution for harmful content.

In August 2022, the FTC issued an advance notice of proposed rulemaking (ANPR) regarding commercial surveillance and data security practices that harm consumers.[4]  The FTC’s enforcement has historically relied on a case-by-case approach against individual businesses, and privacy regulation has traditionally relied on requiring or encouraging businesses to provide effective notice to consumers and to obtain their informed consent to privacy practices.  The ANPR recognized, however, that consumers frequently lack control over businesses’ privacy practices, that their permissions are not always meaningful or informed, and that consumers have little practical choice but to accept terms from businesses providing essential online services.[5]  Moreover, consumers frequently do not understand such terms or have time to read them, while some businesses engage in deceptive conduct to obtain personal information.[6]  Finally, the use of automated systems may result in algorithmic discrimination against persons in protected classes in legally significant contexts, such as employment, lending, and housing decisions.[7]

The ANPR sought comment on the desirability of a broader approach: adopting a trade regulation rule to protect the public and provide greater certainty.[8]  The ANPR’s exploration of this approach focused on commercial surveillance and lax data security practices.  Nonetheless, the ANPR also sought comment on various individual aspects of algorithmic and other automated decision making.  For instance, it asked whether data minimization requirements would hamper the development of processes or techniques for algorithmic decision making.  In addition, a section on automated decision making asked questions about twelve specific topics, such as the nature of algorithmic errors, whether new rules should require steps to mitigate algorithmic errors, the harms and benefits to consumers from automated decision making, and potential regulatory obstacles posed by the First Amendment or Section 230 of the Communications Decency Act.[9]  The ANPR also asked about eight specific topics regarding algorithmic discrimination against protected classes.[10]  The comment period for these questions closed in October 2022.  A possible outcome of the ANPR is a later notice of proposed rulemaking to consider a new trade regulation rule.

In March 2023, the Commission issued a series of 6(b) orders[11] to major social media and streaming tech companies – such as Meta, YouTube, TikTok, and Twitter, which rely on advertisers for revenue – demanding answers to questions concerning how they screen paid advertisements for deceptive statements.[12]  The Commission is concerned about disinformation, consumer deception, and fraud.  The orders seek information about topics such as the amplification of content, AI systems used to create or optimize ads, controls to review or evaluate ads, advertising standards and policies, and human or machine vetting of ads.  Significantly, the Commission is interested in processes, mechanisms, and strategies concerning the use of generative AI systems to create or optimize ads, including their use for creating deepfakes, product placements or simulations, or falsified content.  The orders are for fact-finding purposes, although they may well motivate additional regulations.

On March 31, 2023, the non-profit advocacy group The Center for AI and Digital Policy (CAIDP) filed a complaint with the FTC urging the Commission to initiate an investigation into OpenAI to determine if its chatbots and GPT products violate FTC Act Section 5 as well as emerging national and international norms for AI governance.  Specifically, CAIDP is concerned about bias, deception, risks to privacy and public safety, and a lack of reliability of OpenAI’s GPT products.  CAIDP seeks an order halting commercial deployment of GPT, requiring an independent assessment of GPT products, requiring compliance with FTC AI guidance, establishing an incident reporting mechanism, and initiating a rulemaking process to set baseline standards for generative AI.

CAIDP filed its complaint as a follow-up to a viral March 29, 2023 letter posted by the Future of Life Institute calling for a six-month pause on training LLM systems more powerful than GPT-4.  Among many others, Elon Musk, Steve Wozniak, Andrew Yang, and prominent AI researchers signed the letter.  The letter calls for robust public policy and industry AI governance systems to ensure the accuracy, safety, transparency, robustness, and trustworthiness of AI systems.[13]  Ultimately, my guess is that the CAIDP FTC complaint is likely to have more real-world impact than the Future of Life Institute’s letter.  Nonetheless, the letter did serve to raise awareness of the issues and risks involved with generative AI.

Practitioners interested in following FTC AI activity may want to attend its annual PrivacyCon.  The most recent PrivacyCon, the seventh annual iteration of the program, was held on November 1, 2022, and addressed topics such as automated decision making, commercial surveillance, children’s privacy, surveillance devices, augmented/virtual reality, manipulation of interfaces, and AdTech.  Links to a recording of the November 1, 2022 program are available on the PrivacyCon 2022 home page in the “video” section.[14]


[1] 15 U.S.C. § 45.

[2] United States v. Kurbo, Inc., No. 22-CV-00946 (N.D. Cal. filed Feb. 16, 2022).

[3] U.S. Fed. Trade Comm’n, Combatting Online Harms Through Innovation (June 16, 2022), https://www.ftc.gov/system/files/ftc_gov/pdf/Combatting%20Online%20Harms%20Through%20Innovation%3B%20Federal%20Trade%20Commission%20Report%20to%20Congress.pdf.

[4] Trade Regulation Rule on Commercial Surveillance and Data Security, 87 Fed. Reg. 51,273 (2022).

[5] Id. at 51,274.

[6] See id. at 51,275.

[7] See id. at 51,275-76.

[8] See id. at 51,276.

[9] See id. at 51,283-84.

[10] See id. at 51,284.

[11] A 6(b) order requires an entity to file an annual or special report answering the Commission’s questions to provide information about the entity’s conduct.  See 15 U.S.C. § 46(b).  A 6(b) order is an investigative tool similar to an interrogatory in civil litigation.

[12] A blank form of order sent out to these companies appears here:  https://www.ftc.gov/system/files/ftc_gov/pdf/P224500-Social-Media-6b-Model-Order.pdf.

[13] Pause Giant AI Experiments: An Open Letter, FUTURE OF LIFE INST. (Mar. 29, 2023), https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

[14] U.S. Fed. Trade Comm’n, PrivacyCon 2022, https://www.ftc.gov/news-events/events/2022/11/privacycon-2022.
