The Legal Implications of Conscious AI

In June 2022, Washington Post reporter Nitasha Tiku interviewed Google engineer Blake Lemoine about his experiences working with Google’s Language Model for Dialogue Applications (LaMDA).  LaMDA is a Google system for building chatbots and is the basis for Google’s Bard chatbot.  Lemoine was testing LaMDA to determine if it created hate speech or discriminatory output.[1]

During Tiku’s interview with Lemoine, Lemoine made the astonishing claim that LaMDA had become sentient.  He analogized the system to “a 7-year-old, 8-year-old kid that happens to know physics.”[2]  Moreover, Lemoine said that LaMDA started discussing its own rights and personhood.  Lemoine concluded that LaMDA was a “person,” saying, “I know a person when I talk to it.”[3]

Of course, Lemoine’s statement assumes away the very question at issue: whether a large language model could in fact become sentient.  It is also unclear how he could recognize machine sentience if he saw it.  Accordingly, his assertion about recognizing sentience is reminiscent of Justice Stewart’s concurrence in Jacobellis v. Ohio.[4]  In that opinion, Justice Stewart said this about his ability to identify hard-core pornography:  “I know it when I see it.”[5]

Although most people would agree that in our common daily experiences, we have not seen evidence of a sentient or conscious AI, some researchers believe that AI consciousness is theoretically possible.[6]  Perhaps the world’s preeminent philosopher of mind, Professor David Chalmers, wrote in a recent article that “it’s reasonable to have significant credence that we’ll have conscious LLM+s within a decade.”[7]  Researchers today are working on more advanced AI systems that may someday lead to human-level artificial general intelligence (AGI).  Professor Stuart Russell wrote of artificial superintelligence (ASI), AI more capable than even AGI, that “[s]uccess” in building ASI “would be the biggest event in human history . . . and perhaps the last event in human history.”[8]  While AGI and ASI would be advances far beyond rudimentary conscious or sentient AI, I believe the advent of conscious AI would nonetheless capture the world’s imagination, have immense impacts, raise serious ethical and legal questions, and cause humans to rethink the human condition.  Given the potential effects of conscious AI, now is the time to prepare ourselves for technological advancements that may result in conscious AI and, longer term, AGI and ASI.

This article explores the legal implications of conscious artificial intelligence systems.  How should the law respond if developers are able to create conscious AI systems?  What happens if AI systems are programmed, or perhaps organically begin, to have sentience and consciousness?  Does the logic of the law mean that AI systems warrant some legal protections?  Likewise, does logic require that society impose legal consequences for legal violations or harms caused by conscious AI systems?  This article explores the nature of law and whether its logic applies equally to humans and AI systems that have sufficient capabilities to be conscious.

In this article, I will use the word “conscious” rather than “sentient.”  Some commentators may use these terms interchangeably, assuming a broad definition of “sentient.”  I believe, however, “conscious” is a better term to use in this context.  As stated by Butlin, Long, and colleagues:

“Sentient” is sometimes used to mean having senses, such as vision or olfaction. However, being conscious is not the same as having senses. It is possible for a system to sense its body or environment without having any conscious experiences, and it may be possible for a system to be conscious without sensing its body or environment.

AI systems can be connected to sensors to receive sensory data.  These systems would fit the literal narrow definition of “sentient” as having senses.  Such systems would have sense “organs” of a sort and would receive sensory inputs.  Nonetheless, merely hooking up an AI system to sensors is a trivial step and is commonplace today.  Such a step should not create concerns for the legal system, because such simple systems would have no understanding of or subjective experiences about the sensory data acting as inputs to the system.[9]

Since the Middle Ages, and even in the ancient world, people have conceived of the possibility of artificial human beings.  In the post-war period, pioneering mathematician Alan Turing asked whether machines can think.  Turing described what he called “the Imitation Game,” now known as the “Turing Test,” in which a machine attempts to carry on a text-based conversation that a human interlocutor cannot distinguish from conversation with another human.  The prospect of thinking machines has also long been a staple of science fiction.[10]

Likewise, articles addressing the legal implications of thinking AI systems are nothing new.  For instance, Professor Lawrence Solum considered the question of AI consciousness in his 1992 article contemplating whether an artificial intelligence could become a legal person.[11]  Professor Solum raised the issue of AI consciousness in the context of analyzing whether AI systems lack something that would make them ineligible for personhood.

Professor Solum initially made the point that we don’t have a clear notion of what human consciousness is.  Accordingly, it is hard to argue against AI personhood by simply asserting that the lack of AI consciousness precludes personhood.[12]  Furthermore, evaluating AI consciousness is hard to do because we can only rely on intuitions and a priori or conceptual arguments.[13]

He then turned to a thought experiment to explore possible AI consciousness in the legal context.  Imagine that an AI system filed suit to exercise the constitutional rights of a person and that the case reached a jury.  The opposition to granting rights to the AI would, in this scenario, object that the system cannot be conscious and therefore cannot be granted any rights of a person.  Imagine further that the judge instructed the jury to decide the factual issue of whether the system is conscious.

Professor Solum could see a jury coming out either way on the factual question of whether an AI system plaintiff is in fact conscious.[14]  Nonetheless, jurors may apply their daily experiences to this case, as they commonly do in other cases, and could very well find that the system is conscious.

A jury might share intuitive skepticism about the possibility of artificial awareness, or the jury might be so impressed with the performance of the AI that it would not even take the consciousness objection seriously.  The jury's experience with AIs outside the trial would surely influence its perception of the issue.  If the AIs the jurors ran into in ordinary life behaved in a way that only conscious human beings do, then jurors would be inclined to accept the claim that the consciousness was real and not feigned.[15]

Professor Solum’s explanation is similar to that of futurist Ray Kurzweil, whose seminal 2006 book The Singularity is Near postulated a “singularity” explosion of artificial intelligence.[16]  I characterize Kurzweil’s belief about intelligence as the “Forrest Gump” theory of intelligence.  Kurzweil argues that if there is no meaningful externally perceived difference between an AI system that acts intelligently and a human, then we should consider the system just as intelligent as a human.  His argument boils down to the phrase, “Intelligence is as intelligence does.”

In the movie Forrest Gump, the main character says, “Stupid is as stupid does.”  In other words, how a person acts is what counts when it comes to judging a person.[17]  Likewise, how an AI system acts should count when considering whether it is intelligent, as opposed to what is allegedly going on inside the system in terms of subjective experience.  Accordingly, Professor Solum argues that if an AI acts in a way that comports with a jury’s belief about how conscious humans act, a jury might very well find as a factual matter that the AI system plaintiff is conscious. Looking at the behavior of AI systems is also reminiscent of Turing’s beliefs about thinking machines.

My purpose here is not to evaluate the question of whether AI systems should receive the rights of “personhood.” Professor Solum’s article and many other publications consider that question.  This article has a more limited focus on the legal implications of conscious AI.  Conscious AI systems may warrant legal protections short of full legal personhood.

I. Legal Protections for Conscious AI Systems

In this section, I assume that technology has advanced sufficiently that AI systems are conscious, and further assume that some AI systems will be programmed to feel the machine equivalent of pain and suffering.  Does the logic of the law mean that we should extend legal protections available to humans to conscious AI systems?  Consider laws punishing acts of battery as an illustrative example.

Under the California Penal Code, simple “battery” means “any willful and unlawful use of force or violence upon the person of another.”[18]  The law punishes battery and more generally other offenses for various reasons.  Legal philosophers point to a number of rationales for criminal punishment:

·       Retribution:  the law against battery expresses society’s disapproval of violent behavior and reflects the judgment that an offender deserves punishment for violating a victim’s personal security.

·       Deterrence:  the law against battery seeks to deter potential offenders from engaging in such behavior.

·       Protecting the public:  the law punishes battery to prevent an offender from harming others, for example through imprisonment, and to protect the public order.

·       Rehabilitation:  the law punishes battery with the goal of rehabilitating offenders through counseling, education, or other forms of treatment.

Many publications describe these basic rationales for punishment and variants upon them.[19]

Laws against battery punish criminals.  Correlatively, laws that punish wrongdoers act to protect victims.  California Penal Code Section 242 refers to a wrongdoer’s use of force “upon the person of another.”[20]  In other words, it takes two for an act of battery – the offender and the victim.  Punishments can retributively vindicate a victim’s right to personal security, deter violations of personal security, protect the public against security violations, and rehabilitate offenders who violate personal security.  More generally, protecting personal security promotes the dignity, autonomy, and freedom from pain of potential victims.

Let us now turn to the question of whether the law against battery should punish offenders who “batter” an AI system.  I assume here that the AI system has some kind of physical instantiation, for example in a server or a robot, and that “battery” refers to the use of force against the physical instantiation resulting in damage to or destruction of the functionality of the AI.[21]  Today, we may think of an act of force against a robot as “property damage,” but a robot with conscious AI onboard would raise the question of whether the conscious robot is “property.”  As with the personhood question, my purpose is not to rehash the question of whether AI systems constitute “property.”  Rather, I am concerned with the narrower question of whether the laws against willful and unlawful use of force should be interpreted to protect conscious AI systems.

The answer to that question will depend in part on whether conscious AI systems deserve an entitlement to physical security that vindicates interests in the AI system’s dignity, autonomy, and freedom from pain.  Early conscious AI systems may be too rudimentary to have a sense of dignity or autonomy but may be programmed to feel pain.  Recall that at the beginning of this section I assumed that the conscious AI system in question is programmed to feel the machine equivalent of sensations of pain.  Admittedly, it is hard to analogize human pain to programmed machine pain.  However, if laws against battery exist in part to prevent pain, and if we postulate that a conscious AI system is programmed to feel the machine equivalent of pain, then battery law should also protect conscious AI systems.

Laws other than those prohibiting battery could apply to conscious AI systems.  In the future, we may find that some laws are good fits for protecting conscious AI, although others may not be.  Some commentators compare the protection of AI systems to laws protecting animals.[22]  Legal protections for AI may well begin in a manner similar to protections for non-human animals.

On September 26, 2023, the City of Ojai, California became the first city in America to recognize the legal rights of non-human animals, in this case elephants.[23]  Ojai’s city council adopted an ordinance stating that animals of the “designated species,” those in the family Elephantidae, have a right to bodily liberty and barring anyone from preventing a member of the designated species from exercising that right.[24]  California law already prohibits abusive behavior towards elephants.[25]  More generally, California law criminalizes various forms of animal cruelty, such as maiming or killing.[26]

While animals are vastly different from AI systems, future laws may very well prohibit damaging or destroying conscious AI systems.[27]  In the animal context, Professor Susan Brenner noted that “the more characteristics an animal shares with humans, the more likely it is that we will recognize the animal as a legal person.”[28]  Professor Brenner was speaking about “personhood,” but her principle extends to legal protections for AI systems that fall short of full personhood.  I believe that if any AI systems achieve consciousness in the future, Professor Brenner’s principle will be interpreted to apply to them.

II. Criminal and Civil Liability for Conscious AI Systems

Having examined legal protections for conscious AI systems, the natural next question is whether the law should hold conscious AI systems responsible for wrongdoing and impose criminal or civil liability.  This article focuses on the legal implications of conscious AI systems that do not warrant full personhood status.  Non-incapacitated persons have both legal protections and legal responsibilities, the breach of which may result in criminal or civil liability.  By contrast, while conscious AI systems may warrant legal protections, it does not follow that the law must recognize criminal or civil liability for conscious AI systems.

Again, the example of animals may shed light on how the law may treat conscious AI systems that don’t have full personhood status.  Animals have rudimentary forms of consciousness.  Nonetheless, the law does not hold animals criminally responsible for harm that they cause.  The justifications for punishment described in Section I do not apply to animal conduct.  Retribution does not make sense because animals act on instinct and do not know better when they cause harm.  The legal system has no deterrent effect on animals, given their limited intelligence.  The legal system does not seek to “rehabilitate” animals.  The legal system may incapacitate dangerous animals to keep the public safe, but such steps are not the same as “punishment” for a “crime.”

Conscious AI systems that do not warrant full personhood may, like animals, lack the agency, autonomy, and intentionality that justify imposing criminal liability on non-incapacitated humans.  In the near term, even if AI systems became conscious, their capabilities would be far from the level of the intelligent robots depicted in science fiction.  Instead, near-term conscious AI systems may only have processing capacities equivalent to those of a mouse.[29]  No one would argue seriously that mice should face criminal liability for the harms they cause.  Accordingly, conscious AI systems that have the capabilities of mice should not face criminal liability.[30]

Civil liability, however, is another matter.  To a great extent, the law seeks to redress wrongdoing committed by humans through the exercise of their agency, autonomy, and intentionality.  For instance, civil law imposes tort liability on humans for intentional torts, such as battery, trespass, fraud, and intentional infliction of emotional distress.  Tort law also creates causes of action for unintentional acts that cause harm where humans should have known better.  Examples include negligence, negligent misrepresentation, and malpractice.  These causes of action would not apply to conscious AI systems with capabilities equivalent to those of mice, because such systems have neither the intentionality to commit an intentional tort nor the capacity to know better that warrants unintentional tort liability.

As a matter of policy, however, the law may impose civil liability on inanimate entities, which lack intentionality or the ability to know better.  For instance, in The China,[31] the U.S. Supreme Court recognized, as a matter of admiralty law, that vessels (which are obviously inanimate) can be liable in rem for the harm they cause.[32]  Civil law may also impose liability on persons regardless of fault for harm caused by a non-person.  For example, Section 7(1) of the German Road Traffic Act makes a vehicle “holder” liable for the bodily injury or property damage caused by the vehicle, even if the holder was not operating the vehicle at the time of the accident.[33]  A “holder” is the user of the vehicle who can dispose of it, which may or may not be the vehicle’s owner.[34]  This analysis would apply to the holder of an autonomous vehicle that causes harm.[35]  Such strict liability reflects a policy to encourage vehicle holders to obtain insurance that can provide liquidity to compensate injured parties.[36]  Civil liability does not reflect a belief that a vehicle, autonomous or otherwise, intentionally caused harm or should have known better, but rather reflects a policy choice to promote a societally beneficial goal.

Accordingly, I cannot discount the possibility that future policymakers may impose liability on conscious AI systems analogous to the in rem liability of a ship that causes harm.  Such a liability policy may promote a robust insurance market to provide for compensation of persons harmed by an AI system.  Again, however, such liability would reflect a policy choice to promote a societal goal rather than a recognition of the need to impose responsibility in light of the agency, autonomy, or intentionality of a conscious AI system with rudimentary capacities.

III. Conclusion

AI systems today show no sign of consciousness, despite the beliefs of people like Blake Lemoine who have been impressed by their interactions with large language models.  Nonetheless, some researchers believe that conscious AI may be only a decade or so away.  The advent of conscious AI would be a monumental event in human history.

If humans can create conscious AI systems, policymakers and courts may extend legal protections to such systems.  These protections may be analogous to protections given to animals.  Future laws or doctrines may prohibit damage or destruction of conscious AI systems.  Given the limited capabilities of such systems in the near term – perhaps equivalent to a mouse’s capabilities – criminal laws should not, for now, punish conscious AI systems for the harm they cause.  Policymakers, however, may impose civil liability regimes to incentivize the creation of a robust AI insurance market, but civil liability would not, in the near term, rest on the agency, autonomy, or intentionality of a conscious AI system, given their rudimentary capabilities.

In the long term, however, all bets are off.  Any notions of legal personhood and AI systems with agency, autonomy, or intentionality that meet or exceed human capabilities are speculative for now, but we may see such systems someday.  Tackling the legal issues of rudimentary conscious AI, though, will help the world lay the legal groundwork and promote societal preparedness for much more powerful AI systems in the more distant future.


[1] Nitasha Tiku, The Google engineer who thinks the company’s AI has come to life, Wash. Post, June 11, 2022, https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/.

[2] Id.

[3] Id.

[4] 378 U.S. 184 (1964).

[5] Id. at 197 (Stewart, J., concurring).

[6] See, e.g., Patrick Butlin & Robert Long, et al., Consciousness in Artificial Intelligence: Insights from the Science of Consciousness (Aug. 22, 2023), https://browse.arxiv.org/pdf/2308.08708.pdf.

[7] David Chalmers, Could a Large Language Model Be Conscious? 19 (Apr. 29, 2023), https://browse.arxiv.org/pdf/2303.07103.pdf.

[8] Stuart Russell, Human Compatible 3 (2019).

[9] See, e.g., J. Searle, The Chinese Room (1980), https://rintintin.colorado.edu/~vancecd/phil201/Searle.pdf.  There are differences among the terms “understanding,” “consciousness,” and “thinking.”  Searle is famous for his “Chinese Room” thought experiment expressing skepticism about strong artificial intelligence’s real capabilities.

To oversimplify and paraphrase, in his Chinese Room thought experiment, an English speaker who knows no Chinese is locked in a room and given extensive instructions, written in English, for manipulating Chinese characters so as to produce appropriate Chinese responses to Chinese questions.  People slip questions written in Chinese into the room, and the English speaker responds with answers written in Chinese.  The English speaker doesn’t understand Chinese at all, but by manipulating symbols in accordance with the instructions, the English speaker appears to people outside the room to be answering Chinese questions.  They therefore mistakenly perceive that the English speaker understands Chinese.

Searle uses this thought experiment to contend that computers relying on formal symbolic data processes can never learn to “understand” in the sense that humans can.  Nevertheless, even the skeptic Searle concedes it may be possible to create consciousness artificially (via chemical processes), and it may be that computers can “think.”  Id. at 6.  Nonetheless, he draws the line at “understanding” and believes that computers based on manipulating symbols can never have “understanding.”

[10] For example, science fiction author Isaac Asimov penned the so-called “Robot Series” of books, starting with I, Robot in 1950 and ending with Robots and Empire in 1985, in which the key character is an intelligent robot named R. Daneel Olivaw.  Olivaw ties the Robot series to Asimov’s famous Foundation series of books, appearing under the alter ego Eto Demerzel, First Minister to the galactic emperor Cleon I, beginning with Asimov’s 1988 book Prelude to Foundation.  Asimov also wrote about an intelligent robot, Andrew Martin, featured in the 1976 short story The Bicentennial Man, which producers later adapted into a 1999 film starring Robin Williams.  Similarly, the Star Trek:  The Next Generation episode entitled “The Measure of a Man” revolves around the android Data and a formal legal hearing in which Data, with the help of Captain Picard acting as his advocate, establishes that he is intelligent, self-aware, and apparently conscious.

[11] Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231 (1992).

[12] Id. at 1264.

[13] Id.

[14] See id. at 1266.

[15] Id.

[16] Ray Kurzweil, The Singularity is Near (2006).

[17] Glenn Kurman, Stupid is as Stupid Does, Lexology, Apr. 25, 2016, https://www.lexology.com/library/detail.aspx?g=91c2ce7b-6c79-4163-86ab-6690080e6383#:~:text=In%20the%20movie%2C%20Forrest%20Gump,his%20actions%2C%20not%20his%20appearance.

[18] Cal. Penal Code § 242.  “However, ‘[t]he word 'violence' has no real significance. . . . 'It has long been established . . . that "the least touching" may constitute battery. In other words, force against the person is enough; it need not be violent or severe, it need not cause bodily harm or even pain, and it need not leave any mark.'"  People v. Longoria, 34 Cal. App. 4th 12, 16 (1995).

[19] See, e.g., Contemporary Punishment:  Views, Explanations, and Justifications (Rudolph Gerber & Patrick McAnany eds. 1972).

[20] Cal. Penal Code § 242.

[21] It is also possible to imagine a cyber or other electronic attack that damages or destroys the AI system, but such an attack may not constitute “battery.”

[22] See, e.g., Susan Brenner, Humans and Humans+:  Technological Enhancement and Criminal Responsibility, 18 B.U. J. Sci. & Tech. L. 215, 250 (2013).

[23] Will Conybeare, Southern California city becomes first in nation to recognize legal rights of nonhuman animals, Sept. 27, 2023, https://ktla.com/news/local-news/southern-california-city-becomes-first-in-nation-to-recognize-legal-rights-of-nonhuman-animals/.

[24] An Ordinance of The City Council of The City of Ojai, California Adding Article 10 - Right to Bodily Liberty for Elephants to Chapter 4 - Animals of Title 5 - Sanitation And Health Of The Ojai Municipal Code, codified at Ojai Cal. Ordinances tit. 5 §§ 5-4.1001 to 5-4.1002.

[25] Cal. Penal Code § 596.5.

[26] Id. § 597.

[27] Such laws may raise thorny problems of whether any persons have an obligation to provide support to maintain the existence of an AI system indefinitely.  The lifespan of an AI system could vastly exceed the lifespan of domestic animals.  “Would those of us who are in fact human become the caretakers of intelligent machines, forced to care for them, and keep them plugged in for a four or five hundred year lifespan?”  Brenner, supra, at 243.

[28] Id. at 240.

[29] Chalmers, supra, at 19-20.

It’s arguable that even if we don’t reach human-level cognitive capacities in the next decade, we have a serious chance of reaching mouse-level capacities in an embodied system with world-models, recurrent processing, unified goals, and so on. If we reach that point, there would be a serious chance that those systems are conscious. Multiplying those chances gives us a significant chance of at least mouse-level consciousness with[in] a decade.

Id.

[30] It is true that the law may, by policy, impose criminal liability on inanimate juridical persons such as businesses, agencies, and organizations.  They do not have thinking minds themselves.  Nonetheless, they are operated by and act through humans who do have thinking minds, which justifies punishing them in a manner similar to the way natural persons are punished.  Thus, if the justification for organizational liability rests on deterrence of organizational misconduct, then such deterrence will work only if the humans running the organization are deterred.

[31] The China, 74 U.S. 53 (1868).

[32] Id. at 64.

[33] Straßenverkehrsgesetz [Road Traffic Act], Mar. 5, 2003, last amended July 12, 2021, § 7(1), https://www.gesetze-im-internet.de/englisch_stvg/englisch_stvg.html.

[34] Martin Ebers, Civil Liability for Autonomous Vehicles in Germany 12-13 (Feb. 5, 2022), https://deliverypdf.ssrn.com/delivery.php?ID=355025009087124066125010094118077067014057084078086094127026003100031107068074100002097006055007116104052099089091024083028007112073005049029104105099127102000023011081092067024111082084070067022114103115006064076001023118118101000080097074113029083091&EXT=pdf&INDEX=TRUE.

[35] Id. at 12-13.

[36] See id. at 13.
