Artificial intelligence is transforming the world – right before our eyes. AI will change the world more than any technology in human history. The news media are filled with stories about the latest developments in the field and the new wonders made possible through AI. Autonomous vehicles powered by AI capture our imagination as they take over driving tasks from human occupants, who watch their vehicles steer themselves. AI/machine learning applications monitor our payment transactions for signs of fraud. Language translation systems almost magically transform text or speech into another language. In healthcare, AI can detect problems and diseases from x-rays or samples taken from our bodies as well as or better than human doctors can. These and many more applications of AI are becoming commonplace.
When I give talks on artificial intelligence, robotics, and the laws of robotics, I begin with four key principles:
These four principles should guide our thinking about the role of law with artificial intelligence, robotics, and other advanced technologies, and how lawyers can help mitigate our risks.
Companies developing artificial intelligence offerings face a number of legal challenges. They may license enterprise software, offer AI software as a service, or integrate AI technology with a robot or other hardware or firmware.
First and foremost, companies offering AI want to manage legal risks. They don’t want to sell a product or service that would put their entire business at risk due to liability from an accident or other mishap. Juries in the U.S. have historically awarded very large amounts against companies they perceive to have failed to protect consumers and the public. Companies may pay out large sums across numerous cases to settle claims against them. Verdicts and settlements in some industries have driven manufacturers into bankruptcy. Executives at companies developing AI seek to manage their legal risk to prevent company-ending liabilities.
In the AI field, besides accidents or the product or service failing to work as intended, executives are concerned about AI-specific legal risks. Customers will raise two key concerns: explainability and bias. Customers will want to know how a particular AI system works and how it generates the results it does before trusting it. Moreover, customers want assurance that the results generated by an AI product or service are not biased in some way. Either a lack of explainability or the presence of bias can create legal risk for both the AI company and its customers. AI companies must also manage the legal risks associated with privacy and security. Breaches could lead to significant liabilities.
Second, companies offering AI products and services need contracts – both with customers and with their own vendors and service providers. Customer agreements are necessary to strike the deals needed to bring in revenue for the company. Startup executives know they shouldn’t turn on an AI service or ship an AI-powered product without a written contract. So, for startups, the need for a form of agreement stands between the company and recognizing revenue. Moreover, executives realize they can’t write agreements themselves or simply copy something from the Internet. Even if they wanted to use something from the Internet, many AI-powered products and services are so new and specialized that the Internet doesn’t have any “standard” form agreements they could use.
More established companies may have existing form agreements, but they need legal help to negotiate the deals each month, quarter, and year to meet their revenue targets. Executives come to expect that when they propose form agreements to sophisticated customers, lawyers for their customers are going to propose modifications to the language of the agreement to protect the customers’ interests. Executives also know that they shouldn’t simply accept what a lawyer for a customer proposes back. They need experienced counsel to help negotiate those modifications to make sure their interests are protected as well.
Third, companies offering AI solutions may have to comply with laws specific to AI and related technologies. Consider a few examples:
If a company violates legal requirements, governments or individuals harmed by the violation may take legal action against the company, resulting in fines or verdicts against the company. In the case of companies violating GDPR, the penalties can be as high as 4% of worldwide gross revenue of the company or €20 million, whichever is greater – a potentially huge number for multinational conglomerates.
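The “whichever is greater” rule means the €20 million floor governs smaller companies, while the 4% figure dominates for large multinationals. A minimal sketch of the arithmetic (the revenue figures here are hypothetical examples, not drawn from any actual case):

```python
def gdpr_max_fine_eur(worldwide_annual_revenue_eur: float) -> float:
    """Upper bound on a GDPR administrative fine for the most serious
    violations: the greater of EUR 20 million or 4% of worldwide
    annual revenue."""
    return max(20_000_000.0, 0.04 * worldwide_annual_revenue_eur)

# A small firm with EUR 100 million in revenue: the EUR 20M floor applies.
print(gdpr_max_fine_eur(100_000_000))     # 20000000.0

# A conglomerate with EUR 50 billion in revenue: 4% dominates.
print(gdpr_max_fine_eur(50_000_000_000))  # 2000000000.0 (EUR 2 billion)
```

As the second case shows, for a large multinational the exposure can reach billions of euros.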
Fourth, AI companies must conduct investigations of accidents, data breaches, and other adverse events involving AI products and services. Companies experiencing these events may be liable for them. They may also have claims against other companies for contributing to the harm – claims that may be lost if they don’t take prompt and effective legal action. Failing to preserve evidence from the event in the proper way may mean the company is unable to defend itself in a lawsuit or governmental action, or it may be unable to pursue its claims against others.
Finally, companies may lose effective control of AI systems, which may result in widespread harm. While concerns about Terminator movie-style scenarios about robot apocalypses are overblown in today’s world, AI systems may still cause accidents. Failures and breakdowns are a significant risk. AI systems may also be vulnerable to hacking attacks or personal data leakage. In the absence of robust governance mechanisms, AI companies may face liability resulting in fines, verdicts, or settlements.
Not only are companies developing and offering AI solutions at risk, but companies buying AI solutions are also vulnerable to legal issues. Companies procuring AI systems should mitigate their legal risk through the contracting process and anticipate what might happen if the acquired system fails to work as advertised or causes damage to themselves or others. The purchaser of a system may face legal liability when putting an AI system into operation. For instance, it may have its own compliance requirements to meet. If an accident, data breach, or other bad outcome occurs, it will need to conduct its own investigation. Finally, purchasers of AI systems will need effective governance of AI, just like manufacturers and sellers do.
Executives should face the prospect that accidents will occur from the use of their robotics products. However, that is not to say your company should take a reactive approach to risk management and liability. Let’s take a practical look at how a robotics company today can build safe robots and avoid losing a company-ending lawsuit.
1. Undertake a thorough analysis of the various kinds of risks
2. Consider from the beginning the laws that apply and legal compliance requirements that must be met
3. Implement governance controls over all types of AI products and services
4. Consider intellectual property issues at the very beginning of the design process
5. Use the appropriate buy or sell agreement to minimize legal risk
6. Maintain proper incident response procedures for when an accident, security breach, or other adverse event happens
AI companies that manage legal risks effectively focus on six high-impact areas. First, they carefully manage the design of their products and services. While any AI product or service should have features that customers want, and it should solve a compelling business problem, effective companies take careful steps during the design phase to ensure their products and services are safe, effective, and secure. Their design process includes, from the very beginning, team members and analysis of safety, privacy, and security.
During the design phase, effective companies undertake a thorough analysis of the various kinds of risks they may face. They take into account the universe of possible threats they may face, the likelihood of these threats coming to pass, and the magnitude of the harm that could result from these threats. With an assessment of risk in mind, effective companies consider various controls that can stop or mitigate these threats in light of their capabilities and costs. They prioritize the various threats they face and manage risk by implementing those controls that reasonably mitigate the most serious prioritized risks.
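The prioritization step above can be sketched as a simple scoring exercise: score each threat by likelihood times magnitude, then address controls to the top of the ranked list. This is a deliberately simplified illustration – the threat names and numbers are hypothetical, and real risk assessments involve far more judgment than a single multiplication:

```python
# Hypothetical threats, each with an estimated likelihood (0-1) and a
# harm magnitude on a 1-10 scale. These figures are illustrative only.
threats = [
    {"name": "training-data privacy leak",   "likelihood": 0.3, "magnitude": 9},
    {"name": "model produces biased output", "likelihood": 0.6, "magnitude": 7},
    {"name": "adversarial attack on model",  "likelihood": 0.1, "magnitude": 8},
]

# Score each threat: expected-harm style score = likelihood x magnitude.
for t in threats:
    t["risk"] = t["likelihood"] * t["magnitude"]

# Rank threats so controls go to the most serious risks first.
prioritized = sorted(threats, key=lambda t: t["risk"], reverse=True)
for t in prioritized:
    print(f'{t["name"]}: risk score {t["risk"]:.1f}')
```

Under these illustrative numbers, biased output (0.6 × 7 = 4.2) outranks the privacy leak (2.7) and the adversarial attack (0.8), so controls would be applied in that order.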
Second, effective AI companies consider from the beginning the laws that apply to them and legal compliance requirements they must meet. Too many companies start developing, or even worse, start selling a product or service only to find out it is illegal, or doesn’t meet legal requirements. At that point, they either have to redesign the product or service or stop selling it. Effective companies find out what the law requires first, at the beginning of the design phase, and start their development with an eye towards complying with applicable laws.
Third, effective AI companies implement governance controls over the types of AI products and services they are willing to offer or purchase and over how they should implement them. AI companies consider not only the law, but also the ethical and social implications of the products and services they sell or buy. Focusing on, or excluding, certain offerings may be important to the company’s strategic vision. Governance controls not only mitigate legal risk but also prevent the loss of reputation caused by products or services that harm purchasers or the public.
Effective companies implement controls governing AI using policies, procedures, and subordinate documentation. Policies set the high level goals of controlling AI and managing risks. Procedure documents give managers and workers step-by-step instructions to meet these goals. Companies may also have guidance documents to provide additional details on meeting policy requirements. In addition, they may have technical standards documents to specify the technologies they use to meet their goals. Finally, companies should have training materials to explain to the workforce how to meet policy goals.
Fourth, AI companies should consider intellectual property issues at the very beginning of the design process. As with compliance, companies will not want to invest in developing a product or service only to find out that another company has patents or other intellectual property rights that cover the product. Proceeding with developing and marketing the product or service could trigger a lawsuit. Similarly, effective AI companies design products and services with a view towards maximizing the scope of their own intellectual property rights. By designing in a certain way, they may qualify for protections that they otherwise might not have, thereby giving them an advantage over competitors. Intellectual property includes patents, trademarks, copyrights, and trade secrets. Each kind of IP helps the company protect its business, gain a competitive advantage, and meet its business goals.
Fifth, once a seller is ready to offer the product or service (or a buyer is ready to make a purchasing decision), the company uses an appropriate agreement to minimize legal risk. Sellers include liability-limiting terms in their contracts and want to make sure they can collect their fees, while purchasers want assurances the product or service will work as advertised and that they are protected from defects in the product or service, especially those creating liability. Effective AI companies write agreements and prepare for negotiations with sample clauses to meet these goals. Sellers must have standard form agreements if they want to make any sales, and sophisticated buyers frequently have their own form agreements as well.
Finally, effective AI companies have accident, security breach, and other incident response procedures to respond when bad things happen. By creating a plan of action in anticipation of these events, these companies put themselves in the best possible position to act when an event like that occurs. Without a plan, companies end up figuring out what to do on the fly, in a disorganized and confused haze at times of maximum stress. Creating an investigation and response plan allows the company to protect its legal rights, communicate effectively with internal and external stakeholders, and protect its reputation with customers and the public.
Companies can anticipate they will face challenges from accidents, data breaches, and other problems. Proactive legal strategies can help effective AI companies take the steps today to win legal cases arising from these accidents involving products they haven’t even finished developing. They can defend the decisions they made during the design process from claimants second-guessing them. They can also set the stage to take action against anyone violating their rights. The time to prepare is now.
As a shareholder at Silicon Valley Law Group, I counsel AI product and service companies to help them minimize their legal risks. I have over 20 years of experience working with company design teams to explore different product, privacy, and security threats, sorting the real threats from the theoretical ones, assessing actual product and service risks, analyzing options to control various kinds of risks, and advising on strategies to minimize risk. For the past 12 years, I have been tackling (and teaching other lawyers about) the legal challenges of artificial intelligence.
In the design process, much of my work involves exercising judgment, based on my experience with novel technologies, about which threats are likely to hurt a business and which are not. Based on that judgment, I can provide advice about prioritizing different risk management controls. I also help product teams think through the features they will need in their AI product or service, how the product or service should operate, and methods for building compliance into the product or service. Effective attorneys can help clients build risk management into the design process from the very beginning.
Just as risk management is an important part of the design process, designing for compliance with applicable law is another crucial element. I help clients understand what laws apply to their products and services and how they can comply. I can translate between the legalese often seen in laws and regulations on one hand and the business and technical language executives are used to using day to day. Also, I offer advice on different compliance options and assess the effectiveness of these options in meeting the law’s requirements.
Given the high stakes involved with artificial intelligence, robust governance controls implemented via policies and procedures are key, not only for the development, offering, purchasing, and operation of AI in an ethical way consistent with society’s values, but also for compliance with applicable law and mitigating legal risk. With the assistance of AI consultants, I can help companies conduct AI impact assessments, similar to environmental impact assessments, before developing or purchasing a new product or service. These assessments are similar in concept to data protection impact analyses required in some cases by GDPR. To see my webinar on AI impact assessments, click here.
At the same time, businesses should have robust policies and procedures governing AI. They show the marketplace and public how a business is committed to ethical and legal AI deployment. In addition, they facilitate effective management evaluation, for instance by audits or other assessments. Finally, they help mitigate legal risk. This kind of documentation would show any government regulator that the business is attempting to “do the right thing” when it comes to managing AI risk.
Another key factor during development is intellectual property protection. I have thirty years of experience helping companies protect their intellectual property and avoid infringing the IP of others. While SVLG does not handle patent prosecution itself, SVLG lawyers coordinate their efforts with many fine patent lawyers to help clients obtain patent protection and avoid infringing the patents of others.
Day-to-day management of legal risks comes through appropriate agreements. Agreements are critical for vendors, since they need their customers to sign agreements to bring in revenue. I represent many vendors, often at crunch time at the end of a month, quarter, or fiscal year, negotiating the flow of deals they need for revenue. Likewise, I protect the interests of customers. They need to know that the AI product or service they are considering buying will work. Vendors typically offer their form agreement to customers, and these forms range from somewhat to extremely one-sided in favor of the vendor. SVLG lawyers can negotiate adjustments to these agreements to manage customer clients’ legal risks and thus protect their legal interests.
Finally, if a bad event does happen involving an AI product or service, SVLG lawyers can help by investigating what happened, gathering and preserving evidence needed for legal proceedings, and assessing the legal risk to the client. If a lawsuit or arbitration is necessary, or is started against a client, SVLG lawyers can defend the AI client’s interests in that suit or arbitration. During that case, I will bring my experience with AI and the law since 2007 to bear on the issues of the suit or arbitration.