AI Is Creating New Forms of Liability. How Can It Be Managed?
Everyone is on the artificial intelligence train. But that also means they may be culpable for AI’s mistakes.
Artificial intelligence offers irresistible gains in efficiency -- indeed, most larger organizations need to incorporate AI into their operations in order to remain competitive. Some 65% of companies are using generative AI, according to a McKinsey survey from this year. Another survey, from the US Census Bureau, found that 50% of companies with over 5,000 employees and 60% of companies with over 10,000 employees were using the technology. While use varies across industries, it is on the rise in nearly all sectors.
Though AI is sure to increase profits and streamline operations by taking over previously manual tasks and accelerating analytics, it comes with a new set of liabilities -- many of which remain poorly understood.
The definition of AI itself is somewhat vague, encompassing a wide variety of algorithms capable of completing complex tasks. What ultimately distinguishes these technologies is their ability to learn from their inputs and complete tasks autonomously -- which also makes them unpredictable. That unpredictability can work in users’ favor: conclusions drawn from masses of data that no human analyst could effectively process can offer a crucial edge.
But even when implemented with scrupulous care, AI can unleash chaos and land its users in court. The complexity of AI’s machinations leads to the so-called “black box” problem -- it is often difficult to discern why a system made the decisions it did. Errors can be compounded by inaccurate or incomplete data, ultimately traceable to the fallible humans who compiled it. And sloppily designed systems continue to ingest additional data -- often of a sensitive nature -- after they are deployed.
This lack of visibility makes it difficult to forecast where damage may occur -- and thus where liability may emerge. Data leaks and violations of privacy and intellectual property rights abound. These and other errors intersect with a variety of tort and contract liabilities.
Here, InformationWeek looks at the new risks posed by AI and how they can be mitigated, with insights from Bridget Choi, lead product counsel for cyber at insurance brokerage and consulting firm Woodruff Sawyer, and Charles Nerko, team leader of data security litigation for law firm Barclay Damon.
What Types of Liability Does AI Implementation Create?
Given the range of AI applications, the liabilities it can create are nearly limitless. Some are legal and others are reputational.
Who bears responsibility for these problems is another question entirely. Is it the company that uses the AI technology? The company that programmed it? Or are multiple technologies interacting, creating or exacerbating the problem?
“The creator of the technology has to deal with how they’re building the model -- what they’re using to train the model, how they’re fine-tuning that data. When they bring this product and/or service to market, how are they describing it in their contract? How is the end user accepting the risk?” Choi ponders.
As AI systems grow more complex, it will become ever more difficult to determine whether the producers or the operators of AI are liable. Some types of harm, though real, may not fall under legally recognized categories at all, making them challenging, if not impossible, to litigate.
“It’s almost like a living and breathing thing. It can change, it can mature. If you don't feed it the right information, it could become obsolete or useless,” Choi says. “Defining the input and output and the expectations of the model is going to be really important.”
Faulty or incomplete datasets may lead to suboptimal performance and create problems down the line. And the opacity of the decision-making processes involved may make it impossible to trace the source of a failure. Necessary layers of human supervision may also create shared responsibility -- if an AI program makes a diagnostic error, for example, to what extent is the physician using the program responsible for catching and rectifying it?
If the program has become a standardized part of a care regimen and the physician has used it as instructed, the physician is likely not liable. So, too, in drug development and trials that utilize AI, it is incumbent upon the medical professionals involved to assess the study population and double-check the potential risks of taking the medication. In such cases, it becomes a matter of determining whether the AI program malfunctioned and whether its faulty recommendations could reasonably have been anticipated and mitigated.
Further, AI complicates the assessment of harms because it can be both a standalone service and the operator of a physical product, such as an AI-assisted robot. The extent to which a machine operating in the physical world depends on responsibly designed AI in order to function will thus be crucial in determining where liability falls. Did the product itself cause harm, as in an accident caused by a malfunctioning self-driving car? Or did a vulnerability in the AI program that powers the product allow a malicious actor to access sensitive information?
Improper use of data may also create copyright and privacy liabilities -- if an AI model is trained on proprietary or private data without permission from its owner, the owner may have grounds for a copyright or privacy claim against the creator of the model.
While there has been speculation about the worst harms AI might inflict -- human extinction, at the extreme end -- liability in the near term will realistically be confined to specific systems. That is not to say the damage cannot be widespread: a vulnerability in an AI system that results in a data breach could affect many thousands of people.
“It’s going to be in everything -- if it's not already,” Choi says of the technology. “Understanding that new reality and adjusting is what we're going to need to do commercially to deal with this risk, mitigate this risk and transfer the risk.”
Chatbots in particular appear to have created a unique set of problems. Some have hallucinated responses -- essentially lying to customers about company policies. Such cases have the potential to verge on unintentional fraud. Assessing who is liable may hinge on the provisions of the contract between the designer of the chatbot and the company that deploys it.
“You’re going to have to do a quality check to make sure there are no hallucinations,” Choi says of some current contracts. “A warranty or acknowledgement in the contracts is how they’re going to manage that risk.”
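As a rough illustration of what such a quality check might look like in practice, the sketch below routes any draft chatbot answer that appears to make a policy claim -- but is not grounded in an approved policy library -- to a human agent before it reaches the customer. The pipeline, the policy snippets, and the crude word-overlap heuristic are all hypothetical stand-ins, not a production-grade grounding check; a real deployment would more likely verify drafts against the company’s actual policy documents via retrieval.

```python
# Minimal sketch of a pre-send "hallucination" check for a customer-facing chatbot.
# The policy library, trigger words, and overlap heuristic are illustrative assumptions.

APPROVED_POLICIES = {
    "refunds": "Refunds are available within 30 days of purchase with a receipt.",
    "shipping": "Standard shipping takes 5 to 7 business days.",
}

POLICY_TRIGGER_WORDS = {"refund", "refunds", "shipping", "return", "warranty", "policy"}


def needs_human_review(draft_answer: str) -> bool:
    """Flag draft answers that assert policy terms not grounded in approved text."""
    answer = draft_answer.lower()

    # Only scrutinize answers that appear to make policy claims.
    if not any(word in answer for word in POLICY_TRIGGER_WORDS):
        return False

    # Crude grounding test: does the draft overlap meaningfully with any approved policy?
    answer_words = set(answer.split())
    for policy_text in APPROVED_POLICIES.values():
        policy_words = set(policy_text.lower().split())
        overlap = len(policy_words & answer_words)
        if overlap / len(policy_words) > 0.5:
            return False  # Reasonably grounded in an approved policy.

    return True  # Policy-like claim with no approved source: hold for a human.


if __name__ == "__main__":
    draft = "Good news! We offer full refunds any time within one year, no receipt needed."
    if needs_human_review(draft):
        print("Draft held for human review before sending to the customer.")
```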
Others have mysteriously generated racist and sexist responses. And observers have speculated that their powerful capabilities might offer dangerous knowledge to bad actors -- on how to build bioweapons, for example.
This constellation of possible scenarios can be roughly divided into risks to safety and risks to fundamental rights. Safety broadly covers potential physical harm -- as in medical incidents and car accidents involving AI -- as well as damage to property and even personal data, which can carry financial and other consequences. Risks to fundamental rights are vaguer but may include discrimination -- as in the chatbot cases -- and infringement on free speech. The two categories may overlap where an infringement of fundamental rights creates severe psychological distress.
The morass of potential liability will only continue to grow as AI is further integrated into the programs and devices that define modern life. Even in its most conservatively managed implementations, risks cannot be entirely eliminated through design and safeguarding procedures -- the AI juggernaut is simply too powerful to be throttled.
Liability will thus become a new cost center that is simply factored into the rollout of new applications -- and will be assessed as it arises.
Current Legal Framework of AI Liability
“In the United States, AI-specific laws are still developing. But we have what's called common law. That’s the system of legal principles that evolves through court decisions over time. Those common law principles already apply to artificial intelligence,” Nerko says.
The majority of AI risk can be categorized as either contractual or tort liability, though there is significant overlap.
Contractual liabilities relate to the provisions of the Uniform Commercial Code (UCC) in the US in cases where AI software is categorized as a good rather than a service. If an AI program is considered a good and is defective in some way, the customer may be able to claim a breach of its contract with the provider.
Vendors of AI software are typically protected by contractual limitations. But that is not always the case -- in 2019, the vendor of a tenant-screening program that illegally discriminated against certain groups was found liable for the practice despite protesting that it was not aware the program could do so.
Contractual liabilities may be complicated by customization of software for particular purposes, which would then potentially make the AI technology a service. In such cases, claimants must resort to tort law.
Tort liabilities relate to harms potentially caused by AI -- to customers, to users, and to outside parties. How those harms are assessed also hinges on whether the AI is viewed as a product. If it is viewed as a product and the harm stems from a design flaw, liability will be assessed under strict liability and product liability laws. If it is viewed as a service, it will be subject to the law of negligence.
For example, if an AI program used in healthcare has an inherent flaw that results in misdiagnosis, the original designer of the product might be liable. That liability would have to be traced back through the supply chain to determine the source of the flaw.
“The law right now treats an AI system similarly to how a human worker is treated,” Nerko says. “Any time a business has a representative, the representative can create legal liability for that business.”
But if the program is misused, or its results are misinterpreted or not verified by a physician, the incident may become a negligence case -- medical malpractice -- that assigns blame to the physician as well.
Cases involving AI may also implicate copyright and other intellectual property law, as well as privacy law, among many others, depending on the particulars of how the technology is implemented and the consequences of its misuse or malfunction.
Regulations, Policy and AI Liability
The regulations governing AI have struggled to keep pace with the technology as it evolves. A number of initiatives have attempted to offer guidance.
In the US, the National Institute of Standards and Technology (NIST) published an AI Risk Management Framework in 2023. The framework offers voluntary standards for identifying risks and suggests policies and procedures for mitigating them. It is intended to “be practical, to adapt to the AI landscape as AI technologies continue to develop, and to be operationalized by organizations in varying degrees and capacities so society can benefit from AI while also being protected from its potential harms.”
The framework notes that AI may harm people, organizations, and even the environment and encourages organizations to govern their AI risk by mapping, measuring, and managing it throughout the lifecycle of the product.
A number of federal agencies -- including the Federal Trade Commission, the Consumer Product Safety Commission, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission -- have stated their positions on how the law applies to AI, with varying degrees of specificity, and have made clear that they intend to apply their regulatory power to the issue.
In some cases, they have proposed new rules that attempt to grapple with emergent problems caused by AI. The FTC has suggested new protections from AI impersonation, for example, potentially creating liabilities for firms that enable the creation of deepfakes. And a wide variety of additional legislation has been proposed as well.
In the European Union, the AI Act, which came into force in August 2024, gives companies operating in EU member states further regulatory guidance on liability in AI cases. It clearly establishes who constitutes a provider, a distributor, and a user of an AI platform.
And it lays out categories of risk. Technologies that can be used to deceive people in harmful ways, along with social scoring systems, are categorized as unacceptable risk and are illegal under the act. High-risk technologies are those used in sensitive sectors such as education, judicial administration, and law enforcement; these would be subject to strict liability. Technologies such as chatbots, image generators, and biometric systems are considered limited risk and would be subject to fault-based liability.
The act also requires that member states designate a market surveillance authority and a notifying authority. Penalties for violations of the act run as high as €35 million, depending on the severity.
A further AI Liability Directive, proposed in 2022, aims to make AI providers just as liable as providers of other products and services. If adopted, it would establish a “presumption of causality,” making it easier to hold providers accountable for harm inflicted by their technologies. Existing EU law makes it challenging to trace liability for AI; the directive would lower the burden of proof for bringing effective claims and mandate disclosure of evidence.
It aims to complement the revised Product Liability Directive, adopted in March 2024, which addresses AI and software-related liabilities more broadly. The PLD does note that developers may be liable for defects that arise in AI after it is implemented due to its ability to learn, emphasizing a responsibility to institute safeguards that prevent programs from engaging in dangerous or harmful behaviors.
Monitoring AI to Avoid Liability
The pounds (and dollars and euros) required to remedy AI liability cases in coming years are likely to reinforce the importance of at least a few ounces of prevention. While some AI providers will have likely built these inevitable costs into their business models -- and even be able to shrug them off due to their massive profitability -- others, including their clients, may not.
“Shareholders are saying that what public companies say about risk -- their ethical risk posture around AI -- is not actually true. It’s not congruent with what they’re doing,” Choi says.
Building transparency into the functions of AI programs is a crucial step toward ensuring accountability and explainability. The “black box” problem that defines AI will only serve as an excuse for so long. While other AI tools may be able to reverse-engineer the decision-making of a program that caused a problem, in the long run there will be demand for AI that is transparent about its activities.
Employing licensed and vetted data will be crucial both to ensuring that models are accurate and to limiting liability for privacy violations in cases where data was collected unethically or illegally. Ensuring that trade secrets and proprietary information are given proper protections when training a model will help to avoid leaks once it is put into practice.
“Take a close look at making sure that the data that's input into the model -- and the license agreement you have with your technology provider -- is keeping your data with you and not going to the public model,” Choi advises. “When that information is going somewhere it shouldn't and it's giving you some financial benefit, that's when you're putting yourself up for a liability situation.”
Getting ahead of the demand for transparent AI products will likely offer advantages to both the developer of the program and the consumer. The developer will be necessarily more attentive to the potential for problems and the consumer will be less likely to experience them.
In the interim, buyer beware. Understanding how and where AI will be implemented -- and ensuring that limits are clearly established -- will help businesses prevent irresponsible decision-making that may cost them money. Implementing clear parameters for areas that require human supervision can keep AI from running off the rails entirely.
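What those “clear parameters” might look like in code is suggested by the minimal sketch below: a hypothetical routing rule that sends certain categories of AI output, and anything below a confidence floor, to a human reviewer before it takes effect. The task categories, threshold, and field names are illustrative assumptions rather than any vendor’s actual configuration.

```python
# Illustrative sketch of parameters for areas that require human supervision.
# Task categories, the confidence floor, and the routing labels are hypothetical.

from dataclasses import dataclass

# Tasks the business has decided AI may never complete on its own.
ALWAYS_REVIEW = {"medical_advice", "credit_decision", "legal_notice"}

# Minimum model-reported confidence for other tasks to proceed without review.
CONFIDENCE_FLOOR = 0.90


@dataclass
class AIDecision:
    task_type: str      # e.g. "order_status", "credit_decision"
    confidence: float   # model-reported confidence, 0.0 to 1.0
    output: str


def route_decision(decision: AIDecision) -> str:
    """Return 'auto' if the output may ship unsupervised, else 'human_review'."""
    if decision.task_type in ALWAYS_REVIEW:
        return "human_review"
    if decision.confidence < CONFIDENCE_FLOOR:
        return "human_review"
    return "auto"


if __name__ == "__main__":
    print(route_decision(AIDecision("order_status", 0.97, "Your order ships Tuesday.")))  # auto
    print(route_decision(AIDecision("credit_decision", 0.99, "Application denied.")))     # human_review
```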
“Businesses have to supervise AI the same way that they would supervise a human workforce. That applies even if the company didn’t code the AI,” Nerko says. “If the company is using an AI system designed by a third party and using a training set that a third party developed, that doesn't matter. Under the law the company that’s using it to make those decisions and convey that information is still liable under common law principles.”
In order to establish those parameters, users of these systems must have detailed conversations both internally and with their providers about the capabilities and limitations of the AI they will be using and how long it will remain viable for those purposes. Vulnerabilities may emerge if it is not deployed in the right landscape.
If unlicensed, general-use AI applications are used in the course of business operations, clear rules should be set for when and where they can be deployed. And their use should be tracked in order to create a clear chain of accountability.
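That chain of accountability can be as modest as an append-only usage log. The sketch below assumes a hypothetical internal wrapper through which employees reach approved AI tools; every call records who used which tool, for what purpose, and with what class of data, so a later dispute can be traced to a specific use. The field names and file format are illustrative only.

```python
# Minimal sketch of an AI usage log for accountability.
# Assumes a hypothetical internal wrapper around approved AI tools; fields are illustrative.

import csv
import datetime
from pathlib import Path

LOG_PATH = Path("ai_usage_log.csv")
FIELDS = ["timestamp", "employee_id", "tool", "purpose", "data_classification"]


def log_ai_use(employee_id: str, tool: str, purpose: str, data_classification: str) -> None:
    """Append one usage record so AI-assisted work can be traced and audited later."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "employee_id": employee_id,
            "tool": tool,
            "purpose": purpose,
            "data_classification": data_classification,  # e.g. "public", "internal", "restricted"
        })


if __name__ == "__main__":
    log_ai_use("e1042", "approved-llm-chat", "draft marketing copy", "public")
```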
“Having an AI policy discourages employees from being overly ambitious and using personal AI systems,” Nerko says. “It makes sure the AI enterprise system is properly vetted, receives legal review, and has the proper contract in place to protect the organization using it.”
Structuring Contracts to Mitigate AI Liability
Both AI providers and their customers must approach any contract for AI services with a rigorous understanding of their risk appetite. Deploying AI always entails some risk, and while that risk is still poorly understood in a legal sense, cases now playing out and increasing regulation will soon begin to offer some clarity.
“It manifests a little bit differently when you're a user of AI or you're a creator of the technology,” Choi says. “Each needs to have a different risk transfer. They need to think about the different machinations of how it could go wrong and how they’re protecting the company and the balance sheet.”
In anticipation of what will likely be a lively and complex area of litigation, tightly worded agreements that comprehensively assess potential liability are advisable.
“AI should perform better as the model matures. But that’s not always the case,” Choi adds. “You should make sure that there’s a true partnership. Making sure that’s included in a contract and contemplated in the risk management structure is going to be really important.”
“Set out a standard of performance,” Nerko continues. “You can, as a business, have a performance standard that gets into the training set that’s used, and set a standard that the AI isn’t trained on existing copyrighted works to try to limit the liability for that. You can also have remedies if the AI doesn't function properly as part of the contract.”
Many large providers already offer indemnification against such issues as copyright claims, suggesting that their models likely employed data that might result in such claims. “It basically works by saying that if the company using the AI is sued by someone else, the company that designed and developed the AI will step in and defend the company, and kind of handle the lawsuit, pay for the resulting legal fees,” Nerko says.
This type of indemnification is typically only available through paid agreements, so the use of free accounts may lead to additional risk. And some smaller providers may be unwilling to indemnify at all.
Use cases matter, too. Protections may vary if a product is altered after purchase. If a customer plans on customizing an AI product, it should assess whether these alterations may affect the protections offered by the original designer. Whether regular updates are implemented and whether AI decisions are subject to appropriate human supervision may affect the degree of liability assessed to producers and users.
“It’s just taking the existing law and the existing warranties and applying it to this malleable thing,” Choi says. “There are clauses that you could put in -- if it changes up to a certain point, that’s terms for renegotiation. Here are the variables that we can accept and here are the ones we cannot accept.”
Organizations may be able to transfer some of the risk they assume when deploying AI by taking out appropriate cyber insurance policies. These policies may be particularly useful in covering risk associated with data breaches.