In previous posts, our series on AI and the law discussed the implications of generative AI for intellectual property, infringement, bias and discrimination. Part Three reviews AI developer and deployer liability for negligence in:
- Design
- Data protection and breaches
- Transparency and explainability of AI decision making
- Agency, and
- Human oversight that leads to harm
Increasingly, AI developers, the companies they work for and AI deployers are being held responsible for faulty design, inadequate product testing and negligent use of AI tools and products in areas beyond their expertise.
Because generative AI makes what are considered “autonomous” decisions, that autonomy can be misleading. Developers may mistakenly believe their AI tools can be deployed to perform any task whatsoever without legal consequence. Users may believe the law has not yet caught up with the technology and that they can therefore use AI for any purpose with impunity.
But many existing laws do place legal responsibilities on AI software developers, their employers and the businesses and individuals who deploy AI. Those laws are being applied to more AI products and uses than ever before, under a variety of legal theories.
Laws already require that AI tools, like any other marketable product or service, be properly developed, adequately tested and responsibly deployed. Those duties extend to an AI tool’s performance of tasks, execution of processes and decision making. Irresponsible, unethical, discriminatory or negligent creation of a product and its release into the marketplace are actionable by regulators and by individuals harmed by a developer’s failure to safeguard. Where a human knew or should have known that their AI could present a danger to the public, a child or a third party, courts are holding them liable. More regulation may be on the way, but litigants and regulators are not waiting.
As the New York State Bar Association notes in its recent article, “AI’s Escalating Sophistication Presents New Legal Dilemmas,” agency law “governs the relationship between a principal (who grants authority) and an agent (who acts on their behalf). As AI agents begin to function similarly to human agents – making decisions, forming contracts or even generating intellectual property – the legal framework must adapt to address accountability, liability and rights over AI-generated outputs.”
In the past, a computer program was not considered capable of acting as a principal or an agent; it was merely an instrumentality of the person using it. As AI has grown more autonomous, courts are showing a willingness to treat AI products such as self-driving vehicles differently, holding AI developers and operators directly liable for the harms those products cause.
For example, in Florida, Character Technologies was sued for wrongful death by the mother of a 14-year-old who was allegedly emotionally and sexually abused by its chatbot, Character.AI, and encouraged by the chatbot to commit suicide. Significantly, individual developers of the AI product, as well as their former employer, Google, are also named defendants.
The complaint alleges the software engineers were negligent in the design and implementation of their AI tool, failing to equip it with sufficient safeguards against known risks of harm. The court has already held that AI tools have no First Amendment free speech rights of their own, though users have free speech rights to receive the speech of chatbots.
AI tools and their creators are also liable when their systems lack required security features or contain errors, bugs and other flaws that expose data to risk. This can occur either with the data used to train the AI tool or with the AI’s output.
Data privacy and security laws offer no exceptions for AI. If protected data is in use at any stage, all laws governing that data must be followed, and AI creators and users can be held liable for violations just like any other defendants. These are strict liability laws, meaning it does not matter whether the software engineer knew of them or not. As with paying taxes, ignorance of the law is no excuse; they knew or should have known.
Identification of a speaker or decision maker as an AI tool has become crucially important. When one AI entrepreneur attempted to introduce AI into his courtroom appeal, the result did not please the court.
An appellant attempted unsuccessfully to deliver his pro se oral argument to the five-justice panel of the New York State Appellate Division, First Department in Manhattan using his AI-generated avatar. The Court wasn’t having it. The avatar was shut off in mere seconds.
Justice Sallie Manzanet-Daniels said appellant Jerome Dewald, an AI entrepreneur, had misled the court about the nature of the images he intended to show. She was displeased to discover, for the first time in open court, that the appellant had programmed an AI-generated avatar to argue his case for him. She said Dewald had not received the Court’s permission to do so, which would have required him to establish a good reason, namely that, although a pro se litigant, he was unable to speak for himself.
Dewald never got the chance to show what his creation could do, particularly how well it would have answered the Court’s questions in real time. In the brief moments the avatar was on, it appeared unable to adapt or respond to the changing, unanticipated discussion.
It did not appear Dewald’s avatar had “learned” from any actual human attorneys. As any attorney who has appeared before an appellate court knows, a significant amount of the allotted time is spent fielding live questions from the panel of justices. File this one in the “it’s not as easy as it looks” category.
These examples raise the question of what guardrails are required when AI learning and deployment venture into areas beyond their expertise, such as making consequential decisions responsibly and ethically, counseling a distressed teenager or appearing before an appellate court.
"...the legal framework must adapt to address accountability, liability and rights over AI-generated outputs."
Source: New York State Bar Association, “AI’s Escalating Sophistication Presents New Legal Dilemmas,” https://nysba.org/ais-escalating-sophistication-presents-new-legal-dilemmas/
