By some estimates, at least 45% of new business software incorporates AI-powered automated decision tools to make running a business faster and easier.
But when AI is improperly designed, it can produce decisions affecting individuals that violate the law.
Businesses are using AI to make decisions about their consumers. Employers are using AI to assist with decisions about hiring, firing, promotion, pay, layoffs and performance evaluations. But increasingly, businesses do not even realize that their software, or the third-party vendors to whom they outsource tasks, are using AI technology in ways that may violate federal, state and/or local laws.
In Mobley v. Workday, a federal court recently certified a collective of plaintiffs in a lawsuit alleging that Workday's AI screening software violates U.S. employment laws by discriminating against job applicants on the basis of age, race and disability.
Human resources professionals lean heavily on AI technology to screen and rank applications and resumes. While this saves time and labor, civil rights experts warn that the technology may harbor internal biases that discriminate against job applicants based on protected characteristics such as gender and race.
In the federal case, the plaintiffs allege that Workday's AI screening tools, called Candidate Skills Match (CSM) and Workday Assessment Connector (WAC), have discriminatory biases embedded in them. They allege that the employment applications of individuals in certain protected classes rarely, if ever, make it through Workday's AI candidate screening algorithms.
It remains to be determined why Workday's AI sorting and ranking tools would have rejected applicants in protected classes based on characteristics such as age, race and disability. Some suggest the AI was trained on biased historical data; others suggest it was tuned to return only candidates who look, on paper, like the employees an employer already has. Either way, the tool is creating legal risk both for its developer and for the employers who relied on it.
In New York, New Jersey, Connecticut, Virginia, Florida, Georgia, Texas, California, Michigan and Illinois, to name a few, state-level AI legislation has been enacted or proposed that would regulate the use of AI in business decisions across the board, including in HR matters.
In New York state, AI legislation is pending in both the Senate and the Assembly.
Senate Bill 1169, called the New York AI Act, would regulate the use of AI in business far beyond employment decisions, to include determinations about education, health care, insurance, credit scoring, public safety, retail, banking and financial services, media and more.
At issue is whether the AI tools harm the public because they are “deployed without adequate testing, sufficient oversight and robust guardrails.”
Of heightened concern is whether the civil rights and liberties of historically disadvantaged groups are being infringed, “thereby further entrenching inequalities.”
The proposed New York AI laws aim to “ensure that all uses of AI, especially those that affect important life chances, are free from harmful biases, protect our privacy and work for the public good.”
Proposed AI legislation in New York would target both AI developers and AI deployers.
Noting the thousands of technology start-ups in the state, the New York AI Act would enact standards of “safe innovation.” It would provide “clear guidance for AI development, testing and validation both before a product is launched and throughout the product's life cycle.”
The proposed law would require AI product developers and SaaS providers to audit their products for compliance with human rights laws, placing on them the burden of proving that their AI products do not harm New Yorkers.
The proposed law would require companies “employing and profiting from the use of AI [to ensure] that their products are free from algorithmic discrimination.” It would also provide for enforcement with “clear statutory authority to investigate and prosecute entities that break the law.” Any AI uses that “infringe on fundamental rights, deepen structural inequality or that result in unequal access to services shall be banned.”
The law could include a private right of action.
Proposed New York Assembly Bill 768, called the Protection Act, broadly defines AI “algorithmic discrimination” as the use of an AI decision tool that results in unlawful differential treatment or impact that disfavors any individual or group on the basis of their actual or perceived “age, color, disability, ethnicity, genetic information, English language proficiency, national origin, race, religion, reproductive health, sex, veteran status or other classification protected pursuant to state or federal law.”
The proposed law would require an independent auditor, and it aims to protect consumers from any business deploying AI to make, or assist in making, any consequential decision.
Proposed New York Senate Bill S8331 is aimed at the use of AI in journalism.
In 2023, the New York City “NYC AI Law” went into effect. It applies narrowly to employers and employment agencies in New York City that use automated employment decision tools (AEDTs) to screen candidates or employees for positions located within NYC. When AEDTs are used, candidates and employees must be given notice at least 10 business days in advance.
Colligan Law LLP is following these employer and consumer laws and is here to answer your AI business and employment questions.
