Is your AI software biased? An attorney in the FTC's privacy and identity protection division warned businesses this week to find out, or else the federal government will find out for them.

Bias in artificial intelligence software is usually baked so deeply into the algorithms and the data on which they were built that it can be a challenge to spot the ways in which they violate consumer protection laws. Racial, gender, economic, and other biases are rarely obvious until you learn the rules from legal and technical professionals, run tests, and open the software up for assessment.
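One common form of such testing is a simple disparity check on a model's decisions. Below is a minimal sketch, assuming a hypothetical table of decisions with illustrative `group` and `approved` columns; the four-fifths threshold shown is a widely used screening heuristic, not an FTC rule.

```python
# Minimal disparity check on model decisions (illustrative only).
import pandas as pd

def approval_rates(df: pd.DataFrame) -> pd.Series:
    """Approval rate per demographic group."""
    return df.groupby("group")["approved"].mean()

def disparate_impact_ratio(df: pd.DataFrame) -> float:
    """Lowest group approval rate divided by the highest.
    Ratios below 0.8 are commonly flagged (the 'four-fifths rule')."""
    rates = approval_rates(df)
    return rates.min() / rates.max()

# Hypothetical decisions from a model under review.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

print(approval_rates(decisions))          # A: 0.75, B: 0.25
print(disparate_impact_ratio(decisions))  # 0.33, well below 0.8: investigate
```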

According to the FTC, the sale or use of software with biased AI algorithms to deny people employment, housing, credit, insurance, or other benefits can amount to unfair or deceptive business practices that violate the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act. The ECOA makes it illegal for a company to make, sell, or use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.

"Under the FTC Act," said Elisa Jillson, an attorney in FTC’s privacy and identity protection division, "your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence."

Jillson said a company may claim its AI is unbiased when, for example, "the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination—and an FTC law enforcement action."
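Jillson's example points to a basic training-data audit: compare each group's share of the training data to its share of the population the model will serve. A minimal sketch follows, with hypothetical record counts and reference shares chosen purely for illustration:

```python
# Illustrative training-data representation audit (hypothetical numbers).
import pandas as pd

# Hypothetical training records, heavily skewed toward one group.
training = pd.DataFrame({"race": ["white"] * 900 + ["black"] * 60 + ["asian"] * 40})

representation = training["race"].value_counts(normalize=True)

# Hypothetical reference shares for the population the model will serve.
population = pd.Series({"white": 0.60, "black": 0.13, "asian": 0.06})

gap = (representation - population).round(2)
print(gap)  # large gaps flag groups the training data over- or under-represents
```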

Jillson specifically cited a healthcare study published in the Journal of the American Medical Informatics Association that found significant bias in prediction models used to help allocate staff, medications, ventilators, ICU beds, and other important elements of care during the pandemic. The models used health costs as a proxy for health need, which favored wealthy patients who could afford to spend more. Rather than improving efficiency, the predictive algorithm worsened healthcare disparities among already underserved populations.
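The proxy problem the study describes is easy to reproduce in miniature. This sketch is not the study's model; it uses made-up patients to show how ranking by spending, rather than by need, prioritizes whoever can afford to spend more:

```python
# Illustrative proxy-variable failure (hypothetical data, not the study's model).
patients = [
    {"id": "patient_1", "true_need": 8, "annual_spend": 12000},  # can afford care
    {"id": "patient_2", "true_need": 8, "annual_spend": 3000},   # cannot
]

# A model trained to predict cost effectively scores patients by spending.
ranked = sorted(patients, key=lambda p: p["annual_spend"], reverse=True)

for p in ranked:
    print(p["id"], "proxy score:", p["annual_spend"], "true need:", p["true_need"])
# patient_1 is prioritized despite identical need: the proxy encodes ability
# to pay, so scarce resources flow away from underserved patients.
```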

By citing this study, the FTC's attorney appears to be sounding the alarm for businesses in healthcare and every other industry: the federal government's next targets will be the making, sale, and use of AI that is built on biased data or that makes biased decisions.
