Navigating Artificial Intelligence and Consumer Protection Laws in the Wake of the COVID-19 Pandemic

The Federal Trade Commission's (FTC) Bureau of Consumer Protection Director Andrew Smith issued a statement on Using Artificial Intelligence and Algorithms, providing added insight into how the FTC assesses a company's use of Artificial Intelligence and Algorithms (collectively AI). The statement comes in the midst of the COVID-19 pandemic, which has unleashed a wave of ingenuity, much of which implicates AI. COVID-19 tracking mechanisms, disinfecting robots, smart helmets, thermal camera-equipped drones and advanced facial recognition software are being considered and deployed in the fight against COVID-19.1

These solutions may help save lives, but they also carry consumer protection implications that must be considered. The FTC's statement is timely: it reminds companies of their potential consumer protection exposure, reaffirming that existing consumer protection laws covering traditional human activity and automated decision-making technology will apply equally to sophisticated AI. It further highlights how companies can manage the risk, emphasizing that AI tools should be transparent, explainable, fair and empirically sound while fostering accountability.

Consumer Protection Risks Presented by AI

The FTC has long experience enforcing consumer protection laws implicated by the use of data and algorithms to make decisions about consumers, and the statement reinforces that those protections will be enforced in connection with AI technology as well. Front and center in the assessment will be traditional concepts of fairness, accuracy and transparency under Section 5 of the FTC Act's prohibition against unfair and deceptive acts, equal opportunity laws such as the Equal Credit Opportunity Act (ECOA), and laws governing consumer access to credit, employment and insurance such as the Fair Credit Reporting Act (FCRA).

Unfair and Deceptive Acts. Section 5(a) of the FTC Act prohibits "unfair or deceptive acts or practices in or affecting commerce" and is often used to hold companies to fair and transparent privacy and security standards. For example, in this time of crisis, people may be more willing to share personal information related to COVID-19 status and location for certain uses. This raises numerous privacy concerns for consumers providing their sensitive information, as well as responsibilities for the companies collecting that data.

Nondiscrimination Laws. Equal opportunity laws, such as the ECOA and Title VII of the Civil Rights Act, protect consumers from discrimination on the basis of race, national origin, sex and other protected characteristics. With AI, we know that seemingly objective data (such as zip codes) may serve as a proxy for race, resulting in actionable disparate impact claims. In 2019, the federal government charged a social media and technology company with violating fair housing laws by enabling discrimination on its advertising platform under a disparate impact analysis.2 Data is now surfacing that COVID-19 hospitalization and death rates appear to be disproportionately impacting Black and Latino people.3 If COVID-19 related data is used in algorithms that extrapolate, predict or determine access to healthcare, those algorithms could have a disparate impact on Black and Latino communities if such disparities are not accounted for.

Fair Credit Reporting Act (FCRA). The FCRA protects information collected by consumer reporting agencies (CRAs) and sets strict notice, disclosure and investigation requirements around the use of such information. Companies should be aware of whether their activities and use of AI could cause them to be deemed a CRA or otherwise trigger obligations under the FCRA.