CFPB, DOJ warn AI advances are not an excuse to break the law

Representatives of federal enforcement and regulatory agencies, including the Consumer Financial Protection Bureau (CFPB), the Department of Justice (DOJ), the Federal Trade Commission (FTC) and the Equal Employment Opportunity Commission (EEOC), are warning that the emergence of artificial intelligence (AI) technology does not give license to break existing laws related to civil rights, fair competition, consumer protection and equal opportunity.

In a joint statement, the CFPB, the DOJ’s Civil Rights Division, the FTC and the EEOC underscored their commitment to enforcing existing laws and regulations with respect to emerging AI technologies, despite the current lack of AI-specific regulatory oversight.

“Private and public organizations use these [emerging AI] systems to make important decisions affecting the rights and opportunities of individuals, including fair and equal access to jobs, housing, credit opportunities, and other goods and services,” notes the joint statement. “These automated systems are often advertised as providing insights and breakthroughs, increasing efficiency and cost-savings, and modernizing existing practices. Although many of these tools promise advancement, their use also has the potential to perpetuate illegal bias, automate illegal discrimination, and produce other harmful consequences.”

Potentially discriminatory results are a major concern in the CFPB’s areas of focus, according to agency director Rohit Chopra.

“Technology marketed as AI has spread to every corner of the economy, and regulators need to stay ahead of its development to prevent discriminatory outcomes that threaten the financial stability of households,” Chopra said. “Today’s joint statement makes it clear that the CFPB will work with its partner enforcement agencies to eliminate discrimination caused by any device or system that enables unlawful decision-making.”

Each of these agencies has recently taken steps to address the rise of AI.

Last year, the CFPB published a circular confirming that consumer protection laws would remain in place for its covered industries regardless of the technology being used to serve consumers.

The DOJ’s Civil Rights Division filed a statement of interest in federal court in January arguing that the Fair Housing Act applies to algorithm-based tenant screening services, following a lawsuit in Massachusetts alleging that an algorithm-based scoring system was used to screen out Black and Hispanic rental applicants.

Last year, the EEOC published a technical assistance document detailing how the Americans with Disabilities Act (ADA) applies to the use of software and algorithms, including AI, in making employment decisions about job applicants and employees.

The FTC published a report last June warning about harms coming from AI platforms, including inaccuracy, bias, discrimination and “creeping commercial surveillance.”

In prepared remarks during the interagency announcement, Director Chopra cited potential pitfalls of AI systems as they relate to the mortgage market.

“Machines crunching numbers may seem capable of taking human bias out of the equation, but that’s not what’s happening,” Chopra said. “Findings from academic studies and news reporting raise serious questions about algorithmic bias. For example, a statistical analysis of 2 million mortgage applications found that Black households were 80% more likely to be denied by the algorithm than white households with similar financial and credit backgrounds.”
