Regulation of AI – the Path Ahead

This post was originally published to Seyfarth’s Gadgets, Gigabytes & Goodwill blog.

Last week, four federal agencies issued a joint statement expressing their concern that AI will be used for discriminatory or anticompetitive purposes and outlining their plans for regulation. The statement comes on the heels of Elon Musk calling for a “pause” in AI development and meeting with Senator Chuck Schumer to help shape a statutory framework for governing AI. It also comes as China and the EU unveil their own proposed laws regulating AI. How will governments across the globe respond to this pivotal technology? And how will the United States government respond?

1. Joint Statements

The Consumer Financial Protection Bureau (CFPB), the Justice Department’s Civil Rights Division, the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC) released a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (the “Joint Statement”). The Joint Statement expressed the agencies’ collective concern that AI will be used for discriminatory or anticompetitive purposes.

  • The CFPB is concerned about the discriminatory use of AI in lending decisions. “[T]he fact that the technology used to make a credit decision is too complex, opaque, or new is not a defense for violating these laws.”
  • The EEOC similarly cautioned against the use of AI “to make employment-related decisions about job applicants and employees” and made clear that there is no technology exception to its rules.
  • The Civil Rights Division followed suit, explaining “the Fair Housing Act applies to algorithm-based tenant screening services.”
  • The FTC predictably focused on the potentially anticompetitive impact of AI and “warned market participants that it may violate the FTC Act to use automated tools that have discriminatory impacts, to make claims about AI that are not substantiated, or to deploy AI before taking steps to assess and mitigate risks.”

This is all to say that a company cannot outsource its liability to AI. When it comes to decision-making and data-driven solutions, pleading ignorance of how AI is used is not, and never will be, an adequate defense. Yet many AI algorithms are a black box to the user, so the position taken in the Joint Statement will be difficult to square with the practical use of artificial intelligence tools.

2. International AI Laws

Countries across the globe are rushing to regulate AI as it takes hold in their societies. China’s proposed law would require AI models to pass a security assessment before a tool is publicly released. The proposal responds to the recent unveiling of numerous generative AI tools by large Chinese companies, tools capable of simulating human speech and generating text- and image-based content.

The EU AI Act would sort AI tools into three categories based on their risk level. The highest-risk tools are assigned “unacceptable risk, such as government-run social scoring of the type used in China”; these would be banned outright. Lower-risk tools would be deemed “high-risk applications, such as a CV-scanning tool that ranks job applicants” and would be regulated, but not banned per se. Models falling into neither category would be “largely left unregulated,” whatever that means.

3. What will the United States do?

The Joint Statement makes clear that there is no AI exception to current regulatory rules. It also implies that the agencies believe their existing frameworks are already sufficient. For example, using an AI tool to discriminate on the basis of gender or race is illegal—just as it has always been. Anticompetitive conduct is anticompetitive conduct—AI or not. It appears the agencies are awaiting congressional action before pursuing more aggressive oversight of AI.

How Congress reacts will be interesting to watch. The stakes could not be higher, and legislation will certainly take time. It is helpful that Congress is seeking guidance from the private sector and working to understand how this revolutionary technology is being used. Perhaps it should also seek guidance from the AI itself?

4. What AI Has to Say About…Regulation of AI

I had the following exchange with ChatGPT to see what it felt would be effective AI regulation in the financial lending industry:

Pat: I work for the government. We are concerned that artificial intelligence may cause financial firms to discriminate in their lending. We are also concerned that companies will use artificial intelligence to gain an anti-competitive advantage. What are some policies that may prevent such abuse of artificial intelligence?

ChatGPT: There are several policies that could help prevent discrimination in lending and anti-competitive behavior resulting from the use of artificial intelligence (AI):

  1. Algorithmic transparency: Requiring financial firms to disclose the data, models, and algorithms used in their lending decisions would make it easier to detect and address discriminatory practices.
  2. Ethical guidelines: Developing ethical guidelines for the use of AI in financial services could help ensure that algorithms are designed and implemented in a way that is fair and non-discriminatory.
  3. Oversight and regulation: Government agencies could establish oversight and regulation to ensure that AI applications in financial services comply with ethical and legal standards, and do not lead to unfair market practices.
  4. Anti-discrimination laws: Enforcing anti-discrimination laws can help prevent lending decisions based on factors such as race, gender, or religion, even if those factors are indirectly inferred from other data.
  5. Competitive safeguards: Promoting competition by preventing the accumulation of market power by large firms could help prevent anticompetitive practices enabled by the use of AI.
  6. Data protection and privacy: Ensuring the protection of customer data and privacy rights could also help prevent discriminatory and anti-competitive practices enabled by AI.
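ChatGPT’s first and fourth suggestions, algorithmic transparency and anti-discrimination enforcement, ultimately depend on something auditable: can a regulator (or the lender itself) measure whether a model’s lending decisions differ across protected groups? As a purely illustrative sketch, the short Python snippet below computes approval rates by group and flags any group whose rate falls below four-fifths of the best-performing group’s rate, borrowing the EEOC’s familiar four-fifths rule of thumb. The sample data, the group labels, and the 0.8 threshold are hypothetical assumptions for illustration only; nothing here reflects what any agency actually requires.

# Illustrative sketch of a disparate-impact check on lending decisions.
# The data, groups, and 0.8 threshold are hypothetical; the four-fifths
# rule is an EEOC rule of thumb, used here purely as an example metric.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> {group: approval rate}."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's approval rate to the highest-rate group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical model outputs: (applicant group, loan approved?)
    sample = ([("A", True)] * 80 + [("A", False)] * 20
              + [("B", True)] * 55 + [("B", False)] * 45)
    rates = approval_rates(sample)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"group {group}: approval {rates[group]:.0%}, impact ratio {ratio:.2f} ({flag})")

Running the sketch on the hypothetical sample would flag group B for review, since its approval rate (55%) is less than four-fifths of group A’s (80%). A real transparency regime would of course involve far more than a single ratio, but the point stands: the policies ChatGPT lists only have teeth if the underlying decisions can be measured.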

It will be interesting to see how the government responds and who—or what—it listens to. We will certainly keep you updated as developments arise.
