The ABA’s Vision for AI Risk Management: What Financial Institutions Need to Know

Key Takeaways:

  • The ABA supports a unified federal AI risk management framework, with strong preemption of state regulations to avoid conflicts and ensure consistency for financial institutions.
  • AI regulations should apply to all parties in the ecosystem, including third-party providers, to ensure clear accountability and transparency throughout the process.
  • Regulatory bodies should focus on evaluating model outcomes, not just technical details, making it easier for auditors to assess AI systems without deep coding expertise.
  • Policymakers are encouraged to promote voluntary standards like model cards and certifications to support collaboration across sectors and foster responsible AI practices.
  • Financial institutions must stay updated on evolving AI regulations and take proactive steps to meet new requirements, ensuring compliance and effective risk management.

The American Bankers Association’s (ABA’s) comment letter to the National Science Foundation outlines a vision of how the federal government can build a regulatory program around artificial intelligence (AI) that promotes, rather than hinders, innovation. The letter includes four key recommendations. Do these align with your organization’s AI-related business objectives?

1. Legislation: “Congress must pass comprehensive laws establishing an AI risk management framework with strong preemptions of state requirements and which do not create duplicative or inconsistent obligations for banks.”

The ABA’s vision is an agreed-upon risk management framework established by the federal government, with state requirements playing only a supplementary role. This approach responds directly to a real challenge: in the absence of federal guidance, states have enacted independent statutes and regulations, many of which are duplicative, conflicting, and difficult for national organizations to navigate.

States with more active or mature regulatory agencies, such as the New York Department of Financial Services (NYDFS), may issue additional guidance. However, the ABA advocates for strong federal preemption of conflicting state requirements. A centralized legislative approach would reduce the number of agencies providing comments and guidance, which, while not formal policy, often shape how examiners conduct reviews.

The ABA further suggests that the legislation should be a flexible framework to allow for innovation rather than a prescriptive set of procedures and specifications.

2. Regulation: “Agencies should identify clear regulatory outcomes and objectives, while enabling regulated entities the ability to deploy effective risk management techniques based on common sense and best practices. Agencies must have a perspective on the entire AI ecosystem rather than merely isolating the regulated entities, using existing authorities to address risks wherever they arise. Additionally, regulators must be transparent about the ways they utilize AI themselves.”

The second recommendation emphasizes the need for transparency across the entire AI ecosystem. While banking is already one of the most heavily regulated industries, the ABA argues that responsibility for AI-related risks should extend beyond banks to include all involved parties: end users, financial institutions, AI model developers, and other connected entities.

Specifically, the ABA calls for AI regulations to apply equally to third-party providers and non-bank entities. It is important for consumers, auditors, and regulators to understand that while a bank may be utilizing AI, it may not be able to control or configure every aspect of that software.

3. Supervision: “The Federal Reserve (Fed), the Office of the Comptroller of the Currency (OCC), and the Federal Deposit Insurance Corporation (FDIC) should update model risk management guidance, subject to a notice and comment period. Moreover, field examiners must be trained not to focus on granular matters such as LLM’s code but rather on the inputs, outputs, and outcomes of models.”

The association recommends that regulatory bodies update existing model risk management guidance to explicitly address its applicability to AI and to clearly define the responsibilities of banks versus third-party providers. As AI systems become more integrated into banking operations, model validation audits will play a critical role in ensuring consistency and accuracy across inputs and outputs. These validations will be especially important in areas such as bias and fair lending, privacy compliance, and anti-money laundering (AML) efforts.

The ABA raises an interesting question regarding the proper focus for examiners and those charged with risk governance. Rather than focusing on granular technical details – since examiners and auditors may not possess the same level of coding expertise as developers – the emphasis should be on evaluating model quality through tangible inputs, outputs, and outcomes.
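
To make the outcome-focused approach concrete, many such checks can run on decision logs alone, with no access to model code or weights. The sketch below is a minimal, hypothetical example of a fair-lending screen using the common “four-fifths rule” heuristic; the data, function names, and threshold are illustrative, not a prescribed methodology:

```python
# Minimal sketch: an outcome-focused fairness screen run on decision logs,
# with no access to model code or weights. The four-fifths (80%) rule is a
# common screening heuristic in fair-lending analysis, not a legal bright line.

def approval_rate(decisions: list[bool]) -> float:
    """Share of applications approved within a group."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def adverse_impact_ratio(protected: list[bool], reference: list[bool]) -> float:
    """Protected group's approval rate relative to the reference group's."""
    ref_rate = approval_rate(reference)
    return approval_rate(protected) / ref_rate if ref_rate else 0.0

# Hypothetical decision logs exported from a credit model:
protected_group = [True, False, True, True, False, False, True, False]
reference_group = [True, True, True, False, True, True, True, True]

ratio = adverse_impact_ratio(protected_group, reference_group)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.80:  # four-fifths screening threshold
    print("Flag for deeper fair-lending review")
```

An examiner or internal auditor can interpret a check like this without reading a line of the model’s source, which is precisely the shift in focus the ABA recommends.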

4. Other Measures: “Policymakers should encourage the adoption of voluntary standards and frameworks where possible to encourage cross-sector collaboration. For example, this could include model cards to allow for explainability without divulging confidential commercial information, or certifications to help demonstrate sufficient maturity of policies and procedures.”

The ABA encourages the development and support of AI risk management strategies that will aid institutions and facilitate cooperation between sectors. Certifications and voluntary frameworks are mentioned as potential tools. Specifically, the ABA cites the NIST AI Risk Management Framework as a strong baseline and recommends aligning risk management practices and regulatory expectations with it.
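
As a concrete illustration of the model cards mentioned above: a model card is a structured summary of a model’s purpose, data, performance, and limitations that can be shared without exposing proprietary code. The sketch below is hypothetical; every field name and value is an example, not a prescribed format (see Mitchell et al., 2019, and the NIST AI Risk Management Framework for actual guidance):

```python
# Illustrative sketch of a model card as a structured disclosure.
# All field names and values here are hypothetical examples, not a formal
# standard.
model_card = {
    "model": "consumer-credit-prescreen-v2",   # hypothetical model name
    "intended_use": "Pre-screening consumer credit applications",
    "out_of_scope_uses": ["employment decisions", "insurance pricing"],
    "training_data": "Internal application records, summarized at a level "
                     "that avoids divulging confidential commercial detail",
    "performance": {"auc": 0.81},              # illustrative figure
    "fairness_testing": "Approval-rate parity checked across protected classes",
    "known_limitations": "Less reliable for thin-file applicants",
    "owner": "Model Risk Management (second-line review)",
}
```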

AI presents unique risks, notably around transparency, explainability, and bias, which must be addressed in a coordinated fashion using common language to describe the risks and controls. The framework approach, along with potential certifications or other independent assurance, allows all sectors to work toward responsible, risk-informed innovation.

Shaping the Future of AI Regulation in Financial Services

Overall, the industry is seeking a consistent set of standards to align with. Given the well-documented risks associated with AI, it’s no surprise that financial institutions are proceeding cautiously and advocating for a structured, methodical regulatory approach. To prepare, institutions should stay informed on developments from Congress, federal regulators, and industry groups such as the ABA, and proactively assess how they will meet emerging regulatory expectations.

Prepare your institution for evolving AI regulations. Reach out to us today to learn how our team can guide you in navigating these changes and implementing a strong AI risk management strategy.