The ABA’s vision is an agreed-upon risk management framework established by the federal government and supplemented by state requirements. This approach responds directly to the challenge of states enacting independent statutes and regulations in the absence of federal guidance; many of these state rules are duplicative, conflicting, and difficult for national organizations to navigate.
States with more active or mature regulatory agencies, such as the New York Department of Financial Services (NYDFS), may issue additional guidance. However, the ABA advocates for strong federal preemption of conflicting state requirements. A centralized legislative approach would reduce the number of agencies providing comments and guidance, which, while not formal policy, often shape how examiners conduct reviews.
The ABA further suggests that the legislation should be a flexible framework to allow for innovation rather than a prescriptive set of procedures and specifications.
The second recommendation emphasizes the need for transparency across the entire AI ecosystem. While banking is already one of the most heavily regulated industries, the ABA argues that responsibility for AI-related risks should extend beyond banks to include all involved parties: end users, financial institutions, AI models, and connected entities.
Specifically, the ABA calls for AI regulations to apply equally to third-party providers and non-bank entities. It is important for consumers, auditors, and regulators to understand that while a bank may be using AI, it may not be able to control or configure every aspect of that software.
The association recommends that regulatory bodies update existing model risk management guidance to explicitly address its applicability to AI and to clearly define the responsibilities of banks versus third-party providers. As AI systems become more integrated into banking operations, model validation audits will play a critical role in ensuring consistency and accuracy across inputs and outputs. These validations will be especially important in areas such as bias and fair lending, privacy compliance, and anti-money laundering (AML) efforts.
The ABA raises an interesting question regarding the proper focus for examiners and those charged with risk governance. Rather than focusing on granular technical details – since examiners and auditors may not possess the same level of coding expertise as developers – the emphasis should be on evaluating model quality through tangible inputs, outputs, and outcomes.
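To make the inputs-outputs-outcomes framing concrete, the sketch below shows one way an outcome-focused validation might work in practice: comparing a credit model’s approval rates across groups in a decision log, without inspecting the model’s code at all. This is a minimal illustration under assumed conditions, not an ABA-prescribed method; the data, function names, and the four-fifths (80%) screening heuristic are used here only as familiar examples from fair-lending analysis.

```python
# Minimal sketch of an outcome-based fairness check: compare a credit
# model's approval rates across demographic groups using the four-fifths
# (80%) rule, a common screening heuristic in fair-lending analysis.
# The decision log below is hypothetical; a real validation would use
# production logs and a legally appropriate protected-class definition.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs from model output logs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest group approval rate to the highest.
    Values below 0.8 are commonly treated as a flag for review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decision log: (group label, model approved?)
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]

rates = approval_rates(log)
ratio = disparate_impact(rates)
print(rates)  # approval rates by group, e.g. {'A': 0.67, 'B': 0.33}
print(f"impact ratio: {ratio:.2f}",
      "-> flag for review" if ratio < 0.8 else "-> no flag")
```

The point of the sketch is that everything an examiner needs is observable in inputs and outputs: no access to the model’s internals or coding expertise is required to run this kind of check.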
The ABA encourages the development and support of AI risk management strategies that will aid institutions and facilitate cooperation between sectors. Certifications and voluntary frameworks are mentioned as potential tools. Specifically, the ABA cites the NIST AI Risk Management Framework as a strong baseline and recommends aligning risk management practices and regulatory expectations with it.
AI presents unique risks, notably around transparency, explainability, and bias, which must be addressed in a coordinated fashion and described in a common vocabulary so that risks and controls are understood consistently. A framework approach, together with potential certifications or other independent assurance, allows all sectors to work toward responsible, risk-informed innovation.
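As a rough illustration of what aligning with the NIST AI Risk Management Framework could look like, the sketch below maps hypothetical internal controls to the framework’s four core functions (Govern, Map, Measure, Manage) and flags any function left uncovered. The control names are invented examples for illustration, not NIST or ABA prescriptions.

```python
# Illustrative sketch only: one way an institution might map its internal
# AI controls to the four core functions of the NIST AI Risk Management
# Framework. The control names are hypothetical examples.

NIST_AI_RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

controls = {
    "Govern":  ["AI policy and accountability structure",
                "Third-party / vendor AI risk terms"],
    "Map":     ["Model inventory with intended use and context",
                "Data lineage and provenance documentation"],
    "Measure": ["Fair-lending and bias testing",
                "Performance drift and explainability monitoring"],
    "Manage":  ["Issue escalation and model retirement procedures",
                "Incident response for AI-driven decisions"],
}

def coverage_report(controls):
    """Flag any RMF function with no mapped control: a simple gap check."""
    for fn in NIST_AI_RMF_FUNCTIONS:
        mapped = controls.get(fn, [])
        status = "OK" if mapped else "GAP"
        print(f"{fn:8s} [{status}] {len(mapped)} control(s)")

coverage_report(controls)
```

Even a simple mapping like this gives banks, vendors, and examiners a shared reference point, which is the practical value of anchoring expectations to a common framework.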
Shaping the Future of AI Regulation in Financial Services
Overall, the industry is seeking a consistent set of standards to align with. Given the well-documented risks associated with AI, it’s no surprise that financial institutions are proceeding cautiously and advocating for a structured, methodical regulatory approach. To prepare, institutions should stay informed on evolving guidance from the ABA and Congress, and proactively assess how they plan to meet emerging regulatory expectations.
Prepare your institution for evolving AI regulations. Reach out to us today to learn how our team can guide you in navigating these changes and implementing a strong AI risk management strategy.