Questions About AI
Interesting question! AI lending platforms won’t necessarily prevent compliance with current regulations, but the answer isn’t a simple yes or no. Compliance is a complex objective, and it ultimately comes down to risk management. Concerns about Artificial Intelligence (AI) and Machine Learning (ML) platforms used for automated decisions have been discussed for several years now, and most of the conversation has centered on the Equal Credit Opportunity Act (ECOA), commonly referred to as Fair Lending. Commenters have cited several potential issues with AI/ML in credit underwriting, but the primary concern is that an automated machine learning tool may inadvertently learn biases that could result in violations of fair lending laws.
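To make the fair lending concern concrete, one common screening technique is to compare a model’s approval rates across demographic groups, for example with an adverse impact (“four-fifths”) ratio. The sketch below is illustrative only; the group labels, sample decisions, and the 0.80 threshold are hypothetical assumptions for this example, not a regulatory requirement.

```python
# Illustrative only: a simple disparate-impact screen on model approval decisions.
# Group labels, data, and the 0.80 threshold are hypothetical assumptions for this sketch.

def approval_rate(decisions):
    """Share of applicants approved (decisions is a list of True/False values)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected_decisions, control_decisions):
    """Ratio of the protected group's approval rate to the control group's approval rate."""
    return approval_rate(protected_decisions) / approval_rate(control_decisions)

# Hypothetical model outputs for two applicant groups.
protected = [True, False, False, True, False, False, True, False]   # 3 of 8 approved
control   = [True, True, False, True, True, False, True, True]      # 6 of 8 approved

ratio = adverse_impact_ratio(protected, control)
print(f"Adverse impact ratio: {ratio:.2f}")

# A ratio well below ~0.80 is typically treated as a flag for further fair lending review,
# not as proof of a violation; root-cause analysis of the model would still be needed.
if ratio < 0.80:
    print("Flag for further fair lending / disparate impact review.")
```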
Whenever a financial institution (FI) considers a new initiative, risk factors, including compliance risk, should be prioritized. Assessing the compliance risk of AI can be daunting, especially since the FFIEC has not issued prescriptive guidance about the use of artificial intelligence. However, the lack of AI-specific guidance doesn’t mean applicable standards don’t exist. Several existing regulations intended to prevent consumer harm when approving and denying credit can be helpful. For example, in 2021, the FFIEC issued a Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, including Machine Learning. This RFI includes an appendix of Laws, Regulations, Supervisory Guidance, and other Agency Statements Relevant to AI. That appendix includes, among others, both the Fair Credit Reporting Act and the Equal Credit Opportunity Act (Regulation B). In September 2023, the CFPB issued Guidance on Credit Denials by Lenders Using Artificial Intelligence, which focuses on due care for adverse action notices, including reliance on checklists of sample reasons, as FIs consider utilizing predictive decision-making technologies in their underwriting models.
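The adverse action point is worth illustrating: even when a complex model drives a denial, the institution still has to give the applicant specific, accurate reasons. One rough, purely hypothetical way to picture this is ranking the factors that contributed most to a denial and mapping them to plain-language reason statements, as sketched below. The feature names, contribution scores, and reason wording are assumptions for illustration, not prescribed reason codes.

```python
# Illustrative sketch only: mapping hypothetical model feature contributions to
# plain-language adverse action reasons. Feature names, scores, and wording are
# assumptions for illustration, not prescribed reason codes.

# Hypothetical per-applicant contributions toward the denial (larger = pushed harder toward denial).
denial_contributions = {
    "debt_to_income_ratio": 0.42,
    "recent_delinquencies": 0.31,
    "credit_history_length": 0.12,
    "number_of_recent_inquiries": 0.05,
}

# Hypothetical mapping from model features to applicant-facing reason statements.
reason_text = {
    "debt_to_income_ratio": "Income insufficient for amount of credit requested",
    "recent_delinquencies": "Delinquent past or present credit obligations",
    "credit_history_length": "Length of credit history",
    "number_of_recent_inquiries": "Number of recent inquiries on credit report",
}

# Select the top contributing factors and translate them into specific reasons for the notice.
top_factors = sorted(denial_contributions, key=denial_contributions.get, reverse=True)[:2]
for factor in top_factors:
    print(reason_text[factor])
```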
To mitigate risk and ensure compliance with lending standards, FIs should develop a sound understanding of the methodology employed by the AI model, assess all compliance risks associated with it, and ensure that controls exist to mitigate those risks. Understanding the models being employed can be a barrier for some institutions, especially smaller ones that may not have the expertise necessary to fully understand the complex approaches these tools use. Smaller institutions are also likely to rely on third-party-developed AI platforms, some of which are incorporated into larger processes, so it is imperative that the AI solutions in use are properly identified and understood during the due diligence process. The risks should then be periodically reassessed to determine whether new controls are necessary, and the controls should be tested on a periodic basis by an independent third party or internal auditor to ensure that they are appropriate and effective. The overall process of risk management is the same no matter how you apply it; the cycle should look very familiar.
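As a simple illustration of the periodic reassessment and testing step, an institution might track each AI-related control along with its testing cadence and last test date, and flag anything overdue for independent review. The sketch below is a generic example; the control names, cadences, and dates are hypothetical.

```python
# Illustrative sketch: flagging AI-related controls that are overdue for independent testing.
# Control names, testing cadences, and dates are hypothetical examples.
from datetime import date, timedelta

controls = [
    {"name": "Model input data quality review",       "cadence_days": 90,  "last_tested": date(2024, 1, 15)},
    {"name": "Fair lending / disparate impact check",  "cadence_days": 180, "last_tested": date(2023, 9, 1)},
    {"name": "Third-party AI vendor due diligence",    "cadence_days": 365, "last_tested": date(2023, 6, 30)},
]

today = date.today()
for control in controls:
    due_date = control["last_tested"] + timedelta(days=control["cadence_days"])
    status = "OVERDUE - schedule independent testing" if today > due_date else "current"
    print(f'{control["name"]}: next test due {due_date} ({status})')
```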
At this time, the FFIEC has not released specific Artificial Intelligence (AI) regulatory directives to lean on. However, armed with the experience of other recent innovative communication and collaboration tools and technologies (e.g., social media, mobile banking, remote access, Teams, Slack, etc.), management teams can capitalize on a proven approach for mitigating risk and establishing basic standards for the responsible use of AI technologies.
First, start with a basic definition of AI and provide examples of AI that employees may come across in business and personal settings. Include references to specific AI software and technology currently in use at your institution and note what AI tools may be available. Consider role-based training based on the types of AI an employee may come across as part of their specific position or job responsibility. For example, IT/IS personnel need to be aware of applications such as fraud detection/prevention, customer authentication, and anti-money laundering (AML) solutions, as these commonly utilize AI. Vendor managers should also be trained to look for AI features as part of a third party’s product-specific due diligence.
Understanding machine learning (ML) terminology is also essential for addressing security and compliance implications. Together, these two steps (defining AI with relevant examples and building ML literacy) establish a baseline and set the stage for any AI tools that may be used in the future.
Once the AI tools that employees may encounter during normal business operations have been identified, incorporate an Acceptable Use Policy (AUP) for employees to acknowledge. This policy should be part of your institution’s HR and information security/IT standards.
Here are a few AUP concepts to consider:
- AI, like any new and emerging technology, presents a significant risk if used irresponsibly or unethically. Data entered into AI prompts has no guarantee of confidentiality.
- Require employees to request approval from their manager in writing prior to using generative AI prompts or datasets (for example, Copilot, ChatGPT, Gemini, etc.) for any purpose.
If approved:
- Prohibit employees from submitting any non-public data (including but not limited to any confidential bank and customer/member information) or content into AI prompts or datasets.
- Thoroughly review AI-generated content to ensure the information is accurate and does not contain inherent bias.
- Verify any facts presented in AI-generated content independently to ensure that the data presented is objectively true.
- Content generated with AI tools must not be considered final work without a thorough human review. Generally, AI-generated content should be considered a starting point rather than a finishing point.
- Warn employees about the risk of AI as a tool for malicious actors to impersonate individuals (including employees or customers/members of the institution) to commit fraud, also known as “deepfakes.”
- Reassess your current social engineering (phishing) testing, as AI has greatly improved the grammar and wording of malicious emails; thanks to AI, they may no longer be as easy to spot.
While AI offers numerous advantages and can yield beneficial insights, protecting non-public information (NPI) remains paramount. As we leverage AI’s benefits across the financial industry, maintaining this focus ensures ongoing security and ethical use.