Questions About AI
As background, the New York State Department of Financial Services (DFS) recently issued a letter addressing inquiries it has received about AI and cybersecurity. Although the letter is addressed to FIs regulated by the DFS, institutions not based or regulated in New York State shouldn’t disregard it. All institutions would benefit from a better understanding of how AI can both allow criminals to “…commit crimes at greater scale and speed…” and strengthen defenses through enhanced threat detection and improved incident response. In that sense, the letter can potentially help any FI improve its cybersecurity posture.
At some point, we expect other State and Federal regulators to weigh in on this AI and cybersecurity dynamic. In the meantime, forward-thinking FIs can use this letter as a starting point for identifying and combating AI threats.
Entitled “Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks,” the letter is not an exhaustive review, but it highlights four specific “…more concerning…” AI-related cyber risks (two external and two internal) and six potential controls:
External Threats:
- AI-Enabled Social Engineering – Ability of threat actors to create highly personalized and more sophisticated content that is more convincing than historical social engineering attempts.
- AI-Enhanced Cybersecurity Attacks – Ability of threat actors to amplify the potency, scale, and speed of existing types of cyberattacks.
Internal Threats:
- Exposure or Theft of Vast Amounts of Nonpublic Information – AI engines developed or deployed internally require the collection and processing of substantial amounts of data, often including NPI and biometric data. This gives threat actors a greater incentive to target those entities.
- Increased Vulnerabilities Due to Third-Party, Vendor, and Other Supply Chain Dependencies – All FIs, and particularly smaller ones, depend heavily on third-party service providers (TPSPs), who in turn depend on sub-service providers. Each link in this supply chain introduces potential security vulnerabilities that can be exploited by threat actors.
Controls:
- Risk Assessments and Risk-Based Programs, Policies, Procedures, and Plans – should address AI-related risks in the following areas:
  - the organization’s own use of AI,
  - the AI technologies utilized by TPSPs and vendors,
  - any potential vulnerabilities stemming from AI applications that could pose a risk to the confidentiality, integrity, and availability of the Covered Entity’s Information Systems or NPI, and
  - incident response, business continuity, and disaster recovery plans that are reasonably designed to address all types of cybersecurity events and other disruptions, including those relating to AI.
- Third-Party Service Provider and Vendor Management – including guidelines for conducting due diligence before an institution uses a third-party that will access its Information Systems and/or NPI.
- Access Controls – designed to prevent threat actors from gaining unauthorized access to a Covered Entity’s Information Systems and the NPI maintained on them. Institutions must review access privileges periodically, and at a minimum annually, to ensure each Authorized User has access only to the NPI needed to perform their job functions.
- Cybersecurity Training – conduct cybersecurity awareness training at least annually that includes social engineering training for all personnel, including senior executives and Senior Governing Body (i.e., Board) members, enhanced to build awareness of “deepfakes” and more sophisticated, AI-generated phishing emails.
- Monitoring – must have a monitoring process in place that can identify new (internal and external) security vulnerabilities promptly, so remediation can occur quickly.
- Data Management – implement data minimization practices; FIs must dispose of NPI that is no longer necessary for business operations or other legitimate business purposes, including NPI used for AI purposes. In addition, as of November 1, 2025, institutions should maintain and update data inventories, which are crucial for assessing potential risks and ensuring compliance with data protection regulations.
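To make the data-minimization and inventory concepts above a little more concrete, here is a minimal, hypothetical sketch of how an institution might flag NPI that appears to have outlived its business purpose. The record fields, system names, and two-year retention window are our own illustrative assumptions, not requirements from the DFS letter; your retention schedule and inventory format will differ.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical inventory entry -- field names are illustrative, not from the DFS letter.
@dataclass
class DataInventoryRecord:
    system: str            # where the data lives (e.g., core, CRM, AI training store)
    data_type: str         # e.g., "NPI", "biometric", "public"
    business_purpose: str  # why the data is retained
    last_used: date        # last date the data served that purpose

# Illustrative retention threshold; an institution's own policy would set this.
RETENTION_LIMIT = timedelta(days=365 * 2)

def flag_for_disposal(inventory: list[DataInventoryRecord], today: date) -> list[DataInventoryRecord]:
    """Return NPI/biometric records that appear to have outlived their business purpose."""
    return [
        rec for rec in inventory
        if rec.data_type in {"NPI", "biometric"}
        and today - rec.last_used > RETENTION_LIMIT
    ]

if __name__ == "__main__":
    inventory = [
        DataInventoryRecord("ai-training-store", "NPI", "fraud-model training", date(2021, 6, 30)),
        DataInventoryRecord("crm", "public", "marketing", date(2024, 1, 15)),
    ]
    for rec in flag_for_disposal(inventory, date.today()):
        print(f"Review for disposal: {rec.system} / {rec.business_purpose}")
```

Even a simple register like this forces the two questions the control asks: do we still need this NPI, and do we know where it is?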
In addition to this high-level summary, we’ve also identified all instances in the guidance where it states that institutions “must” take action. Proactive institutions can utilize this as a checklist to guide the development of their AI risk mitigation strategy. Click here for a copy, and thanks again for the question!
Interesting question! AI lending platforms won’t necessarily prevent compliance with current regulations, but it isn’t a simple yes-or-no answer. Compliance is a complex objective, and it boils down to risk management. Concerns about Artificial Intelligence (AI) and Machine Learning (ML) platforms used for automated decisions have been discussed for several years now, and most of the conversation has centered on the Equal Credit Opportunity Act (ECOA), commonly referred to as Fair Lending. Commenters have cited several potential issues when discussing AI/ML and credit underwriting, but the primary concern is that an automated machine learning tool may inadvertently learn biases that result in violations of fair lending laws.
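One screening technique often used to watch for the kind of inadvertent bias described above is to compare approval rates across groups (sometimes called an adverse impact ratio). The sketch below is purely illustrative: the group labels, sample data, and 0.80 screening threshold are our own assumptions, and a real fair-lending analysis would be far more rigorous and reviewed by compliance counsel, not decided by a script.

```python
from collections import defaultdict

# Each decision is a (group_label, approved) pair; labels and data are hypothetical.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def approval_rates(records):
    """Compute the approval rate for each group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Compare each group's approval rate to the most favorably treated group."""
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

rates = approval_rates(decisions)
for group, ratio in adverse_impact_ratios(rates).items():
    # A ratio below ~0.80 is a common flag for further review,
    # not a legal conclusion of disparate impact.
    flag = "review" if ratio < 0.80 else "ok"
    print(f"{group}: approval={rates[group]:.2f}, ratio={ratio:.2f} ({flag})")
```

The point is not the arithmetic; it is that monitoring model outcomes, not just model inputs, is part of managing the fair lending risk an AI underwriting tool introduces.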
Whenever a financial institution (FI) considers a new initiative, risk factors, including compliance risk, should be prioritized. Assessing the compliance risk of AI can be daunting, especially since the FFIEC has not issued prescriptive guidance on the use of artificial intelligence. However, the lack of AI-specific guidance doesn’t mean standards don’t exist: several existing regulations intended to prevent consumer harm when approving and denying credit can be helpful. For example, in 2021 the FFIEC issued a Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, including Machine Learning. That RFI includes an appendix of Laws, Regulations, Supervisory Guidance, and Other Agency Statements Relevant to AI, which lists, among others, both the Fair Credit Reporting Act and the Equal Credit Opportunity Act (Reg B). In September 2023, the CFPB issued Guidance on Credit Denials by Lenders Using Artificial Intelligence, which focuses on due care for Adverse Action notices and checklists as FIs consider utilizing predictive decision-making technologies in their underwriting models.
To mitigate risk and ensure compliance with lending standards, FIs should develop a sound understanding of the methodology employed by the AI model, assess all compliance risks associated with it, and ensure that controls exist to mitigate those risks. Understanding the models being employed can be a barrier for some institutions, especially smaller ones that may not have the expertise to fully understand the complex approaches these tools use. Also, smaller institutions likely rely on third-party-developed AI platforms – some of which are incorporated into larger processes – so it is imperative that the AI solutions in use are properly identified and understood during the due diligence process. The risks should then be periodically reassessed to determine whether new controls are necessary, and the controls should be tested periodically by an independent third party or internal auditor to ensure they remain appropriate and effective. The overall process of risk management is the same no matter where you apply it – the cycle should look very familiar.
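That familiar cycle (identify the model, assess its risks, apply controls, and periodically retest) can be tracked like any other item in a risk register. The sketch below is a hypothetical illustration only; the model names, field names, and one-year review cadence are assumptions, not regulatory requirements.

```python
from datetime import date, timedelta

# Hypothetical register entries for AI/ML models in use; names and fields are illustrative.
model_register = [
    {"model": "third-party underwriting score", "last_risk_assessment": date(2024, 3, 1),
     "last_independent_test": date(2023, 9, 1)},
    {"model": "internal fraud-detection model", "last_risk_assessment": date(2025, 1, 15),
     "last_independent_test": date(2025, 2, 1)},
]

# Illustrative cadence; an institution's own policy would define these intervals.
REASSESS_EVERY = timedelta(days=365)
RETEST_EVERY = timedelta(days=365)

def overdue_items(register, today):
    """Flag models whose risk reassessment or independent control testing is past due."""
    findings = []
    for entry in register:
        if today - entry["last_risk_assessment"] > REASSESS_EVERY:
            findings.append((entry["model"], "risk assessment overdue"))
        if today - entry["last_independent_test"] > RETEST_EVERY:
            findings.append((entry["model"], "independent control testing overdue"))
    return findings

for model, issue in overdue_items(model_register, date.today()):
    print(f"{model}: {issue}")
```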
At this time, the FFIEC has not released specific Artificial Intelligence (AI) regulatory directives to lean on. However, armed with the experience gained from other recent innovative communication and collaboration tools/technologies (e.g., social media, mobile banking, remote access, Teams, Slack), management teams can capitalize on a proven approach for mitigating risk and establishing basic standards for responsible use of AI technologies.
First, start with a basic definition of AI and provide examples of AI that employees may come across in business and personal settings. Include references to specific AI software/technology currently in use at your institution and what AI tools may be available. Consider role-based training keyed to the types of AI an employee may come across in their specific position or job responsibilities. For example, IT/IS personnel need to recognize applications such as fraud detection/prevention, customer authentication, and anti-money laundering (AML) solutions, as these commonly utilize AI. Vendor managers should also be trained to look for AI features as part of a third party’s product-specific due diligence.
Understanding machine learning (ML) terminology is also essential for addressing security and compliance implications. These two steps will establish a baseline and set the stage for future AI tools that may be used.
Once you have identified the AI tools employees may encounter during normal business operations, incorporate an Acceptable Use Policy (AUP) for employees to acknowledge. This policy should be part of your institution’s HR and information security/IT standards.
Here are a few AUP concepts to consider:
- AI, like any new and emerging technology, presents a significant risk if used irresponsibly or unethically. Data entered into AI prompts has no guarantee of confidentiality.
- Require employees to request approval from their manager in writing prior to using generative AI prompts or datasets (e.g., Copilot, ChatGPT, Gemini) for any purpose.
If approved:
- Prohibit employees from submitting any non-public data (including but not limited to any confidential bank and customer/member information) or content into AI prompts or datasets (a minimal, illustrative screening sketch follows this list).
- Thoroughly review AI-generated content to ensure the information is accurate and does not contain inherent bias.
- Verify any facts presented in AI-generated content independently to ensure that the data presented is objectively true.
- Content generated with AI tools must not be considered final work without a thorough human review. Generally, AI-generated content should be considered a starting point rather than a finishing point.
- Warn employees about the risk of malicious actors using AI to impersonate individuals (including employees or customers/members of the institution) to commit fraud, also known as “deepfakes.”
- Reassess your current social engineering (phishing) testing, as AI has greatly improved the grammar and wording of malicious emails; they may not be as easy to spot as they once were.
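As a companion to the policy prohibition on submitting non-public data into AI prompts, some institutions also layer in a simple technical screen before text leaves the organization. The sketch below is a minimal, hypothetical example using pattern matching for a few obvious identifier formats; the patterns and the screen_prompt helper are our own illustrative assumptions, would not catch most NPI on their own, and supplement rather than replace the AUP and employee training.

```python
import re

# Illustrative patterns only -- these catch a few obvious identifier formats,
# not the full range of NPI defined by regulation or institutional policy.
NPI_PATTERNS = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "possible account keyword": re.compile(r"\baccount\s*(?:number|#)", re.IGNORECASE),
}

def screen_prompt(text: str) -> list[str]:
    """Return reasons a prompt should be held for review before it is
    sent to an external generative AI tool."""
    return [label for label, pattern in NPI_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarize the complaint from John, account # 12345678, SSN 123-45-6789."
    issues = screen_prompt(prompt)
    if issues:
        print("Blocked pending review:", ", ".join(issues))
    else:
        print("No obvious identifiers found (this does not guarantee the prompt is NPI-free).")
```

A screen like this is deliberately conservative: it flags likely identifiers for human review rather than attempting to decide on its own what counts as NPI.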
While AI offers numerous advantages and can yield beneficial insights, protecting NPI remains paramount. As we leverage AI’s benefits across the financial industry, maintaining this focus ensures ongoing security and ethical use.