Tag: Exams

16 Jun 2022
E-Banking Booklet

FFIEC Cancels E-Banking Handbook

On May 13, 2022, the FFIEC very quietly rescinded the FFIEC Information Technology Examination Handbook (IT Handbook) booklet entitled E-Banking. The original booklet was released in 2003 and was accompanied by a flurry of activity by financial institutions to come up with a separate E-banking policy and risk assessment. In effect, the FFIEC is now declaring (admitting?) that these are no longer necessary because all the basic risk management principles that apply to E-Banking are already addressed elsewhere: operational risk in the Business Continuity Management Handbook, information security risk in the Information Security Handbook, cyber risk in the Cybersecurity Assessment Tool, and third-party risk in multiple other guidance documents.

We agree with this approach, and have long held that separately addressing each new emerging or evolving technology was cumbersome, duplicative, and unnecessary.  In our opinion, shifting the focus of the handbooks to basic risk management principles and best practices that can apply to all business processes makes more sense and is long overdue. Could the Wholesale and Retail Payment Systems handbooks be phased out next?  How about the Cybersecurity Assessment Tool?  Since cybersecurity is simply a subset of information security more broadly, could we see a phase-out of a separate cyber assessment?  Or even better, could we see the Information Security Handbook include a standardized risks and controls questionnaire that includes cyber?

Admittedly this is only one less policy and one less risk assessment, but we’ll be watching this trend with great interest. Anything that can help ease the burden on overworked compliance folks is a welcome change!

06 Dec 2021
New Proposed Cyber Incident Notification Rules Finalized

UPDATE – New Proposed Cyber Incident Notification Rules Finalized

Last updated March 30, 2022.

Currently, financial institutions are required to report a cyber event to their primary federal regulator under very specific circumstances. This requirement dates back to GLBA, Appendix B to Part 364, which states that FI incident response plans (IRPs) should contain procedures for: “Notifying its primary Federal regulator as soon as possible when the institution becomes aware of an incident involving unauthorized access to or use of sensitive customer information…”. Customer notification guidance is very similar. Institutions should provide notice to their customers as soon as possible: “If the institution determines that misuse of its information about a customer has occurred or is reasonably possible.” (It’s important to note here that a strict interpretation of “…access to or use of…” would generally not include a denial-of-access (DDoS) type of attack, or a ransomware attack that locks files in place. We strongly suggest modifying the definition of “misuse” in your incident response plan to say “…access to, denial of access to, or unauthorized use of…”.) However, with the issuance of the final rule (officially called “Computer-Security Incident Notification Requirements for Banking Organizations and Their Bank Service Providers”), institutions will have additional considerations that will require changes to their policies and procedures.

Background

Late in 2020 the FDIC issued a joint press release with the OCC and the Federal Reserve announcing the proposed changes. As is the case for all new regulations, they were first published in the Federal Register, which started the clock on a 90-day comment period that ended on April 12, 2021. (We took an early look at this back in July.)

The new rule was approved in November 2021 by the OCC, Federal Reserve, and FDIC1 collectively, with an effective date of April 1, 2022, and a compliance date of May 1, 2022. Simply put, it requires “…a banking organization to provide its primary federal regulator with prompt notification of any ‘computer-security incident’ that rises to the level of a ‘notification incident.’”

To fully understand the requirements and new expectations of this rule, there are three terms to understand: a computer-security incident, a notification incident, and “materiality.”

Keys to Understanding the New Rule

A computer-security incident could be anything from a non-malicious hardware or software failure, or the unintentional actions of an employee, to something malicious and possibly criminal in nature. The new rule defines computer-security incidents as those that result in actual or potential harm to the confidentiality, integrity, or availability of an information system or the information the system processes, stores, or transmits.

A notification incident is defined as a significant computer-security incident that has materially disrupted or degraded, or is reasonably likely to materially disrupt or degrade, a banking organization’s:

  1. Ability to carry out banking operations, activities, or processes, or deliver banking products and services to a material portion of its customer base, in the ordinary course of business
  2. Business line(s), including associated operations, services, functions, and support, that upon failure would result in a material loss of revenue, profit, or franchise value; or
  3. Operations, including associated services, functions and support, as applicable, the failure or discontinuance of which would pose a threat to the financial stability of the United States.

The third term that needs to be understood is “materiality.” This term is used 97 times in the full 80-page press release, so it is clearly something the regulators expect you to understand and establish. For example, what is a “material portion of your customer base,” a “material loss of revenue,” or a “material disruption” of your operations? Unfortunately, the regulation does not provide a universal definition of materiality beyond agreeing that it should be evaluated on an enterprise-wide basis. Essentially, each banking organization should evaluate whether the impact is material to its organization as a whole. This would seem to suggest that these materiality thresholds need to be defined ahead of time, perhaps as a function of establishing Board-approved risk appetite levels, or perhaps tied to the business impact analysis. Future clarification may be necessary on the best approach to establishing materiality in your organization, but since the term is the centerpiece of the rule, and the 36-hour notification clock doesn’t start until materiality has been established, we can definitely expect materiality to be part of the discussion in the event of regulator scrutiny in this area.

Any event that meets the criteria of a notification incident requires regulator notification “as soon as possible,” and no later than 36 hours after you’ve determined that a notification incident has occurred. It’s important to understand that the 36-hour clock does not start until the incident has been classified as a notification incident, which can only happen after you’ve determined you’ve experienced a computer-security incident.
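For teams that think in code, the decision sequence can be sketched in a few lines. This is an illustrative sketch only, not regulatory language; the function names and boolean inputs are our own simplifications, and the materiality judgments behind each input still have to be made by your organization:

```python
from datetime import datetime, timedelta

def is_computer_security_incident(harms_cia: bool) -> bool:
    # Rule definition: actual or potential harm to the confidentiality,
    # integrity, or availability of an information system or the
    # information it processes, stores, or transmits.
    return harms_cia

def is_notification_incident(disrupts_material_operations: bool,
                             disrupts_material_business_line: bool,
                             threatens_us_financial_stability: bool) -> bool:
    # A notification incident is a computer-security incident that has
    # materially disrupted or degraded (or is reasonably likely to
    # materially disrupt or degrade) any one of the three categories.
    return (disrupts_material_operations
            or disrupts_material_business_line
            or threatens_us_financial_stability)

def notification_deadline(determined_at: datetime) -> datetime:
    # The 36-hour clock starts when you *determine* that a notification
    # incident has occurred, not at the time of the incident itself.
    return determined_at + timedelta(hours=36)

# Example: a determination made Monday at 9:00 a.m. must be reported
# no later than Tuesday at 9:00 p.m.
deadline = notification_deadline(datetime(2022, 5, 2, 9, 0))
print(deadline)  # 2022-05-03 21:00:00
```

The key design point mirrors the rule itself: the deadline is anchored to the determination timestamp, which is why documenting when that determination was made matters.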

The Safe Systems Compliance Team has created a detailed decision flowchart to assist with your understanding of this new rule.

Notification can be provided to the “…appropriate agency supervisory office, or other designated point of contact, through email, telephone, or other similar method that the agency may prescribe.” No specific information is required in the notification other than that a notification incident has occurred. The final rule also does not prescribe any specific form or template that must be used, and there are no recordkeeping requirements beyond what may be in place if a Suspicious Activity Report (SAR) is filed in connection with the incident. The agencies have all issued additional “point-of-contact” guidance:

For FDIC institutions:

Notification can be made to your case manager (your primary contact for all supervisory-related matters), to any member of an FDIC examination team if the event occurs during an examination, or, if your primary contact is unavailable, you may notify the FDIC by email at: incident@fdic.gov.

For OCC Institutions:

Notification may be done by emailing or calling the OCC supervisory office. Communication may also be made via the BankNet website, or by contacting the BankNet Help Desk via email (BankNet@occ.treas.gov) or phone (800) 641-5925.

For Federal Reserve Institutions:

Notification may be made by communicating with any of the Federal Reserve supervisory contacts or the central point of contact at the Board, either by email to incident@frb.gov or by telephone to (866) 364-0096.

One final note: we’ve received indications that at least some State banking regulators will require concurrent notification of any incident that rises to the level of a notification incident. Check with your State regulators about whether (and how) they plan to coordinate with this new rule.

Third-party Notification Rules

In addition to FI notification changes, there will also be new expectations for third-party service providers, like core providers and significant technology service providers (as defined in the BSCA). Basically, the rule requires a service provider to “…notify at least one bank-designated point of contact at affected banking organization customers immediately after experiencing a computer-security incident that it believes in good faith could disrupt, degrade, or impair services provided subject to the BSCA for four or more hours.”

Furthermore, if you are notified by a third party that an event has occurred, and the event has resulted or is likely to result in your customers being unable to access their accounts (i.e., it rises to the level of a notification incident), you would also be required to report to your regulator. However, it’s important to note that not all third-party notification incidents will also be considered bank regulator notification incidents. It is also significant that the agencies will most likely not cite your organization if a bank service provider fails to comply with its notification requirement, so you will likely not be faulted if a third party fails to notify you.
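The third-party chain of obligations can be summarized the same way. Again, this is a hypothetical sketch; the four-hour threshold comes from the rule, but the function names and inputs are our own simplifications:

```python
def provider_must_notify_bank(believed_outage_hours: float) -> bool:
    # A good-faith belief that covered services could be disrupted,
    # degraded, or impaired for four or more hours triggers the
    # service provider's duty to notify its bank customers.
    return believed_outage_hours >= 4

def bank_must_notify_regulator(provider_notified: bool,
                               customers_unable_to_access_accounts: bool) -> bool:
    # A provider's notice becomes a bank notification incident only when
    # the event has left (or is likely to leave) customers unable to
    # access accounts; not every third-party incident crosses that bar.
    return provider_notified and customers_unable_to_access_accounts

print(provider_must_notify_bank(6))             # True
print(bank_must_notify_regulator(True, False))  # False
```

Note the asymmetry: the provider's trigger is its own good-faith outage estimate, while the bank's trigger is its separate determination about customer impact.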

Next Steps

There will undoubtedly be clarification on the specifics of rule implementation as we digest feedback from regulatory reviews next year, and we’ll keep you posted as we know more. In the meantime, aside from having internal discussions about what constitutes “materiality” in your organization, the new rules will likely also require some modifications to your Incident Response Plan (IRP), and possibly to key vendor contracts. For FDIC institutions, the “as soon as possible” regulator notification provisions of FIL-27-2005 already in your IRP will have to be amended. For all critical vendors, ensure that contracts contain verbiage committing them to the four-hour outage criterion for notification, and that you’ve identified a contact person or persons within your organization to receive the alert.

1 As of this date the NCUA has not signed off on these rules, although they may at some point.

22 Jul 2021
To Notify or Not to Notify

New Proposed Cyber Incident Notification Rules

We first wrote about incident notification over ten years ago, and based on feedback from our cyber testing experience, financial institutions are still struggling with the issue of whether or not to notify their customers and primary regulators. The conversation often comes down to “do we have to notify?” Some institutions may choose to notify out of an abundance of caution, but most won’t unless it’s absolutely required, as regulator notification opens the door to additional examiner scrutiny, and customer notification may result in increased reputation risk. To confuse the issue a bit more, notification requirements are currently defined differently for a regulator than for a customer. And all this is about to change!

Notification Rules Background

Financial institutions are currently required to report an event to their primary federal regulator under very specific circumstances. This requirement dates back to GLBA, Appendix B to Part 364 and states that FI incident response plans (IRPs) should contain procedures for: “Notifying its primary Federal regulator as soon as possible when the institution becomes aware of an incident involving unauthorized access to or use of sensitive customer information…”

Customer notification guidance is very similar. Institutions should provide notice to their customers as soon as possible: “If the institution determines that misuse of its information about a customer has occurred or is reasonably possible.” (It’s important to note here that a strict interpretation of “…access to or use of…” would generally not include a denial of access (DDoS) type of attack or a ransomware attack that locks files in place. We suggest modifying the language of “misuse” to “…access to, denial of access to, or use of…”.)

Announcement of New Proposed Notification Rules

Late last year the FDIC issued a joint press release with the OCC and the Federal Reserve1 announcing the proposed changes. The working title is a mouthful: Computer-Security Incident Notification Requirements for Banking Organizations and Their Bank Service Providers. As is the case for all new regulations, the proposed notification rules were first published in the Federal Register, which started the clock on a 90-day comment period that ended on April 12 of this year. When (or if) the rules will become law will depend on how long it takes regulators to compile, digest, and reconcile the comments received, which can take as long as six months to a year from the end of the comment period.

3 Key Terms of the New Regulator Notification Rule

One of the new rules “…would require a banking organization to provide its primary federal regulator with prompt notification of any computer-security incident that rises to the level of a notification incident.” There are actually three terms we need to understand here: a computer-security incident, a significant computer-security incident, and a notification incident.

A computer-security incident could be anything from a non-malicious hardware or software failure or the unintentional actions of an employee to something malicious and possibly criminal in nature. Computer-security incidents are those that:

  • Result in actual or potential harm to the confidentiality, integrity, or availability of an information system or the information the system processes, stores, or transmits; or
  • Constitute a violation or imminent threat of violation of security policies, security procedures, or acceptable use policies.

In addition to the GLBA NPI guidance, banking organizations are already required to report certain instances of disruptive cyber-events and cyber-crimes through the filing of Suspicious Activity Reports (SARs) within 30 days, but no regulator notification is required unless the GLBA criteria above are met. Even when notification is provided, the concern is that the 30-day window may not be timely enough to prevent other events.

This new rule would define a significant computer-security incident as one that meets any of these criteria:

  1. Could jeopardize the viability of the operations of an individual banking organization
  2. Could result in customers being unable to access their deposit and other accounts
  3. Could impact the stability of the financial sector

The proposed rule refers to these significant computer-security incidents as notification incidents (the two terms are synonymous), so any event that meets the above criteria would require regulator notification “as soon as possible,” and no later than 36 hours after you’ve determined that a notification incident has occurred.

We’ll see what the final rules look like, but at the moment there are no proposed changes to the customer notification requirements.

New Third-Party Expectations

In addition to FI notification changes, there will also be new expectations for third-party service providers, like core providers and significant technology service providers (as defined in the BSCA). Because these vendors “…also are vulnerable to cyber threats, which have the potential to disrupt, degrade, or impair the provision of banking services to their banking organization customers,” the proposal would require a service provider to “…notify at least two individuals at affected banking organization customers immediately after experiencing a computer-security incident that it believes in good faith could disrupt, degrade, or impair services provided subject to the BSCA for four or more hours.” Presumably, if you are notified by a third party that an event has occurred, and the event has resulted or is likely to result in your customers being unable to access their accounts, you would also be required to report to your regulator.

Reviewing the submitted comments, there are still many questions to be answered and terms to be clarified, but with cybersecurity dominating the news recently we can definitely count on regulatory changes to the “do we have to notify?” discussion coming fairly soon.

1 As of this date the NCUA has not signed off on these proposed rules changes, although they may at some point.

20 Oct 2020

Compliance Quick Bites – Tests vs. Exercises, and the Resiliency Factor

One of several changes implemented in the 2019 FFIEC BCM Examination Handbook is a subtle but important differentiation between a BCMP “test” and an “exercise”. I discussed some of the more material changes here, but we’re starting to see examiner scrutiny into not just if, but exactly what and how you’re testing.

According to the Handbook:

  • “An exercise is a task or activity involving people and processes that is designed to validate one or more aspects of the BCP or related procedures.”
  • “A test is a type of exercise intended to verify the quality, performance, or reliability of system resilience in an operational environment.”

Essentially, “…the distinction between the two is that exercises address people, processes, and systems whereas tests address specific aspects of a system.” Simply put, think of an exercise as a scenario-based simulation of your written process recovery procedures (a table-top exercise, for example), and a test as validation of the interdependencies of those processes, such as data restoration or circuit fail-over.

The new guidance makes it clear that you must have a comprehensive program that includes both exercises and tests, and that the primary objective should be to validate the effectiveness of your entire business continuity program. In the past, most FIs have conducted an annual table-top or structured walk-through test, and that was enough to validate their plan. It now seems that this new differentiation requires multiple methods of validation of your recovery capabilities. Given the close integration between the various internal and external interdependencies of your recovery procedures, this makes perfect sense.

An additional consideration in preparing for future testing is the increased focus on resiliency, defined as any proactive measures you’ve already implemented to mitigate disruptive events and enhance your recovery capabilities. The term “resiliency” is used 126 times in the new Handbook, and you can bet that examiners will be looking for you to validate your ability to withstand as well as recover in your testing exercises. Resilience measures can include fire suppression, auxiliary power, server virtualization and replication, hot-site facilities, alternate providers, succession planning, etc.

One way of incorporating resilience capabilities into future testing is to evaluate the impact of a disruptive event after consideration of your internal and external process interdependencies and accounting for any existing resilience measures. For example, let’s say your lending operations require 3 external providers and 6 internal assets, including IT infrastructure, scanned documents, paper documents, and key employees. List any resilience capabilities you already have in place, such as recovery testing results from your third-parties, data replication and restoration, and cross-training for key employees, then evaluate what the true impact of the disruptive event would be in that context.
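The lending example above can be kept as a simple worksheet. The dependency names and mitigation flags below are hypothetical; the point is only to surface which interdependencies still lack a resilience measure before you estimate the true impact of a disruption:

```python
# Hypothetical resilience worksheet for one business process (lending).
# Each dependency maps to True if an existing resilience measure covers it.
lending_dependencies = {
    "core provider":         True,   # vendor recovery test results on file
    "document imaging":      True,   # data replicated and restore-tested
    "loan origination SaaS": False,  # no alternate provider identified
    "IT infrastructure":     True,   # virtualized servers with replication
    "paper documents":       False,  # no off-site copies
    "key employees":         True,   # cross-training in place
}

# The dependencies without a resilience measure are where the residual
# impact of a disruptive event would actually land.
unmitigated = [dep for dep, covered in lending_dependencies.items() if not covered]
print(f"{len(unmitigated)} of {len(lending_dependencies)} dependencies "
      f"lack resilience measures: {unmitigated}")
```

Evaluating impact against the unmitigated items only (here, the loan origination system and paper documents) keeps the assessment grounded in what your existing resilience measures already cover.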

In summary, conducting both testing and exercises gives all stakeholders a high level of assurance that you’ve thoroughly identified and evaluated all internal and external process interdependencies, built resilience into each component, and can successfully restore critical business functions within recovery time objectives.