Tag: cybersecurity

06 Dec 2021
New Proposed Cyber Incident Notification Rules Finalized

UPDATE – New Proposed Cyber Incident Notification Rules Finalized

Last updated March 30, 2022.

Currently, financial institutions are required to report a cyber event to their primary federal regulator only under very specific circumstances. This requirement dates back to GLBA (Appendix B to Part 364), which states that FI incident response plans (IRPs) should contain procedures for: “Notifying its primary Federal regulator as soon as possible when the institution becomes aware of an incident involving unauthorized access to or use of sensitive customer information…”. Customer notification guidance is very similar: institutions should provide notice to their customers as soon as possible “If the institution determines that misuse of its information about a customer has occurred or is reasonably possible.” (It’s important to note here that a strict interpretation of “…access to or use of…” would generally not include a denial-of-access attack such as a DDoS, or a ransomware attack that locks files in place. We strongly suggest modifying the definition of “misuse” in your incident response plan to say “…access to, denial of access to, or unauthorized use of…”.)

However, with the issuance of the final rule (officially called “Computer-Security Incident Notification Requirements for Banking Organizations and Their Bank Service Providers”), institutions will have additional considerations that will require changes to your policies and procedures.

Background

Late in 2020, the FDIC issued a joint press release with the OCC and the Federal Reserve announcing the proposed changes. As is the case for all new regulations, they were first published in the Federal Register, which started the clock on a 90-day comment period that ended on April 12, 2021. (We took an early look at this back in July.)

The new rule was approved in November 2021 by the OCC, Federal Reserve, and FDIC1 collectively, with an effective date of April 1, 2022, and a compliance date of May 1, 2022. Simply put, it requires “…a banking organization to provide its primary federal regulator with prompt notification of any ‘computer-security incident’ that rises to the level of a ‘notification incident.’”

To fully grasp the requirements and new expectations of this rule, there are three terms we need to understand: a computer-security incident, a notification incident, and “materiality”.

Keys to Understanding the New Rule

A computer-security incident could be anything from a non-malicious hardware or software failure or the unintentional actions of an employee, to something malicious and possibly criminal in nature. The new rule defines computer-security incidents as those that result in actual or potential harm to the confidentiality, integrity, or availability of an information system or the information the system processes, stores, or transmits.

A notification incident is defined as a significant computer-security incident that has materially disrupted or degraded, or is reasonably likely to materially disrupt or degrade, a banking organization’s:

  1. Ability to carry out banking operations, activities, or processes, or deliver banking products and services to a material portion of its customer base, in the ordinary course of business;
  2. Business line(s), including associated operations, services, functions, and support, that upon failure would result in a material loss of revenue, profit, or franchise value; or
  3. Operations, including associated services, functions and support, as applicable, the failure or discontinuance of which would pose a threat to the financial stability of the United States.

The third term that needs to be understood is “materiality”. The term is used 97 times in the full 80-page release, so it is clearly something the regulators expect you to understand and establish: for example, what is a “material portion of your customer base”, a “material loss of revenue”, or a “material disruption” of your operations? Unfortunately, the regulation does not provide a universal definition of materiality beyond agreeing that it should be evaluated on an enterprise-wide basis; essentially, each banking organization should evaluate whether the impact is material to the organization as a whole. This suggests that material threshold levels need to be defined ahead of time, perhaps as a function of establishing Board-approved risk appetite levels, or perhaps tied to the business impact analysis. Future clarification may be necessary on the best approach to determining materiality in your organization, but since the term is the centerpiece of the rule, and the 36-hour notification clock doesn’t start until materiality has been established, we can definitely expect materiality to be part of the discussion in the event of regulator scrutiny in this area.

Any event that meets the criteria of a notification incident requires regulator notification “as soon as possible”, and no later than 36 hours after you’ve determined that a notification incident has occurred. It’s important to understand that the 36-hour clock does not start until the incident has been classified as a notification incident, which can only happen after you’ve determined you’ve experienced a computer-security incident.
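To illustrate the sequencing, here is a minimal Python sketch of the decision flow described above. It is a sketch only: the function names, parameters, and structure are our own illustrative choices, and the rule does not prescribe any particular tooling or implementation.

```python
from datetime import datetime, timedelta

# Illustrative sketch only; the logic paraphrases the rule's definitions,
# and the function and parameter names are our own.

NOTIFICATION_WINDOW = timedelta(hours=36)

def is_computer_security_incident(harms_confidentiality: bool,
                                  harms_integrity: bool,
                                  harms_availability: bool) -> bool:
    """Actual or potential harm to the confidentiality, integrity, or
    availability of an information system or the information it processes,
    stores, or transmits."""
    return harms_confidentiality or harms_integrity or harms_availability

def is_notification_incident(materially_disrupts_operations: bool,
                             material_loss_business_line: bool,
                             threatens_us_financial_stability: bool) -> bool:
    """A computer-security incident that has materially disrupted or degraded
    (or is reasonably likely to materially disrupt or degrade) any of the
    three criteria listed above."""
    return (materially_disrupts_operations
            or material_loss_business_line
            or threatens_us_financial_stability)

def regulator_notification_deadline(determination_time: datetime) -> datetime:
    """The 36-hour clock starts when the institution DETERMINES that a
    notification incident has occurred, not when the underlying incident
    actually happened."""
    return determination_time + NOTIFICATION_WINDOW

# Example: incident classified as a notification incident at 9:00 a.m. on May 2, 2022.
deadline = regulator_notification_deadline(datetime(2022, 5, 2, 9, 0))
print(deadline)  # 2022-05-03 21:00:00; notify "as soon as possible", no later than this
```

The key point the sketch captures is that the deadline is keyed off the determination time, not the time of the underlying event.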

The Safe Systems Compliance Team has created a detailed decisioning flowchart to assist with your understanding of this new rule. Click here for a copy of the flowchart.

Notification can be provided to the “…appropriate agency supervisory office, or other designated point of contact, through email, telephone, or other similar method that the agency may prescribe.” No specific information is required in the notification other than that a notification incident has occurred. The final rule also does not prescribe any specific form or template that must be used, and there are no recordkeeping requirements beyond what may be in place if a Suspicious Activity Report (SAR) is filed in connection with the incident. The agencies have all issued additional “point-of-contact” guidance:

For FDIC institutions:

Notification can be made to your case manager (your primary contact for all supervisory-related matters), to any member of an FDIC examination team if the event occurs during an examination, or, if your primary contact is unavailable, by email to the FDIC at incident@fdic.gov.

For OCC Institutions:

Notification may be made by emailing or calling the OCC supervisory office. Communication may also be made via the BankNet website, or by contacting the BankNet Help Desk via email (BankNet@occ.treas.gov) or phone (800) 641-5925.

For Federal Reserve Institutions:

Notification may be made by communicating with any of the Federal Reserve supervisory contacts or with the central point of contact at the Board, either by email to incident@frb.gov or by telephone to (866) 364-0096.

One final note: we’ve received indications that at least some state banking regulators will require concurrent notification of any incident that rises to the level of a notification incident. Check with your state regulators on whether (and how) they plan to coordinate with this new rule.

Third-party Notification Rules

In addition to FI notification changes, there will also be new expectations for third-party service providers, such as core providers and significant technology service providers (as defined in the Bank Service Company Act, or BSCA). Basically, the rule requires a service provider to “…notify at least one bank-designated point of contact at affected banking organization customers immediately after experiencing a computer-security incident that it believes in good faith could disrupt, degrade, or impair services provided subject to the BSCA for four or more hours.”

Furthermore, if you are notified by a third party that an event has occurred, and the event has resulted or is likely to result in your customers being unable to access their accounts (i.e., it rises to the level of a notification incident), you would also be required to report to your regulator. However, it’s important to note that not every third-party notification incident will also be a bank regulator notification incident. It is also significant that the agencies will most likely not cite your organization if a bank service provider fails to comply with its notification requirement, so you will likely not be faulted if a third party fails to notify you.

Next Steps

There will undoubtedly be clarification on the specifics of rule implementation as we digest feedback from regulatory reviews next year, and we’ll keep you posted as we know more. In the meantime, aside from having internal discussions about what constitutes “materiality” in your organization, the new rules will likely also require modifications to your Incident Response Plan (IRP), and possibly to key vendor contracts. For FDIC institutions, the “as soon as possible” regulator notification provisions of FIL-27-2005 already in your IRP will have to be amended. For all critical vendors, ensure that contracts contain verbiage committing them to the four-hour outage criterion for notification, and that you’ve identified a contact person or persons within your organization to receive the alert.

1 As of this date the NCUA has not signed off on these rules, although it may at some point.
14 Jan 2021
Looking Ahead to 2021

A Look Back at 2020 and a Look Ahead to 2021: A Regulatory Compliance Update

From SafeSystems.com/Safe-Systems-Blog

Safe Systems recently published a two-part regulatory compliance blog series that looked back at 2020 and ahead to 2021. In Part 1, we explored how regulations related to the pandemic dominated the compliance landscape early in 2020, forcing financial institutions to adjust their procedures and practices on the fly. In Part 2, we summarized the regulatory focus on cybersecurity (particularly ransomware) and looked ahead to 2021.

30 Sep 2020
Ask the Guru – Can We Apply Similar Controls to Satisfy Both GLBA and GDPR

Can We Apply Similar Controls to Satisfy Both GLBA and GDPR?

Hey Guru!

Are the Gramm–Leach–Bliley Act (GLBA) and the General Data Protection Regulation (GDPR) similar enough to apply the same or an equivalent set of layered controls? My understanding is that GDPR places a higher premium on the protection of a narrower definition of data. So, my question is really whether FFIEC requirements for the protection of data extend equally to both confidential PII and the narrower data type called out by GDPR.


Hi Steve, and thanks for the question! Comparing the Gramm–Leach–Bliley Act (GLBA) and the General Data Protection Regulation (GDPR) is instructive, as they both try to address the same challenge: privacy and security, specifically protecting information shared between a customer and a service provider. GLBA is specific to financial institutions, while GDPR defines a “data processor” as any third party that processes personal data. However, both have a very similar definition of the protected data: GDPR uses the term “personal data” to mean any information that relates to an individual who can be directly or indirectly identified, and GLBA uses the term non-public personal information (NPI) to describe the same type of data.

To answer the question of whether the two are similar enough to apply the same or a similar set of layered controls: since layered controls are a risk-mitigation best practice, my short answer is that they apply equally to both.

Here’s a bit more. The most important distinction between GLBA and GDPR is that GLBA Section 501 has two parts: 501(a) and 501(b). The former establishes the right to privacy and the obligation of financial institutions to protect the security and confidentiality of customer NPI. Section 501(b) empowers the regulators to require FIs to establish safeguards to protect against any threats to NPI. Simply put, 501(a) is the “what”, and 501(b) is the “how”. Of course, the “how” has given us the 12 FFIEC IT Examination Handbook booklets, cybersecurity regulations, pen tests, the IT audit, and lots of other stuff with no end in sight.

By contrast, GDPR is more focused on “what” (what a third-party can and can’t do with customer data, as well what the customer can control; i.e. right to have their data deleted, etc.) and much less on the “how” it is supposed to be done.

My understanding is that the scope of GLBA (and all the information security standards based thereon) is strictly limited to customer NPI; it does not extend to confidential data or PII generally. One distinguishing factor between NPI and PII is that in US regulations NPI always refers to the “customer”, and PII always refers to the “consumer”. (Frankly, there isn’t really any difference between data obtained from a customer or a consumer by a financial institution in the process of either pursuing or maintaining a business relationship.) We have always taken the position that, for the purposes of data classification, NPI and confidential (PII) data share the same level of sensitivity, but guidance is only concerned with customer NPI. GDPR does not make that distinction.

In my opinion, our federal regulations will move towards merging NPI and PII, and in fact some states are already there. So, although it’s not strictly a requirement to protect anything other than NPI, it’s certainly a best practice, and combining both NPI and PII / confidential data in the same data sensitivity classification will do that.

One last thought about enforcement: so far we have not heard of US regulators checking US-based FIs for GDPR compliance, but that may simply reflect the fact that our community-based financial institutions have very little EU exposure; your experience may be different.

01 May 2018

FFIEC Issues Joint Statement on Cyber Insurance

The statement is here, and is intended to provide additional awareness about the possible use of cyber insurance to off-set financial losses resulting from cyber incidents. Here are a few high-level observations:

  • First of all, we’ve seen several announcements from various organizations stating that “the FFIEC has released new guidance…”. The statement makes it clear in the second sentence that “This statement does not contain any new regulatory expectations.” The statement goes on to reference the existing Information Technology (IT) Examination Handbook booklets for specific regulatory expectations. Again, this statement does not change existing regulatory expectations.
  • Second, this is a joint statement from all members, so we don’t expect any of the individual regulatory bodies to issue separate guidance. This is good, as we will not have to deal with any interpretation deviations. In fact, the FDIC just issued FIL-16-2018, which just links directly to the FFIEC page.
  • Third, the statement makes the same point we’ve already learned from the Incident Response Tests we facilitate with our customers: cyber insurance coverage is all over the map right now (or as the statement points out, “Many aspects of the cyber insurance marketplace…continue to evolve”). In other words, “Buyer Beware”*.

So how does this statement change your current approach to managing cyber risk? Probably not much. The 2015 FFIEC Management Handbook already provides guidance on the general use of insurance policies as part of your risk mitigation strategy. Regarding cyber, it states that “These policies generally exclude, or may not include, liability for all areas of IT operations and cybersecurity.” Again, that has been our experience as we’ve conducted cyber incident response testing for FIs, and you can try this for yourself next time you test. Whatever scenario you simulate, whether it’s malware, customer account takeover, or a third-party breach, bring cyber insurance into the discussion. If you have (or think you have) cyber coverage, check with your agent to see if it would cover the estimated costs of the incident you’re simulating. If you don’t currently have coverage, this is a good opportunity to decide whether it’s justified by evaluating costs vs. coverage limitations and exclusions using a real-life scenario.

In summary, if you already have cyber insurance coverage, the statement really doesn’t change anything. Just make sure it will be there for you if and when you need it. If you don’t currently have cyber insurance, the statement makes it clear that it’s not a requirement, but you should make sure any future consideration utilizes the framework they provide for weighing the benefits and costs.

One final thought…risk management is all about reducing risk to acceptable levels, and insurance should be the last control considered. As the Management Handbook states, “Insurance complements, but does not replace, an effective system of controls.” In our opinion, it’s a last resort, and utilized only if avoidance and mitigation efforts aren’t sufficient.

*UPDATE – Warren Buffett of Berkshire Hathaway Inc. recently confirmed this, stating “I don’t think we or anybody else really knows what they’re doing when writing cyber insurance. We don’t want to be a pioneer on this… Anyone who claims to know the base case or worst case for losses is kidding themselves”.

13 Jun 2017
Banker looking over the CAT

FFIEC Cybersecurity Assessment Tool Update

The FFIEC recently released a long-awaited update to the Cybersecurity Assessment Tool (CAT), and we think it is, overall, a relatively minor but useful evolution. But before we get into the details of what the update does address, it’s important to note that it did not address the ambiguity issues that plague the current assessment. One example: in the Inherent Risk section, there is a plethora of semicolons. Are they supposed to be interpreted as “or” or as “and”? Take the question about personal devices being allowed to connect to the corporate network (the 4th question in the Technologies and Connection Types category).

The minimal risk level states the following:

“Only one device type available; available to <5% of employees (staff, executives, managers); e-mail access only.”

If the semicolons are interpreted as “or,” the statement reads like this:

“Only one device type available OR available to <5% of employees (staff, executives, managers) OR e-mail access only”.

This is considerably different than:

“Only one device type available AND available to <5% of employees (staff, executives, managers) AND e-mail access only”.

Unfortunately, the update did not offer any clarification on this, and as a result we are left to guess at the regulators’ intentions. Our approach has been to risk-rank each question segment individually. So in the example above, what is the greater risk: the number of device types, the number of employees using them, or what they are allowed to access? We rank the risk of what employees are allowed to access highest, followed by the number of employees accessing, followed by the number of device types. And this is just one example; 18 of the 39 inherent risk questions pose this type of interpretive challenge, and correct interpretation is absolutely critical, because your gap analysis and subsequent cyber action plan depend on an accurate inherent risk assessment.
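To make the gap concrete, here is a small Python sketch showing how differently the same (hypothetical) institution scores under the two readings. The variable values are made up purely for illustration.

```python
# Hypothetical institution, for illustration only.
device_types_available = 1          # only one device type is permitted
pct_employees_with_access = 0.12    # 12% of staff can connect personal devices
email_access_only = True            # personal devices can reach e-mail only

one_device_type = (device_types_available == 1)
under_5_percent = (pct_employees_with_access < 0.05)
email_only = email_access_only

# "OR" reading: meeting any single condition lands you in the Minimal level.
minimal_under_or_reading = one_device_type or under_5_percent or email_only     # True

# "AND" reading: all three conditions must hold simultaneously.
minimal_under_and_reading = one_device_type and under_5_percent and email_only  # False

print(minimal_under_or_reading, minimal_under_and_reading)  # True False
```

The same set of facts qualifies as Minimal under one reading and not under the other, which is exactly why the interpretation matters for your inherent risk profile.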

Appendix A

However, the FFIEC CAT update does impact two areas: the first is a more detailed cross-reference in Appendix A mapping the baseline statements to the two recently released IT Handbooks (Management and Information Security), and the second gives most FIs more flexibility when evaluating declarative statements.

First, the changes to Appendix A. Compare the original Risk Management/Audit section…

Risk Management/Risk Assessment: The risk assessment is updated to address new technologies, products, services, and connections before deployment.

Source: IS.B.13: Risk assessments should be updated as new information affecting information security risks is identified (e.g., a new threat, vulnerability, adverse test result, hardware change, software change, or configuration change). IS.WP.I.3.3: Determine the adequacy of the risk assessment process.
* Information Security, E-Banking, Management, Wholesale Payments

…with the updated section:

Risk Management/Risk Assessment: The risk assessment is updated to address new technologies, products, services, and connections before deployment.

Source: IS.II.A: pg7: External events affecting IT and the institution’s ability to meet its operating objectives include natural disasters, cyber attacks, changes in market conditions, new competitors, new technologies, litigation, and new laws or regulations. These events pose risks and opportunities, and the institution should factor them into the risk identification process.

IS.II.C: pg11: Additionally, management should develop, maintain, and update a repository of cybersecurity threat and vulnerability information that may be used in conducting risk assessments and provide updates to senior management and the board on cyber risk trends.

IS.WP.8.3.d: Determine whether management has effective threat identification and assessment processes, including the following: Using threat knowledge to drive risk assessment and response.

This more detailed and expanded set of cross-references will be useful for both institutions and consultants as they navigate their way through this interpretive minefield.

However, this could be the most significant change:

“The updated Assessment will also provide additional response options, allowing financial institution management to include supplementary or complementary behaviors, practices and processes that represent current practices of the institution in supporting its cybersecurity activity assessment.” (Emphasis added)

It took us a while to find how this one was implemented, because we were looking for a whole new section, but all the FFIEC has done is add a third option for your responses to the declarative statements in the Control Maturity section. Prior to this update, you could only answer either “Y” or “N”. Now there is a third option: “Y(C)”, or Yes with Compensating Controls:

[Image: CAT Yes/No Controls]

The FFIEC defines a Compensating Control as:

“A management, operational, and/or technical control (e.g., safeguard or countermeasure) employed by an organization in lieu of a recommended security control in the low, moderate, or high baselines that provides equivalent or comparable protection for an information system.”

Essentially, this means institutions will now be able to document adherence to a declarative statement using either direct (primary) controls or alternative compensating controls, IF they are able to properly identify them. Because these controls are “in lieu of” recommended controls, they are necessarily more difficult to identify and document than a primary control.

That said, having a way for institutions to document their adherence to a particular declarative statement using either direct or compensating controls is a significant improvement, and it should ultimately result in more declarative statements being marked as achieved. Be careful, though: although we haven’t seen any IT exams since the update, a “Y(C)” response may very well prompt additional regulatory scrutiny precisely because it requires more documentation.
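One way to keep that extra documentation burden attached to a “Y(C)” answer is to record it alongside the response itself. The sketch below is purely illustrative (the statement ID and control names are made up); it simply shows the three response options and the additional fields a compensating-control answer would need.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Response(Enum):
    YES = "Y"                   # met with a direct (primary) control
    YES_COMPENSATING = "Y(C)"   # met with a compensating control
    NO = "N"                    # not met

@dataclass
class DeclarativeStatementResponse:
    statement_id: str
    response: Response
    # A "Y(C)" answer should document the compensating control and explain how
    # it provides equivalent or comparable protection to the recommended control.
    compensating_control: Optional[str] = None
    equivalence_rationale: Optional[str] = None

example = DeclarativeStatementResponse(
    statement_id="EXAMPLE-1",  # placeholder, not an actual CAT statement ID
    response=Response.YES_COMPENSATING,
    compensating_control="Network segmentation plus enhanced log monitoring",
    equivalence_rationale="Provides comparable protection to the recommended control",
)
```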

Safe Systems has assisted almost 100 customers through the CAT so far, helping them document their responses, produce stakeholder reports, and craft action plans. Let us know if we can help you.

21 Mar 2017
Late Night Exam Questions

Ask the Guru: How Can I Best Determine My Cyber Risk Profile?

Hey Guru!

We just completed the Cybersecurity Assessment, so now we have our current risk and control maturity levels identified.  Can we draw any conclusions about our average risk and control levels?  For example, most of our risks are in the Least and Minimal areas, but we do have a few Moderate as well.  Can we just average them and conclude that our overall cyber risk levels are minimal?


Towards the end of last year the FFIEC released a Frequently Asked Questions document about the Cybersecurity Assessment Tool, and item #6 directly addressed your question.  The Council stated that “…when a majority of activities, products, or services fall within the Moderate Risk Level, management may determine that the institution has a Moderate Inherent Risk Profile.”

This would seem to validate the approach of using the average1 of all risk levels to identify your overall risk level.  However, they go on to state that each risk category may pose a different level of risk. “Therefore, in addition to evaluating the number of times an institution selects a specific risk level, management may also consider evaluating whether the specific category poses additional risk that should be factored into the overall assessment of inherent risk.”  This would appear to directly contradict the averaging approach, indicating (correctly, in my opinion) that since all risks are NOT equal, you should NOT determine overall risk based on an average.

For example, let’s say that all of your risks in the Technologies and Connection Types category are in the Least and Minimal level except for Unsecured External Connections, which is at the Moderate level.  So you have 13 items no higher than minimal, and 1 item moderate.  Sounds like an overall minimal level of risk, right?  Except a Moderate level of risk for Unsecured External Connections indicates that you have several (6-10) unsecured connections.  As any IT auditor will tell you, even 1 unsecured connection can be a serious vulnerability!

So although the FFIEC says that “…you may determine…” you’re at one level if the majority of your responses fall within that level, they go on to say you shouldn’t really draw that conclusion without additional evaluation.

This is just one of many examples of confusing, conflicting, and occasionally misleading elements in the CAT, and a very good reason to have assistance filling it out (shameless plug).

 

1 There are three primary ways of defining “average”: mean, mode, and median. If you’ve assigned 1–5 numeric values to the risk levels, we can define average as “mean”. If we assume average means “mode”, it’s simply the value that occurs most often, which appears to be the way the FFIEC is approaching it. Regardless of how you define “average”, it leads to the same (inaccurate) conclusion.
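As a quick worked example, take the 13-plus-1 scenario from the answer above and assign illustrative numeric values of our own choosing (1 = Least through 5 = Most). Both the mean and the mode point to the same misleading answer:

```python
from statistics import mean, mode

RISK_LEVELS = {1: "Least", 2: "Minimal", 3: "Moderate", 4: "Significant", 5: "Most"}

# Hypothetical Technologies and Connection Types responses: 13 items at
# Least/Minimal, plus Unsecured External Connections at Moderate.
responses = [1, 1, 1, 1, 2, 2, 1, 2, 1, 1, 2, 1, 2, 3]

print(RISK_LEVELS[round(mean(responses))])  # "Minimal" (mean = 1.5, rounds to 2)
print(RISK_LEVELS[mode(responses)])         # "Least"   (1 is the most frequent value)

# Either way, the single Moderate item (the unsecured external connections)
# disappears into the "average", which is exactly the risk an auditor would flag.
```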