Category: Ask the Guru

17 Jul 2019

Ask the Guru: Is it Legal to Share Exam Findings?

Hey Guru!

We contracted with Safe Systems to help remediate exam findings, but we were told by the examiner that we are not allowed to share examination findings “under penalty of law”. How do we share this critical information with you without getting into legal trouble?


Thanks for the question. Here is where this issue comes from: the front cover of every examination report contains the following verbiage:

“The report is the property of the FDIC, and is furnished to the bank examined for their confidential use. Under no circumstances shall the registrant, or any of its directors, officers, or employees disclose or make public in any manner the report or any portion thereof.”

It goes on to say that doing so would violate Part 309 of the FDIC Rules and Regulations.

FDIC 12 CFR Part 309 is titled “Disclosure of Information”, which governs information the FDIC maintains on all financial institutions (including examination reports), and the procedures for obtaining access to such information. Subsection 309.6 (a) states:

“…no person shall disclose or permit the disclosure of any exempt records, or information contained therein, to any persons other than those officers, directors, employees, or agents of the Corporation who have a need for such records in the performance of their official duties.” (Emphasis added)

I have always held the opinion that if we are contracted to assist in the remediation of examination findings, we are considered an “agent” (acting on behalf of the institution) and require the examination report, or the information contained therein, in order to perform our “official duties”. Of course, as their agent, we are then bound by Part 309 and restricted from any further sharing of the information.

One additional thought… It’s important to see examination findings in the context of the entire report, as opposed to having them simply restated or copied and pasted. There are several reasons for this. Primarily, we can often derive additional meaning from the broader context, allowing us to “connect the dots” between separate findings. Sometimes we can also gain additional clarity by reading “between the lines” of the report. For example, we recently assisted a customer with a finding to “Improve the Pandemic Plan within the BCP Plan”.

The examiners went on to state that “Management should establish a clear action plan…for Pandemic.” Taken out of context, this would seem to indicate they wanted additional general recovery procedures in case of a pandemic. But they also mentioned “key personnel” and “employee training”, so taken in the broader context, what they were really looking for was a succession plan. Because the finding never specifically mentioned a succession plan, we might have gone in a different direction had we not seen the entire report.

Hope this gives you a little insight into this Part 309 issue. Feel free to reach out any time with other compliance questions!

03 Jul 2019

Ask the Guru: Addressing BCP and Incident Response in Vendor Contracts

Hey Guru!

I’m looking at an FIL that came out recently (FIL-19-2019), and trying to figure out how to react to it. In your opinion, how do we “ensure that business continuity and incident response risks are adequately addressed” in our contracts? We do get copies of their BCP/IRP plans and their insurance, and we try to make sure things like IRTs in their documents match ours. Is there anything additional that you guys are suggesting we should do?


Based on the FIL, to successfully ensure that contracts with significant third-party providers* properly address business continuity and incident response, Financial Institutions should act to eliminate gaps with their key providers.

Here are the contractual specifics that examiners have identified as potential gaps in recent examinations:

  1. Some contracts do not require the service provider to maintain a business continuity plan, establish recovery standards, or define contractual remedies if the technology service provider misses a recovery time objective.
  2. Other contracts do not sufficiently detail the technology service provider’s security incident responsibilities, such as notifying the financial institution, regulators, or law enforcement in the event of a security or cybersecurity incident.
  3. Additionally, some contracts do not clearly define key terms used in contractual documentation relating to business continuity and incident response, such as what constitutes a “security event” or a “service interruption”.

The FIL goes on to state that:

“When contracts leave gaps in business continuity and incident response, it is prudent for the financial institution to assess any resultant risks and implement compensating controls to mitigate them. For example, a financial institution may obtain supplementary business continuity documentation from the service provider or modify the financial institution’s own business continuity plan to address contractual uncertainties.”

The FIL concludes by reminding FIs that under Section 3 of the Bank Service Company Act (BSCA), FIs have a responsibility to report all contracts and relationships with certain service providers. The FI is responsible for notifying its regulatory agency of a relationship with a new vendor within 30 days after the service contract is made or the service is performed, whichever occurs first. The actual reporting form is here. I provide more information on this (and have even quoted Don Saxinger, who is still with the FDIC and listed as the agency contact on the FIL!) in a previous blog post.

In summary, I think examiners expect you to more closely scrutinize your critical vendor contracts, looking for gaps that might indicate unmitigated risks. One way we address this for our customers is through testing. When we conduct testing, whether it’s a traditional disaster or a cyber incident scenario, we incorporate discussion of the actual vendor contract specifics. For example: what does the contract say about the vendor meeting their recovery time objectives, and are their RTOs within ours? What does the contract say about incident notification if the vendor has a cyber incident involving our data? How do they define a “recovery incident” or a “security incident”, and how does that compare to our definition? These details matter because your recovery procedures depend on what your provider is, and is not, legally obligated to do…and all of that should be spelled out in the contract!
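To make the RTO comparison concrete, here is a minimal sketch of the kind of gap check described above. Everything in it is hypothetical: the function name, the business functions, and the hour values are invented for illustration, not taken from any real contract.

```python
# Hypothetical sketch: flag contract gaps by comparing the vendor's
# contractual RTOs against the institution's own recovery time
# objectives (all values below are in hours and invented).

institution_rto = {"core_processing": 4, "online_banking": 8, "email": 24}

# The contract is silent on an email RTO entirely.
vendor_contract_rto = {"core_processing": 8, "online_banking": 8}

def find_rto_gaps(ours, theirs):
    """Return business functions where the vendor's contractual RTO
    exceeds ours, or where the contract specifies no RTO at all."""
    gaps = []
    for function, our_rto in ours.items():
        vendor_rto = theirs.get(function)
        if vendor_rto is None:
            gaps.append((function, "no RTO specified in contract"))
        elif vendor_rto > our_rto:
            gaps.append((function, f"vendor RTO {vendor_rto}h exceeds our {our_rto}h"))
    return gaps

for function, issue in find_rto_gaps(institution_rto, vendor_contract_rto):
    print(f"{function}: {issue}")
```

Running this during a tabletop test surfaces exactly the two kinds of gaps the FIL describes: a vendor RTO that falls outside yours, and a contract that never commits to an RTO at all.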


*According to regulators: “A third-party relationship should be considered significant if the institution’s relationship with the third party is a new relationship or involves implementing new bank activities; the relationship has a material effect on the institution’s revenues or expenses; the third party performs critical functions; the third party stores, accesses, transmits, or performs transactions on sensitive customer information; the third party markets bank products or services; the third party provides a product or performs a service involving subprime lending or card payment transactions; or the third party poses risks that could significantly affect earnings or capital.”

28 Feb 2019

Ask the Guru: Do We Need to Perform a Review on a New Vendor in a Foreign Country?

Hey Guru!

Our institution works with a third-party that has recently engaged a company in a foreign country to assist with our institution’s IT matters. Do we need to perform a review on this new foreign third-party?


When evaluating this situation, the first step is to understand the parties involved:

  1. Your Financial Institution
  2. Your current provider (your institution’s third-party)
  3. The foreign company your provider outsources to (fourth-party to your institution)

Typically, your institution would manage your third-parties through your vendor management program, and your third-party is responsible for managing their own providers. This works well when the third-party has a SOC 2 performed under the SSAE 18 standard. There is a section in the SOC 2 called Complementary Subservice Organization Controls (CSOC), which describes how the provider manages their providers. If the third-party’s SOC 2 follows the SSAE 18 standard and addresses their provider in the CSOC section, your institution should have the necessary assurance that your current provider is effectively managing their third-parties.

However, without this assurance, your institution is on its own to determine what risks are presented by the fourth-party, and how best to address them. When performing the risk assessment process, your institution should ask yourselves: does the foreign fourth-party have any (even incidental) access to our customer or confidential information? In other words, is any of our customer or confidential information transmitted, stored, or processed outside the U.S.?

At this point, foreign providers present all the same risks as any other outsourced relationship, PLUS a whole additional layer of risks. The FFIEC states:


“…this practice raises country, compliance, contractual, reputation, operational (e.g., transactional), and strategic issues in addition to those presented by use of a domestic service provider. In managing these issues, management should conduct appropriate risk assessments and due diligence procedures and closely evaluate all contracts. Additionally, management should establish ongoing monitoring and oversight procedures.” (emphasis added)

So in addition to the risks you already consider for your other outsourced relationships, foreign providers may also include issues such as choice-of-law and jurisdictional considerations, as these parties may not fall under the jurisdiction of domestic laws and regulations. This could present regulatory problems complying with consumer protection, privacy (Section 501(b) of GLBA), and information security laws. They may also have other contractual concerns such as data-breach notification issues, if the third-party contract stipulates a procedure the fourth-party can’t (or won’t) comply with. Finally, there’s also this.

In short, third-party relationship management is challenging, and managing fourth-parties is even more so. Add a foreign provider (third or fourth) into the mix and the challenge goes way up. I would strongly recommend your institution try to obtain assurances (via the CSOC section of the SOC 2) that your third-party provider is adequately managing their relationships, but even with that (and certainly without it) you may want to establish increased ongoing monitoring of this relationship.

18 Jun 2018

Ask the Guru: GDPR

Hey Guru!

I have heard a lot about GDPR recently, but I am not terribly familiar with it. I already break my back to stay in compliance with FFIEC guidance. Do I have anything more to worry about here?


GDPR has certainly been in the news for the past few months as implementation was required as of 5/25, but interpretations have varied as to how this will influence US-based entities with no real European presence. While it is still too early to know exactly how GDPR will be applied and enforced, the basic framework still leaves us with plenty to discuss.

The Basics

The General Data Protection Regulation was designed by the European Union (EU) member nations to create a uniform standard of consumer data privacy protection for all companies that do business in the EU. Think of this as GLBA with a few added features.

In the context of GDPR, protected parties (consumers) are called “data subjects”, any party in control of data (like a financial institution) is called a “controller,” and any entity that interacts with that data on behalf of the controller (like a technology service provider) is referred to as a “processor.” The regulation concerns itself with the privacy of EU citizen data. In this respect, it shares the same goal as GLBA.

The Scope

Naturally, GDPR applies to entities physically located within the EU member countries, but the regulation could, under certain circumstances, reach “across the pond”. Included are organizations based outside the EU that provide goods or services to, or track the behavior of, data subjects within the EU.

In theory this COULD include your institution, but it’s a bit of a long shot. It really boils down to this: do you have any EU citizens, or dual US/EU citizens, on your list of customers/members? If the answer is “no”, then in all likelihood you can rest easy unless you are actively marketing to EU citizens.

Notable Rules

Financial institutions that are required to comply with GDPR have a head start. As I mentioned, many of the basic privacy and security principles of GLBA translate to GDPR; however, there are a few key areas where GDPR takes things a step further:

  1. Right to be Forgotten/Right to Erasure – EU citizens covered by GDPR have a right to request that any and all personal data you have on them be corrected (if inaccurate) or deleted entirely. A key note here is that this is only required if the individual makes such a request, and even then, the institution has 30 days to respond. Since core processors are likely to be your single largest data store of customer information, you may want to check with them on their GDPR compliance efforts. Another place this type of data might be located is email, including hosted email services (such as Exchange Online). Responding to a Right to Erasure request may be a bit tricky in a cloud environment, as email cloud providers would need to work with their third-party vendors to find and purge the email, as well as all backups and archives.
  2. 72-hour breach notification – If your institution experiences a breach, you’ll have 72 hours from that point to notify the supervisory authority in the member state in which your customer/member resides. GLBA does not set specific timeframes for notification, specifying instead that “…a financial institution should provide a notice to its customers whenever it becomes aware of an incident of unauthorized access to customer information and, at the conclusion of a reasonable investigation, determines that misuse of the information has occurred or it is reasonably possible that misuse will occur.”
  3. Explicit opt-in requirements – This is where GDPR really deviates from GLBA. Essentially, EU citizens must opt in to what information will be collected about them, and must agree to every way in which that data is used, before those actions take place. GLBA regulations are generally more reactive here: they allow institutions to default to an opted-in position, requiring only that customers be notified of their ability to opt out.
  4. Contracts – Similar to GLBA, contracts with third-parties transmitting, processing or storing EU citizen data should spell out how the exchange and use of data will work with data processors, and for what, exactly, each party is responsible. In practice, this would just become a deeper dive during due diligence/on-going monitoring of your Technology Service Providers.
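The 72-hour notification window in item 2 is simple date arithmetic, but it is strict enough to be worth computing rather than eyeballing. A minimal sketch (the function name and timestamps are hypothetical, chosen for illustration):

```python
# Illustrative only: computing a GDPR 72-hour notification deadline
# from a breach-discovery timestamp, using timezone-aware datetimes.
from datetime import datetime, timedelta, timezone

def gdpr_notification_deadline(discovered_at: datetime) -> datetime:
    """GDPR requires notifying the supervisory authority within
    72 hours of becoming aware of the breach."""
    return discovered_at + timedelta(hours=72)

discovered = datetime(2019, 6, 1, 9, 30, tzinfo=timezone.utc)
print(gdpr_notification_deadline(discovered))  # 2019-06-04 09:30:00+00:00
```

Note that the clock runs in wall-clock hours, weekends included, which is why the contrast with GLBA’s “reasonable investigation” standard matters.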

Enforcement

Let’s put things back in perspective. While GDPR provides for rather severe penalties for non-compliance, there is still the matter of enforcement. Even if EU regulators wanted to take enforcement action against a US financial institution – would they, and more importantly, could they?

First of all, the EU has made it clear they’re only going after the worst and highest profile abusers of the regulations. Since most US-based financial institutions do not have any direct business presence within the EU, and few if any EU citizens as customers/members, they just aren’t likely to be a target for EU regulators.

Second, it is very unclear how a US institution could even be sanctioned or penalized by EU supervisory agencies, even if the institution has EU citizens as customers/members and has run afoul of the regulations. Worst case is you’ll have to terminate your business relationship with the EU citizen.

In the meantime, we’re definitely going to keep a close watch on how (and if) the US financial regulators react to this, and act accordingly. Up to now they have been completely silent on this matter, which speaks volumes to me. So until then, I think you can rest easy, or at least easier!

30 May 2018

Ask the Guru: A Prospective Vendor Either Won’t or Can’t Provide the Documentation We Need. What Should We Do?

Hey Guru!

We’re doing our due diligence on a new HR software package. We’ve requested the vendor’s financials and a SOC 2 report, but they told us they don’t provide financials (they are privately held), and their SOC 2 won’t be completed until the end of the year. They do have a SOC 1. What are your thoughts on this?


As with almost everything else, this starts with the risk assessment. What are your primary concerns with this vendor? They probably fall into two main categories: the security of the confidential data they store and process, and the criticality of the service they provide. A sound set of financials will give you some assurance that they can continue as a going concern and fulfill the terms of their contract. A SOC report will give you assurance that they have an effective control system in place for your confidential data. So without either, how do you assure yourself? You’ll need to find alternate assurances, otherwise known as compensating controls.

In the absence of audited financials, one way to gain at least some assurance about the financial health of the company is to pull a D&B report. Another way is to ask the company for their banking contact as a reference, but as a private company they may be reluctant to provide that. In the end, if you aren’t able to gain sufficient assurance of their ability to continue to function, you’ll need to identify alternative vendors that can step in if needed.

Regarding assurances of their control environment in the absence of a SOC 2 report, this is a bit more difficult, because there are potentially five criteria covered in a SOC 2 report: security, availability, processing integrity, confidentiality, and privacy. Their SOC 1 may speak to processing integrity, but compensating controls for the other criteria will have to be pieced together. BCP plans and testing results can speak to availability. InfoSec policies, vulnerability assessments, and pen test results can speak to the security criteria. The contract and/or non-disclosure agreement (NDA) may contain privacy and confidentiality elements.

In the end, you’ll need to decide if the compensating controls in these areas result in a residual risk level within your risk appetite. If not, you may be better off waiting until the SOC 2 is released.

21 Mar 2017

Ask the Guru: How Can I Best Determine My Cyber Risk Profile?

Hey Guru!

We just completed the Cybersecurity Assessment, so now we have our current risk and control maturity levels identified.  Can we draw any conclusions about our average risk and control levels?  For example, most of our risks are in the Least and Minimal areas, but we do have a few Moderate as well.  Can we just average them and conclude that our overall cyber risk levels are minimal?


Towards the end of last year the FFIEC released a Frequently Asked Questions document about the Cybersecurity Assessment Tool, and item #6 directly addressed your question.  The Council stated that “…when a majority of activities, products, or services fall within the Moderate Risk Level, management may determine that the institution has a Moderate Inherent Risk Profile.”

This would seem to validate the approach of using the average1 of all risk levels to identify your overall risk level.  However, they go on to state that each risk category may pose a different level of risk. “Therefore, in addition to evaluating the number of times an institution selects a specific risk level, management may also consider evaluating whether the specific category poses additional risk that should be factored into the overall assessment of inherent risk.”  This would appear to directly contradict the averaging approach, indicating (correctly, in my opinion) that since all risks are NOT equal, you should NOT determine overall risk based on an average.

For example, let’s say that all of your risks in the Technologies and Connection Types category are in the Least and Minimal level except for Unsecured External Connections, which is at the Moderate level.  So you have 13 items no higher than minimal, and 1 item moderate.  Sounds like an overall minimal level of risk, right?  Except a Moderate level of risk for Unsecured External Connections indicates that you have several (6-10) unsecured connections.  As any IT auditor will tell you, even 1 unsecured connection can be a serious vulnerability!
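The point can be made concrete with a quick sketch. The mapping of levels to 1-5 and the 14 item ratings below are invented to match the example above (13 items at Least/Minimal, 1 at Moderate); they are not from any actual assessment:

```python
# Illustrative sketch of why averaging CAT risk levels hides outliers.
from statistics import mean, median, mode

LEVELS = {1: "Least", 2: "Minimal", 3: "Moderate", 4: "Significant", 5: "Most"}

# 14 Technologies and Connection Types items: 13 at Least/Minimal,
# 1 (Unsecured External Connections) at Moderate.
ratings = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 3]

print("mean:  ", LEVELS[round(mean(ratings))])  # Minimal
print("median:", LEVELS[int(median(ratings))])  # Minimal
print("mode:  ", LEVELS[mode(ratings)])         # Minimal
print("max:   ", LEVELS[max(ratings)])          # Moderate, the item that matters
```

Every common definition of “average” lands on Minimal, while the single Moderate item (the several unsecured external connections) is precisely the risk an auditor would flag first.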

So although the FFIEC says that “…you may determine…” you’re at one level if the majority of your responses fall within that level, they go on to say you really shouldn’t draw that conclusion without additional evaluation.

This is just one of many examples of confusing, conflicting, and occasionally misleading elements in the CAT, and a very good reason to have assistance filling it out (shameless plug).

 

1 There are three primary ways of defining “average”: mean, mode, and median.  If you’ve assigned 1-5 numeric values to the risk levels, we can define average as “mean”.  If we’re assuming average is “mode”, it’s simply the value that occurs most often; this appears to be the way the FFIEC is approaching it.  Regardless of how you define “average”, it leads to the same (inaccurate) conclusion.