Tag: Vendor Management

05 Nov 2013

The OCC Sets a New Standard for Vendor Management…

…but will it become the new standard for institutions with other regulators?  UPDATE – The answer is yes, at least for the Federal Reserve.

Readers of this blog know that I’ve been predicting an increase in vendor management program scrutiny since early 2010.  And although the FFIEC has been very active in this area, issuing multiple updates to outsourcing guidance in the past two years, it appears that the OCC is the first primary federal regulator (PFR) to formalize it into a prescriptive methodology.

So if you are a national bank or S&L regulated by the OCC, what you’ll want to know is “what changed?”  They’ve been looking at your vendor management program for years as part of your safety and soundness exams, so exactly what changes will they expect you to make going forward?  The last time the OCC updated its vendor management guidance was back in 2001, so chances are you haven’t made many substantial changes in a while.  That is about to change.

However, if you are regulated by the FDIC, the Federal Reserve, or the NCUA, so what?  Nothing has changed, right?  Well no…not yet anyway.  Except for the addition of a “Vendor Management and Service Provider Oversight” section to the IT Officer’s Questionnaire back in 2007, the FDIC hasn’t issued any new or updated guidance since 2001.  Similarly, the NCUA last issued guidance in 2007, but it was really a re-statement of existing guidance first issued in 2001.  So considering the proliferation of outsourcing in the last 10 years, I believe all of the other regulators are overdue for updates.  Furthermore, I believe the OCC did a very good job with this guidance, and all financial institutions, regardless of regulator, would be wise to take a close look.

So what’s changed?  I compared the original 2001 bulletin (OCC 2001-47) side-by-side with the new one (OCC 2013-29), and although most of the content was very similar, there were some significant differences.  Both start out the same way, stating that banks are increasing both the number and the complexity of outsourced relationships.  But the updated guidance goes on to state that…

“The OCC is concerned that the quality of risk management over third-party relationships may not be keeping pace with the level of risk and complexity of these relationships.”

They specifically cited failure to assess the direct and indirect costs, failure to perform adequate due diligence and monitoring, and multiple contract issues, as troublesome trends.

Conceptually, the new guidance centers on a 5-phase “life-cycle” process of risk management.  The life-cycle consists of:

  • Planning,
  • Due diligence and third-party selection,
  • Contract negotiation,
  • Ongoing monitoring, and
  • Termination

First of all, a “cycle” concept strongly suggests that a once-a-year approach to program updates is not sufficient.  Secondly, I think the planning, or pre-vendor, phase is potentially the most significant in terms of the changes regulators will expect going forward.  For one thing, beginning the vendor management process BEFORE the relationship begins (i.e. before the vendor becomes a vendor) seems like a contradiction in terms (although it is not entirely new to readers of this blog), so many institutions may have skipped this phase entirely.  But it is at this planning stage that elements like strategic justification, complexity, and impact on existing customers are assessed.  Those are only a few of the considerations in the planning phase; the guidance lists 13 in all.

The due diligence and contract phases are clearly defined and also expanded, but fairly consistent with existing guidance*.  And although termination is now defined as a separate phase, the expectations really haven’t changed much there either.

Ongoing monitoring (the traditional oversight phase), however, has been greatly expanded.  The original guidance had 3 oversight activities: the third party’s financial condition, its controls, and the quality of its service and support.  The new guidance still has those 3…and adds 11 more, covering everything from insurance coverage to regulatory compliance to business continuity and the handling of customer complaints.

But perhaps the biggest expansion of expectations in the new guidance is the bank’s responsibility to understand how its vendors manage their subcontractors.  Banks are expected to…

“Evaluate the third party’s ability to assess, monitor, and mitigate risks from its use of subcontractors and to ensure that the same level of quality and controls exists no matter where the subcontractors’ operations reside.” (Bold added)

Shorter version: “Know your vendor…and your vendor’s vendor”.  And this expectation impacts all phases of the risk management life-cycle.  Subcontractor concerns start in the planning stage, continue through due diligence and contract considerations, add control expectations to on-going monitoring, and even impact termination considerations.

In summary, everything expands.  Your pre-vendor and pre-contract due diligence expands, oversight requirements (and the associated controls) increase, and of course everything must be documented…which also expands!  The original guidance listed 5 items typically contained in proper documentation; the updated guidance lists 8.  But it’s the very first item on the list that caught my attention, because it would appear to actually re-define a vendor.  Originally the vendor listing was expected to consist of simply “a list of significant vendors or other third parties”, which, depending on the definition of “significant”, was a fairly short list for most institutions.  Now it must consist of “a current inventory of all third-party relationships”, which leaves nothing to interpretation and expands your vendor list considerably.**

So if you are regulated by the OCC you can expect these new requirements to be incorporated into the examination process fairly soon.  If not, use this as a wake-up call.  I think you can expect the other federal regulators to follow suit with their own revised guidance.  The OCC has just set the gold standard.  Use this opportunity to get ahead of your regulator by revisiting and enhancing your vendor management program now.

 

* Safe Systems customers can get updated due diligence and contract checklists from their account manager.

** All vendors on the list must be risk assessed, and although the risk categories didn’t change (operational, compliance, reputation, strategic and credit) some of the risk elements did.  Matt Gunn pointed out one of the more interesting changes in his recent TechComply post.  I’ll cover that and others in a future post.

17 Sep 2013

Data Classification and the Cloud

UPDATE – In response to the reluctance of financial institutions to adopt cloud storage, vendors such as Microsoft and HP have announced that they are building “hybrid” clouds.  These new models are designed to allow institutions to store and process certain data in the cloud while a portion of the processing or storage is done locally, on premises.  For example, the application may reside in the cloud, but the customer data is stored locally.  This may make the decision easier, but it only makes classification of data more important, as the decision to utilize a “hybrid” cloud must be justified by your assessment of the privacy and criticality of the data.

I get “should-we-or-shouldn’t-we” questions about the Cloud all the time, and because of the high standards for financial institution data protection, I always advise caution.  In fact, I recently outlined 7 cloud deal-breakers for financial institutions.  But could financial institutions still justify using a cloud vendor even if they don’t seem to meet all of the regulatory requirements?  Yes…if you’ve first classified your data.

The concept of “data classification” is not new; it’s mentioned several times in the FFIEC Information Security Handbook:

“Institutions may* establish an information data classification program to identify and rank data, systems, and applications in order of importance. Classifying data allows the institution to ensure consistent protection of information and other critical data throughout the system.”

“Data classification is the identification and organization of information according to its criticality and sensitivity. The classification is linked to a protection profile. A protection profile is a description of the protections that should be afforded to data in each classification.”

The term is also mentioned several times in the FFIEC Operations Handbook:

“As part of the information security program, management should* implement an information classification strategy appropriate to the complexity of its systems. Generally, financial institutions should classify information according to its sensitivity and implement controls based on the classifications. IT operations staff should know the information classification policy and handle information according to its classification.”

 But the most relevant reference for financial institutions looking for guidance about moving data to the Cloud is a single mention in the FFIEC Outsourcing Technology Services Handbook, Tier 1 Examination Procedures section:

“If the institution engages in cloud processing, determine that inherent risks have been comprehensively evaluated, control mechanisms have been clearly identified, and that residual risks are at acceptable levels. Ensure that…(t)he types of data in the cloud have been identified (social security numbers, account numbers, IP addresses, etc.) and have established appropriate data classifications based on the financial institution’s policies.”

So although data classification is a best practice even before you move to the cloud, the truth is that most institutions aren’t doing it (more on that in a moment).  However, examiners are expected to ensure (i.e. to verify) that you’ve properly classified your data afterwards…and that regardless of where the data is located, you’ve protected it consistent with your existing policies.  (To date I have not seen widespread indications that examiners are asking for data classification yet, but I expect that as cloud utilization increases, they will.  After all, it is required in their examination procedures.)

Most institutions don’t bother to classify data that is processed and stored internally because they treat all data the same, i.e. they have a single protection profile that treats all data at the highest level of sensitivity.  And indeed the guidance states that:

“Systems that store or transmit data of different sensitivities should be classified as if all data were at the highest sensitivity.”

But once that data leaves your protected infrastructure everything changes…and nothing changes.  Your policies still require (and regulators still expect) complete data security, privacy, availability, etc., but since your level of control drops considerably, so should your level of confidence.  And you likely have sensitive data combined with non-sensitive, critical combined with non-critical.  This would suggest that unless the cloud vendor meets the highest standard for your most critical data, they can’t be approved for any data.  Unless…

  1. You’ve clearly defined data sensitivity and criticality categories, and…
  2. You’re able to segregate one data group from another, and…
  3. You’ve established and applied appropriate protection profiles to each one.

Classification categories are generally defined in terms of criticality and sensitivity, but the guidance is not prescriptive on how you should label each category.  I’ve seen “High”, “Medium”, and “Low”, as well as “Tier 1”, “Tier 2”, and “Tier 3”, and even a scale of 1 to 5…whatever works best for your organization is fine.  Once that is complete, the biggest challenge is making sure you don’t mix data classifications.  This is easier for data like financials or Board reports, but particularly challenging for data like email, which could contain anything from customer information to yesterday’s lunch plans.  Remember, if any part of the data is highly sensitive or critical, all of the data must be treated as such.
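If you want to see how that “highest level wins” rule plays out, here’s a minimal sketch in Python.  The category labels, rankings, and the sample email thread are my own illustrations, not anything prescribed by the guidance:

```python
# Sketch of the "highest classification wins" rule for mixed data.
# The labels and rankings below are illustrative, not regulatory categories.

SENSITIVITY_RANK = {"low": 1, "medium": 2, "high": 3}

def classify_container(item_levels):
    """A container (email thread, file share, archive) inherits the
    highest sensitivity of any item inside it."""
    return max(item_levels, key=lambda level: SENSITIVITY_RANK[level])

# An email thread mixing lunch plans (low) with customer information (high)
# must be treated as "high" in its entirety:
print(classify_container(["low", "low", "high"]))  # high
```

Applied to the cloud decision, anything that classifies at your highest level can only go to a vendor that meets the protection profile for your most sensitive data.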

So back to my original question…can you justify utilizing the cloud even if the vendor is less than fully compliant?  Yes, if data is properly classified and segregated, and if cloud vendors are selected based on their ability to adhere to your policies (or protection profiles) for each category of data.

 

 

*In “FFIEC-speak”, ‘may’ means ‘should’, and ‘should’ means ‘must’.

20 Aug 2013

Ask the Guru: Vendor vs. Service Provider

Hey Guru
I recently had an FDIC examiner tell me that we needed to make a better distinction between a vendor and a service provider.  His point seemed to be that by lumping them together in our vendor management program we were “over-analyzing” them.  He suggested that we should be focused instead only on those few key providers that pose the greatest risk of identity theft.  Our approach has always been to assess each and every vendor.  Is this a new approach?


I don’t think so, although I think I know where the examiner is coming from on the vendor vs. service provider distinction.  First of all, let’s understand what is meant by a “service provider”.  The traditional definition of a service provider was one who provided services subject to the Bank Service Company Act (BSCA), which dates back to 1962.  As defined in Section 3 of the Act, these services include:

“…check and deposit sorting and posting, computation and posting of interest and other credits and charges, preparation and mailing of checks, statements, notices, and similar items, or any other clerical, bookkeeping, accounting, statistical, or similar functions performed for a depository institution.”

But lately the definition has expanded well beyond the BSCA, and today almost anything you can outsource can conceivably be provided by a “service provider”.  In fact, according to the FDIC, the products and services provided can vary widely:

“…core processing; information and transaction processing and settlement activities that support banking functions such as lending, deposit-taking, funds transfer, fiduciary, or trading activities; Internet-related services; security monitoring; systems development and maintenance; aggregation services; digital certification services, and call centers.”

Furthermore, in a 2010 interview with BankInfoSecurity, Don Saxinger (Team Lead – IT and Operations Risk at FDIC) said this regarding what constitutes a service provider:

“We are not always so sure ourselves, to be quite honest…but, in general, I would look at it from a banking function perspective. If this is a function of the bank, where somebody is performing some service for you that is a banking function or a decision-making function, including your operations and your technology and you have outsourced it, then yes, that would be a technology service that is (BSCA) reportable.”

Finally, the Federal Reserve defines a service provider as:

“… any party, whether affiliated or not, that is permitted access to a financial institution’s customer information through the provision of services directly to the institution.   For example, a processor that directly obtains, processes, stores, or transmits customer information on an institution’s behalf is its service provider.  Similarly, an attorney, accountant, or consultant who performs services for a financial institution and has access to customer information is a service provider for the institution.”

And in its Guidance on Managing Outsourcing Risk:

“Service providers” is broadly defined to include all entities that have entered into a contractual relationship with a financial institution to provide business functions or activities.

So access to customer information seems to be the common thread, not necessarily the services provided.  Clearly the regulators have an expanded view of a “service provider”, and so should you.  Keep doing what you’re doing.  Run all providers through the same risk-ranking formula, and go from there!
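To make “the same risk-ranking formula” concrete, here’s a minimal sketch.  The factor names, weights, and example vendors are assumptions of mine for illustration only; your program would define its own:

```python
# Illustrative only: a uniform risk-ranking formula of the kind this post
# recommends applying to every provider. Factors and weights are assumptions,
# not regulatory requirements.

RISK_FACTORS = {
    "customer_data_access": 3,     # the common thread regulators care about
    "criticality_of_service": 2,
    "difficulty_of_replacement": 1,
}

def risk_score(vendor):
    """Score a vendor 0-1 as the weighted average of its factor ratings,
    where each rating is itself on a 0-1 scale."""
    total_weight = sum(RISK_FACTORS.values())
    return sum(vendor.get(f, 0) * w for f, w in RISK_FACTORS.items()) / total_weight

core_processor = {"customer_data_access": 1.0,
                  "criticality_of_service": 1.0,
                  "difficulty_of_replacement": 0.8}
landscaper = {"customer_data_access": 0.0,
              "criticality_of_service": 0.1,
              "difficulty_of_replacement": 0.0}

print(risk_score(core_processor) > risk_score(landscaper))  # True
```

The point is uniformity: every provider, from the core processor to the landscaper, goes through the same formula, and the resulting score (not the “vendor” vs. “service provider” label) drives the depth of review.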

One last thought…don’t get confused by different terms.  According to the FDIC, as far back as 2001, other terms synonymous with “service provider” include vendors, subcontractors, external service providers (ESPs), and outsourcers.

04 Jun 2013

Incident Response in an Outsourced World

UPDATE – On June 6th the FFIEC formed the Cybersecurity and Critical Infrastructure Working Group, designed to enhance communications among the FFIEC member agencies as well as other key financial industry committees and councils.  The goal of this group will undoubtedly be to increase the defense and resiliency of financial institutions against cyber attacks, but the question is “what effect will this have on new regulatory requirements and best practices?”  Will annual testing of your incident response plan become a requirement, just as testing your BCP is now?  I think you can count on it…

I’ve asked the following question at several recent speaking engagements:  “Can you remember the last time you heard about a financial institution being hacked, and having its information stolen?”  No responses.  Then I’ve asked a second question:  “Can anyone remember the last time a service provider was hacked, and financial institution data stolen?”  Heartland…TJX…FIS…almost every hand goes up.

As financial institutions have gotten pretty good at hardening and protecting data, cyber criminals are focusing more and more on the service providers as the weak link in the information security chain.  And wherever there are incidents making the news, the regulators are sure to follow with new regulations and increased reinforcement of existing ones.

The regulators make no distinction between your responsibilities for data within your direct control and data outside your direct control:

“Management is responsible for ensuring the protection of institution and customer data, even when that data is transmitted, processed, stored, or disposed of by a service provider.” (Emphasis added)

In other words, you have 100% of the responsibility, and zero control.  All you have is oversight, which is at best predictive and reactive, and NOT preventive.  So you use the vendor’s past history and third-party audit reports to try to predict their ability to prevent security incidents, but in the end you must have a robust incident response plan to effectively react to the inevitable vendor incident.

The FFIEC last issued guidance on incident response plans in 2005 (actually just an interpretation of GLBA 501b provisions), stating that…

“…every financial institution should develop and implement a response program designed to address incidents of unauthorized access to sensitive customer information maintained by the financial institution or its service provider.” (Emphasis added)

The guidance specified certain minimum components for an incident response plan, including:

  • Assessing the nature and scope of an incident and identifying what customer information systems and types of customer information have been accessed or misused;
  • Notifying its primary federal regulator as soon as possible when the institution becomes aware of an incident involving unauthorized access to or use of sensitive customer information;
  • If required, filing a timely SAR, and in situations involving federal criminal violations requiring immediate attention, such as when a reportable violation is ongoing, promptly notifying appropriate law enforcement authorities;
  • Taking appropriate steps to contain and control the incident to prevent further unauthorized access to or use of customer information; and
  • Notifying customers when warranted in a manner designed to ensure that a customer can reasonably be expected to receive it.

The guidance goes on to state that even if the incident originated with a service provider, the institution is still responsible for notifying its customers and regulator.  Although institutions may contract that duty back to the service provider, I have personally not seen notification outsourcing become commonplace, and in fact I would not recommend it.  An incident could carry reputation risk, but mishandled regulator or customer notification could carry significant regulatory and financial risks.  In other words, while the former could be embarrassing and costly, the latter could shut you down.

So to summarize the challenges:

  • Financial institutions are outsourcing more and more critical products and services.
  • Service providers must be held to the same data security standards as the institution, but…
  • …the regulators are only slowly catching up, resulting in a mismatch between the FI’s security and the service provider’s.
  • Cyber criminals are exploiting that mismatch to increasingly, and successfully, target institutions via their service providers.

What can be done to address these challenges?  Vendor selection due diligence and on-going oversight are still very important, but because of the lack of control, an effective incident response plan is the best, and perhaps only, defense.  Yes, preventive controls are always best, but lacking those, being able to quickly react to a service provider incident is essential to minimizing the damage.  When was the last time you reviewed your incident response plan?  Does it contain all of the minimum elements listed above?  Better yet, when was the last time you tested it?

Just as with disaster recovery, the only truly effective plan is one that is periodically updated and tested.  But unlike DR plans, most institutions don’t even update their incident response plans, let alone test them.  And while there are no specific indications that regulators have increased scrutiny of incident response plans just yet, I would not be at all surprised if they do so in the near future.  Get ahead of this issue now by updating your plan and testing it.  Use a scenario from recent events; there are certainly plenty of real-world examples to choose from.  Gather the members of your incident response team together and walk through your response; the entire test shouldn’t take more than an hour or so.  Answer the following questions:

  1. What went wrong to cause the incident?  Why (times 5…root cause)?  If this is a vendor incident, full immediate disclosure of all of the facts to get to the root cause may be difficult, but request them anyway…in writing.
  2. Was our customer or other confidential data exposed?  If so, can it be classified as “sensitive customer information“?
  3. Is this a reportable incident to our regulators?  If so, do we notify them or does the vendor?  (Check your contract)
  4. Is this a reportable incident to our customers?  How do we decide if “misuse of the information has occurred or it is reasonably possible that misuse will occur“?
  5. Is this a reportable incident to law enforcement?
  6. What if the incident involved a denial-of-service attack, but no customer information was exposed?  Notification may not be required, but should you notify anyway?
  7. What can we do to prevent this from happening again (see #1), and if we can’t prevent it, are there steps we should take to reduce the possibility?  Can the residual risk be transferred?
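One way to make sure the walk-through gets documented consistently is to capture the questions above in a simple test record.  This is only a sketch; the field names are my own, not from any guidance, so adapt them to your institution’s plan template:

```python
# Sketch of a tabletop-test record covering the seven questions above.
# Field names are illustrative, not drawn from regulatory guidance.

from dataclasses import dataclass, field

@dataclass
class IncidentResponseTest:
    scenario: str
    root_cause: str = ""                          # question 1 (the "five whys")
    sensitive_customer_info_exposed: bool = False # question 2
    regulator_notified: bool = False              # question 3
    customers_notified: bool = False              # question 4
    law_enforcement_notified: bool = False        # question 5
    preventive_steps: list = field(default_factory=list)  # question 7

    def open_items(self):
        """Return the fields still unanswered after the walk-through
        (empty strings and empty lists count as unanswered)."""
        return [name for name, value in vars(self).items()
                if value in ("", [], None)]

test = IncidentResponseTest(scenario="Service provider breach, modeled on a recent headline")
test.root_cause = "Compromised remote-access credentials at the provider"
print(test.open_items())  # the questions left to work through
```

Documenting the answers (and the gaps) this way gives you the written test record the examiners will eventually want to see.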

Make sure to document the test, and then test again the next time an incident makes the news.  It may not prevent the next incident from involving you, but it could definitely minimize the impact!

 

NOTE:  For more on this topic, Safe Systems will be hosting the webinar “How to Conduct an Incident Response Test” on 6/27.  The presentation will be open to both customers and non-customers and is free of charge, but registration is required.  Sign up here.

 

25 Feb 2013

Examination Downgrades Correlated with Poor Vendor Management

According to Donald Saxinger (senior examination specialist in FDIC’s Technology Supervision Branch) in a telephone briefing given to the ABA in December of last year, almost half of all CAMELS score downgrades in 2012 were related to poor vendor management.  The briefing was titled “Vendor Management: Unlocking the Value beyond Regulatory Compliance“, and in it Mr. Saxinger noted that in 46% of the FDIC IT examinations in which bank ratings were downgraded, inadequate vendor management was cited as a causal factor.  He went on to say that although poor vendor management may not have been the prime cause, it was frequently cited as a factor in the downgrade.

Mr. Saxinger recommends that banks request, receive, and review not just financials and third-party audits such as SOC reports and validation of disaster recovery capabilities, but also any examination reports on the provider.  Federal examiners have an obligation and a responsibility to monitor financial institution service providers using the same set of standards required of the institutions themselves, and they are doing so with increasing frequency.

In addition, consider that all of the FFIEC regulatory updates and releases issued last year were either directly or indirectly related to vendor management:

  • Changes to the Outsourcing Handbook to add references to cloud computing vendors, and managed security service providers.
  • Updates to the Information Security Handbook to accommodate the recently released Internet Authentication Guidance (with its strong reliance on third-parties).
  • Changes to all Handbooks to accommodate the phase-out of the SAS 70 and replace it with the term “third-party review”.
  • Updated guidance on the URSIT programs for the supervision and scoring of Technology Service Providers.
  • Completely revised and updated Supervision of Technology Service Providers Handbook.

So regulators see inadequate vendor management as a contributing factor in examination downgrades, and virtually all new regulations issued by the FFIEC are related to it as well.  As a service provider to financial institutions we are prepared for, and expecting, added scrutiny.  As a financial institution looking to optimize examination results and stay ahead of the regulators, you should be too.

Here is a link to all vendor management related blog posts.

11 Dec 2012

Technology Service Providers and the new SOC reports

What do all of the 2012 changes to the IT Examination Handbooks have in common?  They are all, directly or indirectly, related to vendor management.  I had previously identified vendor management as a leading candidate for increased regulatory scrutiny in 2012, and boy was it.  (Not all of my 2012 predictions fared as well, I’ll take a closer look at the rest of them in a future post.)

So there is definitely more regulatory focus on vendors, and it’s a pretty safe bet that this will continue into 2013.  It usually takes about 6-12 months before new guidance is fully digested into the examination process, so expect additional scrutiny of your vendor management process during your 2013 examination cycle.  Since guidance is notoriously non-prescriptive we don’t know exactly what to expect, but we can be certain that third-party reviews will be more important than ever.  Third-party audit reports, such as the SAS 70 in previous years, and now the new SOC reports (particularly the SOC 1 & SOC 2), provide the best assurance that your vendors are in fact treating your data with the same degree of care that regulators expect from you.  As the FFIEC stated in their recent release on Cloud Computing:

“A financial institution’s use of third parties to achieve its strategic plan does not diminish the responsibility of the board of directors and management to ensure that the third-party activity is conducted in a safe and sound manner and in compliance with applicable laws and regulations.”

Undoubtedly third-party audit reports will still be the best way for you to ensure that your vendors are compliant, but there seems to be considerable confusion about exactly which of the 3 new SOC reports are the “right” ones for you.  In fact, in a recent webinar we hosted with a leading accounting firm, one of the firm’s partners stated that “there are a few instances where you might receive a SOC 1 report where a SOC 2 might be more appropriate”.  And this is exactly what we are seeing: technology service providers are having a SOC 1 report prepared when what the financial institution really wants and needs is a SOC 2.

Why is it important for you to understand this?  Because the SOC 1 (also known as the SSAE 16) reporting standard specifically states that it is to be used only for assessing controls over financial reporting.  It is their auditor telling your auditor that the information they are feeding into your financial statements is reliable.  The SOC 2 reporting standard, on the other hand, is a statement from their auditor directly to you, and addresses the following criteria:

  1. Security – The service provider’s system is protected against unauthorized access.
  2. Availability – The service provider’s system is available for operation as contractually committed or agreed.
  3. Processing Integrity – The provider’s system is accurate, complete, and trustworthy.
  4. Confidentiality – Information designated as confidential is protected as contractually committed or agreed.
  5. Privacy – Personal information (if collected by the provider) is used, retained, disclosed, and destroyed in accordance with the provider’s privacy policy.

If these sound familiar, they should.  The FFIEC Information Security Booklet lists the following security objectives that all financial institutions should strive to accomplish:

  1. Privacy &
  2. Security (elements of GLBA)
  3. Availability
  4. Integrity of data or systems
  5. Confidentiality of data or systems
  6. Accountability
  7. Assurance

As you can see, there is considerable overlap between what the FFIEC expects of you and what the SOC 2 report tells you about your service provider.  So why are we seeing so many service providers prepare SOC 1 reports when the SOC 2 is called for?  I think there are two reasons.  First, because the SOC 1 is functionally equivalent to the SAS 70, it is an easier transition for vendors coming from that standard.  I can tell you from our own transition experience that the SOC 2 reporting standard is not just different, it is substantially broader and deeper than the SAS 70.  So some vendors may simply be taking the path of least resistance.

But the primary reason is that if a vendor provides a service to you that directly impacts your financial statements (like the calculation of interest), they must produce a SOC 1.  But if they additionally provide services unrelated to your financial statements, should they also produce a SOC 2?  In almost every case the answer is “yes”, because for all of the above reasons the SOC 1 simply will not address all of your concerns.
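The resulting decision logic is simple enough to sketch.  The two criteria below are my simplification of the reasoning above, not audit-standard definitions:

```python
# Illustrative helper: which SOC report(s) to request from a service provider,
# following the reasoning in this post. The two flags are a simplification,
# not definitions from the SSAE 16 / SOC standards themselves.

def soc_reports_to_request(impacts_financial_statements, handles_operational_data):
    """Return the SOC report types a financial institution should ask for."""
    reports = []
    if impacts_financial_statements:   # e.g. interest calculation
        reports.append("SOC 1")
    if handles_operational_data:       # security, availability, privacy, etc.
        reports.append("SOC 2")
    return reports

# A core processor that both calculates interest and stores customer data:
print(soc_reports_to_request(True, True))  # ['SOC 1', 'SOC 2']
```

In practice almost every technology service provider trips the second flag, which is why a SOC 1 alone is rarely sufficient.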

The next couple of years will be transitional ones for most technology service providers as they adjust to the new auditing standards, and for you as you begin to digest the new reports.  But will the examiners be willing to give you a transition period?  In other words, should you wait for your examiner to find fault with your vendor management program to start updating it?  I’m not sure that taking a wait-and-see attitude is prudent in this case.  The regulatory expectations are out there now, the reporting standards are out there, and the risk is real…you need to be pro-active in your response.

(NOTE:  This will be covered more completely in a future post, but the CFPB has also recently issued guidance on vendor management…and they are staffing up with new examiners.  Are there three scarier words to a financial institution than “entry-level examiners”?!)