Author: Tom Hinkel

As author of the Compliance Guru website, Hinkel shares easy-to-digest information security tidbits with financial institutions across the country. With almost twenty years’ experience, Hinkel’s areas of expertise span the entire spectrum of information technology. He is also the VP of Compliance Services at Safe Systems, a community banking tech company, where he ensures that their services incorporate the appropriate financial industry regulations and best practices.
07 Jan 2014

A Look Back at 2013…and a Look Ahead – Part 1 (charts edition)

One thing that’s clear from the examination feedback I’ve received from financial institutions in 2013 is that examiners are spending less time in their safety & soundness examinations on the CAMELS “C”, “A”, & “L” (capital, asset quality and liquidity) issues, and more time on the “M” & “E” (management and earnings) issues.  (There was some additional guidance released on the “S” issue by the FDIC in October, but so far we haven’t seen “sensitivity to interest rates” become a big deal for examiners.)

I’ve taken a deep dive into the 2013 FDIC financial institution data, and the following charts explain why I believe the trend towards less C, A & L, and more M & E scrutiny will continue.  The first chart is a count of total failed institutions per year since 2007:
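For readers who want to reproduce that first chart, here is a rough sketch of the tally, assuming you’ve exported the FDIC failed-bank list to a CSV (the file name and the “Closing Date” column header are my assumptions about the export, not a documented format):

```python
# Rough sketch: count failed institutions per year from an export of the
# FDIC failed-bank list. The file name and "Closing Date" header are
# assumptions about the export format.
import csv
import re
from collections import Counter

failures_by_year = Counter()
with open("failed_banks.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Pull the year out of the closing date, whatever the exact date format.
        match = re.search(r"(\d{2,4})\s*$", row["Closing Date"].strip())
        if match:
            year = match.group(1)
            if len(year) == 2:          # e.g. "13" becomes "2013"
                year = "20" + year
            failures_by_year[year] += 1

for year in sorted(failures_by_year):
    print(year, failures_by_year[year])
```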

So 2013 saw a return to pre-crisis levels of bank failures, which, while still somewhat high by historical standards, definitely reduced the pressure.  In the next graph I plot the number of “problem banks” (defined here) over the same period, which should give us some indication of the overall health of the banking industry:

As you can see, problem banks are not quite back to pre-crisis levels, but they do show a definite downward trend that tracks bank failures, and I believe we’ll see that trend continue.

This next chart depicts average net operating income (left scale) against total count of unprofitable institutions (right scale):

As you can see, both indicators are trending in the right direction, which should indicate a continued de-emphasis on C, A & L in future examinations…and increased earnings pressure.  Notably, however, smaller institutions are likely to face more earnings scrutiny than larger institutions, because although they did not experience the same level of losses early on, they are taking longer to return to profitability:

So how will all of this impact institutions going forward?  If you’ve had a federal examination in the last 6-9 months you’ve probably already heard some variation of the following from your examiner: “Great, your problem assets are under control, now why aren’t you profitable (or more profitable)?”  (Of course at this point you might be tempted to mention things like increased deposit insurance assessments, reduced fee income, and increased regulatory burden, but you know it won’t matter…)  So certainly the increased focus on “E” will continue, but because the number of institutions still losing money is inversely proportional to size, the smaller you are the more “E” scrutiny you’re likely to get.

However, regardless of asset quality or earnings, I believe that “M” will increasingly take center stage in 2014, because at the end of every banking crisis since 1980 there has been a post-mortem analysis of the causes and the regulatory gaps that should be addressed going forward.  And that always leads back to “M”, because ultimately regulators believe that all problems facing financial institutions should have been foreseen and avoided by competent management taking a more active role in the affairs of the institution.  More on that, and how to prepare for it, in a future post.

17 Dec 2013

FFIEC Issues Final Social Media Guidance…and Challenges Remain

Originally proposed back in January 2013, and following a comment period in which they received and evaluated 81 official comments, the FFIEC has at last released their final guidance for financial institutions engaging in social media activities.  I expect all the regulatory agencies to adopt it soon (the FDIC has already, and pretty much verbatim).

According to the FFIEC, this final guidance is “…substantially as proposed, but with some changes”.  I wrote about this when it was first proposed, and I encourage you to read my original post for the specific components of a social media risk management program.  This post will focus only on the major changes between the two, and on four main “grey” areas that I felt required clarification for institutions.

I did a word-for-word comparison of the verbiage in the proposed guidance with the final, and there seemed to be some softening of the language in a few areas (no doubt due to the comments received).  For example, originally the guidance said that “…this form of customer interaction…occurs in a less secure environment, and presents some unique challenges…”.  This was changed to “…Since this form of customer interaction…MAY occur in a less secure environment, it CAN present some unique challenges…”.  Other areas were expanded; for example, the requirement to provide “guidance” for employees was expanded to “guidance AND TRAINING”.  Also, the risk management component that included “…A DUE DILIGENCE process for selecting and managing third-party service provider relationships” was changed to “…A RISK MANAGEMENT process for selecting and managing third-party relationships…”.

There were minor clarifications to Reg Z and UDAAP expectations, and a fairly considerable expansion of the CRA requirement to retain public comments.  Fortunately this was limited to comments received only through social media sites run by, or on behalf of, the institution.  Comments made elsewhere would not have to be retained, as they are “not deemed to have been received by the institution”.  (Unfortunately this “not deemed to have been received” concept applies only to CRA comments, not complaints or disputes.  See #2 below.)  Finally the guidance makes it clear that email and text messages on their own do not constitute social media…unless (presumably) they are facilitated through a social media platform.

Here are the four “grey” areas that I think needed the most clarification for financial institutions, and my interpretation of the guidance:

  1. Does the guidance impose a single standard of expectations for all institutions regardless of their degree of involvement in social media activities?
    • No.  Although all institutions are expected to implement a risk management program, it should be consistent with the breadth of the institution’s involvement in social media activities.  And it should be designed with input from folks in compliance, technology, information security, legal, human resources, and marketing.  However, even institutions that choose not to use social media should be aware of the risks of not being able to respond to negative comments or complaints that may arise elsewhere. (More on that in the next bullet.)  So it looks as if a policy and a risk assessment are required regardless of the level of your involvement in social media activities, even if you choose to opt out.
  2. Would institutions be required to monitor and respond to all communications about the institution throughout the Internet?
    • No, but institutions are expected to understand the risks of NOT being able to respond, particularly the reputation risks of not being able to respond to complaints or disputes originating from other channels.  They also mention the “challenge” for institutions to protect their brand identity by being aware of the risk of someone “spoofing,” or masquerading as, the institution.  All of these risks exist regardless of the institution’s decision to engage in social media activities.  In fact, responding to a negative comment or spoofing attack may be much more challenging if you’ve decided not to engage at all, or not to engage on a particular platform; for example, if a comment is made on Twitter and you don’t have a Twitter account.  The guidance still recommends the use of social media monitoring tools and techniques to identify potential risks, but leaves the procedural specifics, and any actual response, up to the institution.
  3. How much control would be required over employee use of social media, both during business hours, but more specifically on their own time?
    • Not as much as the proposed guidance first indicated.  The final guidance makes a clear distinction between employee “official” use and employee “personal” use.  Institutions must establish policies and training that clearly outline what employees are, and are not, allowed to communicate in their official capacity.  But the guidance stopped short of requiring institutions to impose any restrictions on employee personal use of social media, saying only that institutions should evaluate the risks for themselves and determine appropriate policies.  Since the potential for reputation risk exists regardless of whether employees are posting officially or personally, I believe you should strongly consider including guidelines for employee personal use in your training, even if it’s not covered in your policies.
  4. How much due diligence is required by institutions for social media providers?
    • Plenty.  And in my opinion vendor management is where the biggest challenges lie for financial institutions.  The guidance states that “…Working with third parties to provide social media services can expose financial institutions to substantial reputation risk.”  (emphasis mine)  And they point out that this guidance “…does not impose any new requirements…”.  So the regulators require the same degree of due diligence for social media vendors that they require for all other potentially high-risk service providers, and just as with any other outsourced relationship, you are expected to complete it prior to engaging with the provider.

But selecting and risk-managing social media vendors is much more challenging.  First of all, unlike with other initiatives, once you’ve selected your platform you don’t have a choice of providers.  If you choose to utilize Facebook or LinkedIn or Twitter, for example, the provider is the platform.  It’s not as if you can select among multiple Facebook vendors!  Furthermore, you are expected to be aware of matters such as the vendor’s reputation, their policies regarding use of your (and your customers’) information, how (and how often) their policies might change, and what (if any) control you have over the vendor’s policies and actions.  So let’s take a look at these expectations in order:

  • The vendor’s reputation?
  • Their policies?
    • Social media vendors exist to sell advertising.  Their policies exist to support their profit model, which is to try to get their users to disclose as much information as possible about themselves in order to better target advertising.  Regardless of what they may state in their privacy policy, contrast their business objectives with yours.
  • How often might social media vendors change policies?
    • As often as they like, and without prior notification.
  • What control do you have over the vendors’ policies and actions?
    • None.

Once you’ve assessed all potential risks, your next challenge is to try to mitigate them.  Standard risk controls for vendors consist of requesting, obtaining, and reviewing documentation such as financial reports, third-party audits, contractual confirmation of GLBA adherence, BCP testing results, etc.  But requests for this type of documentation are often either ignored or refused by social media providers, and even when documentation is provided, it doesn’t directly address your privacy, confidentiality, and security concerns.  Social media service providers are simply not used to dealing with the unique regulatory reporting requirements of the financial industry.  And according to the FFIEC, “…a financial institution should thus weigh these (residual risk) issues against the benefits of using a third party to conduct social media activities.”  Unfortunately, social media is one activity that must be outsourced.

One more thing to consider is that all social media providers are also (by FFIEC definition*) cloud service providers, and as such subject to all of the guidelines for Outsourced Cloud Computing as well.  Given the risk management challenges of social media, institutions may want to remember what the FFIEC had to say about providers that are unfamiliar with the financial industry, or unwilling to implement changes to their policies or procedures to meet changing regulatory requirements:  “Under such circumstances, management may determine that the institution cannot employ the servicer.”

So in summary, the FFIEC seems to be telling financial institutions “proceed if you must, but proceed cautiously…and don’t take any shortcuts”.  And I will repeat what I first said back in 2011…the challenge of risk-managing social media boils down to this: you are accepting either a higher level of residual risk (at best) or an unknown level of risk (at worst), to achieve an uncertain amount of benefit.  Oh, and risk avoidance is not an option.

*”…cloud computing is a migration from owned resources to shared resources in which client users receive information technology services, on demand, from third-party service providers via the Internet ‘cloud.'” – FFIEC Statement on Outsourced Cloud Computing, July 10, 2012

03 Dec 2013

Ask the Guru: The IT Audit “Scope”

Hey Guru
Our examiner is asking about the “scope” of our IT audits. What is she referring to, and how do we define a reasonable scope?


Audit results are one of the first things examiners want to see, and the “scope” of the audit is very important to them.  In fact, the term is used 74 times in the FFIEC Audit Handbook!  Scope generally refers to the depth and breadth of the audit, which is in turn determined by the objectives, or what the audit is designed to accomplish.  The two broad objectives for any audit are control adequacy and control effectiveness*.  Control adequacy means that the controls you have in place (policies, procedures and practices) address all reasonably identifiable risks.  These audits are sometimes referred to as policy (and sometimes ITGC, or IT general controls) audits.  Although the standards used for these audits may differ (more on that later), the scope should ultimately address the requirements outlined in the 11 IT Examination Handbooks.

Once control adequacy is established, the next thing the examiners want to know is “OK, the controls exist, but do the controls work?”, i.e. are they effective?  Are they indeed controlling the risks the way they were designed to?  Those types of audits are more commonly (and accurately) described as tests or assessments, usually referred to as penetration (PEN) tests or vulnerability assessments (VA).  They may be internal, external, or (preferably) both.

The audits must also be conducted in that order; in other words, you must first establish adequacy before you test for effectiveness.  It really doesn’t make sense to test controls that don’t directly address your risks.  In fact, although an auditor will sometimes combine the two audits into a single engagement, I encourage folks to separate them so that any deficiencies in control adequacy can be corrected prior to the PEN testing.

One more thing to consider is the standard by which the auditor will conduct their audit, sometimes referred to as their “work program”.  These are the guidelines the auditor will use to guide the project and conduct the audit.  While there are several industry-established IT standards out there…COBIT, ITIL, COSO, ISO 27001, SAS 94, NIST, etc., there is no single accepted standard.  The fact is most auditors use a customized hybrid work program, and the vast majority are perfectly acceptable to examiners.  However, at some point in your evaluation process with a new auditor you should ask them why they prefer one standard over another.  Whatever their preference, make sure that somewhere in their scope-of-work document they reference the FFIEC examination guidelines.  This helps ensure that they are familiar with the unique regulatory requirements of financial institutions.

Regarding cost, there are often wide disparities between seemingly similar engagements, and it’s easy to see why.  In order to make a side-by-side comparison you’ll need to know a few things: Is the audit focused on control adequacy or control effectiveness (or both)?  If both, are they willing to break the engagement into 2 parts?  What audit standard will they be using, and why?  What methods will they use to test your controls: inquiry, or inspection and sampling?  Are vulnerability assessments internal or external (or both)?  What are the certifications of the auditors, and how much experience do they have with financial institutions?  Finally, if the examiners have questions or concerns about the auditor’s methodology, or if examiner findings seem to conflict with audit results, will the auditor work with you to respond to the examiner?

In summary, the scope of the audit is defined as one (or both) of the following:

  • To assess and determine the adequacy (or design and operation) of our controls
  • To assess and determine the effectiveness of our controls
  • Both of the above

So the examiner will want to know the scope, but it’s to your benefit to understand it too, because examiners will often use the results of your audits to shape and possibly reduce** the scope of their examination!

* Some audits will break the first objective into two sections: design (are the controls designed properly) and operation (are they in place and operational).
** FFIEC IT Examination Handbook, Audit, Page 8

05 Nov 2013

The OCC Sets a New Standard for Vendor Management…

…but will it become the new standard for institutions with other regulators?  UPDATE – The answer is yes, at least for the Federal Reserve.

Readers of this blog know that I’ve been predicting an increase in vendor management program scrutiny since early 2010.  And although the FFIEC has been very active in this area, issuing multiple updates to outsourcing guidance in the past 2 years, it appears that the OCC is the first primary federal regulator (PFR) to formalize it into a prescriptive methodology.

So if you are a national bank or S&L regulated by the OCC, what you’ll want to know is “what changed?”  They’ve been looking at your vendor management program for years as part of your safety & soundness exams, so exactly what changes will they expect you to make going forward?  The last time the OCC updated their vendor management guidance was back in 2001, so chances are you haven’t made many substantial changes in a while.  That will change.

However, if you are regulated by the FDIC or the Federal Reserve or the NCUA, so what?  Nothing has changed, right?  Well no…not yet anyway.  Except for the addition of a “Vendor Management and Service Provider Oversight” section to the IT Officer’s Questionnaire back in 2007, the FDIC hasn’t issued any new or updated guidance since 2001.  Similarly, the NCUA last issued guidance in 2007, but it was really a re-statement of existing guidance first issued in 2001.  So considering the proliferation of outsourcing in the last 10 years, I believe all of the other regulators are overdue for updates.  Furthermore, I believe the OCC did a very good job with this guidance, and all financial institutions, regardless of regulator, would be wise to take a close look.

So what’s changed?  I compared the original 2001 bulletin (OCC 2001-47) side-by-side with the new one (OCC 2013-29), and although most of the content was very similar, there were some significant differences.  They both start out the same way, stating that banks are increasing both the number and the complexity of outsourced relationships.  But the updated guidance goes on to state that…

“The OCC is concerned that the quality of risk management over third-party relationships may not be keeping pace with the level of risk and complexity of these relationships.”

They specifically cited failure to assess the direct and indirect costs, failure to perform adequate due diligence and monitoring, and multiple contract issues, as troublesome trends.

Conceptually, the new guidance is built around a 5-phase “life-cycle” process of risk management.  The life-cycle consists of:

  • Planning,
  • Due diligence and third-party selection,
  • Contract negotiation,
  • Ongoing monitoring, and
  • Termination

First of all, a “cycle” concept strongly suggests that a once-a-year approach to program updates is not sufficient.  Secondly, I think the planning, or pre-vendor, phase is potentially the most significant in terms of the changes that regulators will expect going forward.  For one thing, beginning the vendor management process BEFORE beginning the relationship (i.e. before the vendor becomes a vendor) seems like a contradiction in terms (although it is not entirely new to readers of this blog), so many institutions may have skipped this phase entirely.  But it is at this planning stage that elements like strategic justification, complexity, and impact on existing customers are assessed.  Those are only a few of the considerations in the planning phase; the guidance lists 13 in all.

The due diligence and contract phases are clearly defined and also expanded, but fairly consistent with existing guidance*.  And although termination is now defined as a separate phase, the expectations really haven’t changed much there either.

On-going monitoring (the traditional oversight phase) has been greatly expanded, however.  The original guidance had 3 oversight activities: the third party’s financial condition, its controls, and the quality of its service and support.  The new guidance still has those 3…and adds 11 more.  Everything from insurance coverage, to regulatory compliance, to business continuity and managing customer complaints.

But perhaps the biggest expansion of expectations in the new guidance is the banks’ responsibility to understand how the vendor manages their subcontractors.  Banks are expected to…

“Evaluate the third party’s ability to assess, monitor, and mitigate risks from its use of subcontractors and to ensure that the same level of quality and controls exists no matter where the subcontractors’ operations reside.” (Bold added)

Shorter version: “Know your vendor…and your vendor’s vendor”.  And this expectation impacts all phases of the risk management life-cycle.  Subcontractor concerns start in the planning stage, continue through due diligence and contract considerations, add control expectations to on-going monitoring, and even impact termination considerations.

In summary, everything expands.  Your pre-vendor & pre-contract due diligence expands, oversight requirements (and the associated controls) increase, and of course everything must be documented…which also expands!  The original guidance listed 5 items typically contained in proper documentation; the updated guidance lists 8.  But it’s the very first item on the list that caught my attention, because it would appear to actually re-define a vendor.  Originally the vendor listing was expected to consist of simply “a list of significant vendors or other third parties”, which, depending on the definition of “significant”, was a fairly short list for most institutions.  Now it must consist of “a current inventory of all third-party relationships”, which leaves nothing to interpretation and expands your vendor list considerably.**
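To make that documentation expansion concrete, here is a minimal sketch of what a “current inventory of all third-party relationships” record might capture, tying together the life-cycle phases, the risk categories, and the subcontractor expectation discussed above (the field names and sample vendor are my own illustration, not terms defined in OCC 2013-29):

```python
# Illustration only: a minimal third-party inventory record. Field names and
# the sample vendor/subcontractor are hypothetical, not prescribed by OCC 2013-29.
from dataclasses import dataclass, field
from enum import Enum

class LifecyclePhase(Enum):
    PLANNING = "planning"
    DUE_DILIGENCE = "due diligence and third-party selection"
    CONTRACT = "contract negotiation"
    MONITORING = "ongoing monitoring"
    TERMINATION = "termination"

RISK_CATEGORIES = ("operational", "compliance", "reputation", "strategic", "credit")

@dataclass
class ThirdPartyRelationship:
    name: str
    service: str
    phase: LifecyclePhase
    critical: bool                                        # drives depth of due diligence
    risk_ratings: dict = field(default_factory=dict)      # category -> high/medium/low
    subcontractors: list = field(default_factory=list)    # "know your vendor's vendor"

inventory = [
    ThirdPartyRelationship(
        name="Example Core Processor",                    # hypothetical vendor
        service="core processing",
        phase=LifecyclePhase.MONITORING,
        critical=True,
        risk_ratings={"operational": "high", "compliance": "high"},
        subcontractors=["Example Data Center LLC"],       # hypothetical subcontractor
    ),
]
print(f"{len(inventory)} third-party relationship(s) on the inventory")
```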

So if you are regulated by the OCC you can expect these new requirements to be incorporated into the examination process fairly soon.  If not, use this as a wake-up call.  I think you can expect the other federal regulators to follow suit with their own revised guidance.  The OCC has just set the gold standard.  Use this opportunity to get ahead of your regulator by revisiting and enhancing your vendor management program now.

 

* Safe Systems customers can get updated due diligence and contract checklists from their account manager.

** All vendors on the list must be risk assessed, and although the risk categories didn’t change (operational, compliance, reputation, strategic and credit) some of the risk elements did.  Matt Gunn pointed out one of the more interesting changes in his recent TechComply post.  I’ll cover that and others in a future post.

25 Oct 2013

Windows XP and Vendor Management

The FFIEC issued a joint statement recently regarding Microsoft’s discontinuation of support for Windows XP.  The statement requires financial institutions to identify, assess, and manage the risks posed by devices still running XP after April 8, 2014.   After this date Microsoft will no longer provide regular security patches or support for the product, potentially leaving those devices vulnerable to cyber-attack and/or incompatibility with other applications.

Identifying, assessing, and managing these devices within your own organization is fairly straightforward.  Have your admin or support provider run an OS report and present it to the IT Committee for review and discussion of possible mitigation options.  But somewhat lost in the FFIEC guidance is the fact that you are also responsible for identifying and assessing these devices at your third-party service providers.  While the statement was written as if it were directed at both FIs and TSPs separately, the FFIEC makes it clear that:

A financial institution’s use of a TSP to provide needed products and services does not diminish the responsibility of the institution’s board of directors and management to ensure that the activities are conducted in a safe and sound manner and in compliance with applicable laws and regulations, just as if the institution were to perform the activities in-house.

So my interpretation of the expectations resulting from this guidance is that you must reach out to your critical service providers and ask about any XP devices currently in use at their organization.  If they aren’t using any, an affidavit from the CIO or similar person should suffice.  If they are, a statement about how they plan to mitigate the risk should be made a part of your risk assessment.  The fact that the FFIEC mentioned “TSP’s” five times in less than two pages indicates to me that they expect you to be pro-active about this.
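For the internal piece, the OS report review mentioned above can be as simple as the following sketch, assuming your admin or support provider can export an asset inventory to a CSV (the file name and the “Hostname”/“Operating System” column headers are my assumptions about that export):

```python
# Sketch only: flag devices still running Windows XP in an exported asset
# inventory. The file name and column headers are assumptions about your
# inventory tool's export format.
import csv

xp_devices = []
with open("asset_inventory.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        os_name = row.get("Operating System", "")
        if "windows xp" in os_name.lower():
            xp_devices.append(row.get("Hostname", "unknown"))

print(f"{len(xp_devices)} device(s) still running Windows XP:")
for host in xp_devices:
    print(" -", host)
```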

One other thing that might have been overlooked in the guidance is the concept of operational risk.  Many IT risk assessments focus exclusively on information security elements, i.e. access to NPI/PII; they only assess the GLBA elements of privacy and security.  Operational risk addresses the risk of failure, or of not performing to management’s expectations.  If your risk assessment is limited to GLBA elements, expand it.  Make sure the criticality of the asset, product, or service is assessed as well.  And, when indicated by high residual risk, refer to your business continuity plan for further mitigation.
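To illustrate the point (my own example and scale, not a regulatory formula), an expanded risk assessment entry might score operational criticality alongside the GLBA confidentiality elements, and flag high residual risk for business continuity follow-up:

```python
# Illustrative scoring only; the 1-5 scale and threshold are my own example,
# not a prescribed methodology.
def residual_risk(confidentiality, integrity, availability, criticality, control_strength):
    """Each input is 1 (low) to 5 (high); stronger controls reduce inherent risk."""
    inherent = max(confidentiality, integrity, availability, criticality)
    return max(inherent - control_strength, 1)

# Hypothetical Windows XP teller workstation: little NPI exposure, but operationally critical.
risk = residual_risk(confidentiality=2, integrity=3, availability=4,
                     criticality=5, control_strength=1)
if risk >= 4:
    print("High residual risk - refer to the business continuity plan for mitigation.")
```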

17 Sep 2013

Data Classification and the Cloud

UPDATE –  In response to the reluctance of financial institutions to adopt cloud storage, vendors such as Microsoft and HP have announced that they are building “hybrid” clouds.  These new models are designed to allow institutions to store and process certain data in the cloud, while a portion of the processing or storage is done locally on premises.  For example, the application may reside in the cloud, but the customer data is stored locally.  This may make the decision easier, but it only makes classification of data more important, as the decision to utilize a “hybrid” cloud must be justified by your assessment of the privacy and criticality of the data.

I get “should-we-or-shouldn’t-we” questions about the Cloud all the time, and because of the high standards for financial institution data protection, I always advise caution.  In fact, I recently outlined 7 cloud deal-breakers for financial institutions.  But could financial institutions still justify using a cloud vendor even if they don’t seem to meet all of the regulatory requirements?  Yes…if you’ve first classified your data.

The concept of “data classification” is not new; it’s mentioned several times in the FFIEC Information Security Handbook:

“Institutions may* establish an information data classification program to identify and rank data, systems, and applications in order of importance. Classifying data allows the institution to ensure consistent protection of information and other critical data throughout the system.”

“Data classification is the identification and organization of information according to its criticality and sensitivity. The classification is linked to a protection profile. A protection profile is a description of the protections that should be afforded to data in each classification.”

The term is also mentioned several times in the FFIEC Operations Handbook:

“As part of the information security program, management should* implement an information classification strategy appropriate to the complexity of its systems. Generally, financial institutions should classify information according to its sensitivity and implement controls based on the classifications. IT operations staff should know the information classification policy and handle information according to its classification.”

 But the most relevant reference for financial institutions looking for guidance about moving data to the Cloud is a single mention in the FFIEC Outsourcing Technology Services Handbook, Tier 1 Examination Procedures section:

“If the institution engages in cloud processing, determine that inherent risks have been comprehensively evaluated, control mechanisms have been clearly identified, and that residual risks are at acceptable levels. Ensure that…(t)he types of data in the cloud have been identified (social security numbers, account numbers, IP addresses, etc.) and have established appropriate data classifications based on the financial institution’s policies.”

So although data classification is a best practice even before you move to the cloud, the truth is that most institutions aren’t doing it (more on that in a moment).   However, examiners are expected to ensure (i.e. to verify) that you’ve properly classified your data afterwards…and that regardless of where data is located, you’ve protected it consistent with your existing policies.  (To date I have not seen widespread indications that examiners are asking for data classification yet, but I expect that as cloud utilization increases, they will.  After all, it is required in their examination procedures.)

Most institutions don’t bother to classify data that is processed and stored internally because they treat all data the same, i.e. they have a single protection profile that treats all data at the highest level of sensitivity.  And indeed the guidance states that:

“Systems that store or transmit data of different sensitivities should be classified as if all data were at the highest sensitivity.”

But once that data leaves your protected infrastructure everything changes…and nothing changes.  Your policies still require (and regulators still expect) complete data security, privacy, availability, etc., but since your level of control drops considerably, so should your level of confidence.  And you likely have sensitive data combined with non-sensitive, critical combined with non-critical.  This would suggest that unless the cloud vendor meets the highest standard for your most critical data, they can’t be approved for any data.  Unless…

  1. You’ve clearly defined data sensitivity and criticality categories, and…
  2. You’re able to segregate one data group from another, and…
  3. You’ve established and applied appropriate protection profiles to each one.

Classification categories are generally defined in terms of criticality and sensitivity, but the guidance is not prescriptive about how you should label each category.  I’ve seen “High”, “Medium”, and “Low”, as well as “Tier 1”, “Tier 2” and “Tier 3”, and even a scale of 1 to 5…whatever works best for your organization is fine.  Once that is complete, the biggest challenge is making sure you don’t mix data classifications.  This is easier for data like financials or Board reports, but particularly challenging for data like email, which could contain anything from customer information to yesterday’s lunch plans.  Remember, if any part of the data is highly sensitive or critical, all of the data must be treated as such.
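Here’s a rough sketch of those three steps in practice (the category labels and profile contents are my own illustration, not anything prescribed by the guidance), including the rule that mixed data takes on the highest classification present:

```python
# Illustration only: classification tiers, protection profiles, and the
# "highest classification wins" rule for mixed data. Labels and profile
# contents are hypothetical.
CLASSIFICATIONS = ["Low", "Medium", "High"]      # ordered least to most sensitive

PROTECTION_PROFILES = {
    "Low":    {"encryption_at_rest": False, "cloud_eligible": True},
    "Medium": {"encryption_at_rest": True,  "cloud_eligible": True},
    "High":   {"encryption_at_rest": True,  "cloud_eligible": False},  # stays in-house
}

def classify(data_elements):
    """Return the highest classification present in a mixed data set."""
    return max(data_elements.values(), key=CLASSIFICATIONS.index)

# An email store: mostly routine traffic, but some messages contain customer NPI.
mailbox = {"lunch plans": "Low", "loan application attachment": "High"}
label = classify(mailbox)
print(label, PROTECTION_PROFILES[label])         # "High" -> not cloud eligible here
```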

So back to my original question…can you justify utilizing the cloud even if the vendor is less than fully compliant?  Yes, if data is properly classified and segregated, and if cloud vendors are selected based on their ability to adhere to your policies (or protection profiles) for each category of data.

 

 

*In “FFIEC-speak”, ‘may’ means ‘should’, and ‘should’ means ‘must’.