Category: From the Field

01 Apr 2014

Say What You Do…But Do What You Say

Feedback from recent regulatory examinations indicates a potentially troublesome trend: regulators are actually reading your policies.  Traditionally, regulatory findings have been concentrated in policy weaknesses.  Either policies don’t exist (social media and mobile banking, for example), or they do exist but need “expansion”.  (“Expansion” is a vague and often-used term in examination findings to indicate a shortcoming of some sort: either the policy isn’t broad enough or detailed enough, or it doesn’t conform to current guidance.)

These problems are relatively easy to fix: draft the policy (or expand or enhance it), run it by the Board, and move on.  But every time you add a policy, or expand an existing one, you obligate yourself.  You say you’ll do something a certain way, or with a particular group, or with a specific frequency.  And this is exactly where many recent exam findings have occurred: examiners are reading through your policies and identifying deviations between your written policies and procedures and your actual, documented practices.

This is nothing new; I originally wrote about it back in 2011.  What is new, however, is the depth and breadth of the scrutiny.  What used to be limited primarily to Board reporting, testing, and auditing now seems to include almost any instance of “will” or “shall”.  Here is an example of the scope of this challenge.  The following is a short excerpt from the beginning of a typical information security program.*  I’ve highlighted all the areas that obligate the institution in some way or another:

This overall Information Security Program is critical to the safety and soundness of Sample National Bank. Therefore, the Board of Directors shall approve this written Information Security Program. The Board of Directors designates the Information Security Officer to develop, implement, and maintain the Program. Senior management in each functional area of Sample National Bank will be responsible for day-to-day monitoring of compliance with this Program. Any perceived breaches of security or potential risks to non-public information will be reported immediately to the Information Security Officer.

The Information Security Committee will coordinate an independent audit/examination for section 501b GLBA compliance on an annual basis. The independent audit/review will focus especially on areas that were identified as high or moderate risk during the risk assessment process that was coordinated by the Information Security Officer and the Information Security Committee.

Regular testing of key controls, systems, and procedures shall take place as necessary depending on the complexity of the process/system involved. Due to the critical need for independence between those who are responsible for operating a process/system and those who conduct or review the testing, the internal auditor/compliance officer or his/her designee shall be responsible for the review of all testing methodology and results.

So in this short three-paragraph, half-page excerpt I count no fewer than nine instances where the institution has obligated itself to do something:

  1. The Board will approve the program,
  2. The Board will designate the ISO to manage the program,
  3. Senior management in each department will do the day-to-day compliance monitoring,
  4. Breaches of NPI will be reported to the ISO,
  5. There will be an annual GLBA audit,
  6. The audit will be risk-based,
  7. Regular testing will occur,
  8. Testing will be risk-based, and
  9. Internal audit will review all testing.

The policy goes on for another 15 pages, and a quick search for every instance of “will” or “shall” turns up 40 potential opportunities for actual practices to deviate from policies…and all this is in just one policy.

What can you do to avoid this policy vs. practice mismatch?  Use the opportunity of your next policy update to take an inventory of everything you are promising to do, and how you’re supposed to be doing it.  Start with a quick search of all occurrences of “will” or “shall”.  If you’re not doing it, or can’t prove you’re doing it, take it out of your policy.  You may get written up for a policy deficiency, but that’s easy to fix and better than having a finding that “…management states that they are doing <xx>, but examiners could find no documentation that it was being done.”  A “failure to follow policies” is an oversight weakness finding that speaks poorly of management, and often invites further scrutiny into other policy vs. practice deviations.
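If your policies exist in electronic form, that inventory doesn’t have to be a manual exercise.  Here is a minimal sketch of the kind of script I have in mind; the “policies” folder and plain-text exports are just assumptions, so adjust the path and file handling to match how you actually store your documents:

    # will_shall_scan.py - quick inventory of policy obligations (illustrative sketch)
    # Assumes your policies have been exported as plain-text (.txt) files into a
    # folder named "policies" - both are placeholders, not a required setup.
    import re
    from pathlib import Path

    PATTERN = re.compile(r"\b(will|shall)\b", re.IGNORECASE)

    for doc in Path("policies").glob("*.txt"):
        text = doc.read_text(errors="ignore")
        print(f"{doc.name}: {len(PATTERN.findall(text))} occurrences of 'will' or 'shall'")
        # Print each obligating sentence so you can build your inventory from it
        for sentence in re.split(r"(?<=[.?!])\s+", text):
            if PATTERN.search(sentence):
                print("   -", " ".join(sentence.split()))

Each line of that output is a commitment you should be able to back up with minutes, reports, or test results…or a candidate for removal at your next policy update.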

Of course there is a difference between not doing something, and not being able to prove you’re doing something, but it’s a difference without a distinction as far as the regulators are concerned.  If you can’t prove it, you’re not doing it.  So say what you’ll do, but make sure you can back that up with the meeting minutes, reports, checklists, test results, and other documentation to demonstrate that you’re doing what you say.


*Thanks to Jackie Marshall at Gladiator Technologies for the use of their InfoSec policy template verbiage.

22 Jan 2014

Windows XP and Electronic Banking

The FFIEC has previously issued a statement on Windows XP and the regulatory expectations for both financial institutions and TSPs beyond April 8th, but so far the regulators have not weighed in on the implications for e-banking and RDC customers.  According to some estimates, as many as 30-40% of your business customers may still be using Windows XP.  Since Microsoft will discontinue support for Windows XP after April 8th of this year, leaving these devices potentially exposed, what is your obligation to your high-risk Internet banking and RDC customers?  What do the regulators expect of you in this situation, and better yet, what do your customers expect of you?  Would knowingly allowing your e-banking and RDC software to run on potentially insecure systems be considered “commercially reasonable” security?

According to the FFIEC E-Banking Handbook, financial institutions have an obligation to understand and manage the risks of the electronic banking environment, which includes the customer location.  Similarly, Remote Deposit Capture guidance makes it clear that institutions are required to understand how using the customer’s systems for RDC affects their legal, compliance, and operational risk.  That is why most institutions include site visits to the customer location as part of the customer suitability process prior to approving them for RDC or commercial banking software.  But if your on-site assessment indicated the customer was using an insecure operating system, would you even allow your software to be installed?

Again, examiners may not be looking for this specifically (although I know of at least one auditor who has added it to their IT controls scope of work).  However, I recommend that you at least make the effort to reach out to your high-risk e-banking and RDC customers and remind them that, according to the terms of their contract, you share responsibility for creating and maintaining a secure computing environment for electronic banking.  And then extend your awareness effort to ALL electronic banking customers.
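If you’re not sure which customers to start with, your e-banking web server logs can tell you.  Windows XP browsers identify themselves with a “Windows NT 5.1” token (or “Windows NT 5.2” for XP x64 and Server 2003) in the user-agent string.  Here is a rough sketch; the log path and the assumption of a standard combined-format access log are placeholders for whatever your environment actually uses:

    # xp_useragent_scan.py - flag e-banking sessions coming from Windows XP browsers
    # Assumes a combined-format web access log; the path below is a placeholder.
    from collections import Counter

    XP_TOKENS = ("Windows NT 5.1", "Windows NT 5.2")  # XP / XP x64 and Server 2003

    xp_hits = Counter()
    with open("/var/log/ebanking/access.log", errors="ignore") as log:
        for line in log:
            if any(token in line for token in XP_TOKENS):
                client_ip = line.split()[0]  # client address is the first field
                xp_hits[client_ip] += 1

    for client_ip, count in xp_hits.most_common():
        print(f"{client_ip}\t{count} requests from a Windows XP browser")

Tie those addresses back to customer accounts through your online banking provider and you have a targeted outreach list.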

UPDATE – Here is what Microsoft has to say about this.  You may want to reference this in your communications with customers:

Unsupported and unpatched environments are vulnerable to security risks. This may result in an officially recognized control failure by an internal or external audit body, leading to suspension of certifications, and/or public notification of the organization’s inability to maintain its systems and customer information.

07 Jan 2014

A Look Back at 2013…and a Look Ahead – Part 1 (charts edition)

One thing that’s clear from the examination feedback I’ve received from financial institutions in 2013 is that examiners are spending less time in their safety & soundness examinations on the CAMELS “C”, “A”, & “L” (capital, asset quality and liquidity) issues, and more time on the “M” & “E” (management and earnings) issues.  (There was some additional guidance released on the “S” issue by the FDIC in October, but so far we haven’t seen “sensitivity to interest rates” become a big deal for examiners.)

I’ve taken a deep dive into the 2013 FDIC financial institution data, and the following charts explain why I believe the trend towards less C, A & L, and more M & E scrutiny will continue.  The first chart is a count of total failed institutions per year since 2007:
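(If you want to recreate this chart yourself, here is a minimal matplotlib sketch.  The yearly counts below are taken from the FDIC’s published failed bank list and are approximate; verify them against fdic.gov before using them for anything official.)

    # failed_banks_chart.py - rough reproduction of the failed-institutions chart
    # Counts are approximate figures from the FDIC failed bank list; verify before use.
    import matplotlib.pyplot as plt

    failures = {2007: 3, 2008: 25, 2009: 140, 2010: 157, 2011: 92, 2012: 51, 2013: 24}

    plt.bar(list(failures.keys()), list(failures.values()))
    plt.title("Failed FDIC-insured institutions per year")
    plt.xlabel("Year")
    plt.ylabel("Number of failures")
    plt.show()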

So 2013 saw a return toward pre-crisis levels of bank failures, which, while still somewhat high by historical standards, definitely reduced the pressure somewhat.  In the next graph I plot the number of “problem banks” (defined here) over the same period, which should give us some indication of the overall health of the banking industry:

As you can see, problem banks are not quite back to pre-crisis levels, but they do show a definite downward trend that tracks the decline in bank failures, and I believe we’ll see that trend continue.

This next chart depicts average net operating income (left scale) against total count of unprofitable institutions (right scale):

As you can see, both indicators are trending in the right direction, which should indicate a continued de-emphasis on C, A & L in future examinations…and increased earnings pressure.  Notably, however, smaller institutions are likely to face more earnings scrutiny than larger institutions, because although they did not experience the same level of losses early on as larger institutions did, they are also taking longer to return to profitability:

So how will all of this impact institutions going forward?  If you’ve had a federal examination in the last 6-9 months, you’ve probably already heard some variation of the following from your examiner: “Great, your problem assets are under control, now why aren’t you profitable (or more profitable)?”  (Of course at this point you might be tempted to mention things like increased deposit insurance assessments, reduced fee income, and increased regulatory burden, but you know it won’t matter…)  So the increased focus on “E” will certainly continue, but because the percentage of institutions still losing money rises as asset size falls, the smaller you are the more “E” scrutiny you’re likely to get.

However, regardless of asset quality or earnings, I believe that “M” will increasingly take center stage in 2014, because at the end of every banking crisis since 1980 there has been a post-mortem analysis of the causes and the regulatory gaps that should be addressed going forward.  And that always leads back to “M”, because ultimately regulators believe that all problems facing financial institutions should have been foreseen and avoided by competent management taking a more active role in the affairs of the institution.  More on that, and how to prepare for it, in a future post.

03 Dec 2013

Ask the Guru: The IT Audit “Scope”

Hey Guru
Our examiner is asking about the “scope” of our IT audits. What is she referring to, and how do we define a reasonable scope?


Audit results are one of the first things examiners want to see, and the “scope” of the audit is very important to them.  In fact, the term is used 74 times in the FFIEC Audit Handbook!  Scope generally refers to the depth and breadth of the audit, which is in turn determined by the objectives: what the audit is designed to accomplish.  The two broad objectives for any audit are control adequacy* and control effectiveness.  Control adequacy means that the controls you have in place (policies, procedures, and practices) address all reasonably identifiable risks.  These audits are sometimes referred to as policy audits (and sometimes as ITGC, or IT general controls, audits).  Although the standards used for these audits may differ (more on that later), the scope of these audits should ultimately address the requirements outlined in the 11 FFIEC IT Examination Handbooks.

Once control adequacy is established, the next thing the examiners want to know is “OK, the controls exist, but do they work?”, i.e. are they effective?  Are they indeed controlling the risks the way they were designed to?  Those types of audits are more commonly (and accurately) described as tests or assessments, usually referred to as penetration (PEN) tests or vulnerability assessments (VA).  They may be internal, external, or (preferably) both.

The audits must be conducted in that order: you must first establish adequacy before you test for effectiveness.  It really doesn’t make sense to test controls that don’t directly address your risks.  In fact, although an auditor will sometimes combine the two audits into a single engagement, I encourage folks to separate them so that any deficiencies in control adequacy can be corrected prior to the PEN testing.

One more thing to consider is the standard by which the auditor will conduct the audit, sometimes referred to as their “work program”.  These are the guidelines the auditor will use to guide the project and conduct the audit.  While there are several industry-established IT standards out there (COBIT, ITIL, COSO, ISO 27001, SAS 94, NIST, etc.), there is no single universally accepted standard.  The fact is that most auditors use a customized hybrid work program, and the vast majority are perfectly acceptable to the examiners.  However, at some point in your evaluation process with a new auditor you should ask them why they prefer one standard over another.  Whatever their preference, make sure that somewhere in their scope-of-work document they make reference to the FFIEC examination guidelines.  This ensures that they are familiar with the unique regulatory requirements of financial institutions.

Regarding cost, there are often wide disparities between seemingly similar engagements, and it’s easy to see why.  In order to make a side-by-side comparison you’ll need to know a few things: Is the audit focused on control adequacy or control effectiveness (or both)?  If both, are they willing to break the engagement into two parts?  What audit standard will they be using, and why?  What methods will they use to test your controls: inquiry, or inspection and sampling?  Are vulnerability assessments internal or external (or both)?  What are the certifications of the auditors, and how much experience do they have with financial institutions?  Finally, if the examiners have questions or concerns about the auditor’s methodology, or if examiner findings seem to conflict with audit results, will the auditor work with you to respond to the examiner?

In summary, the scope of the audit is defined as one or more of the following:

  • To assess and determine the adequacy (or design and operation) of our controls
  • To assess and determine the effectiveness of our controls
  • Both of the above

So the examiner will want to know the scope, but it’s to your benefit to understand it too, because examiners will often use the results of your audits to shape, and possibly reduce**, the scope of their examination!

* Some audits will break the first objective into two sections: design (are the controls designed properly) and operation (are they in place and operating).
** FFIEC IT Examination Handbook, Audit, Page 8


20 Aug 2013

Ask the Guru: Vendor vs. Service Provider

Hey Guru
I recently had an FDIC examiner tell me that we needed to make a better distinction between a vendor and a service provider.  His point seemed to be that by lumping them together in our vendor management program we were “over-analyzing” them.  He suggested that we should be focused instead only on those few key providers that pose the greatest risk of identity theft.  Our approach has always been to assess each and every vendor.  Is this a new approach?


I don’t think so, although I think I know where the examiner is coming from on the vendor vs. service provider distinction.  First of all, let’s understand what is meant by a “service provider”.  The traditional definition of a service provider was one who provided services subject to the Bank Service Company Act (BSCA), which dates back to 1962.  As defined in Section 3 of the Act, these services include:

“…check and deposit sorting and posting, computation and posting of interest and other credits and charges, preparation and mailing of checks, statements, notices, and similar items, or any other clerical, bookkeeping, accounting, statistical, or similar functions performed for a depository institution.”

But lately the definition has expanded way beyond the BSCA, and today almost anything you can outsource can conceivably be provided by a “service provider”.  In fact according to the FDIC, the products and services provided can vary widely:

“…core processing; information and transaction processing and settlement activities that support banking functions such as lending, deposit-taking, funds transfer, fiduciary, or trading activities; Internet-related services; security monitoring; systems development and maintenance; aggregation services; digital certification services, and call centers.”

Furthermore, in a 2010 interview with BankInfoSecurity, Don Saxinger (Team Lead – IT and Operations Risk at FDIC) said this regarding what constitutes a service provider:

“We are not always so sure ourselves, to be quite honest…but, in general, I would look at it from a banking function perspective. If this is a function of the bank, where somebody is performing some service for you that is a banking function or a decision-making function, including your operations and your technology and you have outsourced it, then yes, that would be a technology service that is (BSCA) reportable.”

Finally, the Federal Reserve defines a service provider as:

“… any party, whether affiliated or not, that is permitted access to a financial institution’s customer information through the provision of services directly to the institution.   For example, a processor that directly obtains, processes, stores, or transmits customer information on an institution’s behalf is its service provider.  Similarly, an attorney, accountant, or consultant who performs services for a financial institution and has access to customer information is a service provider for the institution.”

And in its Guidance on Managing Outsourcing Risk:

“Service provider is broadly defined to include all entities that have entered into a contractual relationship with a financial institution to provide business functions or activities.”

So access to customer information seems to be the common thread, not necessarily the services provided.  Clearly the regulators have an expanded view of a “service provider”, and so should you.  Keep doing what you’re doing.  Run all providers through the same risk-ranking formula, and go from there!
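What that risk-ranking formula looks like is up to you; the factors and weights below are purely hypothetical, but as a sketch it can be as simple as scoring each provider on access to customer information, criticality of the function, and where the data lives:

    # vendor_risk_rank.py - hypothetical risk-ranking sketch; the factors and
    # weights are illustrative only, not a regulatory standard.
    from dataclasses import dataclass

    @dataclass
    class Provider:
        name: str
        accesses_npi: bool       # access to non-public customer information
        critical_function: bool  # supports a core banking or decision-making function
        data_offsite: bool       # stores or transmits data outside the institution

    def risk_score(p: Provider) -> int:
        # Weight access to customer information most heavily, per the common thread above
        return 3 * p.accesses_npi + 2 * p.critical_function + 1 * p.data_offsite

    providers = [
        Provider("Core processor", True, True, True),
        Provider("Landscaping service", False, False, False),
    ]

    for p in sorted(providers, key=risk_score, reverse=True):
        score = risk_score(p)
        tier = "High" if score >= 4 else "Moderate" if score >= 2 else "Low"
        print(f"{p.name}: score {score} ({tier} risk)")

The exact scale matters less than applying it consistently to every provider, which is really what the examiner wants to see.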

One last thought…don’t get confused by different terms.  According to the FDIC, as far back as 2001, other terms synonymous with “service provider” include vendors, subcontractors, external service providers (ESPs), and outsourcers.

23 Jul 2013

Ask the Guru: Fedline in the lobby

Hey Guru,

I have a question about Fedline.  Will regulators write us up for having Fedline on a PC in the lobby of the bank?

Possibly; I have seen that happen.  The issue is the extreme sensitivity of the data processed on that device, so if you want to leave it where it is, your response should focus on the physical and administrative controls in place.  For example, how is the device physically secured?  Is it completely in the open or behind a barrier of some sort?  Could anyone simply walk up to it and sit down?  Is it clearly identified as the Fedline machine?  What about passwords and authentication devices?  Is it left logged in?  Is dual authentication required to access Fedline (one login to the network and another to the application)?  What about dual control for transactions?  Who reviews activity reports?  How often?

So they may say something, but if you have a response ready that addresses these questions they probably won’t write you up for it.  On the other hand, if you just don’t want to deal with the hassle, you can put it behind the teller line.  Oh, and one more thing…wherever you decide to put the Fedline PC, don’t use a wireless keyboard!