
27 Sep 2016

FFIEC Rewrites the Information Security IT Examination Handbook

In the first update in over 10 years, the FFIEC just completely rewrote the definitive guidance on their expectations for managing information systems in financial institutions.  This was widely expected, as the IT world has changed considerably since 2006.

There is much to unpack in this new handbook, starting with what appears to be a new approach to managing information security risk. The original 2006 handbook put the risk assessment process up front, essentially conflating risk assessment with risk management.  But as I first mentioned almost 6 years ago, the risk assessment is only one step in risk management, and it’s not the first step.  Before risk can be assessed you must identify the assets to be protected and the threats and vulnerabilities to those assets.  Only then can you conduct a risk assessment.  The new guidance uses a more traditional approach to risk management, correctly placing risk assessment in the second slot:

  1. Risk Identification
  2. Risk Measurement (aka risk assessment)
  3. Risk Mitigation, and
  4. Risk Monitoring and Reporting

This is a good change, and it is also identical to the risk management structure in the 2015 Management Handbook.  It's also very consistent with the 4-phase process specified in the 2015 Business Continuity Handbook:

  1. Business Impact Analysis
  2. Risk Assessment
  3. Risk Management, and
  4. Risk Monitoring and Testing

Beyond that, here are a few additional observations (in no particular order):

More from Less:

  • The new handbook is about 30% shorter, consisting of 98 pages as contrasted with 138 in the 2006 handbook.

…HOWEVER…

  • The new guidance contains 412 references to the word “should”, as opposed to 341 references previously.  This is significant, because compliance folks know that every occurrence of the word “should” in the guidance generally translates to the word “will” in your policies and procedures.  So the handbook is roughly 30% shorter, but increases regulator expectations by roughly 20%!  (If you're curious, the sketch below shows one way to reproduce these counts.)
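For the curious, here is one way a count like this can be reproduced.  This is a minimal sketch, assuming the two handbook PDFs have already been extracted to plain text; the filenames are hypothetical.

```python
import re
from pathlib import Path

def count_word(path: str, word: str = "should") -> int:
    """Count whole-word, case-insensitive occurrences of a word in a text file."""
    text = Path(path).read_text(encoding="utf-8", errors="ignore")
    return len(re.findall(rf"\b{re.escape(word)}\b", text, flags=re.IGNORECASE))

# Hypothetical plain-text extracts of the 2006 and 2016 handbooks
old_count = count_word("infosec_handbook_2006.txt")
new_count = count_word("infosec_handbook_2016.txt")

print(f"2006: {old_count}   2016: {new_count}")
print(f"Change: {(new_count - old_count) / old_count:+.0%}")
```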

Cyber Focus:

  • “…because of the frequency and severity of cyber attacks, the institution should place an increasing focus on cybersecurity controls, a key component of information security.”  Cybersecurity is scattered throughout the new handbook, including an entire section.

Assess Yourself:

  • There are 17 separate references to “self-assessments”, increasing the importance of utilizing internal assessments to gauge the effectiveness of your risk management and control processes.

Take Your Own Medicine:

  • Technology Service Providers to financial institutions will be held to the same set of standards:
    • “Examiners should also use this booklet to evaluate the performance by third-party service providers, including technology service providers, of services on behalf of financial institutions.”

The Ripple Effect:

  • The impact of this guidance will likely be quite significant, and will be felt across all IT areas.  For example, the Control Maturity section of the Cybersecurity Assessment Tool contains 98 references and hyperlinks to specific pages in the 2006 Handbook.  All of these are now invalid.  I’m sure we can expect an updated assessment tool from the FFIEC at some point in the not-too-distant future.  (Which will also necessitate changes to certain online tools!)
  • The new FDIC IT Risk Examination procedures (InTREx) also contain several references to the IT Handbook, although they are not specific to any particular page.

Regarding InTREx, I was actually hoping that the new IT Handbook and the new FDIC exam procedures would be more closely coordinated, but perhaps that’s too much to ask at this point.  In any case, the similarity between the 3 recently released Handbooks indicates increased standardization, and I think that is a good thing.  We will continue to dissect this document and report observations as we find them.  In the meantime, don’t hesitate to reach out with your own observations.

11 Nov 2015

FFIEC Updates (and Greatly Expands) the Management Handbook

This latest update to the IT Examination Handbook series comes 11 years after the original version.  And although IT has changed significantly in the past 11 years, the requirement that financial institutions properly manage the risks of IT has not changed.  This new Handbook contains many changes that will introduce new requirements and new expectations from regulators.  Some of these changes are subtle, others are more significant.  Here is my first take on just a few differences between the original and the new Handbook:

Cybersecurity

  • The original Handbook contained only a single reference to “cyber”.  The revised Handbook contains 53 references.

IT Management

  • The Board and a steering committee are still responsible for overall IT management, but the guidance now introduces a new obligation for the Board, requiring that they provide a “credible challenge” to management.  Specifically, this means the Board must be “actively engaged, asking thoughtful questions, and exercising independent judgment”.  Simply put, no more “rubber stamps”.  The Board is expected to actually govern, and that means they need access to accurate, timely and relevant information.

The IT Management Structure has changed.  The 2004 Handbook listed the following structure:

  • Board of Directors / Steering Committee
  • Chief Information Officer / Chief Technology Officer
  • IT Line Management
  • Business Unit Management

The updated guidance is a bit more granular, adding Executive Management and a Chief Information Security Officer to the recommended structure:

  • Board of Directors / Steering Committee
  • Executive Management
  • Chief Information Officer or Chief Technology Officer
  • Chief Information Security Officer
  • IT Line Management
  • Business Unit Management

“Risk Appetite”

  • The FFIEC Cybersecurity Assessment Tool introduced this new term (addressed in an earlier post), and the Management Handbook makes an additional 11 references.  Institutions should understand this relatively new (for IT, anyway) concept and incorporate it into their strategic planning process.

Managing Technology Service Providers

  • The 2004 guidance contained a separate section on best practices in this area.  The new guidance has removed the section, incorporating references to vendor management best practices throughout the document.  This reflects the prevalence and importance of outsourcing in today’s financial institutions.

Examination Procedures (Appendix A)

  • The 2004 Handbook had 8 pages containing 9 examination objectives.  The new guidance is almost completely re-written, and has 15 pages containing 13 objectives.  Several of these new objectives deal with internal governance and oversight, and a couple address the enterprise-wide nature of IT management.  All areas have been greatly expanded.  For example, the objective dealing with IT controls and risk mitigation (Objective 12) consists of 18 separate examination elements with 53 discrete items that examiners must check.




In summary, the updated Handbook represents a significant evolution in both the breadth and depth of IT management requirements.  It will set the standard for IT management best practices for both examiners and institutions for some time to come, and should be required reading for all Board members, CEOs, CIOs, ISOs, and network administrators.

03 Dec 2013

Ask the Guru: The IT Audit “Scope”

Hey Guru
Our examiner is asking about the “scope” of our IT audits. What is she referring to, and how do we define a reasonable scope?


Audit results are one of the first things examiners want to see, and the “scope” of the audit is very important to them.  In fact, the term is used 74 times in the FFIEC Audit Handbook!  Scope generally refers to the depth and breadth of the audit, which is in turn determined by the objectives, i.e. what the audit is designed to accomplish.  The two broad objectives for any audit are control adequacy and control effectiveness*.  Control adequacy means that the controls you have in place (policies, procedures and practices) address all reasonably identifiable risks.  These audits are sometimes referred to as policy (or ITGC, i.e. IT general controls) audits.  Although the standards used for these audits may differ (more on that later), their scope should ultimately address the requirements outlined in the 11 IT Examination Handbooks.

Once control adequacy is established, the next thing the examiners want to know is “OK, the controls exist, but do they work?”  In other words, are they effective?  Are they indeed controlling the risks the way they were designed to?  Those types of audits are more commonly (and accurately) described as tests or assessments, usually referred to as penetration (PEN) tests or vulnerability assessments (VA).  They may be internal, external, or (preferably) both.

The audits must be conducted in that order.  In other words, you must first establish adequacy before you test for effectiveness; it really doesn’t make sense to test controls that don’t directly address your risks.  In fact, although an auditor will sometimes combine the 2 audits into a single engagement, I encourage folks to separate them so that any deficiencies in control adequacy can be corrected prior to the PEN testing.

One more thing to consider is the standard by which the auditor will conduct their audit, sometimes referred to as their “work program”.  These are the guidelines the auditor will use to guide the project and conduct the audit.  While there are several industry-established IT standards out there (COBIT, ITIL, COSO, ISO 27001, SAS 94, NIST, etc.), there is no single universally accepted standard.  The fact is most auditors use a customized hybrid work program, and the vast majority are perfectly acceptable to the examiners.  However, at some point in your evaluation process with a new auditor you should ask them why they prefer one standard over another.  Whatever their preference, make sure that somewhere in their scope-of-work document they make reference to the FFIEC examination guidelines.  This ensures that they are familiar with the unique regulatory requirements of financial institutions.

Regarding cost, there are often wide disparities between seemingly similar engagements, and it’s easy to see why.  In order to make a side-by-side comparison you’ll need to know a few things: Is the audit focused on control adequacy or control effectiveness (or both)?  If both, are they willing to break the engagement into 2 parts?  What is the audit standard they’ll be using, and why?  What methods will they use to test your controls: inquiry, or inspection and sampling?  Are vulnerability assessments internal or external (or both)?  What are the certifications of the auditors, and how much experience do they have with financial institutions?  Finally, if the examiners have questions or concerns about the auditor’s methodology, or if examiner findings seem to conflict with audit results, will the auditor work with you to respond to the examiner?

In summary, the scope of the audit is defined as one (or both) of the following:

  • To assess and determine the adequacy (or design and operation) of our controls
  • To assess and determine the effectiveness of our controls
  • Both of the above

So the examiner will want to know the scope, but it’s to your benefit to understand it too, because examiners will often use the results of your audits to shape, and possibly reduce**, the scope of their examination!

* Some audits will break the first objective into two sections; design (are they designed properly), and operation (are they in place and operational).
** FFIEC IT Examination Handbook, Audit, Page 8

23 May 2012

Patch deployment – now or later? (with interactive poll!)

We recently saw an examination finding that recommended that “Critical Patches be deployed within 24 hours of notice (of patch release)”.  This would seem to contradict the FFIEC guidance in the Information Security Handbook, which states that the institution should:

“Apply the patch to an isolated test system and verify that the patch…

(1) is compatible with other software used on systems to which the patch will be applied,

(2) does not alter the system’s security posture in unexpected ways, such as altering log settings, and

(3) corrects the pertinent vulnerability.”

If this testing process is followed correctly, it is highly unlikely that it will be completed within 24 hours of patch release.  The rationale behind immediate deployment is that the risk of “zero-day exploits” is greater than the risk of installing an untested patch that may cause problems with your existing applications.  So the poll question is: do you deploy critical patches immediately, or test first?
Regardless of your approach, you’ll have to document the risk and how you plan to mitigate it.  A “test first” approach might increase end-user training and place more emphasis on other controls such as firewall firmware, IPS/IDS, and anti-virus/anti-malware.  If you take a “patch first” approach, you may want to leave one unpatched machine in each critical department to allow at least minimal functionality in case something goes wrong.  You should also test the “roll-back” capabilities of the particular patch prior to full deployment.
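If it helps to make that documentation concrete, here is a hypothetical sketch of a per-patch test record built around the three verification steps quoted above, plus a roll-back check.  The field names and sample values are my own invention; the guidance does not prescribe any particular format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PatchTestRecord:
    """Hypothetical record documenting pre-deployment patch testing."""
    patch_id: str
    systems_affected: list[str]
    compatible_with_existing_software: bool   # verification step (1)
    security_posture_unchanged: bool          # verification step (2)
    corrects_vulnerability: bool              # verification step (3)
    rollback_tested: bool
    compensating_controls: list[str] = field(default_factory=list)
    test_date: date = field(default_factory=date.today)

    def approved_for_deployment(self) -> bool:
        """Deploy only when all tests, including roll-back, have passed."""
        return (self.compatible_with_existing_software
                and self.security_posture_unchanged
                and self.corrects_vulnerability
                and self.rollback_tested)

# Example: a "test first" record with compensating controls noted while testing completes
record = PatchTestRecord(
    patch_id="MS12-XXX",                       # hypothetical identifier
    systems_affected=["teller workstations"],  # hypothetical scope
    compatible_with_existing_software=True,
    security_posture_unchanged=True,
    corrects_vulnerability=True,
    rollback_tested=False,
    compensating_controls=["IPS/IDS signatures updated", "end-user alert sent"],
)
print(record.approved_for_deployment())  # False until roll-back is tested
```

A record like this also gives you something concrete to summarize for the Board when the annual report comes around.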

I’ll be watching to see if this finding appears in other examinations, and also to see if the guidance is updated online.  Until then, because of the criticality of your applications and the required up-time of your processes, I believe a “test-first” approach that adheres to the guidance is the most prudent approach…for now.  However you manage it, be prepared to explain the why and how to the Board and senior management.  Not only are the results expected to be included in your annual Board report, it may also help explain repeat findings in future examinations if your current approach differs from examiner expectations.

09 Apr 2012

FFIEC Handbook Update – Outsourcing

The FFIEC has just added a section to the Outsourcing Technology Services IT Examination Handbook, and it should be required reading for financial institutions as well as any managed service providers.  The new section is Appendix D: Managed Security Service Providers, and it is the first significant change to the Handbook since it was released in 2004.  It addresses the fact that, because of the increasing sophistication of the threat environment and the lack of internal expertise, a growing number of financial institutions are (either partially or completely) outsourcing their security management functions to unaffiliated third-party vendors.

Because of the critical and sensitive nature of these security services, and the loss of control when these services are outsourced, the guidance stresses that institutions must address additional risks beyond their normal vendor management responsibilities.  Specifically, more emphasis must be placed on the contract and on oversight of the vendor’s processes, infrastructure, and control environment.

The most interesting addition to the guidance for me is the “Emerging Risks” section, which is the first time the FFIEC has addressed cloud computing.  Although it is addressed from the perspective of the service provider, it defines cloud computing this way:

“…client users receive information technology services on demand from third-party service providers via the Internet “cloud.” In cloud environments, a client or customer will relocate their resources such as data, applications, and services to computing facilities outside the corporate firewall, which the end user then accesses via the Internet.”

Any data transmitted, stored or processed outside the security confines of the corporate firewall is considered higher-risk data, and must have additional controls.  This would seem to imply that data in the cloud should be classified differently in your data-flow diagram, and have a correspondingly higher protection profile.*  It will be interesting to see if this will be the FFIEC’s approach if and when they address cloud computing in the future.
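Purely as an illustration of that idea, a protection-profile mapping might look something like the sketch below, with data that leaves the corporate firewall assigned a stricter profile.  The classification names and control lists are my own, not the FFIEC’s.

```python
# Hypothetical data classifications mapped to protection profiles, i.e. the
# controls required for data in each classification (see footnote below).
protection_profiles = {
    "public":       ["integrity checks"],
    "internal":     ["access controls", "backups"],
    "confidential": ["access controls", "backups", "encryption at rest"],
    # Data transmitted, stored, or processed outside the corporate firewall
    "confidential-cloud": [
        "access controls", "backups",
        "encryption at rest", "encryption in transit",
        "vendor due diligence", "contractual audit rights",
    ],
}

def required_controls(classification: str) -> list[str]:
    """Look up the protection profile for a given data classification."""
    return protection_profiles[classification]

# Customer data relocated to a cloud provider inherits the stricter profile
print(required_controls("confidential-cloud"))
```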

The guidance also has a useful MSSP Engagement Criteria matrix that institutions can use to evaluate their own service providers, as well as a set of MSSP Examination Procedures, which service providers (like mine) can use to prepare for future examinations.  In summary, financial institutions would be wise to familiarize themselves with the new guidance; after all, to quote from the last line:

“As with all outsourcing arrangements FI management can outsource the daily responsibilities and expertise; however, they cannot outsource accountability.”

* A protection profile is a description of the protections that should be afforded to data in each classification.

02 Mar 2012

“Data-flow diagrams”

This request was seen in a recent State examiner’s pre-examination questionnaire, and although I usually like to see a request a couple of times from different examiners before identifying it as a legitimate trend, this one could prove so problematic that I thought I needed to get ahead of it.

Before we go much further, it’s important to distinguish between “data-flow diagrams” and the “work-flow analysis”, which is a requirement of the business continuity planning process.  They are completely separate endeavors designed to address two very different objectives.  The BCP “work-flow analysis” is designed to identify the interdependencies between critical processes in order to determine their order of recovery.  The “data-flow diagram” is designed to:

Supplement (management’s) understanding of information flow within and between network segments as well as across the institution’s perimeter to external parties.

It’s important to note here that what the examiner asked for actually wasn’t unreasonable; in fact, it appears word-for-word in the FFIEC AIO Handbook:

Common types of documentation maintained to represent the entity’s IT and business environments are network diagrams, data flow diagrams, business process flow diagrams, and business process narratives. Management should document and maintain accurate representations of the current IT and business environments and should employ processes to update representations after the implementation of significant changes.

And although this particular examiner quoted from the Operations Handbook, the term “data flow” (in combination with “maps”, “charts” and “analysis”) actually appears 15 times in 5 different Handbooks: Development and Acquisition, Information Security, Operations, Retail Payment Systems, and Wholesale Payment Systems.

So this concept is certainly not unheard of, but previously this “understanding of information flow” objective was achieved via a network topology map, or schematic.  Sufficiently detailed, a network schematic will identify all internal and external connectivity, types of data circuits and bandwidth, routers, switches and servers.  Some may even include workstations and printers.  In the past this diagram, in combination with a hardware and software inventory, was always sufficient to document management’s understanding of information flow to examiners.  But in this particular case the examiner highlighted (in bold) this section of the guidance (and this was the most troublesome to me):

Data flow diagrams should identify:

  • Data sets and subsets shared between systems;
  • Applications sharing data; and
  • Classification of data (public, private, confidential, or other) being transmitted.

…and…

Data flow diagrams are also useful for identifying the volume and type of data stored on various media.  In addition, the diagrams should identify and differentiate between data in electronic format, and in other media, such as hard copy or optical images.

Data classification?  Differentiation between electronic, printed, and optical data?  This seems to go way beyond what the typical network schematic is designed to do, way beyond what examiners have asked for in the past, and possibly even beyond the ability of most institutions to produce without significant effort.  Of course, the “unreasonable resource requirements” excuse will usually not fly with examiners, so what is the proper response to a request of this nature?
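To make the request concrete, here is a hypothetical sketch of the kind of inventory that would have to sit behind such a diagram, capturing the three elements highlighted above plus the media distinction.  Every system, application, and data set named below is invented purely for illustration.

```python
# Hypothetical data-flow inventory: each entry is one flow between systems,
# recording the applications involved, the data sets shared, the data's
# classification, and the medium (electronic vs. hard copy vs. optical).
data_flows = [
    {
        "from_system": "core processor",        # hypothetical
        "to_system": "internet banking host",   # hypothetical
        "applications": ["core banking", "online banking"],
        "data_sets": ["customer master file", "account balances"],
        "classification": "confidential",
        "medium": "electronic",
        "crosses_perimeter": True,
    },
    {
        "from_system": "loan department",
        "to_system": "offsite storage",
        "applications": ["loan origination"],
        "data_sets": ["signed loan documents"],
        "classification": "confidential",
        "medium": "hard copy",
        "crosses_perimeter": True,
    },
]

# Flows that leave the institution's perimeter carrying confidential data
external_confidential = [
    f for f in data_flows
    if f["crosses_perimeter"] and f["classification"] == "confidential"
]
print(len(external_confidential))  # 2
```

Multiply those two sample entries by every system, application, and vendor connection in even a small institution, and the scale of the effort becomes apparent.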

Fortunately there may be a loophole here, at least for serviced institutions, and it’s found in the “size and complexity” qualifier.  The guidance initially states:

Effective IT operations management requires knowledge and understanding of the institution’s IT environment.

This is the underlying requirement, and the core issue to be addressed.  It then goes on to state that documentation of management’s “knowledge and understanding” should be “commensurate with the complexity of the institution’s technology operations”.  And depending on size and complexity, this may include “data-flow diagrams”.  So the examiner is effectively saying that, in this case, they feel a “data-flow diagram” is the most appropriate way for the management of this outsourced institution to document adequate knowledge and understanding of their technology operations.  I suggested that the institution respectfully disagree, and state:

Our management believes, based on our size and complexity as a serviced institution, that an updated detailed schematic, and current hardware and software inventories, adequately demonstrates sufficient knowledge and understanding of our technology operations.

This directly addresses the core issue, and I’m pretty sure the examiner will agree, but I’ll let you know.  In any case, it’s worth pushing back on this because of the potentially enormous resources it would take to comply with the request, both now and going forward.

Now here is the real question…should you require the same level of documentation (i.e. data classification and data type differentiation) from your core vendor?  And if so, are you even likely to get it from them?