Tag: Enhanced Cybersecurity Assessment Tool

19 Oct 2016

Ask the Guru: “The Cybersecurity Assessment Tool… Do we have to?”

Hey Guru!

Management is asking why we have to complete the FFIEC Cybersecurity Assessment Tool when it is voluntary. They feel it is too much work if it is not mandatory. I think it is still needed even though it is voluntary. Is there any documentation as to why it is still necessary for OCC banks to complete the Assessment?


The FFIEC issued a press release on October 17, 2016, on the Cybersecurity Assessment Tool, titled Frequently Asked Questions.  It reiterated that the assessment is voluntary and that an institution can choose either this assessment tool or an alternate framework to evaluate inherent cybersecurity risk and control maturity.

Since the tool was originally released in 2015, all the regulatory agencies have announced plans to incorporate the assessment into their examination procedures:

  • OCC Bulletin 2015-31 states “The OCC will implement the Assessment as part of the bank examination process over time to benchmark and assess bank cybersecurity efforts. While use of the Assessment is optional for financial institutions, OCC examiners will use the Assessment to supplement exam work to gain a more complete understanding of an institution’s inherent risk, risk management practices, and controls related to cybersecurity.”
  • Federal Reserve SR 15-9 states “Beginning in late 2015 or early 2016, the Federal Reserve plans to utilize the assessment tool as part of our examination process when evaluating financial institutions’ cybersecurity preparedness in information technology and safety and soundness examinations and inspections.”
  • FDIC FIL-28-2015 states “FDIC examiners will discuss the Cybersecurity Assessment Tool with institution management during examinations to ensure awareness and assist with answers to any questions.”
  • NCUA states “FFIEC’s cybersecurity assessment tool is provided to help them assess their level of preparedness, and NCUA examiners will use the tool as a guide for assessing cybersecurity risks in credit unions. Credit unions may choose whatever approach they feel appropriate to conduct their individual assessments, but the assessment tool would still be a useful guide.”

Even though the FFIEC format is officially voluntary, the institution still has to evaluate inherent risk and cybersecurity preparedness in some way. Therefore, unless you already have a robust assessment program in place, we strongly encourage all institutions to adopt the FFIEC Cybersecurity Assessment Tool format since this is what the examiners will use.

NOTE:  The FAQ also made it clear that the FFIEC does not intend to offer an automated version of the tool.  To address this, we have developed a full-featured cybersecurity service (RADAR) that includes an automated assessment, plus a gap analysis / action plan, cyber-incident response test, and several other components.

13 Oct 2015

Ask the Guru: Cybersecurity “Risk Appetite”

Hey Guru!
I saw multiple references to the term “risk appetite” in the FFIEC Cybersecurity Assessment Tool.  What exactly is risk appetite, and how can I address it in my institution?  The just-released Management Handbook contains 10 new references to “risk appetite”, including a requirement that the Board define the institution’s risk appetite and its risk tolerance levels.


There are 6 references to “risk appetite” in the FFIEC Cybersecurity Assessment Tool, and although it is not a new concept in risk management, it is a term I have not previously seen in regulatory guidance outside of lending and credit practices.  Here are all the references in context:

  • The institution has a cyber risk appetite statement approved by the board or an appropriate board committee.
  • The board or board committee approved cyber risk appetite statement is part of the enterprise-wide risk appetite statement.
  • The risk appetite is informed by the institution’s role in critical infrastructure.
  • The independent audit function regularly reviews management’s cyber risk appetite statement.
  • The independent audit function regularly reviews the institution’s cyber risk appetite statement in comparison to assessment results and incorporates gaps into the audit strategy.
  • Threat intelligence is viewed within the context of the institution’s risk profile and risk appetite to prioritize mitigating actions in anticipation of threats.

Risk tolerance is pretty well documented in current guidance, and although there are subtle differences between the terms, I see risk tolerance and risk appetite as largely synonymous for most institutions.  Here is a good working definition of risk appetite:

The amount of risk that an enterprise is willing to pursue and accept in order to achieve the goals and objectives of its strategic plan.

How should you address cybersecurity risk appetite?  You probably already have both inherent and residual risk assessed in your cybersecurity risk assessment, and have identified each as either “High”, “Medium”, or “Low”.  Risk “appetite” is simply a decision by management that the residual risk level is acceptable.  In other words, management is willing to accept the remaining risk as the cost of achieving its objectives.

For example, you’ve identified a vendor as having high inherent risk, and applied the necessary controls to reduce the risk as much as you can.  The remaining (residual) risk is deemed by management to be either acceptable or unacceptable based on their risk tolerance.  So if you use a “High”, “Medium” and “Low” designation for residual risk, a value of “Low” or even “Medium” can be deemed acceptable if it is within the risk appetite of the institution.

Establishing your risk appetite for cybersecurity can be accomplished using either a qualitative or quantitative approach.  A quantitative approach requires an analysis of specific financial loss connected to a cybersecurity event.  While this is a valid way to document risk, it can be a challenge for all but the largest institutions.

Most institutions prefer a qualitative approach, which uses a scale (e.g. 1 – 10, or H, M, and L) to rank the impact of a cyber event on reputation risk, strategic risk, regulatory/legal risk, and/or operational risk.  Management can then determine the level of acceptable risk in each risk category.  For example, you may decide you have a very low (1-3) tolerance for risks in the reputation category, but you may be willing to accept a higher level (3-5) in the operational area.
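
The per-category approach above amounts to a simple tolerance check.  Here is a minimal sketch in Python, assuming a 1 – 10 impact scale; the category names and tolerance ceilings are hypothetical, not taken from the FFIEC tool:

```python
# Hypothetical qualitative risk-appetite check on a 1-10 impact scale.
# Category names and tolerance ceilings are illustrative only.
RISK_APPETITE = {
    "reputation": 3,        # very low tolerance (1-3)
    "strategic": 4,
    "regulatory/legal": 3,
    "operational": 5,       # willing to accept a higher level here
}

def within_appetite(category: str, impact_rating: int) -> bool:
    """Return True if a cyber event's rated impact falls within the
    institution's stated tolerance for that risk category."""
    return impact_rating <= RISK_APPETITE[category]

print(within_appetite("reputation", 4))   # a rating of 4 exceeds the tolerance of 3
print(within_appetite("operational", 4))  # a rating of 4 is within the tolerance of 5
```

The same comparison works just as well in a spreadsheet; the point is only that each category gets its own explicit ceiling, approved by management.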




Once you’ve established your risk appetite, the easiest way to document it is to add a “Risk Appetite” column to your existing cybersecurity risk assessment (ideally just after “Residual Risk”), where you designate remaining risk as either acceptable or unacceptable.
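
That extra column is just a comparison of residual risk against the acceptable level.  A rough sketch, with made-up assessment rows and a hypothetical “Medium” ceiling:

```python
# Illustrative only: risk-assessment rows with a derived "Risk Appetite"
# column.  Item names and the "Medium" ceiling are made up for the sketch.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}
APPETITE_CEILING = "Medium"  # hypothetical board-approved acceptable level

assessment = [
    {"item": "Core banking vendor",  "inherent": "High",   "residual": "Medium"},
    {"item": "Public website",       "inherent": "Medium", "residual": "Low"},
    {"item": "Wire transfer system", "inherent": "High",   "residual": "High"},
]

for row in assessment:
    within = LEVELS[row["residual"]] <= LEVELS[APPETITE_CEILING]
    row["risk_appetite"] = "Acceptable" if within else "Unacceptable"

for row in assessment:
    print(f'{row["item"]}: {row["risk_appetite"]}')
```

Anything flagged “Unacceptable” becomes an action item: apply more controls, or have management formally revisit the appetite.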

You might also want to amend your Information Security Policy to add a risk appetite statement.  Something like this:

“The Board has established specific strategic goals and objectives as defined in its strategic plan.  To increase the probability of achieving these goals, the Board has established acceptable risk tolerances within its risk appetite.  The board periodically reviews the risk appetite and associated tolerances, and may adjust them to adapt to changing economic conditions and/or strategic goals.”

16 Jul 2015

FFIEC Releases Cybersecurity Assessment Tool

UPDATE:  Safe Systems just released their Enhanced CyberSecurity Assessment Toolkit (ECAT).  This enhanced version of the FFIEC tool addresses its biggest drawback: the lack of a built-in way to collect, summarize, and report your risk and control maturity levels.

Once risks and controls have been assessed (Step 1 below), institutions will be better able to identify gaps in their cyber risk program, which is Step 2 in the five-step process.

[Image: cyber cycle]

This long-anticipated release of the Cybersecurity Assessment Tool (CAT) is designed to help institution management both assess their exposure to cyber risk and measure the maturity level of their current cybersecurity controls.  Users must digest 123 pages and provide one of five answers to a total of 69 questions across 10 categories, or domains.  This is not a trivial undertaking, and will require input from IT, HR, internal audit, and key third parties.

After completing the assessment, management should gain a pretty good understanding of the gaps, or weaknesses, that may exist in their own cybersecurity program.  As a result, “management can then decide what actions are needed either to affect the inherent risk profile or to achieve a desired state of maturity.”  I found the CAT to be quite innovative in what it does, but it also has some baffling limitations that may pose challenges to many financial institutions, particularly smaller ones.

[pullquote]The CAT is quite innovative in what it does, but it also has some baffling limitations…[/pullquote]

First of all, I was stunned by the specificity of both the risk assessment and the maturity review.  Never before have we seen this level of prescriptiveness and granularity from the FFIEC in how risks and controls should be categorized.  For example, when assessing risk from third parties:

  • No third parties with system access = Least Risk
  • 1 – 5 third parties = Minimal Risk
  • 6 – 10 third parties = Moderate Risk
  • 11 – 25 third parties = Significant Risk, and
  • > 25 third parties = Most Risk
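
Because these thresholds are so explicit, they translate directly into a lookup.  A sketch of the third-party mapping as published (the function name is mine; the thresholds are from the tool):

```python
def third_party_risk_level(count: int) -> str:
    """Map the number of third parties with system access to the CAT's
    five inherent-risk levels, using the published thresholds above."""
    if count == 0:
        return "Least"
    if count <= 5:
        return "Minimal"
    if count <= 10:
        return "Moderate"
    if count <= 25:
        return "Significant"
    return "Most"

print(third_party_risk_level(8))  # 6-10 third parties -> "Moderate"
```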

Again, this is quite different from what the FFIEC has done previously.  If Information Security guidance used the same level of detail, we might see specific risk levels for passwords of a certain length and complexity, and correspondingly higher control maturity levels for longer, more complex passwords.  Of course, we don’t see that in current guidance; what we get is a generic “controls must be appropriate to the size and complexity of the institution, and the nature and scope of its operations”, leaving the interpretation of exactly what that means to the institution, and to the examiner.

I see this new approach as a very good thing, because it removes all of the ambiguity from the process.  As someone who has seen institutions of all sizes and complexities struggle with how to interpret and implement guidance, I hope this represents a new approach to all risk assessments and control reviews going forward.  Imagine having a single worksheet for both institutions and examiners.  Both sides agree that a certain number of network devices, or third parties, or wire transfers, or RDC customers, or physical locations, constitutes a very specific level of risk.  Control maturity is assessed based on implementation of specific discrete elements.  Removing the uncertainty and guess-work from examinations would be a VERY good thing for over-burdened institutions and examiners alike.



So all that is good, but there are two big challenges with the tool: collection, and interpretation and analysis.  The first is the most significant, and makes the tool almost useless in its current format.  Institutions are expected to collect, and then interpret and analyze, the data, but the tool has no capability to do that because all the documents provided by the FFIEC are in PDF format.

If you can figure out how to record your responses, that still leaves the responsibility for conducting a “gap” analysis with the institution.  In other words, now that you know where your biggest risks reside, how do you then align your controls with those risks?
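
At its simplest, a gap analysis in this sense means comparing assessed maturity against a target level in each domain.  A toy sketch; the shortened domain names and sample levels are mine, though the five-level scale (Baseline through Innovative) echoes the CAT’s:

```python
# Toy gap analysis: compare assessed maturity to a target level per domain.
# Domain names and sample levels below are hypothetical.
MATURITY = ["Baseline", "Evolving", "Intermediate", "Advanced", "Innovative"]
RANK = {level: i for i, level in enumerate(MATURITY)}

current = {"Cyber Risk Management": "Baseline", "Threat Intelligence": "Evolving"}
target  = {"Cyber Risk Management": "Evolving", "Threat Intelligence": "Evolving"}

# A "gap" is any domain where assessed maturity falls short of the target.
gaps = {domain: (current[domain], target[domain])
        for domain in current
        if RANK[current[domain]] < RANK[target[domain]]}
print(gaps)
```

Each gap then becomes a line item in the action plan, with the controls needed to move that domain up a level.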

One more challenge: regulators expect you to communicate the results to the Board and senior management, which is not really possible without some way of collecting and summarizing the data.  They also recommend an independent evaluation of the results by your auditors, which likewise requires summary data.

In summary, there is a lot to like in this guidance, and I hope we see more of this level of specificity going forward.  But the tool should (at a minimum) have the ability to record responses, and ideally to summarize the data at various points in time.  In addition, most institutions will likely need assistance summarizing and analyzing the results, and addressing the gaps.  But at least we now have a framework, and that is a good start.