
21 Mar 2017

Ask the Guru: How Can I Best Determine My Cyber Risk Profile?

Hey Guru!

We just completed the Cybersecurity Assessment, so now we have our current risk and control maturity levels identified.  Can we draw any conclusions about our average risk and control levels?  For example, most of our risks are in the Least and Minimal areas, but we do have a few Moderate as well.  Can we just average them and conclude that our overall cyber risk levels are minimal?


Towards the end of last year the FFIEC released a Frequently Asked Questions document about the Cybersecurity Assessment Tool, and item #6 directly addressed your question.  The Council stated that “…when a majority of activities, products, or services fall within the Moderate Risk Level, management may determine that the institution has a Moderate Inherent Risk Profile.”

This would seem to validate the approach of using the average[1] of all risk levels to identify your overall risk level.  However, they go on to state that each risk category may pose a different level of risk. “Therefore, in addition to evaluating the number of times an institution selects a specific risk level, management may also consider evaluating whether the specific category poses additional risk that should be factored into the overall assessment of inherent risk.”  This would appear to directly contradict the averaging approach, indicating (correctly, in my opinion) that since all risks are NOT equal, you should NOT determine overall risk based on an average.

For example, let’s say that all of your risks in the Technologies and Connection Types category are at the Least and Minimal levels except for Unsecured External Connections, which is at the Moderate level.  So you have 13 items at Minimal or lower, and 1 item at Moderate.  Sounds like an overall Minimal level of risk, right?  Except a Moderate level of risk for Unsecured External Connections indicates that you have several (6-10) unsecured connections.  As any IT auditor will tell you, even 1 unsecured connection can be a serious vulnerability!

So although the FFIEC says that “…you may determine…” you’re at one level if the majority of your responses fall within that level, they go on to say you really shouldn’t draw that conclusion without additional evaluation.

This is just one of many examples of confusing, conflicting, and occasionally misleading elements in the CAT, and a very good reason to have assistance filling it out (shameless plug).

 

[1] There are 3 primary ways of defining “average”: mean, mode, and median.  If you’ve assigned 1-5 numeric values to the risk levels, we can define average as “mean”.  If we’re assuming average is “mode”, it’s simply the value that occurs most often, which appears to be the way the FFIEC is approaching it.  Regardless of how you define “average”, it leads to the same (inaccurate) conclusion.
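
To make that concrete, here is a quick Python sketch using the 1-5 numeric mapping described in the footnote and the Technologies and Connection Types example above (the exact split between Least and Minimal items is assumed purely for illustration):

    from statistics import mean, median, mode

    # The 1-5 numeric mapping for the inherent risk levels (per the footnote above)
    LEVELS = {1: "Least", 2: "Minimal", 3: "Moderate", 4: "Significant", 5: "Most"}

    # 13 items at Least/Minimal (assumed split), plus Unsecured External
    # Connections at Moderate (the example from the post above)
    ratings = [1] * 7 + [2] * 6 + [3]

    print("mean:  ", LEVELS[round(mean(ratings))])   # Minimal
    print("median:", LEVELS[int(median(ratings))])   # Least
    print("mode:  ", LEVELS[mode(ratings)])          # Least

Every version of “average” comes out at Least or Minimal, while the single Moderate response (several unsecured external connections) is the item that actually matters.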

27 Sep 2016

FFIEC Rewrites the Information Security IT Examination Handbook

In the first update in over 10 years, the FFIEC just completely rewrote the definitive guidance on their expectations for managing information systems in financial institutions.  This was widely expected, as the IT world has changed considerably since 2006.

There is much to unpack in this new handbook, starting with what appears to be a new approach to managing information security risk. The original 2006 handbook put the risk assessment process up front, essentially conflating risk assessment with risk management.  But as I first mentioned almost 6 years ago, the risk assessment is only one step in risk management, and it’s not the first step.  Before risk can be assessed you must identify the assets to be protected and the threats and vulnerabilities to those assets.  Only then can you conduct a risk assessment.  The new guidance uses a more traditional approach to risk management, correctly placing risk assessment in the second slot:

  1. Risk Identification
  2. Risk Measurement (aka risk assessment)
  3. Risk Mitigation, and
  4. Risk Monitoring and Reporting

This is a good change, and it is also identical to the risk management structure in the 2015 Management Handbook.  It’s also very consistent with the 4-phase process specified in the 2015 Business Continuity Handbook:

  1. Business Impact Analysis
  2. Risk Assessment
  3. Risk Management, and
  4. Risk Monitoring and Testing

Beyond that, here are a few additional observations (in no particular order):

More from Less:

  • The new handbook is about 30% shorter, consisting of 98 pages as contrasted with 138 in the 2006 handbook.

…HOWEVER…

  • The new guidance contains 412 references to the word “should”, as opposed to 341 references previously.  This is significant, because compliance folks know that every occurrence of the word “should” in the guidance generally translates to the word “will” in your policies and procedures.  So the handbook is about 30% shorter, but increases regulator expectations by roughly 20%!

Cyber Focus:

  • “…because of the frequency and severity of cyber attacks, the institution should place an increasing focus on cybersecurity controls, a key component of information security.”  Cybersecurity appears throughout the new handbook, which also devotes an entire section to it.

Assess Yourself:

  • There are 17 separate references to “self-assessments”, underscoring the importance of using internal assessments to gauge the effectiveness of your risk management and control processes.

Take Your Own Medicine:

  • Technology Service Providers to financial institutions will be held to the same set of standards:
    • “Examiners should also use this booklet to evaluate the performance by third-party service providers, including technology service providers, of services on behalf of financial institutions.”

The Ripple Effect:

  • The impact of this guidance will likely be quite significant, and will be felt across all IT areas.  For example, the Control Maturity section of the Cybersecurity Assessment Tool contains 98 references and hyperlinks to specific pages in the 2006 Handbook.  All of these are now invalid.  I’m sure we can expect an updated assessment tool from the FFIEC at some point in the not-too-distant future.  (Which will also necessitate changes to certain online tools!)
  • The new FDIC IT Risk Examination procedures (InTREx) also contain several references to the IT Handbook, although they are not specific to any particular page.

Regarding InTREx, I was actually hoping that the new IT Handbook and the new FDIC exam procedures would be more closely coordinated, but perhaps that’s too much to ask at this point.  In any case, the similarity between the 3 recently released Handbooks indicates increased standardization, and I think that is a good thing.  We will continue to dissect this document and report observations as we find them.  In the meantime, don’t hesitate to reach out with your own observations.

16 Jul 2015

FFIEC Releases Cybersecurity Assessment Tool

UPDATE:  Safe Systems just released their Enhanced CyberSecurity Assessment Toolkit (ECAT).  This enhanced version of the FFIEC tool addresses its biggest drawback: the lack of a built-in way to collect, summarize, and report your risk and control maturity levels.

Once risks and controls have been assessed (Step 1 below), institutions will now be better able to identify gaps in their cyber risk program, which is step 2 in the 5-step process.

[Image: cyber cycle]

This long-anticipated release of the Cybersecurity Assessment Tool (CAT) is designed to assist institution management in first assessing their exposure to cyber risk, and then measuring the maturity level of their current cybersecurity controls.  Users must digest 123 pages, and must provide one of five answers to a total of 69 questions in 10 categories or domains.  This is not a trivial undertaking, and will require input from IT, HR, internal audit, and key third parties.

After completing the assessment, management should gain a pretty good understanding of the gaps, or weaknesses, that may exist in their own cybersecurity program.  As a result, “management can then decide what actions are needed either to affect the inherent risk profile or to achieve a desired state of maturity.”  I found the CAT to be quite innovative in what it does, but it also has some baffling limitations that may pose challenges to many financial institutions, particularly smaller ones.


First of all, I was stunned by the specificity of both the risk assessment and the maturity review.  Never before have we seen this level of prescriptiveness and granularity from the FFIEC in how risks and controls should be categorized.  For example, when assessing risk from third parties:

  • No third parties with system access = Least Risk
  • 1 – 5 third parties = Minimal Risk
  • 6 – 10 third parties = Moderate Risk
  • 11 – 25 third parties = Significant Risk, and
  • > 25 third parties = Most Risk
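
To illustrate just how mechanical this bucketing is, here is a minimal sketch of the third-party item (the thresholds are the ones listed above; the function name is mine, not the FFIEC's):

    def third_party_risk_level(count: int) -> str:
        """Map the number of third parties with system access to a CAT
        inherent risk level, using the thresholds listed above."""
        if count == 0:
            return "Least"
        if count <= 5:
            return "Minimal"
        if count <= 10:
            return "Moderate"
        if count <= 25:
            return "Significant"
        return "Most"

    print(third_party_risk_level(8))   # Moderate
    print(third_party_risk_level(30))  # Most

Any institution (or examiner) plugging in the same count gets the same answer, which is exactly the point.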

Again, this is quite different from what the FFIEC has done previously.  If Information Security guidance used the same level of detail, we might see specific risk levels for passwords of a certain length and complexity, and correspondingly higher control maturity levels for longer, more complex passwords.  Of course we don’t see that in current guidance; instead, what we get is a generic “controls must be appropriate to the size and complexity of the institution, and the nature and scope of its operations”, leaving the interpretation of exactly what that means to the institution, and to the examiner.

I see this new approach as a very good thing, because it removes all of the ambiguity from the process.  As someone who has seen institutions of all sizes and complexities struggle with how to interpret and implement guidance, I hope this represents a new approach to all risk assessments and control reviews going forward.  Imagine having a single worksheet for both institutions and examiners.  Both sides agree that a certain number of network devices, or third parties, or wire transfers, or RDC customers, or physical locations, constitute a very specific level of risk.  Control maturity is assessed based on implementation of specific discrete elements.  Removing the uncertainty and guesswork from examinations would be a VERY good thing for over-burdened institutions and examiners alike.



So all that is good, but there are 2 big challenges with the tool: collection, and interpretation and analysis.  The first is the most significant, and makes the tool almost useless in its current format.  Institutions are expected to collect, and then interpret and analyze, the data, but the tool doesn’t provide any way to do that, because all of the documents provided by the FFIEC are in PDF format.

Even if you can figure out how to record your responses, the responsibility for conducting a “gap” analysis still rests with the institution.  In other words, now that you know where your biggest risks reside, how do you then align your controls with those risks?

One more challenge: regulators expect you to communicate the results to the Board and senior management, which is not really possible without some way of collecting and summarizing the data.  They also recommend an independent evaluation of the results by your auditors, which also requires summary data.
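
Until the FFIEC (or a third party) provides something better, even a rough, homegrown sketch like the following can record responses and produce the kind of per-category summary a Board report or audit review needs.  (This is illustrative only; the CSV layout and field names are my own assumption, not part of the FFIEC tool or of ECAT.)

    from collections import Counter
    import csv

    def summarize_responses(path):
        """Tally how many items fall at each risk level, per category/domain.
        Expects a CSV with columns: domain, item, risk_level."""
        summary = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                summary.setdefault(row["domain"], Counter())[row["risk_level"]] += 1
        return summary

    # Example: print a per-domain breakdown suitable for a Board summary
    for domain, counts in summarize_responses("cat_responses.csv").items():
        print(domain, dict(counts))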

In summary, there is a lot to like in this guidance and I hope we see more of this level of specificity going forward.  But the tool should have the ability to (at a minimum) record responses, and ideally summarize the data at various points in time.  In addition, most institutions will likely need assistance summarizing and analyzing the results, and addressing the gaps.  But at least we now have a framework, and that is a good start.