Author: Tom Hinkel

As author of the Compliance Guru website, Hinkel shares easy-to-digest information security tidbits with financial institutions across the country. With almost twenty years of experience, Hinkel’s expertise spans the entire spectrum of information technology. He is also the VP of Compliance Services at Safe Systems, a community banking tech company, where he ensures that their services incorporate the appropriate financial industry regulations and best practices.
21 Jul 2011

BCP plans continue to draw criticism

In a recent FDIC IT examination, the examiner made the following criticism of the institution’s DR/BCP:

“Business continuity planning should focus on all critical business functions that need to be recovered to resume operations. Continuity planning for technology alone should no longer be the primary focus of a BCP, but rather viewed as one critical aspect of the enterprise-wide process. The review of each critical business function should include the technology that supports it.” (bold is mine)

This is not the first time we’ve seen this finding, nor is it a new direction for regulators; it follows directly from the 2008 FFIEC Business Continuity Planning Handbook, which states:

“The business continuity planning process involves the recovery, resumption, and maintenance of the entire business, not just the technology component. While the restoration of IT systems and electronic data is important, recovery of these systems and data will not always be enough to restore business operations.”

I still see way too many DR plans that focus on the recovery of technology instead of recovery of the critical processes the technology supports.  Sure, technology is a dependency of nearly every function you provide, but it must not be the primary focus of your recovery effort.  Focus instead on recovery of the entire process (teller, CSR, lending, funds management, etc.), recognizing that each process is nothing more than the sum of its interdependencies.  For example, what does it take to deliver typical teller functionality?

  • A physical facility for customers to visit
  • A trained teller
  • A functional application, consisting of:
    • A workstation
    • A printer
    • A database, requiring:
      • LAN connectivity
      • WAN (core) connectivity, requiring:
        • Core functionality
      • A server, requiring:
        • Access rights
      • etc.
    • etc.
  • etc.

As you can see, technology certainly plays a very important role, but it is not the only critical aspect of the process.  All sub-components must work, and work together, for the overall  process to work.  Mapping out the processes through a work-flow analysis is an excellent way to get your arms around all of the interdependencies.
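
For those who like to see this in more structured terms, here is a minimal sketch (in Python, with purely illustrative component names) of modeling a critical business function as a tree of interdependencies.  Walking the tree surfaces every sub-component that has to be recovered before the overall process can resume:

    # Minimal sketch: a business process modeled as a tree of interdependencies.
    # Component names are illustrative only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Component:
        name: str
        depends_on: List["Component"] = field(default_factory=list)

        def all_dependencies(self) -> List[str]:
            """Flatten the tree so every sub-component is visible."""
            names = []
            for dep in self.depends_on:
                names.append(dep.name)
                names.extend(dep.all_dependencies())
            return names

    core = Component("Core functionality")
    wan = Component("WAN (core) connectivity", [core])
    server = Component("Database server", [Component("Access rights")])
    database = Component("Teller database", [Component("LAN connectivity"), wan, server])
    application = Component("Teller application",
                            [Component("Workstation"), Component("Printer"), database])
    teller = Component("Teller function",
                       [Component("Physical facility"), Component("Trained teller"), application])

    # Recovering "the teller function" really means recovering every one of these:
    for item in teller.all_dependencies():
        print(item)

A work-flow diagram accomplishes the same thing on paper; the point is simply that the plan should enumerate every dependency of the process, not just the IT ones.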

So next time you perform the annual review of your BCP (and you do review your plan annually, right?), make sure your IT department isn’t the only one in the room!

13 Jul 2011

Interpreting The New FFIEC Authentication Guidance – 5 Steps to Compliance

We’ve all now had a couple of weeks to digest the new guidance, and what has emerged is a clearer understanding of what the guidance requires…and what it doesn’t.  But before we can begin to formulate the specific compliance requirements, we have to interpret what the guidance is actually saying…and what it isn’t.  And along the way I’ll take the liberty of suggesting what it should have required…and what it should have said.

The release consists of 12 pages total, but only a few pages are directly relevant to your compliance efforts.  Pages 1 and 2 talk about the “why”, and pages 6 through 12 discuss the effectiveness (or ineffectiveness) of various authentication techniques and controls.  But the section beginning on page 3 (“Specific Supervisory Expectations”) and running through the top of page 6 is where the FFIEC details exactly what they expect from you once compliance is required in January of next year.  Since this is the real “meat” of the guidance, let’s take a closer look at what it says, and try to interpret what that means to you and your compliance efforts.

Here are the requirements:

  • Risk Assessments, conducted…
    • …prior to implementing new electronic services
    • …anytime “new” information becomes available, defined as:
      • Changes in the internal and external threat environment, i.e. if you become aware of any new threats
      • If your customer base changes, i.e. you take on a higher risk class of customer
      • Changes in the functionality of your electronic banking product, i.e. your vendor increases the capabilities of the product
      • Your fraud experience, i.e. you experience an account takeover or security breach
    • …at least every 12 months, if none of the preceding occurs (a rough sketch of this review-timing logic follows this list)
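
Here is that review-timing logic sketched out, purely for illustration; the trigger flags would come from your own change-management and incident-response processes:

    from datetime import date, timedelta

    def risk_assessment_due(last_review, new_service_planned, new_threat_identified,
                            customer_base_changed, product_functionality_changed,
                            fraud_experienced, today=None):
        """A review is due on any trigger event, and in any case after twelve months."""
        today = today or date.today()
        triggered = any([new_service_planned, new_threat_identified, customer_base_changed,
                         product_functionality_changed, fraud_experienced])
        overdue = today - last_review > timedelta(days=365)
        return triggered or overdue

    # No triggers, but the last review was thirteen months ago -> a review is due.
    print(risk_assessment_due(date(2010, 6, 1), False, False, False, False, False,
                              today=date(2011, 7, 13)))  # True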

According to the guidance, your risk assessment must distinguish “high-risk” transactions from “low-risk” transactions. This is not as simple as saying that all retail customers are low-risk, and all commercial customers are high-risk. In fact, the FFIEC defines a “high-risk” transaction as “involving access to customer information or the movement of funds to other parties.” By this definition almost ALL electronic transactions are high-risk! A retail customer with bill-pay would qualify as high-risk because they access customer information (their own), and make payments (move funds) to other parties. So perhaps the way to interpret this is that your risk assessment process should distinguish between “high-risk” and “higher-risk”. This actually makes sense, because the frequency and dollar amounts of the transactions are what should really define risk levels and drive your specific layered controls, which is the next requirement.
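
Put another way, a risk-ranking scheme driven by frequency and dollar amount might look something like the sketch below; the tiers and thresholds are entirely made up and would have to come out of your own risk assessment:

    # Illustrative only: thresholds and tier names are placeholders, not guidance values.
    def transaction_risk_tier(moves_funds_to_third_parties, monthly_transaction_count,
                              average_dollar_amount):
        if not moves_funds_to_third_parties:
            return "lower risk"      # informational access only
        if monthly_transaction_count > 50 or average_dollar_amount > 10000:
            return "higher risk"     # e.g. commercial ACH / wire origination
        return "high risk"           # e.g. retail bill-pay

    print(transaction_risk_tier(True, 5, 150.00))       # high risk
    print(transaction_risk_tier(True, 200, 25000.00))   # higher risk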

  • Layered Security Programs (also known as “defense-in-depth”)

This concept means that controls should be layered so that gaps or weaknesses in one control (or layer of controls) are compensated for by other controls. Controls are grouped by timing (where they fit into a hierarchy of layers), and also by type.  So taken from the top layer down:

  1. Preventive – This is the first layer, the top of the hierarchy, and for good reason…it’s always best to prevent the problem in the first place. These are by far the most important, and should comprise the majority of your controls. Some examples drawn from the guidance are listed below (a sketch combining several of them follows this list):
    1. Fraud detection and monitoring (automated and manual)
    2. Dual customer authorization and control
    3. Out-of-Band (OOB) verification of transactions
    4. Multi-factor authentication
    5. “Positive Pay”, or payee white-lists
    6. Transaction restrictions such as time-of-day, dollar volume, and daily volume
    7. IP restrictions, or black lists
    8. Restricting account administrative activity by the customer
    9. Customer education (this was actually listed as a requirement, but it’s really a preventive control)
  2. Detective – Positioned directly below preventive in the hierarchy because detecting a problem is less desirable than preventing it. However, detective controls provide a safety net in the event preventive controls fail. Often a preventive control also has a detective component. For example, control #2 above can both require dual control and report if authentication fails on either login. Same with #5…if a payment is blocked because the payee is not white-listed (or is blocked by IP restrictions as in control #7), the institution can be automatically notified. In fact, most preventive controls also have a detective element to them.
  3. Corrective / Responsive – An important step to make sure the event doesn’t recur. This typically involves analyzing the incident and trying to determine which control failed, and at what level of the hierarchy. However, just as many preventive controls can detect, the best controls can also respond on their own. For example, the same control that requires dual authorization and reports on a failed login can also lock the account. In fact, just as before, many preventive controls also have a responsive capability.
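
To make the point that most preventive controls carry a detective element, here is a minimal sketch combining several of the controls above (payee white-list, a dollar limit, time-of-day restrictions, and IP restrictions) into a single pre-transaction check that both blocks and alerts.  Every name, address, and threshold here is hypothetical:

    from datetime import datetime

    # Hypothetical per-account policy; none of these values come from the guidance.
    POLICY = {
        "approved_payees": {"ACME Payroll", "County Tax Office"},  # positive pay / white-list
        "allowed_ip_prefixes": ("10.1.", "192.168.5."),            # IP restrictions
        "max_dollar_amount": 25000.00,                             # per-transaction limit
        "business_hours": range(8, 18),                            # time-of-day restriction
    }

    def check_transaction(payee, amount, source_ip, when):
        """Preventive check that also leaves a detective trail (the reasons list)."""
        reasons = []
        if payee not in POLICY["approved_payees"]:
            reasons.append("payee not on white-list")
        if amount > POLICY["max_dollar_amount"]:
            reasons.append("amount exceeds limit")
        if not source_ip.startswith(POLICY["allowed_ip_prefixes"]):
            reasons.append("source IP not permitted")
        if when.hour not in POLICY["business_hours"]:
            reasons.append("outside permitted hours")
        if reasons:
            # Detective/responsive element: the control that blocks can also alert.
            print("ALERT: blocked payment to %s: %s" % (payee, ", ".join(reasons)))
        return not reasons, reasons

    check_transaction("Unknown LLC", 40000.00, "203.0.113.7", datetime(2011, 7, 21, 2, 30))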

To properly demonstrate compliance with the layered security program requirement, you should have controls at each of the three layers. Simply put, higher risk transactions require more controls at each layer.

OK, now that you’ve classified your controls as preventive, detective, and corrective (preferably a combination of all three!), you’ll need to determine the control type: overlapping, compensating, or complementary.

Overlapping controls are simply different controls designed to address the same risk.  Sort of a belt-and-suspenders approach, where if one control misses something, a second (or third, or fourth) will catch it.  Again, higher risk transactions should have multiple overlapping controls, and they should exist at each layer.  So you should have overlapping preventive controls, as well as overlapping detective and even corrective controls.  The idea is that no single failure of any control at any layer will cause a vulnerability.

Compensating controls are those where one control makes up for a known gap in another control (belt-or-suspenders).  This gap may exist either because it is impossible or impractical to implement the preferred control, or because of a known shortcoming in that control.  An example might be that you feel transaction restrictions (#6 above) are appropriate for a particular account, but the customer requires 24/7 transaction capability.  To compensate, you might require that positive pay and IP restrictions be implemented instead.

Complementary controls are often overlooked (indeed they aren’t mentioned at all in the guidance), but in my experience they are among the most effective controls…and the most common to fail.  Complementary controls (sometimes called “complementary user controls”) are those that require action on the part of the customer.  For example, the dual authorization control (#2 above) can be very effective IF the customer doesn’t share passwords internally.  Similarly, customer education (#9) requires cooperation from the customer to be effective.  Given the partnership between the institution and the customer (and recent litigation), I’m surprised there isn’t more discussion about the importance of complementary controls.  (I addressed some of these issues here.)
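
One practical way to demonstrate all of this is a simple control inventory that tags every control with its layer and its type, then flags any layer left uncovered.  A rough sketch, using an illustrative (not recommended) set of controls:

    # Illustrative inventory; the layer and type tags follow the discussion above.
    CONTROLS = [
        {"name": "Multi-factor authentication",     "layer": "preventive", "type": "overlapping"},
        {"name": "Positive pay / payee white-list", "layer": "preventive", "type": "compensating"},
        {"name": "Dual customer authorization",     "layer": "preventive", "type": "complementary"},
        {"name": "Fraud detection and monitoring",  "layer": "detective",  "type": "overlapping"},
        {"name": "Failed-login reporting",          "layer": "detective",  "type": "overlapping"},
        {"name": "Automatic account lockout",       "layer": "corrective", "type": "overlapping"},
    ]

    def coverage_gaps(controls):
        """Return any of the three layers that has no control assigned to it."""
        required_layers = {"preventive", "detective", "corrective"}
        covered = {c["layer"] for c in controls}
        return required_layers - covered

    print(coverage_gaps(CONTROLS))  # empty set means every layer has at least one control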

In summary, demonstrating compliance to the spirit and letter of the guidance is simply a matter of following these 5 steps:

  1. Risk rank your on-line banking activities from higher to lower based on account type (consumer or commercial), and by frequency and dollar amount.
  2. Apply overlapping, compensating, and complementary controls (#1 – #9 above) in…
  3. …preventive, detective, and corrective layers,…
  4. …commensurate with the risk of the transaction.  Higher risk = more controls.
  5. Periodically repeat the process.

Nothing to it, right?!  Well not exactly, but hopefully this makes the guidance a bit clearer and easier to follow.

By the way, although I have been somewhat critical of the updated guidance, I don’t share the criticism of others that the guidance should have gone further and addressed mobile banking or other emerging technology.  Frankly, if new guidance were issued every time technology progressed, we’d have a couple of updates every year.  Instead, focus on the fundamentals of good risk management.  After all, there is a good reason why the FFIEC hasn’t had to update its guidance on information security since 2006 (even though the threat landscape has changed dramatically since then)…they got it right the first time!

28 Jun 2011

Final FFIEC Authentication Guidance just released

Well, after much anticipation and speculation we finally have the updated FFIEC guidance, and there doesn’t appear to be anything radically new here that would justify waiting an additional 6 months.  At the very least I thought we might see some changes in the Effectiveness of Certain Authentication Techniques section, or in the Appendix (Threat Landscape and Compensating Controls), but both sections are virtually unchanged.  That said, I compared the final release against the draft, and here are my observations divided into three categories: Good, Bad, and Odd (all bold and italics in quoted text are mine):

Good

Education: “A financial institution’s customer awareness and educational efforts should address both retail and commercial account holders…”.  Agreed.  This is a good change from the draft.  Education shouldn’t be limited to high-risk transactions only.

Risk Assessments for Financial Institutions: (Generally good, but see below under Bad and Odd)

Layered Security Programs: (Again, generally good, but see below under Bad)

Bad

Multifactor Authentication: The draft release stated that “Financial institutions should implement multifactor authentication and layered security…”.  The final release changed that to “Financial institutions should implement layered security…the Agencies recommend that institutions offer multifactor authentication to their business customers.”  The verbiage change seems to remove multifactor authentication as a requirement.

Layered Security Programs: “Financial institutions should implement a layered approach to security for high-risk Internet-based systems”…how about layered security for ALL Internet-based systems?  (Late edit – the guidance does recommend layered security for both retail and business banking customers, but only “consistent with the risk” for retail customers.  More prescriptive guidance on determining a high risk retail customer from a low risk retail customer would have been beneficial here.)

And,

“The Agencies expect that an institution’s layered security program will contain the following two elements, at a minimum…Detect and Respond to Suspicious Activity.”  NO, NO, NO, the FFIEC has previously stated that layered security programs must contain ALL THREE types of controls: preventive, detective and corrective.  The omission of preventive controls is particularly puzzling because the section on Control of Administrative Functions states “For example, a preventive control could include…An example of a detective control could include…”.  Preventive controls are the least expensive to implement, and by far the most effective.  It’s such a glaring error that I’m inclined to believe it was a typo.

Risk Assessments for Financial Institutions: The final release changed an “and” to an “or”, possibly introducing some confusion.  Here is what the draft release said:

“Financial institutions should review and update their existing risk assessments as new information becomes available, focusing on authentication and related controls at least every twelve months and prior to implementing new electronic financial services.”

The final release says:

“Financial institutions should review and update their existing risk assessments as new information becomes available, prior to implementing new electronic financial services, or at least every twelve months.”

The final guidance might be misinterpreted to suggest that conducting a risk assessment every 12 months is an either/or option rather than a minimum requirement.

Risk Assessments for Customers: “A suggestion that commercial online banking customers perform a related risk assessment and controls evaluation periodically”  Really?  Nothing stronger than a suggestion?  This should be a requirement.

Odd

Specific Supervisory Expectation – Risk Assessments: Original draft:  “…financial institutions should perform periodic risk assessments and adjust their customer authentication, layered security, and other controls as appropriate in response to identified risks, including consideration of new and evolving threats to customers’ online accounts.”

Final release:  “…financial institutions should perform periodic risk assessments considering new and evolving threats to online accounts and adjust their customer authentication, layered security, and other controls as appropriate in response to identified risks.”

I’m not sure why the verbiage changed there, and I don’t perceive any change in meaning…I actually liked the original verbiage better.

General Supervisory Expectations: This verbiage appeared in both the draft and final version, and it is this final odd observation that disturbs me the most…”The concept of customer authentication…is broad.  It includes more than the initial authentication of the customer when he/she connects to the financial institution at login.”  It would be extremely instructive here to define the “concept” by focusing on what it is, not on what it isn’t.  Specifically, what more does the concept include beyond initial authentication?  The authentication of the transaction itself?  The transmission of the transaction through the Internet?  All interaction of the user with the interface?  The processing of the transaction at the financial institution and/or payment provider?  By defining the concept only as “broad”, by saying only that it includes more than the initial authentication, this guidance has the potential to expand the liability of the financial institution, and I can easily see this being used in a future legal proceeding to obfuscate the lines of responsibility*.

In the end, although the basics of the guidance are sound, I was disappointed that it didn’t go further.  I will repeat what I said back in February: the guidance is still behind the curve on this issue, and institutions simply have too much to lose.  Implement additional preventive controls on the merchant side and additional controls on the institution side (such as dual authorization, out-of-band verification, positive pay, etc.), conduct annual (or more frequent) risk assessments, and most of all, educate everyone on basic security best practices.


*Particularly on the heels of recent court cases, one of which went the customer’s way, and the other (so far) going the bank’s way.

27 Jun 2011

Audits vs. Examinations

As I speak with those in financial institutions responsible for responding to audit and examination requests, I find that there is considerable confusion over the differences between the two.  Some of this confusion is understandable…there is certainly overlap between them, but there are also considerable differences in the nature and scope of each.  It may sometimes seem as if you are being asked to comply with two completely different standards.  How often has the auditor had findings on something you’ve never been asked about during an examination?  And how often has an examiner thrown you a curve ball seemingly out of left field?

In a perfect world shouldn’t the audit be nothing more than preparation for the examination?  The scope of the audit should be no more and no less than what you need to get past the examination.  Any more and you feel as though you’ve wasted resources (time and money), any less and you haven’t gotten your money’s worth, right?  Well…actually no.  While the two have the same broad goal of assessing alignment with a set of standards, the audit will often use a broader set of industry standards and best practices.  This is because the FFIEC guidance is so general and non-prescriptive.  For example, take one of the questions in the FDIC Information Technology Officer’s Pre-Examination Questionnaire.

“Do you have a written information security program designed to manage and control risk (Y/N)?”

Of course the correct answer is “Y”, but since the FDIC doesn’t provide an information security program template, how do you know that your program will be acceptable to the regulators?  You know because your IT auditor has examined your InfoSec program, and compared what you have done to existing IT best practices and standards, such as COBIT, ITIL, ISO 27001, SAS 94, NIST, and perhaps others.  While this doesn’t guarantee that your institution won’t have examination findings, it will reduce both the probability and the severity of those findings.  This point is critical to understanding the differences between an audit and an examination; an audit will identify and allow you to correct the root cause of potential examination findings prior to the examination.  So using the example above, even if the examiner has findings related to your information security program, they will be related to how you addressed the root cause, not whether you addressed it.  (I’m defining root cause as anything found in the Examination Procedures.)  In fact, the FFIEC recognizes the dynamic between the IT audit and examination process this way:

“An effective IT audit function may also reduce the time examiners spend reviewing areas of the institution during examinations.”

And reduced time (usually) equals fewer curve balls, and a less stressful examination experience!

14 Jun 2011

SOC 2 vs. SAS 70 – 5 reasons to embrace the change

The SOC 2 and SOC 3 audit guides have recently been released by the AICPA, and the SAS 70 phase-out becomes effective tomorrow.  The more I learn about these new reports the more I like them.  First of all, as a service provider to financial institutions we will have to prepare for this engagement (just as we did for the SAS 70), so it’s certainly important to know what changes to expect from that perspective.  But as a trusted adviser to financial institutions struggling to manage the risks of outsourced services (and the associated vendors), the information provided in the new SOC 2 and SOC 3 reports is a welcome change.  In fact, if your vendor provides services such as cloud computing, managed security services, or IT outsourcing, the new SOC reports provide exactly the assurances you need to address your concerns.  Here is what I see as the 5 most significant differences between the SAS 70 and the SOC 2 reports, and why you should embrace the change:

Management Assertion – Management must now provide a written description of their organization’s system (software, people, procedures and data) and controls. The auditor then expresses an opinion on whether or not management’s assertion is accurate. Why should this matter to you? For the same reason the regulators want your Board and senior management to approve everything from lending policy to DR test results…accountability.

Relevance – The SAS 70 report was always intended to be a your-auditor-to-their-auditor communication; it was never intended to be used by you (the end-user) to address your institution’s concerns about vendor privacy and security. The SOC reports are intended as service-provider-to-end-user communications, with the auditor providing verification (or attestation) as to their accuracy.

Scope – Although the SAS 70 report addressed some of these areas, the SOC 2 report directly addresses all five of the following concerns:

  1. Security. The service provider’s system is protected against unauthorized access.
  2. Availability. The service provider’s system is available for operation as contractually committed or agreed.
  3. Processing Integrity. The provider’s system is accurate, complete, and trustworthy.
  4. Confidentiality. Information designated as confidential is protected as contractually committed or agreed.
  5. Privacy. Personal information (if collected by the provider) is used, retained, disclosed, and destroyed in accordance with the provider’s privacy policy.

If these sound familiar, they should.  The FFIEC Information Security Booklet lists the following security objectives that all financial institutions should strive to accomplish:

  1. The Privacy and Security elements of GLBA
  2. Availability
  3. Integrity of data or systems
  4. Confidentiality of data or systems
  5. Accountability
  6. Assurance

As you can see, there is considerable overlap between the FFIEC requirements and the scope of a typical SOC 2 engagement.

Testing – Like the SAS 70, the SOC 1 and SOC 2 are available in both a Type I and a Type II format.  A Type I speaks only to the adequacy of vendor controls, but a Type II gives management assurance that the vendor’s controls are not just adequate, but also effective.  The auditor can do this in a Type II engagement because they are expected not just to inquire about control effectiveness, but to actually observe the control operating effectively via testing.  And because the scope of the SOC 2 is significantly greater than the SAS 70 (see above), the test results are much more meaningful to you.  In fact, the SOC 2 audit guide itself suggests that because your concerns (particularly as a financial institution) are in both control design and effectiveness, a Type I report is unlikely to provide you with sufficient information to assess the vendor’s risk management controls.  For this reason you should insist on a Type II report from all of your critical service providers.

Vendor Subcontractors – This refers to the subcontractor(s) of your service provider, and again this is another FFIEC requirement that is directly addressed in the new SOC reports.  The FFIEC in their Outsourcing Technology Services Booklet states that an institution can select from two techniques to manage a multiple service provider relationship:

  1. Use the lead provider to manage all of their subordinate relationships, or
  2. Use separate contracts and operational agreements for each subcontractor.

The booklet suggests that employing the first technique is the least cumbersome for the institution, but that either way:

“Management should also ensure the service provider’s control environment meets or exceeds the institution’s expectations, including the control environment of organizations that the primary service provider utilizes.”

The audit guidelines of the SOC 2 engagement require the service auditor to obtain an understanding of the significant vendors whose services affect the service provider’s system, and assess whether they should be included in the final report, or “carved-out”.  Given the regulatory requirement for managing service providers you should insist on an “inclusive” report.

In summary, there will be an adjustment period as you adapt to the new reports, but in the end I think this is a significant step in the right direction for the accuracy and effectiveness of your vendor management program.

07 Jun 2011

SAR Filings – Computer Intrusion vs. Identity Theft

The Financial Crimes Enforcement Network (FinCEN) publishes a statistical summary and review of all suspicious activity report (SAR) filings a couple of times per year.  The latest one was just released in May, covering the 10-year period from 1/1/2001 through 12/31/2010.  I thought it might be interesting to see how the category of Computer Intrusion (Part III, item 35 f) compared with Identity Theft (Part III, item 35 u) during that period of time:

As expected, reported incidents of identity theft increased sharply and remain relatively high today…no surprises there.  What did surprise me, though, is the low number of reported computer intrusion incidents over the past 5 years.  (The initial blip in 2003 was due to the fact that when the Computer Intrusion category was added in 2000, it initially defined an intrusion as “gaining access or attempting to gain access to a computer system of a financial institution”.  That meant that each time the firewall blocked an attempt, it had to be reported.  Obviously this proved to be extremely labor intensive, and the verbiage was changed in 2003 to define an intrusion as actually gaining access to the system.)

I suppose the lesson here* is that financial institutions are doing a far better job securing their networks than they are securing their customer data, which leads me to the conclusion that the vast majority of identity theft must be occurring outside the protected perimeter of the institutions’ networks.  Remember, you must protect your data at every stage of its existence; during processing, in transit, and in storage, and regardless of its physical or electronic nature.

By the way, the newest SAR form for depository institutions is here.  It was just updated in March, and institutions must use this to replace the older form (dated July 2003) by September 30th of this year.  I compared the two forms side by side to see what the differences were, but couldn’t find a single change besides the date, so I’m not sure why the new form is required, but it is.

*Of course another possibility is that computer intrusions are simply being under-reported, but since most financial institutions have been subjected to regular audits, penetration tests and examinations, I believe that the low incidence is probably accurate.