Category: Hot Topics

20 Sep 2011

FDIC Sues Bank Directors (again)

On June 19, 2009, Cooperative Bank in Wilmington, NC was closed by the North Carolina Commissioner of Banks and the FDIC.  Federal banking regulators are now suing Cooperative Bank’s chairman and eight members of the board of directors for more than $145 million for negligence and breaches of fiduciary duty.  One of the FDIC’s assertions in the suit is that the “…bank materially deviated from its approved business plan” and did not adequately control the risks.  But this is not the only such instance; it’s merely the latest.

If you are a bank director or officer and your bank fails, there is a 1 in 4 chance that the FDIC will sue you.  In fact, as of September 13, 2011, the FDIC has authorized suits in connection with 32 failed institutions against 294 individuals, with damage claims of at least $7.2 billion.  And officers and directors are not the only targets…attorneys, accountants, appraisers, brokers and other professionals working on behalf of the bank can be held liable as well.  More importantly, the pace is increasing rapidly.  From 1986 through 2010 a total of 109 defendants were named in lawsuits, but in just the first eight months of 2011, 185 have been named.

The FDIC regulations defining officer and director obligations are explained here, and the key concept is something called the “duties of loyalty and care.”

 The duty of loyalty requires directors and officers to administer the affairs of the bank with candor, personal honesty and integrity. They are prohibited from advancing their own personal or business interests, or those of others, at the expense of the bank.

 The duty of care requires directors and officers to act as prudent and diligent business persons in conducting the affairs of the bank.

But the guidance states that the FDIC will not bring civil suit if it finds that they’ve…

  1. “…made reasonable business judgments…
  2. …on a fully informed basis, and…
  3. …after proper deliberation.”

If you are an officer or director, preventing a lawsuit in the first place is far preferable to having to defend yourself after being named, and prevention is entirely predicated on being able to demonstrate that you’ve properly exercised your duties.  Exercising your duties means making reasonable business decisions after proper deliberation.  The key to proper deliberation is that you be fully informed, and that requires accurate, timely and relevant information.   Not just data, but actionable information.

I’ve written before about how technology (specifically automation) can enable and/or enhance your compliance efforts, particularly in the effort to extract useful information from mountains of data.  I’ve also discussed how management committees like the IT committee and the audit committee can provide both a forum for the exchange of information, and documentation that the exchange took place.  And don’t underestimate the value of having outside expertise on those committees.  Not only can it add a different perspective, it can also help document that you are making an effort to be “fully informed” and that you are “properly deliberating”.

Now here is a question to ponder…if the regulators are found to have been at least partially liable for the failure of an institution, can they be named as a party to the lawsuit?   In my next post I’ll take a look at some recent Material Loss Reviews, and examine the regulator mandate of “Prompt Corrective Action”.  In the meantime, what do you think…can the FDIC be both a plaintiff and a defendant?

14 Sep 2011

The current single biggest security threat to financial institutions – UPDATE

(UPDATE – Hord Tipton, executive director of (ISC)2, posted recently on the biggest data breaches of the past year.  His analysis confirms that “…humans are still at the heart of great security successes – and, unfortunately, great security breaches…The lesson we learn from this year’s breaches is that most of them were avoidable – even preventable – if humans had exercised best practices at the proper times.”)

What was the nature of the attack on the security company RSA that they described as “extremely sophisticated”  and an “advanced persistent threat”?  Simply put, it was a fairly ordinary phishing email that was sent to RSA employees that contained an Excel spreadsheet with an embedded Adobe Flash exploit.  At least one employee opened the attachment.  The exploit allowed the attacker to install a backdoor and subsequently gain access to the information they were after.

I wrote about this here, discussing how password tokens (like RSA’s) were just one factor in one layer of a multi-factor, multi-layered security process.  At the time the post was written (shortly after the attack became public) we weren’t sure about either the nature of the attack or exactly what was taken, but at this point it is pretty clear that the real weakness exploited at RSA is still out there, and it can’t be fixed by a patch or an update.  In fact, according to recent IT audits, this particular vulnerability is still present at most financial institutions…the employee.  Or more specifically, the under-trained-and-tested employee.

How do you address this threat?  Sure, regular critical patch updates and Anti-virus/Anti-malware software are important, but the only way to mitigate the employee risk is through repeated testing and training.  As far back as 2004 the FFIEC recognized social engineering as a threat, stating in their Operations booklet:

Social engineering is a growing concern for all personnel, and in some organizations personnel may be easy targets for hackers trying to obtain information through trickery or deception.

And as recently as this year social engineering is mentioned again in the recent FFIEC Internet Authentication guidance:

Social engineering involves an attacker obtaining authenticators by simply asking for them. For instance, the attacker may masquerade as a legitimate user who needs a password reset or as a contractor who must have immediate access to correct a system performance problem. By using persuasion, being aggressive, or using other interpersonal skills, the attackers encourage a legitimate user or other authorized person to give them authentication credentials. Controls against these attacks involve strong identification policies and employee training.

Most financial institutions already include some form of social engineering testing in their IT controls audits, typically as part of a penetration (pen) test.  Auditors assessing social engineering control effectiveness will use various techniques to entice an employee to enter their network authentication credentials.  Posing as a customer, an employee, or a trusted vendor, or even going to the extreme of constructing a website with the same look and feel as the institution’s actual website, auditors have been extremely effective in getting employees to disclose information.  In fact, in the social engineering tests I’ve seen, the vast majority resulted in at least one employee disclosing confidential information, and in many cases 50% or more of employees handed over information.  Although I believe this number is slowly declining, if the RSA breach taught us anything it was that all it takes is one set of disclosed credentials from one employee to compromise the organization.

So if neither social engineering nor the need for training and testing is a new concept to financial institutions, why is this such a persistent problem?  After all, most institutions have been conducting information security training for years.  In fact, as part of their IT examinations, examiners have been required to “…review security guidance and training provided to ensure awareness among employees and contractors, including annual certification that personnel understand their responsibilities.”

I think a big part of the challenge is that financial institution employees are specifically hired for their customer service skills; their willingness to help each other and the customer.  These are exactly the personality traits that you want in a customer-facing employee.  But this helpful attitude is also exactly why financial institution employees are notoriously difficult to train on information security.  (An excellent summary of this is found in a technical overview paper published by Predictive Index.)  The same personality traits that make employees want to help are also correlated with a general lack of suspicion.  And a little suspicion can be more useful in preventing social engineering attacks than all the formal training in the world.

Suspicion can’t be taught, but adherence to policies and procedures can.  And fortunately, one personality trait that is correlated with a helpful attitude is a willingness and ability to follow the rules.  Perhaps the answer is to spend more training time making sure your employees know what is expected of them (as defined in your policies) and how they are expected to respond to requests for information, and less time discussing why (i.e. the current threat environment).  Make sure you include social engineering testing as part of your annual IT audits, because this is the only way to measure the success of your training efforts.  And if the testing results indicate that more training is necessary, repeat training and testing not just annually but as frequently as you have to until the test response rate is zero.  Also, use the news of recent cyber-incidents as an opportunity to stage “what would you do in this circumstance” training exercises with your employees.  In the end this is one risk you’ll never completely eliminate…the best you can hope for is that you don’t become a training exercise for someone else!

31 Aug 2011

Online Transactions – Defining “Normal”

I’ve gotten several inquiries about this since I last posted so I thought I’d better address it.  The new FFIEC authentication guidance requires you to conduct periodic risk assessments, and to apply layered controls appropriate to the level of risk.  Transactions like ACH origination and interbank transfers involve a generally higher level of risk to the institution and the customer, and as such require additional controls.  But here’s the catch…given the exact same product with the exact same capabilities one customer’s normal activity is another customer’s abnormal.  So defining normal is critical to identifying your abnormal, or “high-risk”, customers.

Most Internet banking software has built-in transaction monitoring or anomaly detection capabilities, and vendors that don’t are scrambling to add it in the wake of the guidance.  As the guidance states:

“Based upon the incidents the Agencies have reviewed, manual or automated transaction monitoring or anomaly detection and response could have prevented many of the frauds since the ACH/wire transfers being originated by the fraudsters were anomalous when compared with the customer’s established patterns of behavior.”

So automated anomaly detection systems can be a very effective preventive, detective and responsive control.  But I think there is a very real risk that a purely automated system may not be enough, and may even make the situation worse in some cases.  For one thing, any viable risk management solution must strike a balance between security and usability.  A highly secure automated anomaly detection and prevention system may be so tightly tuned that it becomes a nuisance to the customer or a burden to the institution.  Customers are already reluctant to accept any constraints on usability, even when those constraints are in their best interest.  And if your requirements are just a little more onerous than your competitor’s, you risk losing the customer to them.  Interesting paradox…you implement additional controls to protect your customers, and lose them to a (potentially) less secure competitor!

But another way a purely automated solution may not achieve the desired result is that it may actually lull the institution into a false sense of security.  I’ve already heard this in my discussions with our customers…”My vendor says they will fully comply with the new guidance, so I’m counting on them.”  And indeed the vendors are all saying “Don’t worry, we’ve got this…”.  But do they?  In at least one incident, transaction monitoring did not stop an account take-over because according to the automated systems the fraudulent transactions were all within the range of “normal”.

So what more should you do?  One thing is to make sure that you don’t rely solely on your vendor to define “normal”.  Just as with information security, you can, and (because of your reliance on the vendor) should, outsource many of the risk management controls.  But since you cannot outsource the responsibility for transaction security, you must take an active role with your vendor by sharing responsibility for monitoring.  One way to do this is to participate in setting the alert triggers.  For example, a high volume of account inquiries may trigger an automated anomaly alert, but really doesn’t carry a high risk of loss.  (However, it could be indicative of the early stages of an account takeover, so it shouldn’t be completely ignored either.)  On the other hand, a slight increase in interbank transfers may not trigger an alert, but could carry a potentially large loss.  Rank the capabilities of each product by risk of loss, and work with your vendor to set anomaly alerts accordingly.

Once you’ve established “normal” ranges for your products by capability, and set the anomaly triggers, your vendor should be able to generate reports for you showing deviations from normal for each product.  The next step is to separately assess each customer that falls outside those normal ranges.  Anomaly triggers for these customers should necessarily be set more tightly, and your vendor should be able to provide deviation reports for those as well.  By regularly reviewing these reports you are demonstrating a shared security responsibility approach, and most of all, demonstrating an understanding of both the letter and spirit of the guidance.
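To make the deviation-report idea concrete, here is a minimal sketch of per-customer anomaly flagging.  The profile fields, threshold values, and function names are my own illustrative assumptions, not taken from any vendor’s actual system:

```python
# Illustrative sketch: flag transfers that fall outside a customer's
# established "normal" ranges. The profile fields and thresholds are
# assumptions for the example, not any vendor's actual schema.
from dataclasses import dataclass

@dataclass
class CustomerProfile:
    customer_id: str
    max_daily_transfers: int    # normal upper bound on transfer count
    max_transfer_amount: float  # normal upper bound on a single transfer

def flag_anomalies(profile, todays_transfers):
    """Return the transfers that deviate from this customer's normal."""
    if len(todays_transfers) > profile.max_daily_transfers:
        # Frequency anomaly: flag the whole batch for manual review.
        return list(todays_transfers)
    # Otherwise flag only the individual dollar-amount anomalies.
    return [amt for amt in todays_transfers
            if amt > profile.max_transfer_amount]

profile = CustomerProfile("ACME-001", max_daily_transfers=5,
                          max_transfer_amount=10_000.00)
print(flag_anomalies(profile, [2_500.00, 48_000.00, 900.00]))
# → [48000.0]
```

In practice the “normal” bounds would come from the per-product and per-customer ranges discussed above, and the flagged items would feed the vendor’s deviation reports that you review.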

Remember, although your vendor can help, “normal” transaction frequency and dollar amounts must be defined by you based on your understanding of the nature and scope of your on-line banking activities.

22 Aug 2011

Risk Assessing Internet Banking – Two Different Approaches

One of the big “must do” take-aways from the updated FFIEC Authentication Guidance was the requirement for all institutions to conduct risk assessments.  Not just prior to implementing electronic banking services, but periodically throughout the relationship if certain factors change, such as:

  • changes in the internal and external threat environment, including those discussed in the Appendix to this Supplement;
  • changes in the customer base adopting electronic banking;
  • changes in the customer functionality offered through electronic banking;
  • and actual incidents of security breaches, identity theft, or fraud experienced by the institution or industry.

The guidance also mandated annual re-assessments if none of these previous factors change, but given the increasingly hostile on-line environment it’s really a question of ‘when’ actual incidents occur, not ‘if’.  That being the case, if you only update your risk assessment annually the regulators could reasonably take the position that you’re not doing it often enough.

So risk assessments must occur “routinely”, but what is the best way to approach them?  Although the guidance does not specify a particular approach, it might be instructive to look at what the FFIEC has to say about Information Security and Disaster Recovery, both of which require (separate) risk assessments.  In both cases the FFIEC encourages you to approach the task by analyzing the probability and impact of the threat, not the nature of the threat.  This makes perfect sense.  By shifting the focus of your risk assessment off the moving target of the constantly changing threat environment, and on to strengthening the overall security of your Internet-based services1, you can build a secure transaction environment that will scale and evolve as you grow.  Here is the critical difference between the two approaches: if you take a “nature-of-the-threat” approach, you must list every possible specific threat, both existing and reasonably anticipated2.  That approach doesn’t work very well for disaster recovery or information security risk assessments, and in my opinion it is not the best approach for Internet banking either.

Although certainly not the only way to do the risk assessment, I would recommend a 2-step approach that addresses most, if not all, of the updated FFIEC guidelines.  Step 1 is to assess the overall risk of each product by listing its capabilities and controls.  As part of that step, determine how many customers use the product, and how many of those you consider “high-risk” as defined by high transaction frequency and high dollar amount.  In Step 2, list the high-risk customers identified in Step 1 separately, along with the associated controls you plan to implement for each one.
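As a rough illustration of the 2-step approach, here is a sketch in Python.  The product names, capabilities, controls, and the “high-risk” thresholds are hypothetical assumptions for the example only:

```python
# Step 1: list each product's capabilities and controls, count its users,
# and identify the "high-risk" users by frequency and dollar amount.
products = {
    "bill_pay":        {"capabilities": ["payments"],
                        "controls": ["multi-factor authentication"]},
    "ach_origination": {"capabilities": ["payments", "interbank transfers"],
                        "controls": ["multi-factor authentication",
                                     "dual authorization"]},
}

customers = [
    {"id": "C1", "product": "bill_pay",
     "monthly_txns": 4, "monthly_dollars": 1_200},
    {"id": "C2", "product": "ach_origination",
     "monthly_txns": 120, "monthly_dollars": 750_000},
]

def product_summary(product, customers, txn_limit=50, dollar_limit=100_000):
    users = [c for c in customers if c["product"] == product]
    high_risk = [c["id"] for c in users
                 if c["monthly_txns"] > txn_limit
                 or c["monthly_dollars"] > dollar_limit]
    return {"users": len(users), "high_risk": high_risk}

# Step 2 would then list each high-risk customer separately with the
# controls planned for that customer; here we just surface who they are.
print(product_summary("ach_origination", customers))
# → {'users': 1, 'high_risk': ['C2']}
```

The point of the sketch is the structure, not the numbers: product-level risk first, then a separate per-customer list for the outliers.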

Again, there is no one single way to do this correctly.  Whatever you do should be consistent with the size and complexity of your institution, and the nature and scope of your Internet banking operations.  Good luck!

 

1 Although other regulations and guidelines address financial institutions’ responsibilities to protect customer information and prevent identity theft, this guidance specifically addresses Internet authentication, and should be the primary focus of this risk assessment.

2 You must still re-assess if either you or the industry experience any actual incidents, but instead of adding a new threat to your risk assessment, you simply determine if your existing control environment is sufficient to address the impact of the threat. In other words, you re-assess for the impact, not the nature of the threat.

13 Jul 2011

Interpreting The New FFIEC Authentication Guidance – 5 Steps to Compliance

We’ve all now had a couple of weeks to digest the new guidance, and what has emerged is a clearer understanding of what the guidance requires…and what it doesn’t.  But before we can begin to formulate the specific compliance requirements, we have to interpret what the guidance is actually saying…and what it isn’t.  And along the way I’ll take the liberty of suggesting what it should have required…and what it should have said.

The release consists of 12 pages total, but only a couple of pages are directly relevant to your compliance efforts.  Pages 1 and 2 talk about the “why”, and pages 6 through 12 discuss the effectiveness (or ineffectiveness) of various authentication techniques and controls.  But beginning on page 3 (“Specific Supervisory Expectations”), through the top of page 6, is where the FFIEC details exactly what they expect from you once compliance is required in January of next year.  Since this is the real “meat” of the guidance, let’s take a closer look at what it says, and try to interpret what that means to you and your compliance efforts.

Here are the requirements:

  • Risk Assessments, conducted…
    • …prior to implementing new electronic services
    • …anytime “new” information becomes available, defined as:
      • Changes in the internal and external threat environment, i.e. if you become aware of any new threats
      • If your customer base changes, i.e. you take on a higher risk class of customer
      • Changes in the functionality of your electronic banking product, i.e. your vendor increases the capabilities of the product
      • Your fraud experience, i.e. you experience an account takeover or security breach
    • …at least every 12 months, if none of the preceding occurs

According to the guidance, your risk assessment must distinguish “high-risk” transactions from “low-risk” transactions. This is not as simple as saying that all retail customers are low-risk, and all commercial customers are high-risk. In fact, the FFIEC defines a “high-risk” transaction as “involving access to customer information or the movement of funds to other parties.” By this definition almost ALL electronic transactions are high-risk! A retail customer with bill-pay would qualify as high-risk because they access customer information (their own), and make payments (move funds) to other parties. So perhaps the way to interpret this is that your risk assessment process should distinguish between “high-risk” and “higher-risk”. This actually makes sense, because the frequency and dollar amounts of the transactions are what should really define risk levels and drive your specific layered controls, which is the next requirement.
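The “high-risk” versus “higher-risk” interpretation above can be sketched as follows; the frequency and dollar thresholds are illustrative assumptions, not values from the guidance:

```python
# Hypothetical sketch of the "high-risk" vs. "higher-risk" distinction
# argued above. The frequency and dollar thresholds are illustrative
# assumptions, not values from the guidance.
def risk_level(accesses_customer_info, moves_funds,
               txns_per_month, dollars_per_month,
               txn_limit=50, dollar_limit=100_000):
    if not (accesses_customer_info or moves_funds):
        return "low"
    # Nearly all electronic transactions land here per the FFIEC
    # definition, so frequency and dollar amount set the tier.
    if txns_per_month > txn_limit or dollars_per_month > dollar_limit:
        return "higher"
    return "high"

print(risk_level(True, True, 4, 1_200))      # retail bill-pay → high
print(risk_level(True, True, 120, 750_000))  # commercial ACH → higher
```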

  • Layered Security Programs (also known as “defense-in-depth”)

This concept means that controls should be layered so that gaps or weaknesses in one control (or layer of controls) are compensated for by other controls. Controls are grouped by timing (where they fit into a hierarchy of layers), and also by type.  So taken from the top layer down:

  1. Preventive – This is the first layer, the top of the hierarchy, and for good reason…it’s always best to prevent the problem in the first place. These are by far the most important, and should comprise the majority of your controls. Some examples drawn from the guidance are:
    1. Fraud detection and monitoring (automated and manual)
    2. Dual customer authorization and control
    3. Out-of-Band (OOB) verification of transactions
    4. Multi-factor authentication
    5. “Positive Pay”, or payee white-lists
    6. Transaction restrictions such as time-of-day, dollar volume, and daily volume
    7. IP restrictions, or black lists
    8. Restricting account administrative activity by the customer
    9. Customer education (this was actually listed as a requirement, but it’s really a preventive control)
  2. Detective – Positioned directly below preventive in the hierarchy because detecting a problem is not as good as preventing it. However, detective controls provide a safety net in the event preventive controls fail. Often a preventive control also has a detective component. For example, control #2 above can both require dual control and report if authentication fails on either login. Same with #5…if a payment is blocked because the payee is not white-listed (or is blocked by IP restrictions as in control #7), the institution can be automatically notified. In fact, most preventive controls also have a detective element to them.
  3. Corrective / Responsive – An important step to make sure the event doesn’t recur. This typically involves analyzing the incident and trying to determine which control failed, and at what level of the hierarchy. However, just as many preventive controls can detect, the best controls have the ability to self-respond as well. For example, the same control that requires dual control and reports on failed login can also lock the account. In fact, just as before, many preventive controls also have a responsive capability.

To properly demonstrate compliance with the layered security program requirement, you should have controls at each of the three layers. Simply put, higher risk transactions require more controls at each layer.
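As a sketch of how one control can span all three layers, consider dual authorization (control #2 above).  The function names, field names, and the three-strike lockout threshold are illustrative assumptions, not from any actual system:

```python
# Illustrative sketch: preventive, detective, and corrective behavior
# combined in one dual-authorization control. All names and the
# three-strike lockout threshold are assumptions for the example.

def process_transfer(approvals, account):
    # Preventive layer: require two distinct approvers before funds move.
    if len(set(approvals)) < 2:
        # Detective layer: record the failed dual-control attempt.
        account["alerts"].append("dual-authorization failure")
        # Corrective/responsive layer: lock the account after repeated failures.
        if len(account["alerts"]) >= 3:
            account["locked"] = True
        return "blocked"
    return "released"

account = {"alerts": [], "locked": False}
print(process_transfer(["alice"], account))         # → blocked
print(process_transfer(["alice", "bob"], account))  # → released
```

This mirrors the point in the text: the same preventive control blocks the transfer, logs the failure, and can ultimately respond on its own.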

OK, now that you’ve classified your controls as preventive, detective and corrective (preferably a combination of all three!), you’ll need to determine the control type: overlapping, compensating, or complementary.

Overlapping controls are simply different controls designed to address the same risk.  Sort of a belt-and-suspenders approach, where if one control misses something, a second (or third, or fourth) will catch it.  Again, higher risk transactions should have multiple overlapping controls, and they should exist at each layer.  So you should have overlapping preventive controls, as well as overlapping detective and even corrective controls.  The idea is that no single failure of any control at any layer will cause a vulnerability.

Compensating controls are those where one control makes up for a known gap in another control (belt-or-suspenders).  This gap may exist either because it is impossible or impractical to implement the preferred control, or because of a known shortcoming in the control.  An example might be that you feel transaction restrictions (#6 above) are appropriate for a particular account, but the customer requires 24/7 transaction capability.  To compensate, you might require that positive pay (#5) and IP restrictions (#7) be implemented instead.

Complementary controls are often overlooked (indeed they aren’t mentioned at all in the guidance), but in my experience they are among the most effective controls…and the most common to fail.  Complementary controls (sometimes called “complementary user controls”) are those that require action on the part of the customer.  For example, the dual authorization control (#2 above) can be very effective IF the customer doesn’t share passwords internally.  Similarly, customer education (#9) requires cooperation from the customer to be effective.  Given the partnership between the institution and the customer (and recent litigation), I’m surprised there isn’t more discussion about the importance of complementary controls.  (I addressed some of these issues here.)

In summary, demonstrating compliance to the spirit and letter of the guidance is simply a matter of following these 5 steps:

  1. Risk rank your on-line banking activities from higher to lower based on account type (consumer or commercial), and by frequency and dollar amount.
  2. Apply overlapping, compensating and complementary controls (#1–#9 above) in…
  3. …preventive, detective, and corrective layers,…
  4. …commensurate with the risk of the transaction.  Higher risk = more controls.
  5. Periodically repeat the process.

Nothing to it, right?!  Well not exactly, but hopefully this makes the guidance a bit clearer and easier to follow.

By the way, although I have been somewhat critical of the updated guidance, I don’t share the criticism of others that the guidance should have gone further and addressed mobile banking or other emerging technology.  Frankly, if new guidance were issued every time technology progressed, we’d have a couple of updates every year.  Instead, focus on the fundamentals of good risk management.  After all, there is a good reason why the FFIEC hasn’t had to update their guidance on information security since 2006 (even though the threat landscape has changed dramatically since then)…they got it right the first time!

28 Jun 2011

Final FFIEC Authentication Guidance just released

Well, after much anticipation and speculation we finally have the updated FFIEC guidance, and there doesn’t appear to be anything radically new here that would justify waiting an additional 6 months.  At the very least I thought we might see some changes in the Effectiveness of Certain Authentication Techniques section, or in the Appendix (Threat Landscape and Compensating Controls), but both sections are virtually unchanged.  That said, I examined the final release against the draft, and here are my observations divided into 3 categories: Good, Bad, and Odd (all bold and italics in quoted text are mine):

Good

Education: “A financial institution’s customer awareness and educational efforts should address both retail and commercial account holders…”.  Agreed.  This is a good change from the draft.  Education shouldn’t be limited to high-risk transactions only.

Risk Assessments for Financial Institutions: (Generally good, but see below under Bad and Odd)

Layered Security Programs: (Again, generally good, but see below under Bad)

Bad

Multifactor Authentication: The draft release stated that “Financial institutions should implement multifactor authentication and layered security…”.  The final release changed that to “Financial institutions should implement layered security…the Agencies recommend that institutions offer multifactor authentication to their business customers.”  The verbiage change seems to remove multifactor authentication as a requirement.

Layered Security Programs: “Financial institutions should implement a layered approach to security for high-risk Internet-based systems”…how about layered security for ALL Internet-based systems?  (Late edit – the guidance does recommend layered security for both retail and business banking customers, but only “consistent with the risk” for retail customers.  More prescriptive guidance on determining a high risk retail customer from a low risk retail customer would have been beneficial here.)

And,

“The Agencies expect that an institution’s layered security program will contain the following two elements, at a minimum.
Detect and Respond to Suspicious Activity”.  NO, NO, NO, the FFIEC has previously stated that layered security programs must contain ALL THREE types of controls; preventive, detective and corrective.  The omission of preventive controls is particularly puzzling because in the section on Control of Administrative Functions it states “For example, a preventive control could include…An example of a detective control could include…”.  Preventive controls are the least expensive to implement, and by far the most effective. It’s such a glaring error that I’m inclined to believe it was a typo.

Risk Assessments for Financial Institutions: The final release changed an “and” to an “or”, possibly introducing some confusion.  Here is what the draft release said:

“Financial institutions should review and update their existing risk assessments as new information becomes available, focusing on authentication and related controls at least every twelve months and prior to implementing new electronic financial services.”

The final release says:

“Financial institutions should review and update their existing risk assessments as new information becomes available, prior to implementing new electronic financial services, or at least every twelve months…”

The final guidance might be misinterpreted to suggest that the twelve-month risk assessment is an either/or option instead of a minimum requirement.

Risk Assessments for Customers: “A suggestion that commercial online banking customers perform a related risk assessment and controls evaluation periodically”  Really?  Nothing stronger than a suggestion?  This should be a requirement.

Odd

Specific Supervisory Expectation – Risk Assessments: Original draft:  “…financial institutions should perform periodic risk assessments and adjust their customer authentication, layered security, and other controls as appropriate in response to identified risks, including consideration of new and evolving threats to customers’ online accounts.”

Final release:  “…financial institutions should perform periodic risk assessments considering new and evolving threats to online accounts and adjust their customer authentication, layered security, and other controls as appropriate in response to identified risks.”

Not sure why the verbiage change there, but I don’t perceive any change in meaning…I actually liked the original verbiage better.

General Supervisory Expectations: This verbiage appeared in both the draft and final version, and it is this final odd observation that disturbs me the most…”The concept of customer authentication…is broad.  It includes more than the initial authentication of the customer when he/she connects to the financial institution at login.”  It would be extremely instructive here to define the “concept” by focusing on what it is, and not what it’s not.  Specifically, what more does the concept include beyond initial authentication?  The authentication of the transaction itself?  The transmission of the transaction through the Internet?  All interaction of the user with the interface?  The processing of the transaction at the financial institution and/or payment provider?  By defining the concept only as “broad”, by saying that it includes more than the initial authentication, this guidance has the potential of expanding the liability of the financial institution, and I can easily see this used in a future legal proceeding  to obfuscate the lines of responsibility*.

In the end, although the basics of the guidance are sound, I was disappointed that it didn’t go farther.  I will repeat what I said back in February; the guidance is still behind the curve on this issue, and institutions simply have too much to lose.  Implement additional preventive controls at the merchant side, additional controls at the institution side (such as dual authorization, out-of-band, positive pay, etc.), conduct annual (or more frequent) risk assessments, and most of all, educate everyone on basic security best practices.

 

*Particularly on the heels of recent court cases, one of which went the customer’s way, and the other (so far) going the bank’s way.