Category: Hot Topics

16 Jul 2012

Commercially UNreasonable Security

So an appellate court has just reversed the PATCO court ruling, essentially deciding against the financial institution.  They ruled that the bank's security procedures were commercially UN-reasonable.

To summarize, a commercial e-banking customer (PATCO Construction) experienced a financial loss due to an account take-over.  They sued the bank to recover the loss, claiming the bank used poor security.  The original ruling was in favor of the bank.  This ruling was in favor of the customer, and has major implications for all financial institutions as they navigate their way through the increasing risk and increased regulatory requirements of Internet banking.

The entire ruling is worth a read, but here are a few of the highlights from my perspective:

  • In the end, it wasn’t just a single control failure, but a series of failures on the part of the Bank that led to the ruling.  One example is that the Bank lowered the alert trigger for challenge questions from $100,000 to $1, effectively subjecting every transaction to an additional authentication step.  The Bank undoubtedly felt they were increasing the safety of all transactions by taking this step, but it actually had the opposite effect.  By requiring challenge questions for all transactions they substantially increased the number of chances the criminals had to intercept the correct challenge responses.
  • The on-line banking product (NetTeller Premium) and provider (Jack Henry & Associates) offered adequate options for on-line transaction security, but not all options were enabled by the Bank.  And…
  • …of those security options offered by the Bank, not all were accepted by the customer.  And…
  • …of those offered and accepted, some were ignored.  For example the anomaly detection capabilities worked properly, and automated risk profiling correctly generated abnormally high risk scores for the fraudulent transactions, but no action was taken by the Bank to block them.
  • The definition of “commercially reasonable” has evolved from the initial ruling favoring the bank to the most recent one favoring the customer.  Both rulings make several references to Article 4A of the UCC (Uniform Commercial Code).  The initial ruling stated that because the customer signed the agreement, they implicitly agreed to the security measures, effectively rendering them commercially reasonable.  However, the most recent ruling quotes from UCC 4A and states “[t]he standard is not whether the security procedure is the best available. Rather it is whether the procedure is reasonable for the particular customer and the particular bank.”  Therefore…
  • …a “one-size-fits-all” approach will not work, institutions MUST tailor their controls to the risks of the transaction.
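The threshold failure in the first bullet above is easy to see in a sketch.  This is purely illustrative (the function names and dollar amounts are mine, not the Bank's actual system): a step-up authentication rule challenges only transactions at or above a trigger amount, and each challenge presented is one more chance for keystroke-logging malware to capture the correct answers.

```python
# Illustrative sketch only -- not the Bank's actual system.  A step-up
# rule challenges any transaction at or above the trigger amount; each
# challenge shown is one more opportunity for an attacker observing
# the session to capture the correct answers.

def requires_challenge(amount: float, trigger: float) -> bool:
    """True if this transaction should trigger challenge questions."""
    return amount >= trigger

def challenge_exposures(amounts: list, trigger: float) -> int:
    """How many times the challenge answers are exposed to an attacker
    (one exposure per challenge presented)."""
    return sum(1 for a in amounts if requires_challenge(a, trigger))

# A hypothetical week of payroll and vendor payments:
week = [2_500, 18_000, 950, 120_000, 4_300]

print(challenge_exposures(week, 100_000))  # 1 -- original $100,000 trigger
print(challenge_exposures(week, 1))        # 5 -- the $1 trigger challenges everything
```

Lowering the trigger to $1 quintuples the exposure in this toy example; across thousands of transactions, the criminals get thousands of chances instead of a handful.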

But here is the most significant take-away for me, and the one with the biggest implication for financial institutions.  The judge ruled that based on the UCC 4A official comments on Section (1)(b), if and when the security procedures are deemed commercially reasonable, the burden then shifts to the customer

“…to supervise its employees to assure compliance with the security procedure and to safeguard confidential security information and access to transmitting facilities so that the security procedure cannot be breached.”

So, all you have to do is risk assess the customer and transactions and employ layered security suitable to the risk, and then the legal and financial liability shifts to the customer, right?  Maybe not.  According to the FFIEC Internet Authentication update, one of the controls an institution may include (translated from ‘FFIEC-speak’ as SHOULD include) in its layered security program is a “Customer Awareness and Education” program.  Which means you are still on the hook unless you can document that you also maintain a customer awareness program AND your customers are actually being trained.  (As I mentioned here, you may also want to add a summary of your customer awareness program to your annual report to the Board of Directors).

I’m certain we’ll see more lawsuits on this matter and future rulings may go either way, but the risk is real and immediate so don’t wait for the courts to sort things out.  Here is what you need to do:

  1. Complete the risk assessment if you haven’t already.  Define high risk transactions, and identify high risk customers.
  2. Implement a layered security program.  Make sure you know and understand all of the controls available from your e-banking product vendor.  Vendors are adding controls all the time to address the evolving threat environment.
  3. Make sure your customers know and understand all of the controls you’ve made available to them.   If they resist or refuse a particular control that you’ve recommended, have them sign-off that they understand and accept the increased risk.
  4. Educate your customers, initially and periodically throughout the relationship, and regardless of whether they resist.  Regardless of the quantity and sophistication of your technical controls, the customer is, and will always remain, the weakest link in the security chain.

10 Jul 2012

FFIEC issues Cloud Computing Guidance

Actually the document is classified as “for informational purposes only”, which is to say that it is not a change or update to any specific Handbook and presumably does not carry the weight of regulatory guidance.  However, it is worth a read by all financial institutions outsourcing services because it provides reinforcement for, and references to, all applicable guidance and best practices surrounding cloud computing.

It is a fairly short document (4 pages) and again does not represent a new approach, but rather reinforces the fact that managing cloud providers is really just a best practices exercise in vendor management.  It makes repeated reference to the existing guidance found in the Information Security and Outsourcing Technology Services Handbooks.  It also introduces a completely new section of the InfoBase called Reference Materials.

The very first statement in the document pretty well sums it up:

“The (FFIEC) Agencies consider cloud computing to be another form of outsourcing with the same basic risk characteristics and risk management requirements as traditional forms of outsourcing.”

It then proceeds to describe basic vendor management best practices such as information security and business continuity, but one big take-away for me was the reference to data classification.  This is not the first time we’ve seen this term; I wrote about examiners asking for it here, and the Information Security Handbook says that:

“Institutions may establish an information data classification program to identify and rank data, systems, and applications in order of importance.”

But when all your sensitive data is stored, transmitted, and processed in a controlled environment (i.e. between you and your core provider) a simple schematic will usually suffice to document data flow.  No need to classify and segregate data; all data is treated equally regardless of sensitivity.  However, once that data enters the cloud you lose that control.  What path did the data take to get to the cloud provider?  Where exactly is the data stored?  Who else has access to the data?  And what about traditional issues such as recoverability and data retention and destruction?
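For illustration only, here is one way a minimal classification scheme for cloud-bound data might be expressed.  The category names and the controls mapped to each are my own assumptions, not drawn from the FFIEC document or the Handbook:

```python
# Hypothetical data classification scheme.  The categories and the
# minimum controls mapped to each are illustrative assumptions, not
# FFIEC requirements.

CLASSIFICATIONS = {
    "public":       {"encrypt_in_transit": False, "encrypt_at_rest": False, "cloud_permitted": True},
    "internal":     {"encrypt_in_transit": True,  "encrypt_at_rest": False, "cloud_permitted": True},
    "confidential": {"encrypt_in_transit": True,  "encrypt_at_rest": True,  "cloud_permitted": True},
    "customer_npi": {"encrypt_in_transit": True,  "encrypt_at_rest": True,  "cloud_permitted": False},
}

def minimum_controls(classification: str) -> dict:
    """Minimum controls required before data in this classification may
    leave the controlled environment behind the corporate firewall."""
    return CLASSIFICATIONS[classification]

# In this scheme, customer NPI stays out of the cloud entirely unless
# additional vendor assurances change the risk assessment.
print(minimum_controls("customer_npi")["cloud_permitted"])  # False
```

The point is not the specific categories but the mechanism: once data can leave your control, every classification needs an explicit answer to "what must be true before this data goes to the cloud?"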

Another important point made in the document, and one that doesn’t appear in any other guidance, is that because of the unique legal and regulatory challenges faced by financial institutions, the cloud vendor should be familiar with the financial industry.  They even suggest that if the vendor is not keeping up with regulatory changes (either because they are unwilling or unable) you may determine on that basis that you cannot employ that vendor.

The document concludes by stating that:

“The fundamentals of risk and risk management defined in the IT Handbook apply to cloud computing as they do to other forms of outsourcing. Cloud computing may require more robust controls due to the nature of the service.”

And…

“Vendor management, information security, audits, legal and regulatory compliance, and business continuity planning are key elements of sound risk management and risk mitigation controls for cloud computing.”

…as they are for all outsourced relationships!

12 Jun 2012

Managing Social Media Risk – LinkedIn Edition

By now everyone has heard about the breach at LinkedIn, where 6.5 million password hashes were leaked (over half of which have been cracked, or converted into plain text).  Those who read this blog regularly know how I feel about social media in general:

“So managing social media risk boils down to this:  You must be able to justify your decision (both to engage and to not engage) strategically, but to do so requires an accurate cost/benefit analysis.  Both costs (reputation, and other residual risks) and benefits (strategic and financial) are extremely difficult to quantify, which means that in the end you are accepting an unknown level of risk, to achieve an uncertain amount of benefit.”

This is not to say that social media can never be properly risk managed, only that the decision to engage (or not) must be analyzed the same way you analyze any other business decision.  And this is a challenge because social media does not easily lend itself to traditional risk management techniques, and this incident is a good case in point.

So once again, let’s use this latest breach as yet another incident training exercise.  In your initial risk assessment, chances are you classified the site as low risk.  There is no NPI/PII stored there, and it doesn’t offer transactional services beyond account upgrades.  Additionally, regarding the breach itself, only about 5% of all user password hashes were disclosed, and as I said previously, over half of those were converted into the underlying plain text password.  And what exactly is your risk exposure if your password was one that was stolen and cracked?  First of all, they would also need your login name to go with the password.  But if they were able to somehow put the two together, they might change your employment or background information, or post something that could portray you or your company in a negative light.  So there are certainly some risks, but they come with lots of “ifs”.  So low probability + low impact = low risk…change your password and move on, right?

Well maybe, depending on how you answer this question:  Is your LinkedIn password being used anywhere else?  If you have a “go-to” password that you use frequently (and most people do) you should assume that it’s out there in the wild, and you can also assume it is now being used in dictionary attacks.  So yes, if you are an individual user, change your LinkedIn password, but also change all other occurrences of that password.

But back to our training exercise…if you are an institution with an official (or unofficial) LinkedIn presence through one or more employees, even if they’ve changed their password(s), you may still be at risk.  If the employee uses the same password to access your Facebook or Google+ page, or remotely authenticate to your email system, or access anything else that is connected to you, your response procedures should require (and validate) that all affected passwords have been changed.  In fact, since you have no way of knowing if your employee has a personal LinkedIn (or Facebook, etc.) presence,  it might be good practice to have your network administrator force all passwords to change just to be safe.  You may also want to change your policy to state that  internal (or corporate) passwords should never be duplicated or re-used on external or personal sites (although enforcing that may be a challenge).

As far as what you can do to reduce the chance of this type of incident happening again, there isn’t much.  You have to rely on your service providers to properly manage their own security.  You do this in part by obtaining and reviewing third-party reviews (like the SOC reports) if they exist,  but also by reviewing the vendor’s own privacy and security policy.  For example, LinkedIn’s privacy policy says this about the data it collects from you:

  • Security
    • Personal information you provide will be secured in accordance with industry standards and technology. Since the internet is not a 100% secure environment, we cannot ensure or warrant the security of any information you transmit to LinkedIn. There is no guarantee that information may not be accessed, copied, disclosed, altered, or destroyed by breach of any of our physical, technical, or managerial safeguards. (Bold is mine)
    • You are responsible for maintaining the secrecy of your unique password and account information, and for controlling access to your email communications at all times.

Even though they have made public statements that they have taken steps to address the root cause of the breach, given the above policy there is no indication that LinkedIn feels it necessary to obtain a third-party review to validate its enhanced privacy and security measures.  Granted, given the nature of the information they collect and store they may not feel compelled to do so, and you may not require it, but at the very least you should expect the passwords to be secure.

The first step in managing risk is to identify it.  In this case because of the breach, the unanswered questions*, the lack of a third-party review, and their privacy policy, you are accepting a higher level of residual risk with them than you would normally find acceptable in another vendor.  You can still rationalize your decision strategically, but you must quantify the expected return and then document that the return justifies the increased risk.  And then do the same for your other social media efforts!

*Indeed there are several issues raised by this breach that are yet to be answered:  How did it occur?  Could the breach be worse than disclosed?  Why did they hash the passwords using the older SHA-1 algorithm?  Why did they not salt the hashes?  Why didn’t they have a CIO?  Did they truly use industry standards to secure your information?  If they did, those standards are clearly inadequate, so will they now exceed industry standards?
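To see why the unsalted-hash question matters, consider this sketch.  It is illustrative only; proper password storage today means a deliberately slow, salted algorithm such as bcrypt, scrypt, or PBKDF2, not SHA-1 even with a salt.  Without a salt, every user who picked the same password produces the identical hash, so one pre-computed dictionary entry cracks all of them at once:

```python
# Sketch of unsalted vs. salted hashing.  Illustrative only -- real
# password storage should use a slow KDF (bcrypt, scrypt, PBKDF2),
# not SHA-1 even with a salt.
import hashlib
import os

def unsalted(password: str) -> str:
    """Bare SHA-1 of the password, as LinkedIn reportedly stored it."""
    return hashlib.sha1(password.encode()).hexdigest()

def salted(password: str, salt: bytes) -> str:
    """SHA-1 over a per-user random salt plus the password."""
    return hashlib.sha1(salt + password.encode()).hexdigest()

# Two users who chose the same password:
h1 = unsalted("linkedin123")
h2 = unsalted("linkedin123")
print(h1 == h2)  # True: identical hashes, one dictionary entry cracks both

s1 = salted("linkedin123", os.urandom(16))
s2 = salted("linkedin123", os.urandom(16))
print(s1 == s2)  # False: each salted hash must be attacked individually
```

That is the difference between cracking 6.5 million hashes with one pass of a rainbow table and having to attack each account separately.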

04 Jun 2012

5 Keys to Understanding a SOC 2 Report

Although I have written about these relatively new reports frequently, and for some time now, it still remains a topic of great interest to financial institutions.  Fully 20% of all searches on this site over the past 6 months include the terms “SOC” or “SOC 2”, or “SAS 70”.  Some of this increased interest comes from new FFIEC guidance on how financial institutions should manage their service provider relationships, and some of it comes from financial institutions that are just now seeing these new reports from their vendors for the first time.  And because the SOC 2 is designed to focus on organizations that collect, process, transmit, store, organize, maintain or dispose of information on behalf of others, you are likely to see many more of them going forward.

Having just completed our own SOC 2 (transitioning from the SAS 70 in the previous period), I can say unequivocally that not only is it much more detailed, but that it has the potential to directly address the risks and controls that should concern you as the recipient of IT related services.  But not all SOC 2 reports are alike, and you must review the report that your vendor gives you to determine its relevance to you.  Here are the 5 things you must look for in every report:

  1. Products and Services – Does the report address the products and services you’ve contracted for?

  2. Criteria – Which of the 5 Trust Services Criteria (privacy, security, confidentiality, availability and processing integrity) are included in the report?

  3. Sub-service Providers – Does the report cover the subcontractors (sub-service providers) of the vendor?

  4. Type I or Type II – Does the report address the effectiveness of the controls (Type II), or only the suitability of controls (Type I)?

  5. Exceptions – Is the report “clean”?  Does it contain any material exceptions?

Before we get into the details of each item, it is important to understand how a SOC 2 report is structured.  There are 3 distinct sections to a SOC 2 (and they generally appear in this order):

  1. The Service Auditor’s Report,
  2. The Management’s Assertion, and
  3. The Description of Systems.

So simply put, what happens in a SOC 2 report is that your service provider’s management prepares a detailed description of the systems and processes they use to deliver their products and services to you, and the controls they have in place to manage the risks.  They then make an assertion that the description is accurate and complete.  Finally, the auditor renders an opinion on whether or not the description is “fair” as to control suitability (Type I) and effectiveness (Type II).

Products and Services

The first thing to look for in a SOC 2 report is generally found in the Management’s Assertion section.   It will state something to the effect that “…the system description is intended to provide users with information about the X, Y and Z services…”  You should be able to identify all of your products and services among the “X”, “Y”, and “Z”.  If you have a product or service with the vendor that is not specifically mentioned, you’ll need to satisfy yourself that the systems and processes in place for your products are the same as they are for the products covered in the report.  (You should also encourage the vendor to include your products in their next report.)

Criteria

The next thing to look for is found in the Service Auditor’s Report section.  Look for the term “Trust Services Principles and Criteria”, and make a note of which of the 5 criteria are listed.  The 5 possible covered criteria are:  Privacy, Security, Confidentiality, Processing Integrity, and Availability.  Service provider management is allowed to select which criteria they want included in the report, and once again you should make sure your specific concerns are addressed.

Sub-service Providers

The next item is also found in the Service Auditor’s Report section, and usually in the first paragraph or two.  Look for either “…our examination included controls of the sub-service providers”, or “…our examination did not extend to controls of sub-service providers”.  The report may also use the terms “inclusive” to indicate that they DID look at sub-service providers, or “carve-out” to indicate that the auditor DID NOT look at the controls of any sub-service providers.  These are the service providers to your service provider, and if they store or process your (or your customers’) data you’ll need assurance that they are being held to the same standards as your first-level service provider.  This assurance, if required and not provided in the SOC 2, may be found in a review of the sub-service provider’s own third-party reviews.

Type I or Type II

As with the older SAS 70, the new SOC 1 and SOC 2 reports come in two versions: a Type I, which reports on the adequacy of controls as of a single point in time, and a Type II, which reports on both control adequacy and effectiveness by evaluating the controls over a period of time, typically 6 months.  Clearly the Type II report is preferred, but because the SOC 2 audit guides were just released last year, most service providers may choose to initially release a Type I.  If your concerns about the service provider include whether or not their risk management controls were both adequate AND effective (and in most cases they should), make sure they immediately follow up the Type I with a Type II.

Exceptions

Finally, scan the Service Auditor’s Report section for verbiage such as “except for the matter described in the preceding paragraph…”, or “the controls were not suitably designed…” or “…disclaim an opinion…”, or terms such as “omission” or “misrepresentation” or “inadequate”.  These are an indication that the report could contain important material exceptions that would be cause for concern.

One more thing…pay particular attention to a sub-section (usually found in the Description of Systems section) called “Complementary End-User (or User-Entity) Controls”.  This is not new to the SOC reports; the SAS 70 had a similar section, but it is one of the most important parts of the entire report, and one that is often ignored.  This is a list of what the vendor expects from you.  Things without which some or all of the criteria would not be met.  This is the vendor saying “we’ll do our part to keep your data private, secure, available, etc., but we expect you to do a few things too”.  It’s important that you understand these items, because the entire auditor’s opinion depends on you doing your part, and failure to do so could invalidate some or all of the trust criteria.  By the way, you should be able to find a corresponding list of these end-user controls repeated in your contracts.

The lesson here is that vendor third-party reviews like the SOC 2 are no longer a “check the box and be done” type of exercise.  As part of your vendor management process, you must actually review the reports, understand them (don’t hesitate to enlist the help of your own auditor if necessary), and document that they adequately address your concerns.

25 Apr 2012

FDIC Supervisory Letter Issued on Critical Service Provider

(NOTE:  Although the vendor in question has been publicized by the NCUA, I will not name it here because it is not relevant.  If you currently contract with the vendor you know who it is, and you need to know how to respond to the letter.  If you don’t, you’ll need to know how to respond in case it happens to a critical vendor of yours at some point.)

What if you received this letter from the FDIC on one of your most critical service providers (summarized and redacted)?

“Dear Board of Directors,

Enclosed is a copy of the Information Technology (IT) Supervisory Letter based on the interim review of (your vendor).  We are sending you this Supervisory Letter for your evaluation and consideration in managing your vendor relationship…I encourage you to review the Supervisory Letter as it discusses some regulatory concerns that require corrective action by (your vendors’) management and Board of Directors.

Sincerely,

FDIC Regional Director”

The letter states in part:

“(Vendors’) Executive Management supervision and control over the Risk Management (RM) and Information Security (IS) functions are unsatisfactory. Additionally, the Board of Directors (BOD) does not provide sufficient direction and oversight for management responsibilities, as well as for independent review in these areas by Internal Audit (IA). The breadth and severity of weaknesses noted at this IR stem from management’s failure to adequately address previously identified systemic issues and to take proactive measures to mitigate the identified systemic risks. These weaknesses had exposed serviced financial institutions to increased risk, and have raised concerns regarding management’s ability to establish and enforce effective information security measures commensurate with the needs of (vendor).”

So the FDIC conducted an IT Examination on the service provider.  Nothing new there…IT service providers are subject to the same regulatory oversight as financial institutions, and even have their own Examination Handbook*.   However, in this case the exam uncovered significant material weaknesses in their audit, management and IT controls.  Weaknesses so severe that the FDIC felt it necessary to proactively notify all institutions under their regulatory responsibility that utilize the provider.

Since the FDIC stated that they are sending the letter for “your evaluation and consideration”, they clearly expect you to take specific action on this matter.  Don’t be surprised to see them asking for your formal response during your next visit from them.  So here is what you’ll need to do:

  • The first thing you’ll want to do is call a meeting with the group you use to manage your vendor relationships.  If you haven’t assigned vendor management responsibility to a management committee (as opposed to an individual), do so.  IT Steering or Audit is a logical choice.  Formally document in the committee that “the examiner’s letter represents certain concerns that will cause us to reevaluate the vendor, reassess the residual risk, and consider implementing additional compensating controls”.
  • Request, review and evaluate the vendor’s response to the examiner’s letter.  Determine whether the response is sufficient to address your concerns.  If not, consider implementing the following additional compensating controls:
  1. Accelerate the normal annual due diligence process by requesting more frequent financial statements (quarterly instead of annual).
  2. Request that the vendor provide additional 3rd party security reviews other than SSAE 16 if possible (e.g. SOC 2, pen tests, etc.).  The SOC 2 is a good choice, as it directly addresses controls related to privacy, security, confidentiality, integrity and availability…all the things that are important to you.
  3. Have legal review existing vendor contracts for possible breach of contract claims.
  4. Consider adding a “right to audit” clause in future contracts.
  5. Become active (or more active) in vendor user groups.  The intent is to stay close to the situation, and possibly influence them to release additional 3rd party reviews (such as SOC 2).

It is important to take action even if you are in a long term contract with the vendor, or if the vendor would be difficult to replace.  And you can’t take the position that since you can’t control what the vendor does, you’ll simply have to go along…that it’s not your problem to solve.  Guidance makes it clear that “institutions should ensure the service provider’s physical and data security standards meet or exceed standards required by the institution.”  So for all intents and purposes, the vendor’s deficiencies are your problem.

*According to the FFIEC:

The federal financial regulators have the statutory authority to supervise all of the activities and records of the financial institution whether performed or maintained by the institution or by a third party on or off of the premises of the financial institution.

The decision to examine a service provider is at least partially based on the number of Bank Service Company Act (BSCA) filings the regulators receive on the provider.  I explain this here, and make the point that because the definition of a “Service Company” has expanded, more service providers can expect more examinations in the future.

09 Apr 2012

FFIEC Handbook Update – Outsourcing

The FFIEC has just added a section to the Outsourcing Technology Services IT Examination Handbook, and it should be required reading for financial institutions as well as any managed service providers.  The new section is Appendix D: Managed Security Service Providers, and it is the first significant change to the Handbook since it was released in 2004.  It addresses the fact that because of the increasing sophistication of the threat environment, and the lack of internal expertise, a growing number of financial institutions are (either partially or completely) outsourcing their security management functions to unaffiliated third-party vendors.

Because of the critical and sensitive nature of these security services, and the loss of control when these services are outsourced, the guidance stresses that institutions must address additional risks beyond their normal vendor management responsibilities.  Specifically, more emphasis must be placed on the contract and on oversight of the vendor’s processes, infrastructure, and control environment.

The most interesting addition to the guidance for me is the “Emerging Risks” section, which is the first time the FFIEC has addressed cloud computing.  Although it is addressed from the perspective of the service provider, it defines cloud computing this way:

“…client users receive information technology services on demand from third-party service providers via the Internet “cloud.” In cloud environments, a client or customer will relocate their resources such as data, applications, and services to computing facilities outside the corporate firewall, which the end user then accesses via the Internet.”

Any data transmitted, stored or processed outside the security confines of the corporate firewall is considered higher risk data, and must have additional controls.  This would seem to imply that data in the cloud should be classified differently in your data-flow diagram, and have a correspondingly higher protection profile.*  It will be interesting to see if this will be the FFIEC’s approach when and if they address cloud computing in the future.

The guidance also has a useful MSSP Engagement Criteria matrix that institutions can use to evaluate their own service providers, as well as a set of MSSP Examination Procedures, which service providers (like mine) can use to prepare for future examinations.  In summary, financial institutions would be wise to familiarize themselves with the new guidance; after all, to quote from the last line:

“As with all outsourcing arrangements FI management can outsource the daily responsibilities and expertise; however, they cannot outsource accountability.”

* A protection profile is a description of the protections that should be afforded to data in each classification.