Category: Hot Topics

25 Oct 2013

Windows XP and Vendor Management

The FFIEC recently issued a joint statement regarding Microsoft’s discontinuation of support for Windows XP.  The statement requires financial institutions to identify, assess, and manage the risks of devices running Windows XP in their institutions after April 8, 2014.  After that date, Microsoft will no longer provide regular security patches or support for the product, potentially leaving those devices vulnerable to cyber-attack and/or incompatible with other applications.

Identifying, assessing, and managing these devices within your own organization is fairly straightforward: have your admin or support provider run an OS report and present it to the IT Committee for review and discussion of possible mitigation options.  But somewhat lost in the FFIEC guidance is the fact that you are also responsible for identifying and assessing these devices at your third-party service providers.  While the statement was written as if it were directed at both FI’s and TSP’s separately, the FFIEC makes it clear that:

A financial institution’s use of a TSP to provide needed products and services does not diminish the responsibility of the institution’s board of directors and management to ensure that the activities are conducted in a safe and sound manner and in compliance with applicable laws and regulations, just as if the institution were to perform the activities in-house.

So my interpretation of the expectations resulting from this guidance is that you must reach out to your critical service providers and ask about any XP devices currently in use at their organizations.  If they aren’t using any, an affidavit from the CIO or a similar officer should suffice.  If they are, a statement about how they plan to mitigate the risk should be made part of your risk assessment.  The fact that the FFIEC mentioned “TSP’s” five times in less than two pages indicates to me that they expect you to be proactive about this.
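Internally, the OS report mentioned earlier can be as simple as a script over an inventory export. A minimal sketch, assuming a hypothetical CSV export with `hostname` and `os_name` columns (adjust to whatever your inventory tool actually produces):

```python
import csv
from collections import Counter

def find_xp_devices(inventory_csv):
    """Return hostnames of devices still running Windows XP.

    The CSV layout (hostname, os_name columns) is an assumption; most
    inventory or RMM tools can export something equivalent."""
    xp_devices = []
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):
            if "windows xp" in row["os_name"].lower():
                xp_devices.append(row["hostname"])
    return xp_devices

def os_summary(inventory_csv):
    """Count devices by operating system for the IT Committee report."""
    with open(inventory_csv, newline="") as f:
        return Counter(row["os_name"] for row in csv.DictReader(f))
```

The summary gives the Committee the big picture; the XP list becomes the remediation worklist for your risk assessment.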

One other thing that might have been overlooked in the guidance is the concept of operational risk.  Many IT risk assessments focus exclusively on information security elements, i.e. access to NPI/PII; they assess only the GLBA elements of privacy and security.  Operational risk addresses the risk of failure, or of not performing to management’s expectations.  If your risk assessment is limited to GLBA elements, expand it.  Make sure the criticality of the asset, product, or service is assessed as well.  And, when indicated by high residual risk, refer to your business continuity plan for further mitigation.
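To illustrate what expanding a GLBA-only assessment might look like, here is one way criticality could be scored alongside sensitivity. The 1-to-5 scales, the `max()` aggregation, and the referral threshold are all illustrative assumptions, not anything prescribed by the guidance:

```python
# Illustrative only: a GLBA-style assessment scores sensitivity (privacy/
# security), while operational risk adds criticality (impact of failure or
# of not performing to management's expectations).

def residual_risk(sensitivity, criticality, control_effectiveness):
    """Sensitivity and criticality on 1-5 scales; control_effectiveness in
    [0, 1]. Higher residual risk means more mitigation is needed."""
    inherent = max(sensitivity, criticality)  # worst dimension drives inherent risk
    return round(inherent * (1 - control_effectiveness), 2)  # controls reduce risk

# A highly critical asset stays risky even with modest sensitivity:
core_processor = residual_risk(sensitivity=3, criticality=5, control_effectiveness=0.6)

# When residual risk stays high, refer to the business continuity plan.
needs_bcp_referral = core_processor >= 1.5  # threshold is an assumption
```

Whether `max()` is the right aggregation for your institution is a judgment call; the point is simply that criticality gets a seat alongside sensitivity.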

17 Sep 2013

Data Classification and the Cloud

UPDATE –  In response to the reluctance of financial institutions to adopt cloud storage, vendors such as Microsoft and HP have announced that they are building “hybrid” clouds.  These new models are designed to allow institutions to store and process certain data in the cloud while a portion of the processing or storage is done locally, on premises.  For example, the application may reside in the cloud, but the customer data is stored locally.  This may make the decision easier, but it only makes classification of data more important, as the decision to utilize a “hybrid” cloud must be justified by your assessment of the privacy and criticality of the data.

I get “should-we-or-shouldn’t-we” questions about the Cloud all the time, and because of the high standards for financial institution data protection, I always advise caution.  In fact, I recently outlined 7 cloud deal-breakers for financial institutions.  But could financial institutions still justify using a cloud vendor even if they don’t seem to meet all of the regulatory requirements?  Yes…if you’ve first classified your data.

The concept of “data classification” is not new; it’s mentioned several times in the FFIEC Information Security Handbook:

“Institutions may* establish an information data classification program to identify and rank data, systems, and applications in order of importance. Classifying data allows the institution to ensure consistent protection of information and other critical data throughout the system.”

“Data classification is the identification and organization of information according to its criticality and sensitivity. The classification is linked to a protection profile. A protection profile is a description of the protections that should be afforded to data in each classification.”

The term is also mentioned several times in the FFIEC Operations Handbook:

“As part of the information security program, management should* implement an information classification strategy appropriate to the complexity of its systems. Generally, financial institutions should classify information according to its sensitivity and implement controls based on the classifications. IT operations staff should know the information classification policy and handle information according to its classification.”

But the most relevant reference for financial institutions looking for guidance about moving data to the Cloud is a single mention in the FFIEC Outsourcing Technology Services Handbook, Tier 1 Examination Procedures section:

“If the institution engages in cloud processing, determine that inherent risks have been comprehensively evaluated, control mechanisms have been clearly identified, and that residual risks are at acceptable levels. Ensure that…(t)he types of data in the cloud have been identified (social security numbers, account numbers, IP addresses, etc.) and have established appropriate data classifications based on the financial institution’s policies.”

So although data classification is a best practice even before you move to the cloud, the truth is that most institutions aren’t doing it (more on that in a moment).  However, examiners are expected to ensure (i.e. to verify) that you’ve properly classified your data afterwards…and that regardless of where data is located, you’ve protected it consistent with your existing policies.  (To date I have not seen widespread indications that examiners are asking for data classification yet, but I expect that as cloud utilization increases, they will.  After all, it is required in their examination procedures.)

Most institutions don’t bother to classify data that is processed and stored internally because they treat all data the same, i.e. they have a single protection profile that treats all data at the highest level of sensitivity.  And indeed the guidance states that:

“Systems that store or transmit data of different sensitivities should be classified as if all data were at the highest sensitivity.”

But once that data leaves your protected infrastructure everything changes…and nothing changes.  Your policies still require (and regulators still expect) complete data security, privacy, availability, etc., but since your level of control drops considerably, so should your level of confidence.  And you likely have sensitive data combined with non-sensitive, critical combined with non-critical.  This would suggest that unless the cloud vendor meets the highest standard for your most critical data, they can’t be approved for any data.  Unless…

  1. You’ve clearly defined data sensitivity and criticality categories, and…
  2. You’re able to segregate one data group from another, and…
  3. You’ve established and applied appropriate protection profiles to each one.

Classification categories are generally defined in terms of criticality and sensitivity, but the guidance is not prescriptive about how you should label each category.  I’ve seen “High”, “Medium”, and “Low”, as well as “Tier 1”, “Tier 2”, and “Tier 3”, and even a scale of 1 to 5…whatever works best for your organization is fine.  Once that is complete, the biggest challenge is making sure you don’t mix data classifications.  This is easier for data like financials or Board reports, but particularly challenging for data like email, which could contain anything from customer information to yesterday’s lunch plans.  Remember, if any part of the data is highly sensitive or critical, all of the data must be treated as such.
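The “classify mixed data at the highest level” rule lends itself to a simple sketch. The labels and the protection profiles below are illustrative only; the guidance leaves both up to you:

```python
# Labels are whatever works for your organization; order matters
# (least sensitive first).
LEVELS = ["Low", "Medium", "High"]

# Hypothetical protection profiles keyed by classification; the guidance
# describes a profile as the protections afforded to each classification.
PROFILES = {
    "Low":    {"encrypt_at_rest": False, "cloud_eligible": True},
    "Medium": {"encrypt_at_rest": True,  "cloud_eligible": True},
    "High":   {"encrypt_at_rest": True,  "cloud_eligible": False},
}

def classify_container(item_levels):
    """A container (mailbox, file share, system) holding data of mixed
    sensitivities is classified as if all data were at the highest level."""
    return max(item_levels, key=LEVELS.index)

# An email thread mixing lunch plans with one account number is "High" overall:
mailbox = classify_container(["Low", "Low", "High"])
profile = PROFILES[mailbox]
```

Point 2 in the list above (segregating one data group from another) is exactly what keeps `classify_container` from dragging everything up to “High”.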

So back to my original question…can you justify utilizing the cloud even if the vendor is less than fully compliant?  Yes, if data is properly classified and segregated, and if cloud vendors are selected based on their ability to adhere to your policies (or protection profiles) for each category of data.

 

 

*In “FFIEC-speak”, ‘may’ means ‘should’, and ‘should’ means ‘must’.

05 Aug 2013

Critical Controls for Effective Cyber Defense – Converging Standards?

Earlier this year the SANS Institute issued a document titled “Critical Controls for Effective Cyber Defense”.  Although not specific to financial institutions, it provides a useful prescriptive framework for any institution looking to defend its networks and systems from internal and external threats.  The document lists the top 20 controls institutions should use to prevent and detect cyber attacks.

This document actually preceded the announcement by the FFIEC in June that they were forming a working group to “promote coordination across the federal and state banking regulatory agencies on critical infrastructure and cybersecurity issues”.  I mentioned this announcement here in relation to its possible effect on future regulatory guidance.  So I was particularly interested in any overlap, any common thread, between this initiative and the SANS document.  If there was any overlap between the organizations contributing to the SANS list and the FFIEC cybersecurity working group, we might have the basis for a common, consistent set of prescriptive guidance.  Could a single “checklist”-type information security standard be in the works?

For example, the Information Security Handbook requires financial institutions to have “…numerous controls to safeguard and limit access to key information system assets at all layers in the network stack.”  They then go on to suggest general best practices in various categories for achieving that goal, leaving the specifics up to the institution.

Contrast that to the much more specific SANS Critical Control list.  Here are the first 5:

  • Critical Control 1:  Inventory of Authorized and Unauthorized Devices
  • Critical Control 2:  Inventory of Authorized and Unauthorized Software
  • Critical Control 3:  Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
  • Critical Control 4:  Continuous Vulnerability Assessment and Remediation
  • Critical Control 5:  Malware Defenses

As you can see, although the goal of protecting information assets is the same in each case, the SANS list is much more specific.  Could we possibly see a converging of the general guidance of the FFIEC with the more specific control requirements of SANS, with cybersecurity as the common goal?  Again, a look at the common contributors to each group might provide a clue.

The SANS group credits input from multiple agencies of the U.S. government: the Department of Defense, Homeland Security, NIST, the FBI, the NSA, the Department of Energy, and others.  The FFIEC working group coordinates with groups such as the FFIEC’s Information Technology Subcommittee of the Task Force on Supervision, the Financial and Banking Information Infrastructure Committee, the Financial Services Sector Coordinating Council, and the Financial Services Information Sharing and Analysis Center (FS-ISAC).  So no direct common thread there, unfortunately.  However, the FS-ISAC group does share many partners with the SANS group, including the Departments of Defense, Energy, and Homeland Security, so we may yet see the FFIEC Information Security guidance evolve, particularly since the Handbook was published back in 2006 and is overdue for a major update.  In the meantime, financial institutions would be well advised to use the SANS Critical Controls as a de facto checklist to measure their own security posture.*
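As a taste of what measuring yourself against that checklist looks like, Critical Controls 1 and 2 essentially reduce to a set comparison between an authorized inventory and what is actually discovered on the network. A minimal sketch, with made-up device names:

```python
# SANS Critical Controls 1 & 2 in miniature: compare the authorized inventory
# against what a network scan actually discovered. Names are illustrative.

def inventory_gaps(authorized, discovered):
    authorized, discovered = set(authorized), set(discovered)
    return {
        "unauthorized": sorted(discovered - authorized),  # on the network, not approved
        "missing":      sorted(authorized - discovered),  # approved, but not seen
    }

gaps = inventory_gaps(
    authorized=["teller-01", "teller-02", "branch-fs"],
    discovered=["teller-01", "teller-02", "unknown-laptop"],
)
# gaps["unauthorized"] == ["unknown-laptop"]; gaps["missing"] == ["branch-fs"]
```

Unauthorized devices are the immediate security concern; missing ones may be lost, powered off, or evading the scan, and each deserves follow-up.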

By the way, the document  also lists 5 critical tenets of an effective cyber defense system, 2 of which are ‘Continuous Monitoring’ and ‘Automation’.   More on those in a future post (although I already addressed the advantages of automation here).

* There is nothing in the SANS list that is inconsistent with FFIEC requirements; in fact, we’ve already seen at least one company serving the Credit Union industry adopt this list as its framework.  However, keep in mind that although the controls listed are necessary for cyber defense, they are not sufficient.  A fully compliant information security program must also address management and oversight…an area conspicuously absent from the SANS list.

04 Jun 2013

Incident Response in an Outsourced World

UPDATE – On June 6th the FFIEC formed the Cybersecurity and Critical Infrastructure Working Group, designed to enhance communications among the FFIEC member agencies as well as other key financial industry committees and councils.  The goal of this group will undoubtedly be to increase the defense and resiliency of financial institutions against cyber attacks, but the question is “what effect will this have on new regulatory requirements and best practices?”  Will annual testing of your incident response plan become a requirement, just as testing your BCP is now?  I think you can count on it…

I’ve asked the following question at several recent speaking engagements:  “Can you remember the last time you heard about a financial institution being hacked, and having its information stolen?”  No responses.  I then ask a second question:  “Can anyone remember the last time a service provider was hacked, and financial institution data stolen?”  Heartland…TJX…FIS…almost every hand goes up.

As financial institutions have gotten pretty good at hardening and protecting data, cyber criminals are focusing more and more on the service providers as the weak link in the information security chain.  And wherever there are incidents making the news, the regulators are sure to follow with new regulations and increased enforcement of existing ones.

The regulators make no distinction between your responsibilities for data within your direct control and data outside your direct control:

“Management is responsible for ensuring the protection of institution and customer data, even when that data is transmitted, processed, stored, or disposed of by a service provider.” (Emphasis added)

In other words, you have 100% of the responsibility, and zero control.  All you have is oversight, which is at best predictive and reactive, and NOT preventive.  So you use the vendor’s past history and third-party audit reports to try to predict their ability to prevent security incidents, but in the end you must have a robust incident response plan to effectively react to the inevitable vendor incident.

The FFIEC last issued guidance on incident response plans in 2005 (actually just an interpretation of GLBA 501b provisions), stating that…

“…every financial institution should develop and implement a response program designed to address incidents of unauthorized access to sensitive customer information maintained by the financial institution or its service provider.” (Emphasis added)

The guidance specified certain minimum components for an incident response plan, including:

  • Assessing the nature and scope of an incident and identifying what customer information systems and types of customer information have been accessed or misused;
  • Notifying its primary federal regulator as soon as possible when the institution becomes aware of an incident involving unauthorized access to or use of sensitive customer information;
  • If required, filing a timely SAR, and in situations involving federal criminal violations requiring immediate attention, such as when a reportable violation is ongoing, promptly notifying appropriate law enforcement authorities;
  • Taking appropriate steps to contain and control the incident to prevent further unauthorized access to or use of customer information; and
  • Notifying customers when warranted in a manner designed to ensure that a customer can reasonably be expected to receive it.

The guidance goes on to state that even if the incident originated with a service provider the institution is still responsible for notifying their customers and regulator.  Although they may contract that back to the service provider, I have personally not seen notification outsourcing to be commonplace, and in fact I would not recommend it.  An incident could carry reputation risk, but mishandled regulator or customer notification could carry significant regulatory and financial risks.  In other words, while the former could be embarrassing and costly, the latter could shut you down.

So to summarize the challenges:

  • Financial institutions are outsourcing more and more critical products and services.
  • Service providers must be held to the same data security standards as the institution, but…
  • …the regulators are only slowly catching up, resulting in a mismatch between the FI’s security, and the service provider’s.
  • Cyber criminals are exploiting that mismatch to increasingly, and successfully, target institutions via their service providers.

What can be done to address these challenges?  Vendor selection due diligence and on-going oversight are still very important, but because of the lack of control, an effective incident response plan is the best, and perhaps only, defense.  Yes, preventive controls are always best, but lacking those, being able to quickly react to a service provider incident is essential to minimizing the damage.  When was the last time you reviewed your incident response plan?  Does it contain all of the minimum elements listed above?  Better yet, when was the last time you tested it?

Just as with disaster recovery, the only truly effective plan is one that is periodically updated and tested.  But unlike DR plans, most institutions don’t even update their incident response plans, let alone test them.  And while there are no specific indications that regulators have increased scrutiny of incident response plans just yet, I would not be at all surprised if they do so in the near future.  Get ahead of this issue now by updating your plan and testing it.  Use a scenario from recent events; there are certainly plenty of real-world examples to choose from.  Gather the members of your incident response team together and walk through your response; the entire test shouldn’t take more than an hour or so.  Answer the following questions:

  1. What went wrong to cause the incident?  Why (times 5…root cause)?  If this is a vendor incident, full immediate disclosure of all of the facts to get to the root cause may be difficult, but request them anyway…in writing.
  2. Was our customer or other confidential data exposed?  If so, can it be classified as “sensitive customer information“?
  3. Is this a reportable incident to our regulators?  If so, do we notify them or does the vendor?  (Check your contract)
  4. Is this a reportable incident to our customers?  How do we decide if “misuse of the information has occurred or it is reasonably possible that misuse will occur“?
  5. Is this a reportable incident to law enforcement?
  6. What if the incident involved a denial of service attack, but no customer information was involved?  A response may not be required, but should you respond anyway?
  7. What can we do to prevent this from happening again (see #1), and if we can’t prevent it, are there steps we should take to reduce the possibility?  Can the residual risk be transferred?

Make sure to document the test, and then test again the next time an incident makes the news.  It may not prevent the next incident from involving you, but it could definitely minimize the impact!
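If it helps to structure that documentation, the seven questions above map naturally onto a simple test record. This layout is just an illustration, not a required format:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative record for documenting a tabletop incident response test.
# Fields mirror the questions in the post; the structure is an assumption.
@dataclass
class TabletopTest:
    scenario: str
    test_date: date
    root_cause: str = ""                  # question 1: the "5 whys" answer
    sensitive_info_exposed: bool = False  # question 2
    notify: dict = field(default_factory=lambda: {
        "regulator": False,        # question 3
        "customers": False,        # question 4
        "law_enforcement": False,  # question 5
    })
    prevention_steps: list = field(default_factory=list)  # questions 6-7
```

One record per test, kept with your incident response plan, is exactly the kind of evidence an examiner would want to see.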

 

NOTE:  For more on this topic, Safe Systems will be hosting the webinar “How to Conduct an Incident Response Test” on 6/27.  The presentation will be open to both customers and non-customers and is free of charge, but registration is required.  Sign up here.

 

24 Apr 2013

The Financial Institutions Examination Fairness and Reform Act – Redux

This new bill (H.R. 1553), introduced on April 15th, is actually a word-for-word duplicate of H.R. 3461, which I wrote about here.  The previous bill died in committee, but H.R. 1553 has a few more sponsors.  Now, I know what you are thinking…that there is no such thing as “good” regulation.  But bear with me, because this bill actually is good for the industry, banks and credit unions alike.

The full text of the bill is here, and I encourage everyone to read it and consider throwing your support behind it, but in summary it…

  1. …requires that examiners issue their final examination reports in a timely manner, no later than 60 days following the exit interview.  This is important for FI’s because all examination findings must be reported to the Board and assigned to responsible parties for remediation.  I have heard stories of institutions waiting 6+ months for a final report, and that just isn’t fair.  Since most institutions are on a 12-month examination cycle, this leaves only a few months for remediation in order to avoid repeat findings.
  2. …requires that the examiner make available all factual information relied upon in support of their findings.  This levels the playing field, allowing institutions to see exactly why findings occurred and better preparing them to push back if they think a finding is incorrect.
  3. …changes the treatment of commercial loans.  Currently, if the value of the underlying collateral declines, the loan may be forced into non-accrual status regardless of the repayment capacity of the borrower.  The bill would prevent that from happening.  It would also prevent a new appraisal on a performing loan unless new funds were involved.  It stands to reason that if non-accrual status is tied to non-performing status, it should result in higher asset quality assessments, resulting in fairer reserve requirements, fewer enforcement actions, and fewer CAMELS score downgrades.
  4. …requires a standard definition of “non-accrual” along with consistent reporting requirements.  The less institutions are subject to examiner interpretation, the more predictable the examination experience will be.  Lack of consistency, resulting in unpredictable results, is the single biggest complaint about the examination process.  I don’t know of a single institution that wants to “get away” with anything during an exam; they only want to know what to expect.
  5. …establishes an “examination ombudsman” in the office of the FFIEC, independent of any regulatory agency.  In addition to providing a more impartial forum for the presentation of grievances regarding the examination process, the ombudsman would also be responsible for ensuring that all examinations adhere to the same standards of consistency.  In the survey conducted with my previous post on this topic, 60% of respondents said that they would be more likely to appeal an exam finding if the appeal process were with the FFIEC as opposed to the regulator that conducted the exam.
  6. …would prohibit retaliation against the FI for exercising their rights under the appeals process, and delay any further agency action until the appeals process was complete.

Of course it’s a long road from a bill to a law, but I think you would agree that all these things are good for the industry.  At the very least, any regulation that gives bankers more control and less uncertainty is a welcome change from recent events!  You can track it here.  A companion Senate bill was also just introduced, S. 727.  Track it here.

26 Mar 2013

Court rules in favor of Bank in account takeover case

Unlike the PATCO ruling, a district court in Missouri has ruled in favor of the bank in an account takeover case brought by one of its commercial customers.  This case was very similar to the PATCO case with one important exception, which I’ll discuss shortly.  But it also raises some interesting questions that could impact financial institutions.

First, the details.  In March 2010, BancorpSouth received a request via the Internet to execute a wire transfer in the amount of $440,000 on behalf of its customer, Choice Escrow and Land Title.  The Bank wired the funds, and the following day the customer contacted the Bank to notify them that it had not, in fact, authorized the wire transfer.  The company filed suit to recover the loss, claiming that the Bank did not use appropriate security measures.  But its claim wasn’t that appropriate security wasn’t made available; it was that there were several security options available and the Bank allowed the customer to select an inferior one.  This is quite different from the PATCO case, where strong authentication was available to the Bank from the software vendor, but the Bank in that case decided not to offer it to its customer.  In this case the Bank offered both single- and dual-control authentication options, and the customer waived the dual-control option.  This gave any authorized user of the software the ability to initiate and approve a wire without requiring a second user to approve and release the funds.  Using malware, a hacker was able to gain control of the PC, record the user name and password via a keystroke logger, and send the fraudulent wire.

The PATCO case was decided in favor of the customer because the Bank failed to make strong, commercially reasonable authentication options available to the customer even though the software vendor offered them to the Bank.  But in this case, the judge decided just the opposite: stronger options were made available, but were declined by the customer.  Remember, according to UCC 4A, the risk of loss for an unauthorized transaction will lie with the customer if the bank can establish that its security procedure is a commercially reasonable method of providing security against unauthorized payment orders.  So, advantage Bank.  But again, the customer claimed that the Bank should NOT have offered the weaker option, knowing that it was insufficient to address the risks.  In other words, simply offering the weaker option to the customer was an implicit acknowledgement by the Bank that it was commercially reasonable.  In the end this argument was rejected because the Bank had documentation that it had offered, and the customer had refused, the stronger option multiple times.
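Dual control, as described here, is simply separation of duties: the credential that initiates a wire cannot also release it, so a single compromised login isn’t enough. A minimal sketch of the idea (not any vendor’s actual API):

```python
# Separation of duties for wire release: a single compromised credential
# (e.g. captured by a keystroke logger) cannot both initiate and approve.
class WireRequest:
    def __init__(self, amount, initiated_by):
        self.amount = amount
        self.initiated_by = initiated_by
        self.released = False

    def approve(self, approved_by):
        """Release the wire only if the approver differs from the initiator."""
        if approved_by == self.initiated_by:
            raise PermissionError("dual control: approver must differ from initiator")
        self.released = True
        return self.released

wire = WireRequest(440_000, initiated_by="alice")
# wire.approve("alice") would raise; a second authorized user must release it:
wire.approve("bob")
```

The cost is exactly the inconvenience Choice objected to: two people must touch every wire. The benefit is that the attack described above fails.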

Although this case turned out OK for the Bank, the verdict does raise several questions for financial institutions:

  • Knowing that one option is better than another, should institutions even offer more than one authentication option to their customers?  And what happens when a customer (or product) increases in risk?  Do you require the users to upgrade?
  • Since the judge in both this case and the PATCO case referenced UCC 4A as the legal basis for their decisions, should the FFIEC be more prescriptive about exactly what constitutes “commercially reasonable” (and what doesn’t)?  The 2003 FFIEC E-Banking guidance says that “whether a method is a commercially reasonable system depends on an evaluation of the circumstances.”  But the updated 2011 FFIEC authentication guidance doesn’t mention “commercially reasonable” (or UCC 4A) at all.  Why not?  Specifically, why not include the “…the risk of loss for an unauthorized transaction will lie with a customer if…” language?
  • Are institutions putting too much faith in technical measures, and avoiding simpler, but more effective, controls?   Anomaly detection is getting a lot of attention these days, but in this case Choice had a history of transfers of similar size and quantity, and anomaly triggers were not activated.  Simple dual control would have prevented this fraudulent transfer.
  • On the other hand, are vendors overlooking more effective technologies, such as out-of-band authentication and secure DNS?
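The anomaly-detection point is worth making concrete: a trigger only fires when a transaction deviates from history, so a fraudulent wire sized like the legitimate ones sails through. A toy z-score trigger (the threshold and approach are illustrative, not how any particular product works) shows why:

```python
from statistics import mean, stdev

def is_anomalous(history, amount, z_threshold=3.0):
    """Flag a wire whose amount deviates sharply from historical transfers.
    A simple z-score test; real products use many more signals."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(amount - mu) / sigma > z_threshold

# Choice's fraudulent $440,000 wire resembled its legitimate history,
# so nothing fired (amounts here are made up for illustration):
history = [400_000, 450_000, 425_000, 430_000]
is_anomalous(history, 440_000)   # within the normal range: not flagged
```

A control that depends on the fraud looking unusual is no substitute for one, like dual control, that works regardless of what the transaction looks like.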

In summary, there are still questions, but there are also a couple of lessons financial institutions should take away from this.  First, the court determined that although dual control was more labor intensive for the customer, it was also the more secure option, and as such Choice should have opted for increased security over the increased inconvenience.  Lesson?  Perhaps you should be less concerned about inconveniencing your customers with increased security requirements, and more focused on convincing (i.e. educating) them on why a little increased effort on their part is justified…i.e. security trumps usability.  Second, customer awareness efforts and documentation made all the difference in this case.  If the Bank had not made, and documented, multiple efforts to implement stronger authentication, this case could easily have gone the other way.