Tag: incident management

04 Jun 2013

Incident Response in an Outsourced World

UPDATE – On June 6th the FFIEC formed the Cybersecurity and Critical Infrastructure Working Group, designed to enhance communications among the FFIEC member agencies as well as other key financial industry committees and councils.  The goal of this group will undoubtedly be to increase the defense and resiliency of financial institutions to cyber attacks, but the question is, “What effect will this have on new regulatory requirements and best practices?”  Will annual testing of your incident response plan be a requirement, just as testing your BCP is now?  I think you can count on it…

I’ve asked the following question at several recent speaking engagements:  “Can you remember the last time you heard about a financial institution being hacked, and having its information stolen?”  No responses.  I then ask a second question:  “Can anyone remember the last time a service provider was hacked, and financial institution data stolen?”  Heartland…TJX…FIS…almost every hand goes up.

As financial institutions have gotten pretty good at hardening and protecting data, cyber criminals are focusing more and more on the service providers as the weak link in the information security chain.  And wherever there are incidents making the news, the regulators are sure to follow with new regulations and increased reinforcement of existing ones.

The regulators make no distinction between your responsibilities for data within your direct control and data outside your direct control:

“Management is responsible for ensuring the protection of institution and customer data, even when that data is transmitted, processed, stored, or disposed of by a service provider.” (Emphasis added)

In other words, you have 100% of the responsibility, and zero control.  All you have is oversight, which is at best predictive and reactive, and NOT preventive.  So you use the vendor’s past history and third-party audit reports to try to predict their ability to prevent security incidents, but in the end you must have a robust incident response plan to effectively react to the inevitable vendor incident.

The FFIEC last issued guidance on incident response plans in 2005 (actually just an interpretation of GLBA 501b provisions), stating that…

“…every financial institution should develop and implement a response program designed to address incidents of unauthorized access to sensitive customer information maintained by the financial institution or its service provider.” (Emphasis added)

The guidance specified certain minimum components for an incident response plan, including:

  • Assessing the nature and scope of an incident and identifying what customer information systems and types of customer information have been accessed or misused;
  • Notifying its primary federal regulator as soon as possible when the institution becomes aware of an incident involving unauthorized access to or use of sensitive customer information;
  • If required, filing a timely SAR, and in situations involving federal criminal violations requiring immediate attention, such as when a reportable violation is ongoing, promptly notifying appropriate law enforcement authorities;
  • Taking appropriate steps to contain and control the incident to prevent further unauthorized access to or use of customer information; and
  • Notifying customers when warranted in a manner designed to ensure that a customer can reasonably be expected to receive it.

The guidance goes on to state that even if the incident originated with a service provider, the institution is still responsible for notifying its customers and regulator.  Although it may contract that task back to the service provider, I have personally not seen notification outsourcing to be commonplace, and in fact I would not recommend it.  An incident could carry reputation risk, but mishandled regulator or customer notification could carry significant regulatory and financial risks.  In other words, while the former could be embarrassing and costly, the latter could shut you down.

So to summarize the challenges:

  • Financial institutions are outsourcing more and more critical products and services.
  • Service providers must be held to the same data security standards as the institution, but…
  • …the regulators are only slowly catching up, resulting in a mismatch between the FI’s security and the service provider’s.
  • Cyber criminals are exploiting that mismatch to increasingly, and successfully, target institutions via their service providers.

What can be done to address these challenges?  Vendor selection due diligence and on-going oversight are still very important, but because of the lack of control, an effective incident response plan is the best, and perhaps only, defense.  Yes, preventive controls are always best, but lacking those, being able to quickly react to a service provider incident is essential to minimizing the damage.  When was the last time you reviewed your incident response plan?  Does it contain all of the minimum elements listed above?  Better yet, when was the last time you tested it?

Just as with disaster recovery, the only truly effective plan is one that is periodically updated and tested.  But unlike DR plans, most institutions don’t even update their incident response plans, let alone test them.  And while there are no specific indications that regulators have increased scrutiny of incident response plans just yet, I would not be at all surprised if they do so in the near future.  Get ahead of this issue now by updating your plan and testing it.  Use a scenario from recent events; there are certainly plenty of real-world examples to choose from.  Gather the members of your incident response team together and walk through your response; the entire test shouldn’t take more than an hour or so.  Answer the following questions:

  1. What went wrong to cause the incident?  Why (times 5…root cause)?  If this is a vendor incident, full immediate disclosure of all of the facts to get to the root cause may be difficult, but request them anyway…in writing.
  2. Was our customer or other confidential data exposed?  If so, can it be classified as “sensitive customer information”?
  3. Is this a reportable incident to our regulators?  If so, do we notify them or does the vendor?  (Check your contract)
  4. Is this a reportable incident to our customers?  How do we decide if “misuse of the information has occurred or it is reasonably possible that misuse will occur”?
  5. Is this a reportable incident to law enforcement?
  6. What if the incident was a denial of service attack, and no customer information was involved?  A response may not be required, but should you respond anyway?
  7. What can we do to prevent this from happening again (see #1), and if we can’t prevent it, are there steps we should take to reduce the possibility?  Can the residual risk be transferred?
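The notification logic behind questions 2 through 4 above can be sketched as a simple decision helper.  This is only a rough illustration of the FFIEC 2005 guidance tests as described in this post — the function name and parameters are hypothetical, and real notification decisions belong with counsel and your contract:

```python
def notification_required(sensitive_info_involved: bool,
                          misuse_occurred: bool,
                          misuse_reasonably_possible: bool) -> dict:
    """Rough sketch of who must be notified after an incident.

    Loosely mirrors the FFIEC 2005 guidance: the primary federal regulator
    is notified whenever sensitive customer information was accessed;
    customers are notified when misuse has occurred or is reasonably
    possible.  Not legal advice -- check your contract and your counsel.
    """
    return {
        "regulator": sensitive_info_involved,
        "customers": sensitive_info_involved
                     and (misuse_occurred or misuse_reasonably_possible),
    }

# Example: data was exposed, no known misuse, but misuse is reasonably possible
print(notification_required(True, False, True))
# {'regulator': True, 'customers': True}
```

Walking a tabletop scenario through even a toy decision table like this tends to surface the ambiguous cases (what exactly counts as “reasonably possible”?) that your written plan should resolve in advance.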

Make sure to document the test, and then test again the next time an incident makes the news.  It may not prevent the next incident from involving you, but it could definitely minimize the impact!

 

NOTE:  For more on this topic, Safe Systems will be hosting the webinar “How to Conduct an Incident Response Test” on 6/27.  The presentation will be open to both customers and non-customers and is free of charge, but registration is required.  Sign up here.

 

21 Aug 2012

NIST Incident Response Guidance released

UPDATE – The National Institute of Standards and Technology (NIST) has just released an update to their Computer Security Incident Handling Guide (SP 800-61).   The guide contains very prescriptive guidance that can be used to frame, or enhance, your incident response plan.  It also contains a very useful incident response checklist on page 42.  I’ve taken the liberty of  modifying it slightly to conform to the FFIEC guidance.  It is form-fillable and available for download here.  I hope you find it useful for testing purposes as well as actual incidents.  I originally posted on this back in May 2012, here is the rest of the original post:


Although adherence to NIST standards is strictly required only for Federal agencies, and is not binding for financial institutions, NIST publications are referred to 10 times in the FFIEC IT Handbooks (8 times in the Information Security Handbook alone).  They are considered a best-practice metric by which you can measure your efforts.  So because of…

  1. The importance of properly managing an information security event,
  2. The increasing frequency, complexity, and severity of security incidents,
  3. The scarcity of recent regulatory guidance in this area, and
  4. The relevance of NIST to future financial industry regulatory guidance,

…it should be required reading for all financial institutions.

Incident response is actually addressed in 4 FFIEC Handbooks: Information Security, Operations, BCP, and E-Banking.  It’s important to distinguish here between a security incident and a recovery, or business continuity, incident.  This post will only address security incidents, but guidance states that the two areas intersect in this way:

“In the event of a security incident, management must decide how to properly protect information systems and confidential data while also maintaining business continuity.”

Security incidents and recovery incidents also share this in common…both require an impact analysis to prioritize the recovery effort.

So although there are several regulatory references to security incident management, none have been updated since 2008 even though the threat environment has certainly changed since then.  Perhaps SP 800-61 will form the basis for this update just as  SP 800-33 informed the FFIEC Information Security Handbook a few years after its release.  But until it does, proactive Information Security Officers and Network Administrators would do well to familiarize (or re-familiarize) themselves with the basic concepts.

NIST defines the incident life-cycle as four phases: Preparation; Detection & Analysis; Containment, Eradication & Recovery; and Post-Incident Activity.

Each section is detailed in the guide, and worth reading in its entirety, but to summarize:

  1. Preparation – Establish an incident response capability by defining a policy (typically part of your Information Security Program), and an incident response team (referred to in the FFIEC IT Handbook as the Computer Security Incident Response Team, or CSIRT).  Smaller institutions may want to have the IT Steering Committee manage this.  Assess the adequacy of your preventive capabilities in the current threat environment, including patch management, Anti-virus/Anti-malware, firewalls, network intrusion prevention and server intrusion prevention.  Don’t forget employee awareness training…perhaps the most important preventive control.
  2.  Detection & Analysis – Determine exactly what constitutes an incident, and under what circumstances you will activate your incident response team and initiate procedures.  Signs of an incident fall into one of two categories: precursors and indicators.  A precursor is a sign that an incident may occur in the future.  An indicator is a sign that an incident may have occurred or may be occurring now.  Many of the controls listed in the preparation phase (firewalls, IPS/IDS devices, etc.) can also alert to both precursors and indicators.  The initial analysis should provide enough information for the team to prioritize subsequent activities, such as containment of the incident and deeper analysis of the effects of the incident (steps 3 & 4).
  3. Containment, Eradication & Recovery – Most incidents require containment to control and minimize the damage.  An essential part of containment is decision-making i.e., shutting down a system, disconnecting it from a network, disabling certain functions, etc.  Although you may be  tempted to start the eradication and recovery phase immediately, bear in mind that you’ll need to collect as much forensic evidence as possible to facilitate the post-incident analysis.
  4. Post-Incident Activity – Simply put, lessons learned.  It’s easy to ignore this step as you try to get back to the business of banking, but don’t.  Some important questions to answer are:
  • Exactly what happened, and at what times?
  • How well did staff and management perform in dealing with the incident? Were the documented procedures followed? Were they adequate?
  • Were any steps or actions taken that might have inhibited the recovery?
  • What could we do differently the next time a similar incident occurs?
  • What corrective actions can prevent similar incidents in the future?
  • What additional tools or resources are needed to detect, analyze, and mitigate future incidents?
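The four phases above, and the precursor/indicator distinction, can be captured in a minimal incident record.  This is only a sketch; the class and field names are hypothetical, chosen to mirror the NIST life-cycle and the lessons-learned questions rather than taken from any guidance:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class Phase(Enum):
    """The four NIST SP 800-61 incident life-cycle phases."""
    PREPARATION = 1
    DETECTION_AND_ANALYSIS = 2
    CONTAINMENT_ERADICATION_RECOVERY = 3
    POST_INCIDENT_ACTIVITY = 4

@dataclass
class IncidentRecord:
    summary: str
    detected_at: datetime
    phase: Phase = Phase.DETECTION_AND_ANALYSIS
    precursors: list[str] = field(default_factory=list)   # signs an incident may occur
    indicators: list[str] = field(default_factory=list)   # signs one may be occurring now
    containment_actions: list[str] = field(default_factory=list)
    lessons_learned: list[str] = field(default_factory=list)

# Example: log an indicator-driven incident and advance it through the phases
incident = IncidentRecord(
    summary="IDS alert: unusual outbound traffic from a branch server",
    detected_at=datetime(2012, 8, 21, 9, 30),
    indicators=["IDS alert", "spike in outbound traffic volume"],
)
incident.containment_actions.append("Disconnected affected server from the network")
incident.phase = Phase.POST_INCIDENT_ACTIVITY
```

Even a skeleton like this enforces the discipline the guide asks for: every incident gets timestamps, a phase, and a lessons-learned field that must be filled in before the record is closed.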

The FFIEC addresses intrusion response here, and recommends that “institutions should assess the adequacy of their preparations through testing.”  Use the checklist, and recent security incidents to train your key CSIRT personnel, and then use the full guide to fine-tune and enhance your incident response capabilities.


12 Jun 2012

Managing Social Media Risk – LinkedIn Edition

By now everyone has heard about the breach at LinkedIn, in which 6.5 million password hashes were leaked (over half of which have been cracked, i.e. converted back into plain text).  Those who read this blog regularly know how I feel about social media in general:

“So managing social media risk boils down to this:  You must be able to justify your decision (both to engage and to not engage) strategically, but to do so requires an accurate cost/benefit analysis.  Both costs (reputation, and other residual risks) and benefits (strategic and financial) are extremely difficult to quantify, which means that in the end you are accepting an unknown level of risk, to achieve an uncertain amount of benefit.”

This is not to say that social media can never be properly risk managed, only that the decision to engage (or not) must be analyzed the same way you analyze any other business decision.  And this is a challenge because social media does not easily lend itself to traditional risk management techniques, and this incident is a good case in point.

So once again, let’s use this latest breach as yet another incident training exercise.  In your initial risk assessment, chances are you classified the site as low risk.  There is no NPI/PII stored there, and it doesn’t offer transactional services beyond account upgrades.  Additionally, regarding the breach itself, only about 5% of all user password hashes were disclosed, and as I said previously, about half of those were converted into the underlying plain text password.  And what exactly is your risk exposure if your password was one that was stolen and cracked?  First of all, they would also need your login name to go with the password.  But if they were able to somehow put the two together, they might change your employment or background information, or post something that could portray you or your company in a negative light.  So there are certainly some risks, but they come with lots of “ifs”.  So low probability + low impact = low risk…change your password and move on, right?

Well maybe, depending on how you answer this question:  Is your LinkedIn password being used anywhere else?  If you have a “go-to” password that you use frequently (and most people do) you should assume that it’s out there in the wild, and you can also assume it is now being used in dictionary attacks.  So yes, if you are an individual user, change your LinkedIn password, but also change all other occurrences of that password.
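To see why the leaked hashes cracked so quickly, here is a minimal sketch of a wordlist attack against a fast, unsalted hash (the leaked LinkedIn hashes were unsalted SHA-1), contrasted with a salted, deliberately slow alternative.  The passwords and wordlist here are invented for illustration:

```python
import hashlib
import os

# Unsalted SHA-1: every user who picked the same password produces the
# identical hash, so one wordlist pass cracks all of them at once.
leaked_hash = hashlib.sha1(b"linkedin123").hexdigest()

wordlist = ["password", "123456", "letmein", "linkedin123"]
cracked = next(
    (w for w in wordlist
     if hashlib.sha1(w.encode()).hexdigest() == leaked_hash),
    None,
)
print(cracked)  # linkedin123

# A per-user random salt plus a deliberately slow key-derivation function
# (PBKDF2 here) forces the attacker to attack each hash separately and
# makes every guess tens of thousands of times more expensive.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"linkedin123", salt, 100_000)
```

This is exactly why a “go-to” password is dangerous: one unsalted leak anywhere instantly becomes a working credential everywhere else it was reused.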

But back to our training exercise…if you are an institution with an official (or unofficial) LinkedIn presence through one or more employees, even if they’ve changed their password(s), you may still be at risk.  If the employee uses the same password to access your Facebook or Google+ page, or remotely authenticate to your email system, or access anything else that is connected to you, your response procedures should require (and validate) that all affected passwords have been changed.  In fact, since you have no way of knowing if your employee has a personal LinkedIn (or Facebook, etc.) presence,  it might be good practice to have your network administrator force all passwords to change just to be safe.  You may also want to change your policy to state that  internal (or corporate) passwords should never be duplicated or re-used on external or personal sites (although enforcing that may be a challenge).

As far as what you can do to reduce the chance of this type of incident happening again, there isn’t much.  You have to rely on your service providers to properly manage their own security.  You do this in part by obtaining and reviewing third-party reviews (like the SOC reports) if they exist,  but also by reviewing the vendor’s own privacy and security policy.  For example, LinkedIn’s privacy policy says this about the data it collects from you:

  • Security
    • Personal information you provide will be secured in accordance with industry standards and technology. Since the internet is not a 100% secure environment, we cannot ensure or warrant the security of any information you transmit to LinkedIn. There is no guarantee that information may not be accessed, copied, disclosed, altered, or destroyed by breach of any of our physical, technical, or managerial safeguards. (Bold is mine)
    • You are responsible for maintaining the secrecy of your unique password and account information, and for controlling access to your email communications at all times.

Even though they have made public statements that they have taken steps to address the root cause of the breach, given the above policy there is no indication that LinkedIn feels it necessary to obtain a third-party review to validate their enhanced privacy and security measures.  Granted, given the nature of the information they collect and store, they may not feel compelled to do so, and you may not require it, but at the very least you should expect the passwords to be secure.

The first step in managing risk is to identify it.  In this case because of the breach, the unanswered questions*, the lack of a third-party review, and their privacy policy, you are accepting a higher level of residual risk with them than you would normally find acceptable in another vendor.  You can still rationalize your decision strategically, but you must quantify the expected return and then document that the return justifies the increased risk.  And then do the same for your other social media efforts!

 

*Indeed, there are several issues raised by this breach that have yet to be answered:  How did it occur?  Could the breach be worse than disclosed?  Why did they hash the passwords using the older SHA-1 algorithm?  Why did they not salt the hashes?  Why didn’t they have a CIO?  Did they truly use industry standards to secure your information?  If they did, those standards are clearly inadequate, so will they now exceed industry standards?

09 Jan 2012

Another incident management table-top training exercise

I’ve mentioned before that financial institutions would be wise to use news reports of security incidents as “what if” table-top training exercises.  Here is another one that just occurred a couple of days ago:

Test scenario:

  • You receive a subpoena from a government agency requesting financial information on several customers.  The subpoena includes names and social security numbers for the customers involved.
    • (Your privacy policy probably contains verbiage similar to this:  “Social Security numbers may only be accessed by and disclosed to <bank employees> and others with a legitimate business “need to know” in accordance with applicable laws and regulations”, or perhaps you state that you will disclose only if “…responding to court orders and legal investigations”.)
  • You determine that information disclosure is necessary and appropriate in this case, and you provide the information.
  •  Although there is nothing in your privacy policy that requires it, you then decide that you will notify the affected customers that their information was disclosed pursuant to a legal request.
  • You send a letter to each affected customer explaining the reasons for the disclosure, as well as what information was disclosed.
  • You include a copy of the original subpoena, in its original form, in the letter to the affected customers, including the names and social security numbers of all of the affected customers.  In other words, you did not redact information pertaining to anyone other than the intended recipient of each letter; every affected customer received everyone else’s information in addition to their own.

Discussion topics:

  1. Does this qualify as a “security incident” as defined by your Incident Response Plan?  It is clearly not an intrusion, but it does qualify as an irregular or adverse event that negatively impacts the confidentiality of customer non-public information.
  2. Is customer or regulator notification required?  In order to answer this question, answer the following:  “Has misuse of non-public information occurred, or is it reasonably possible that misuse could occur?”  If the answer is “yes”, customer and regulator notification is required, as well as credit monitoring services, ID theft insurance, credit freeze activation, and any other remedies the law, and your policies, require.
  3. Is a Suspicious Activity Report filing required?  (Perhaps not, but I would err on the side of caution.)
  4. What, if anything, would we do differently?  Under what exact circumstances will we disclose customer NPI?  If disclosed, will we notify the affected customer?  What are the legal implications?

Use these real world examples to fine tune your incident management policies and procedures.  Perhaps they will prevent you from becoming someone else’s training exercise!