Tag: incident response program

23 Jul 2014

Cybersecurity – Part 2

In Part 1 I discussed the increasing regulatory focus on cybersecurity, and what to expect in the short term.  In this post I want to dissect the individual elements of cybersecurity, and list what you’ll need to do to demonstrate compliance on each one going forward. So here are the required elements of a cybersecurity program, followed by what you need to do:

  • Governance – risk management and oversight
  • Threat intelligence and collaboration – Internal & External Resources
  • Third-party service provider and vendor risk management
  • Incident response and resilience

1.     Governance – risk management and oversight

Nothing new about this one: virtually all FFIEC IT Handbooks list proper governance as the first and most important item necessary for compliance, and governance begins at the top.  In fact, a recent FFIEC webinar was titled “Executive Leadership of Cybersecurity: What Today’s CEO Needs to Know About the Threats They Don’t See.”  But governance involves more than just management oversight.  The IT Handbook defines it this way:

“Governance is achieved through the management structure, assignment of responsibilities and authority, establishment of policies, standards and procedures, allocation of resources, monitoring, and accountability.”

What you need to do:

  • Update & Test your Policies, Procedures and Practices.  Verify that cyber threats are specifically included in your information security, incident response, and business continuity policies.
  • Assess your Cybersecurity Risk (Risk = Threat times Vulnerability minus Controls; a worked sketch follows this list).  When selecting controls, remember that there are three categories: preventive, detective, and responsive/corrective.  Preventive controls are always best, but given the increasing reliance on third parties for data processing and storage, they may be outside your direct control.  Focus instead on detective and responsive controls.  Also, make sure your assessment accounts for any actual events affecting you or your vendors.  Document both:
    • Inherent cybersecurity risk exposure – risk level prior to application of mitigating controls
    • Residual cybersecurity risk exposure – risk remaining after application of controls
  • Adjust your Policies, Procedures and Practices as needed based on the risk assessment results.
  • Use your IT Steering Committee (or equivalent) to manage the process.
  • Provide periodic Board updates.
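To make the formula above concrete, here is a minimal sketch of how the scoring arithmetic might work in practice.  The 1-to-5 scales, the control point values, and the level thresholds are my own illustration, not a regulatory formula; adapt them to your own assessment methodology.

```python
# Minimal risk-scoring sketch: Risk = (Threat x Vulnerability) - Controls.
# The 1-5 scales and level thresholds below are illustrative assumptions,
# not a regulatory formula; adapt them to your own methodology.

def risk_score(threat: int, vulnerability: int, controls: int = 0) -> int:
    """Threat and vulnerability rated 1 (low) to 5 (high); controls subtract."""
    return threat * vulnerability - controls

def risk_level(score: int) -> str:
    if score >= 15:
        return "High"
    if score >= 8:
        return "Moderate"
    return "Low"

# A cyber threat rated 4 against a vulnerability rated 4, with detective
# and responsive controls credited at 8 points:
inherent = risk_score(threat=4, vulnerability=4)               # before controls
residual = risk_score(threat=4, vulnerability=4, controls=8)   # after controls

print(f"Inherent: {inherent} ({risk_level(inherent)})")  # Inherent: 16 (High)
print(f"Residual: {residual} ({risk_level(residual)})")  # Residual: 8 (Moderate)
```

Under this sketch the same event drops from High inherent risk to Moderate residual risk once detective and responsive controls are credited, which is precisely the inherent-versus-residual distinction you are asked to document.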

2.     Threat intelligence and collaboration – Internal & External Resources

This element reflects both the complexity and the pervasiveness of the cybersecurity problem, and (unlike governance) is a particular challenge for smaller institutions (under $1 billion in assets).  According to a study conducted in May of this year by the New York State Department of Financial Services, the information security frameworks of small institutions lagged behind those of larger institutions in two key areas: oversight of third-party service providers (more on that later), and membership in an information-sharing organization.

What you need to do:

Regulators expect all financial institutions to identify and monitor cyber threats to their organization, and to the financial sector as a whole.  Make sure this “real-world” information is factored into your risk assessment.  Information-sharing resources include organizations such as the FS-ISAC (Financial Services Information Sharing and Analysis Center), US-CERT, and the FBI’s InfraGard program.

3.     Third-party service provider and vendor risk management

For the vast majority of outsourced financial institutions, managing cybersecurity comes down to managing the risk originating at third-party providers and other unaffiliated third parties.  As the Chairman of the FFIEC, Thomas J. Curry, recently stated:

“One area of ongoing concern is the increasing reliance on third parties…The OCC has long considered bank oversight of third parties to be an important part of a bank’s overall risk management capability.”

Smaller institutions may be even more at risk, because they tend to rely more on third parties and, as I pointed out earlier, tend to lag behind larger institutions when it comes to vendor management.  This is mostly a function of available internal resources.  Larger institutions may conduct their own compliance audits, while smaller institutions may rely more on external resources, such as SOC reports and FFIEC Reports of Examination (ROEs).  And once the reports are received, interpreting them to determine whether they actually address your concerns can be an even bigger challenge.

What you need to do:

Regardless of size, all institutions should employ basic vendor management best practices to understand and control third-party risk.  Pay particular attention to the following:

  • Pre-contract Planning & Due Diligence – in addition to reviewing the SOC reports and ROEs, determine whether the vendor has had any recent significant security events.
  • Contracts – they should define if and how you’ll be notified in the event of a security incident involving your institution’s or your customers’ data, and who is responsible for customer notification.  They should also include a “right-to-audit” clause, giving you the right to conduct audits at the service provider if necessary.
  • Ongoing Monitoring – in addition to updated SOC reports, financials, and ROEs, don’t forget to take advantage of vendor forums and user groups.  As the FFIEC statement stressed:

“…financial institutions that utilize third party service providers should check with their provider about the existence of user groups that also could be valuable sources of information.”

  • Termination/Disengagement – management should understand what happens to their data at the end of the relationship.

4.     Incident response and resilience

Incident response has been mentioned in all regulatory statements about cybersecurity, and for good reason.  Regardless of whether it originates internally or externally, a security incident is a virtual certainty.  And regulators know that although vendor oversight does provide some measure of assurance, you have very little actual control over specific vendor-based preventive controls.  So detective and corrective/responsive controls must compensate.

What you need to do:

Make sure your incident response program (IRP) has been updated to accommodate a response to a cybersecurity event.  As I stated in Part 1, your existing policies should already do this if they are impact-based instead of threat-based.  “Cyber” simply refers to the source or nature of the threat.  The impact of a cybersecurity event is generally the same as any other adverse event: information is compromised or business is interrupted.  In any case, all IRPs should contain certain elements:

  • The incident response team members
  • A method for classifying the severity of the incident
  • A response based on severity, to include internal escalation and external notification (one way to structure this is sketched after this list)
  • Periodic testing and Board reporting
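For the severity and escalation elements, here is a minimal sketch of an impact-based classification, in keeping with the impact-based approach described above.  The tiers, triggers, and escalation actions are hypothetical placeholders to be mapped to your own plan and notification obligations.

```python
# Hypothetical severity-classification sketch for an IRP. The tiers, the
# triggers, and the escalation actions are illustrative assumptions only;
# map them to your own plan and notification obligations.

from dataclasses import dataclass

ACTIONS = {
    "low":      ["Log and monitor", "Notify the information security officer"],
    "moderate": ["Notify the ISO", "Convene the incident response team"],
    "high":     ["Convene the incident response team",
                 "Escalate to senior management and the Board",
                 "Evaluate regulator and customer notification"],
}

@dataclass
class Incident:
    description: str
    customer_data_exposed: bool = False
    business_interrupted: bool = False

def classify(incident: Incident) -> str:
    """Impact-based: data compromise outranks interruption, which outranks neither."""
    if incident.customer_data_exposed:
        return "high"
    if incident.business_interrupted:
        return "moderate"
    return "low"

event = Incident("Vendor reports unauthorized access to customer files",
                 customer_data_exposed=True)
for action in ACTIONS[classify(event)]:
    print(action)
```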

Regarding testing, the FFIEC considers it so important that they cite it as one of the primary takeaways from their recent webinar, encouraging all institutions to consider:

“How often is my institution testing its plans to respond to a cyber attack?  Do these tests include our key internal and external stakeholders?”

In summary, review the requirements for cybersecurity and compare them with your current policies, procedures and practices.  Hopefully you’ve already incorporated many (if not most) of these elements into your program, and little adjustment will be needed.  But either way, be prepared to discuss what you are doing, and how you are doing it, with the regulators…they WILL be asking you.

10 Jul 2014

Cybersecurity – Part 1

Cybersecurity has gotten a lot of attention from regulators lately, and with assessments already underway it promises to be a regulatory focus for the foreseeable future.  But exactly what are they expecting from you, and how does that differ from what you may be doing already?  More importantly, how should you demonstrate that you are cybersecurity compliant?

First of all it’s important to understand that, at least initially, the regulators will only be gathering data.  They may offer verbal feedback, but don’t expect any written examination findings or recommendations at this time.  What they will be doing is assessing the industry’s overall cybersecurity posture.  It would appear that the regulators are following the NIST Cybersecurity Framework, released earlier this year in response to the Presidential Executive Order of February 2013.  The NIST framework provides a common mechanism for organizations to:

  1. Describe their current cybersecurity posture;
  2. Describe their target state for cybersecurity;
  3. Identify and prioritize opportunities for improvement within the context of a continuous and repeatable process;
  4. Assess progress toward the target state; and
  5. Communicate among internal and external stakeholders about cybersecurity risk.

It would appear that financial regulators are currently on step 1: gathering information in order to describe the current state of cybersecurity across the financial industry.  Of course, once the current state has been established, I expect that the “target state” for cybersecurity (step #2) will involve additional regulatory expectations.

So what do you need to do now?  Well, if you’ve kept your information security, business continuity, and vendor management policies and procedures up-to-date, probably not much.  Cybersecurity is simply a subset of each of those existing policies.  In most cases, ‘cyber’ refers to either the source or nature of the attack or the vulnerability.  Your InfoSec policies (including incident response) should already address this, and so should your business continuity plan.  In other words, you should already have procedures in place to secure customer and confidential data and recover critical business processes regardless of the source or nature of the threat.  Your policies should all be impact-based, not threat-based.

Your risk assessments, however, may need to be adjusted if they don’t specifically account for cyber threats.  For example, critical vendors should be assessed for their exposure to, and protection from, cyber threats…with your controls adjusted accordingly (e.g. audit reports, penetration tests).  Your BCP risk assessment should account for the impact and probability of cyber fraud, theft, and blackmail, as well as their traditional counterparts.  All that said, regulators will likely be looking for specific references to ‘cyber’, so it won’t hurt to make sure your policies include the term as well.

For me, the biggest takeaway from the flurry of cybersecurity activity (the 2013 Presidential Directive, the 2013 FFIEC working group, the 2014 NIST Cybersecurity Framework, the recent FFIEC statements on ATM hacking, Heartbleed, and DDoS attacks, as well as the FDIC’s recent C-level cybersecurity webinar) is this: for the vast majority of outsourced financial institutions, cybersecurity readiness means (a) managing your vendors, and (b) having a proven plan in place to detect and recover if a cyber-attack occurs.

According to the FDIC, here are the required elements of a cybersecurity risk management program…notice the last two:

  • Governance – risk management and oversight
  • Threat intelligence and collaboration – Internal & External Resources
  • Third-party service provider and vendor risk management
  • Incident response and resilience

I’ve covered vendor management and incident response before.  In Part 2 I’ll break down each of the four elements in greater detail, and tell you what you’ll need to do to demonstrate compliance.

04 Jun 2013

Incident Response in an Outsourced World

UPDATE – On June 6th the FFIEC formed the Cybersecurity and Critical Infrastructure Working Group, designed to enhance communications among the FFIEC member agencies as well as other key financial industry committees and councils.  The goal of this group will undoubtedly be to increase the defense and resiliency of financial institutions against cyber attacks, but the question is, “What effect will this have on new regulatory requirements and best practices?”  Will annual testing of your incident response plan be a requirement, just as testing your BCP is now?  I think you can count on it…

I’ve asked the following question at several recent speaking engagements:  “Can you remember the last time you heard about a financial institution being hacked, and having its information stolen?”  No responses.  I then ask a second question:  “Can anyone remember the last time a service provider was hacked, and financial institution data stolen?”  Heartland…TJX…FIS…almost every hand goes up.

As financial institutions have gotten pretty good at hardening and protecting data, cyber criminals are focusing more and more on service providers as the weak link in the information security chain.  And wherever incidents make the news, the regulators are sure to follow with new regulations and increased enforcement of existing ones.

The regulators make no distinction between your responsibilities for data within your direct control and data outside your direct control:

“Management is responsible for ensuring the protection of institution and customer data, even when that data is transmitted, processed, stored, or disposed of by a service provider.” (Emphasis added)

In other words, you have 100% of the responsibility, and zero control.  All you have is oversight, which is at best predictive and reactive, and NOT preventive.  So you use the vendor’s past history and third-party audit reports to try to predict their ability to prevent security incidents, but in the end you must have a robust incident response plan to effectively react to the inevitable vendor incident.

The FFIEC last issued guidance on incident response plans in 2005 (actually just an interpretation of GLBA 501(b) provisions), stating that…

“…every financial institution should develop and implement a response program designed to address incidents of unauthorized access to sensitive customer information maintained by the financial institution or its service provider.” (Emphasis added)

The guidance specified certain minimum components for an incident response plan, including:

  • Assessing the nature and scope of an incident and identifying what customer information systems and types of customer information have been accessed or misused;
  • Notifying its primary federal regulator as soon as possible when the institution becomes aware of an incident involving unauthorized access to or use of sensitive customer information;
  • If required, filing a timely SAR, and in situations involving federal criminal violations requiring immediate attention, such as when a reportable violation is ongoing, promptly notifying appropriate law enforcement authorities;
  • Taking appropriate steps to contain and control the incident to prevent further unauthorized access to or use of customer information; and
  • Notifying customers when warranted in a manner designed to ensure that a customer can reasonably be expected to receive it.

The guidance goes on to state that even if the incident originated with a service provider, the institution is still responsible for notifying its customers and regulator.  Although institutions may contract that duty back to the service provider, I have not personally seen notification outsourcing become commonplace, and in fact I would not recommend it.  An incident itself carries reputation risk, but mishandled regulator or customer notification could carry significant regulatory and financial risks.  In other words, while the former could be embarrassing and costly, the latter could shut you down.

So to summarize the challenges:

  • Financial institutions are outsourcing more and more critical products and services.
  • Service providers must be held to the same data security standards as the institution, but…
  • …the regulators are only slowly catching up, resulting in a mismatch between the FI’s security and the service provider’s.
  • Cyber criminals are exploiting that mismatch to increasingly, and successfully, target institutions via their service providers.

What can be done to address these challenges?  Vendor selection due diligence and ongoing oversight are still very important, but because of the lack of control, an effective incident response plan is the best, and perhaps only, defense.  Yes, preventive controls are always best, but lacking those, being able to react quickly to a service provider incident is essential to minimizing the damage.  When was the last time you reviewed your incident response plan?  Does it contain all of the minimum elements listed above?  Better yet, when was the last time you tested it?

Just as with disaster recovery, the only truly effective plan is one that is periodically updated and tested.  But unlike DR plans, most institutions don’t even update their incident response plans, let alone test them.  And while there are no specific indications that regulators have increased scrutiny of incident response plans just yet, I would not be at all surprised if they do so in the near future.  Get ahead of this issue now by updating your plan and testing it.  Use a scenario from recent events; there are certainly plenty of real-world examples to choose from.  Gather the members of your incident response team together and walk through your response; the entire test shouldn’t take more than an hour or so.  Answer the following questions:

  1. What went wrong to cause the incident?  Why?  (Ask “why” five times to get to the root cause.)  If this is a vendor incident, full immediate disclosure of all of the facts may be difficult, but request them anyway…in writing.
  2. Was our customer or other confidential data exposed?  If so, can it be classified as “sensitive customer information”?
  3. Is this a reportable incident to our regulators?  If so, do we notify them or does the vendor?  (Check your contract.)
  4. Is this a reportable incident to our customers?  How do we decide if “misuse of the information has occurred or it is reasonably possible that misuse will occur”?
  5. Is this a reportable incident to law enforcement?
  6. What if the incident involved a denial of service attack, but no customer information was involved?  A response may not be required, but should you respond anyway?
  7. What can we do to prevent this from happening again (see #1), and if we can’t prevent it, are there steps we should take to reduce the possibility?  Can the residual risk be transferred?

Make sure to document the test, and then test again the next time an incident makes the news.  It may not prevent the next incident from involving you, but it could definitely minimize the impact!

 

NOTE:  For more on this topic, Safe Systems will be hosting the webinar “How to Conduct an Incident Response Test” on 6/27.  The presentation will be open to both customers and non-customers and is free of charge, but registration is required.  Sign up here.

 

21 Aug 2012

NIST Incident Response Guidance released

UPDATE – The National Institute of Standards and Technology (NIST) has just released an update to their Computer Security Incident Handling Guide (SP 800-61).  The guide contains very prescriptive guidance that can be used to frame, or enhance, your incident response plan.  It also contains a very useful incident response checklist on page 42.  I’ve taken the liberty of modifying it slightly to conform to the FFIEC guidance.  It is form-fillable and available for download here.  I hope you find it useful for testing purposes as well as actual incidents.  I originally posted on this back in May 2012; here is the rest of the original post:


Although adherence to NIST standards is strictly required for Federal agencies, it is not binding on financial institutions.  However, NIST publications are referred to 10 times in the FFIEC IT Handbooks (8 times in the Information Security Handbook alone), and they are considered a best-practice metric by which you can measure your efforts.  So because of…

  1. The importance of properly managing an information security event,
  2. The increasing frequency, complexity, and severity of security incidents,
  3. The scarcity of recent regulatory guidance in this area, and
  4. The relevance of NIST to future financial industry regulatory guidance,

…it should be required reading for all financial institutions.

Incident response is actually addressed in four FFIEC Handbooks: Information Security, Operations, BCP, and E-Banking.  It’s important to distinguish here between a security incident and a recovery, or business continuity, incident.  This post will only address security incidents, but the guidance states that the two areas intersect in this way:

“In the event of a security incident, management must decide how to properly protect information systems and confidential data while also maintaining business continuity.”

Security incidents and recovery incidents also share this in common…both require an impact analysis to prioritize the recovery effort.

So although there are several regulatory references to security incident management, none have been updated since 2008, even though the threat environment has certainly changed since then.  Perhaps SP 800-61 will form the basis for this update, just as SP 800-33 informed the FFIEC Information Security Handbook a few years after its release.  But until it does, proactive Information Security Officers and Network Administrators would do well to familiarize (or re-familiarize) themselves with the basic concepts.

NIST defines the incident life-cycle as four phases: Preparation; Detection & Analysis; Containment, Eradication & Recovery; and Post-Incident Activity, with lessons learned feeding back into preparation.

Each phase is detailed in the guide, and worth reading in its entirety, but to summarize:

  1. Preparation – Establish an incident response capability by defining a policy (typically part of your Information Security Program) and an incident response team (referred to in the FFIEC IT Handbook as the Computer Security Incident Response Team, or CSIRT).  Smaller institutions may want to have the IT Steering Committee manage this.  Assess the adequacy of your preventive capabilities in the current threat environment, including patch management, anti-virus/anti-malware, firewalls, network intrusion prevention, and server intrusion prevention.  Don’t forget employee awareness training…perhaps the most important preventive control.
  2. Detection & Analysis – Determine exactly what constitutes an incident, and under what circumstances you will activate your incident response team and initiate procedures.  Signs of an incident fall into one of two categories: precursors and indicators.  A precursor is a sign that an incident may occur in the future.  An indicator is a sign that an incident may have occurred or may be occurring now.  (A few illustrative examples of each are sketched after this list.)  Many of the controls listed in the preparation phase (firewalls, IPS/IDS devices, etc.) can alert you to both precursors and indicators.  The initial analysis should provide enough information for the team to prioritize subsequent activities, such as containment of the incident and deeper analysis of its effects (steps 3 & 4).
  3. Containment, Eradication & Recovery – Most incidents require containment to control and minimize the damage.  An essential part of containment is decision-making, e.g., shutting down a system, disconnecting it from the network, or disabling certain functions.  Although you may be tempted to start the eradication and recovery phase immediately, bear in mind that you’ll need to collect as much forensic evidence as possible to facilitate the post-incident analysis.
  4. Post-Incident Activity – Simply put, lessons learned.  It’s easy to ignore this step as you try to get back to the business of banking, but don’t.  Some important questions to answer are:
  • Exactly what happened, and at what times?
  • How well did staff and management perform in dealing with the incident? Were the documented procedures followed? Were they adequate?
  • Were any steps or actions taken that might have inhibited the recovery?
  • What could we do differently the next time a similar incident occurs?
  • What corrective actions can prevent similar incidents in the future?
  • What additional tools or resources are needed to detect, analyze, and mitigate future incidents?
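To make the precursor/indicator distinction from step 2 concrete, here is a short sketch listing a few illustrative signs of each; the specific examples are my own, not quoted from SP 800-61.

```python
# Illustrative examples of the SP 800-61 precursor/indicator distinction.
# The specific signs below are my own examples, not quoted from the guide.

SIGNS = {
    # A precursor is a sign that an incident MAY occur in the future.
    "precursor": [
        "Web server logs showing use of a vulnerability scanner",
        "Announcement of a new exploit targeting software you run",
        "A threat actor publicly stating intent to attack your sector",
    ],
    # An indicator is a sign that an incident may have occurred,
    # or may be occurring now.
    "indicator": [
        "An antivirus alert on a workstation",
        "Unusual outbound traffic to an unknown host",
        "Repeated failed logins followed by a success from a foreign IP",
    ],
}

for category, examples in SIGNS.items():
    print(category.upper())
    for sign in examples:
        print(f"  - {sign}")
```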

The FFIEC addresses intrusion response here, and recommends that “institutions should assess the adequacy of their preparations through testing.”  Use the checklist and recent security incidents to train your key CSIRT personnel, and then use the full guide to fine-tune and enhance your incident response capabilities.

12 Jun 2012

Managing Social Media Risk – LinkedIn Edition

By now everyone has heard about the breach at LinkedIn, where 6.5 million user password hashes were leaked (over half of which have since been cracked, or converted into plain text).  Those who read this blog regularly know how I feel about social media in general:

“So managing social media risk boils down to this:  You must be able to justify your decision (both to engage and to not engage) strategically, but to do so requires an accurate cost/benefit analysis.  Both costs (reputation, and other residual risks) and benefits (strategic and financial) are extremely difficult to quantify, which means that in the end you are accepting an unknown level of risk, to achieve an uncertain amount of benefit.”

This is not to say that social media can never be properly risk managed, only that the decision to engage (or not) must be analyzed the same way you analyze any other business decision.  And this is a challenge because social media does not easily lend itself to traditional risk management techniques, and this incident is a good case in point.

So once again, let’s use this latest breach as yet another incident training exercise.  In your initial risk assessment, chances are you classified the site as low risk.  There is no NPI/PII stored there, and it doesn’t offer transactional services beyond account upgrades.  Additionally, regarding the breach itself, only about 5% of all user password hashes were disclosed, and as I said previously, about half of those were converted into the underlying plain-text password.  And what exactly is your risk exposure if your password was one that was stolen and cracked?  First of all, the attackers would also need your login name to go with the password.  But if they were able to somehow put the two together, they might change your employment or background information, or post something that could portray you or your company in a negative light.  So there are certainly some risks, but they come with lots of “ifs”.  So low probability + low impact = low risk…change your password and move on, right?

Well, maybe…depending on how you answer this question:  Is your LinkedIn password being used anywhere else?  If you have a “go-to” password that you use frequently (and most people do), you should assume that it’s out there in the wild, and that it is now being used in dictionary attacks.  So yes, if you are an individual user, change your LinkedIn password, but also change all other occurrences of that password.
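To see why the unsalted hashes fell so quickly, here is a minimal sketch of a dictionary attack against an unsalted SHA-1 hash; the “leaked” hash and the tiny wordlist are hypothetical stand-ins, and the closing lines show the salted, slow-derivation alternative that makes bulk cracking far more expensive.

```python
# Sketch of a dictionary attack on an unsalted SHA-1 password hash. The
# "leaked" hash and the tiny wordlist below are hypothetical stand-ins for
# the breach data and the multi-million-entry dictionaries attackers use.
import hashlib
import os

leaked_hash = hashlib.sha1(b"linkedin123").hexdigest()  # pretend this was leaked

for candidate in ["password", "123456", "letmein", "linkedin123"]:
    if hashlib.sha1(candidate.encode()).hexdigest() == leaked_hash:
        print(f"Cracked: {candidate}")  # one cheap hash per guess, and the same
        break                           # guess table works for every account

# With a random per-user salt and a deliberately slow derivation function,
# every guess must be recomputed for every account, making bulk cracking
# far more expensive:
salt = os.urandom(16)
slow_hash = hashlib.pbkdf2_hmac("sha256", b"linkedin123", salt, 100_000)
print(slow_hash.hex())
```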

But back to our training exercise…if you are an institution with an official (or unofficial) LinkedIn presence through one or more employees, even if they’ve changed their password(s), you may still be at risk.  If an employee uses the same password to access your Facebook or Google+ page, remotely authenticate to your email system, or access anything else that is connected to you, your response procedures should require (and validate) that all affected passwords have been changed.  In fact, since you have no way of knowing whether an employee has a personal LinkedIn (or Facebook, etc.) presence, it might be good practice to have your network administrator force all passwords to change, just to be safe.  You may also want to change your policy to state that internal (or corporate) passwords should never be duplicated or re-used on external or personal sites (although enforcing that may be a challenge).

As far as what you can do to reduce the chance of this type of incident happening again, there isn’t much.  You have to rely on your service providers to properly manage their own security.  You do this in part by obtaining and reviewing third-party reviews (like SOC reports) if they exist, but also by reviewing the vendor’s own privacy and security policy.  For example, LinkedIn’s privacy policy says this about the data it collects from you:

  • Security
    • Personal information you provide will be secured in accordance with industry standards and technology. Since the internet is not a 100% secure environment, we cannot ensure or warrant the security of any information you transmit to LinkedIn. There is no guarantee that information may not be accessed, copied, disclosed, altered, or destroyed by breach of any of our physical, technical, or managerial safeguards. (Bold is mine)
    • You are responsible for maintaining the secrecy of your unique password and account information, and for controlling access to your email communications at all times.

Even though they have made public statements that they have taken steps to address the root cause of the breach, given the above policy there is no indication that LinkedIn feels it necessary to obtain a third-party review for validation of their enhanced privacy and security measures.  Granted, given the nature of the information they collect and store, they may not feel compelled to do so, and you may not require it, but at the very least you should expect the passwords to be secure.

The first step in managing risk is to identify it.  In this case, because of the breach, the unanswered questions*, the lack of a third-party review, and their privacy policy, you are accepting a higher level of residual risk with LinkedIn than you would normally find acceptable in another vendor.  You can still rationalize your decision strategically, but you must quantify the expected return and then document that the return justifies the increased risk.  And then do the same for your other social media efforts!

 

*Indeed, there are several issues raised by this breach that have yet to be answered:  How did it occur?  Could the breach be worse than disclosed?  Why did they hash the passwords using the older SHA-1 algorithm?  Why did they not salt the hashes?  Why didn’t they have a CIO?  Did they truly use industry standards to secure your information?  If they did, those standards are clearly inadequate, so will they now exceed industry standards?

09 Jan 2012

Another incident management table-top training exercise

I’ve mentioned before that financial institutions would be wise to use news reports of security incidents as “what if” table-top training exercises.  Here is another one that just occurred a couple of days ago:

Test scenario:

  • You receive a subpoena from a government agency requesting financial information on several customers.  The subpoena includes names and social security numbers for the customers involved.
    • (Your privacy policy probably contains verbiage similar to this:  “Social Security numbers may only be accessed by and disclosed to <bank employees> and others with a legitimate business “need to know” in accordance with applicable laws and regulations”, or perhaps you state that you will disclose only if “…responding to court orders and legal investigations”.)
  • You determine that information disclosure is necessary and appropriate in this case, and you provide the information.
  • Although there is nothing in your privacy policy that requires it, you then decide that you will notify the affected customers that their information was disclosed pursuant to a legal request.
  • You send a letter to each affected customer explaining the reasons for the disclosure, as well as what information was disclosed.
  • You include a copy of the original subpoena in the letter to the affected customers in its original form, including the names and social security numbers of all of the affected customers.  In other words, you did not redact information pertaining to anyone other than the intended recipient of the letter; all affected customers received everyone else’s information in addition to their own.

Discussion topics:

  1. Does this qualify as a “security incident” as defined by your Incident Response Plan?  It is clearly not an intrusion, but it does qualify as an irregular or adverse event that negatively impacts the confidentiality of customer non-public information.
  2. Is customer or regulator notification required?  In order to answer this question, answer the following:  “Has misuse of non-public information occurred, or is it reasonably possible that misuse could occur?”  If the answer is “yes”, customer and regulator notification is required, as well as credit monitoring services, ID theft insurance, credit freeze activation, and any other remedies the law and your policies require.
  3. Is a Suspicious Activity Report filing required?  (Perhaps not, but I would err on the side of caution.)
  4. What, if anything, would we do differently?  Under what exact circumstances will we disclose customer NPI?  If disclosed, will we notify the affected customer?  What are the legal implications?

Use these real-world examples to fine-tune your incident management policies and procedures.  Perhaps they will prevent you from becoming someone else’s training exercise!