Tag: information security

20 Oct 2020

Compliance Quick Bites – Tests vs. Exercises, and the Resiliency Factor

One of several changes implemented in the 2019 FFIEC BCM Examination Handbook is a subtle but important differentiation between a BCMP “test” and an “exercise”. I discussed some of the more material changes here, but we’re starting to see examiner scrutiny into not just whether you’re testing, but exactly what and how.

According to the Handbook:

  • “An exercise is a task or activity involving people and processes that is designed to validate one or more aspects of the BCP or related procedures.”
  • “A test is a type of exercise intended to verify the quality, performance, or reliability of system resilience in an operational environment.”

Essentially, “…the distinction between the two is that exercises address people, processes, and systems whereas tests address specific aspects of a system.” Simply put, think of an exercise as a scenario-based simulation of your written process recovery procedures (a table-top exercise, for example), and a test as validation of the interdependencies of those processes, such as data restoration or circuit fail-over.

The new guidance makes it clear that you must have a comprehensive program that includes both exercises and tests, and that the primary objective should be to validate the effectiveness of your entire business continuity program. In the past, most FIs conducted an annual table-top or structured walk-through test, and that was enough to validate their plan. This new differentiation now seems to require multiple methods of validating your recovery capabilities. Given the close integration between the various internal and external interdependencies of your recovery procedures, this makes perfect sense.

An additional consideration in preparing for future testing is the increased focus on resiliency, defined as any proactive measures you’ve already implemented to mitigate disruptive events and enhance your recovery capabilities. The term “resiliency” is used 126 times in the new Handbook, and you can bet that examiners will be looking for you to validate your ability to withstand as well as recover in your testing exercises. Resilience measures can include fire suppression, auxiliary power, server virtualization and replication, hot-site facilities, alternate providers, succession planning, etc.

One way of incorporating resilience capabilities into future testing is to evaluate the impact of a disruptive event after consideration of your internal and external process interdependencies and accounting for any existing resilience measures. For example, let’s say your lending operations require 3 external providers and 6 internal assets, including IT infrastructure, scanned documents, paper documents, and key employees. List any resilience capabilities you already have in place, such as recovery testing results from your third-parties, data replication and restoration, and cross-training for key employees, then evaluate what the true impact of the disruptive event would be in that context.
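To make that evaluation concrete, here’s a minimal sketch of one way to score residual impact after resilience measures are applied. The dependency names, impact scores, and mitigation factors below are purely illustrative, not from the Handbook:

```python
# Hypothetical sketch: score the residual impact of a disruption to a business
# process after accounting for existing resilience measures. All names and
# weights are illustrative only.

# Each interdependency gets a base impact score (1 = minor, 5 = severe)
# and a mitigation factor (0.0 = no resilience, 1.0 = fully mitigated).
lending_dependencies = {
    "core IT infrastructure": {"impact": 5, "mitigation": 0.8},  # virtualization + replication
    "scanned documents":      {"impact": 3, "mitigation": 0.9},  # replicated imaging system
    "paper documents":        {"impact": 2, "mitigation": 0.0},  # no resilience measure yet
    "key employees":          {"impact": 4, "mitigation": 0.5},  # partial cross-training
    "external provider A":    {"impact": 4, "mitigation": 0.7},  # tested recovery results
}

def residual_impact(deps):
    """Sum each dependency's impact after applying its mitigation factor."""
    return sum(d["impact"] * (1 - d["mitigation"]) for d in deps.values())

raw = sum(d["impact"] for d in lending_dependencies.values())
residual = residual_impact(lending_dependencies)
print(f"raw impact: {raw}, residual after resilience: {residual:.1f}")
```

The point of an exercise like this isn’t the numbers themselves; it’s that the gaps (here, the unmitigated paper documents) jump out and tell you where your next resilience investment belongs.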

In summary, conducting both testing and exercises gives all stakeholders a high level of assurance that you’ve thoroughly identified and evaluated all internal and external process interdependencies, built resilience into each component, and can successfully restore critical business functions within recovery time objectives.

30 Sep 2020
Ask the Guru – Can We Apply Similar Controls to Satisfy Both GLBA and GDPR?


Hey Guru!

Are the Gramm–Leach–Bliley Act (GLBA) and the General Data Protection Regulation (GDPR) similar enough to apply the same or equivalent set of layered controls? My understanding is that GDPR has placed a higher premium on the protection of a narrower definition of data. So, my question is more about whether FFIEC requirements for the protection of data extends equally to both Confidential PII and the narrow data type called out by GDPR.


Hi Steve, and thanks for the question! Comparing the Gramm–Leach–Bliley Act (GLBA) and the General Data Protection Regulation (GDPR) is instructive, as they both address the same challenges: privacy and security. Specifically, both protect information shared between a customer and a service provider. GLBA is specific to financial institutions, while GDPR defines a “data processor” as any third party that processes personal data. However, they have very similar definitions of the protected data: GDPR uses the term “personal data” for any information that relates to an individual who can be directly or indirectly identified, and GLBA uses the term non-public personal information (NPI) to describe the same type of data.

To answer the question of whether the two are similar enough to apply the same or similar set of layered controls, my short answer is yes: layering controls is a risk-mitigation best practice, and it applies equally to both.

Here’s a bit more. The most important distinction between GLBA and GDPR is that GLBA has two sections: 501(a) and 501(b). The former establishes the right to privacy and the obligation of financial institutions to protect the security and confidentiality of customer NPI. 501(b) empowers the regulators to require FIs to establish safeguards against any threats to NPI. Simply put, 501(a) is the “what”, and 501(b) is the “how”. Of course, the “how” has given us the 12 FFIEC IT Examination Handbooks, cybersecurity regulations, penetration tests, the IT audit, and lots of other stuff with no end in sight.

By contrast, GDPR is more focused on “what” (what a third-party can and can’t do with customer data, as well what the customer can control; i.e. right to have their data deleted, etc.) and much less on the “how” it is supposed to be done.

My understanding is that the scope of GLBA (and all the information security standards based thereon) is strictly limited to customer NPI; it does not extend to confidential or PII data generally. One distinguishing factor between NPI and PII is that in US regulations NPI always refers to the “customer”, while PII always refers to the “consumer”. (Frankly, there isn’t much practical difference between data a financial institution obtains from a customer or a consumer while pursuing or maintaining a business relationship.) We have always taken the position that for the purposes of data classification, NPI and confidential (PII) data share the same level of sensitivity, but the guidance is only concerned with customer NPI. GDPR does not make that distinction.

In my opinion, our federal regulations will move towards merging NPI and PII, and in fact some states are already there. So, although it’s not strictly a requirement to protect anything other than NPI, it’s certainly a best practice, and combining both NPI and PII / confidential data in the same data sensitivity classification will do that.

One last thought about enforcement… So far we have not heard of US regulators checking US-based FIs for GDPR compliance; our community-based financial institutions typically have very little EU exposure, but your experience may be different.

05 Aug 2013

Critical Controls for Effective Cyber Defense – Converging Standards?

Earlier this year the SANS Institute issued a document titled “Critical Controls for Effective Cyber Defense“.  Although not specific to financial institutions, it provides a useful prescriptive framework for any institution looking to defend their networks and systems from internal and external threats.  The document lists the top 20 controls institutions should use to prevent and detect cyber attacks.

This document actually preceded the June announcement by the FFIEC that they were forming a working group to “promote coordination across the federal and state banking regulatory agencies on critical infrastructure and cybersecurity issues”.  I mentioned this announcement here in relation to its possible effect on future regulatory guidance.  So I was particularly interested in any overlap, any common thread, between this initiative and the SANS document.  If there was any overlap between the organizations contributing to the SANS list and the FFIEC cybersecurity working group, we might have the basis for a common, consistent set of prescriptive guidance.  Could a single “check-list” type information security standard be in the works?

For example, the Information Security Handbook requires financial institutions to have “…numerous controls to safeguard and limit access to key information system assets at all layers in the network stack.”  The Handbook then goes on to suggest general best practices in various categories for achieving that goal, leaving the specifics up to the institution.

Contrast that to the much more specific SANS Critical Control list.  Here are the first 5:

  • Critical Control 1:  Inventory of Authorized and Unauthorized Devices
  • Critical Control 2:  Inventory of Authorized and Unauthorized Software
  • Critical Control 3:  Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
  • Critical Control 4:  Continuous Vulnerability Assessment and Remediation
  • Critical Control 5:  Malware Defenses

As you can see, although the goal of protecting information assets is the same in each case, the SANS list is much more specific.  Could we possibly see a convergence of the general guidance of the FFIEC with the more specific control requirements of SANS, with cybersecurity as the common goal?  Again, a look at the common contributors to each group might provide a clue.

The SANS group credits input from multiple agencies of the U.S. government: the Department of Defense, Homeland Security, NIST, the FBI, the NSA, the Department of Energy, and others.  The FFIEC working group coordinates with groups such as the FFIEC’s Information Technology Subcommittee of the Task Force on Supervision, the Financial and Banking Information Infrastructure Committee, the Financial Services Sector Coordinating Council, and the Financial Services Information Sharing and Analysis Center (FS-ISAC).  So no direct common thread there, unfortunately.  However, the FS-ISAC group does share many partners with the SANS group, including the Departments of Defense, Energy, and Homeland Security, so we may yet see the FFIEC Information Security guidance evolve, particularly since the Handbook was published back in 2006 and is overdue for a major update.  In the meantime, financial institutions would be well advised to use the SANS Critical Controls as a de facto checklist to measure their own security posture.*

By the way, the document  also lists 5 critical tenets of an effective cyber defense system, 2 of which are ‘Continuous Monitoring’ and ‘Automation’.   More on those in a future post (although I already addressed the advantages of automation here).

* There is nothing in the SANS list that is inconsistent with FFIEC requirements, in fact we’ve already seen at least one company servicing the Credit Union industry adopt this list as their framework.  However, keep in mind that although the controls listed are necessary for cyber defense, they are not sufficient.  A fully compliant information security program must also address management and oversight…an area conspicuously absent on the SANS list.

03 Aug 2012

Risk Assessing iCloud (and other online backups) – UPDATE 2, DropBox

Update 2 (8/2012) – Cloud-based storage vendor DropBox confirmed recently that a stolen employee password led to the theft of a “project document” that contained user e-mail addresses. Those addresses were then used to spam DropBox users.  The password itself was not stolen directly from the DropBox site, but from another site the employee used.  This reinforces the point I made in a previous post about LinkedIn: if you have a “go-to” password that you use frequently (and most people do), you should assume that it’s out there in the wild, and that it is now being used in dictionary attacks.  So change your DropBox password, but also change all other occurrences of that password.

But passwords (and password change policies!) aside, serious questions remain about this, and other, on-line storage vendors:

  1. Do they hold themselves to the same high information confidentiality, integrity and availability standards required of financial institutions?
  2. If so, can they document adherence to that standard by producing a third-party report, like the SOC 2?
  3. Will they retain and destroy information consistent with your internal data retention policies?
  4. What happens to your data once your relationship with the vendor is terminated?
  5. Do they have a broad and deep familiarity with the regulatory requirements of the financial industry, and are they willing and able to make changes in their service offerings necessitated by those requirements?

Any vendor that cannot address these questions to your satisfaction should not be considered as a service provider for data classified any higher than “low”.

________________________________________________________

Update 1 (3/2012) – A recent article in Data Center Knowledge  estimates that Amazon is using at least 454,400 servers in seven data center hubs around the globe.  This emphasizes my point that large cloud providers with widely distributed data storage make it very difficult for financial institutions to satisfy the requirement to secure data in transit and storage if they don’t know exactly where the data is stored.

________________________________________________________

Apple recently introduced the iCloud service for Apple devices such as the iPhone and iPad.  The free version offers 5GB of storage, and additional storage up to 50GB can be purchased.  The storage can be used for anything from music to documents to email.

Since iPhones and iPads (and other mobile devices) have become ubiquitous among financial institution users, and since it is reasonable to assume that email and other documents stored on these devices (and replicated in iCloud) could contain non-public customer information, the use of this technology must be properly risk managed.  But iCloud is no different than any of the other on-line backup services such as Microsoft SkyDrive, Google Docs, Carbonite, DropBox, Amazon Web Services (AWS) or our own C-Vault…if customer data is transmitted or stored anywhere outside of your protected network, the risk assessment process is always the same.

The FFIEC requires financial institutions to:

  • Establish and ensure compliance with policies for handling and storing information,
  • Ensure safe and secure disposal of sensitive media, and
  • Secure information in transit or transmission to third parties.

These responsibilities don’t go away when all or part of a service is outsourced.  In fact, “…although outsourcing arrangements often provide a cost-effective means to support the institution’s technology needs, the ultimate responsibility and risk rests with the institution.“*  So once you’ve established a strategic basis  for cloud-based data storage, risk assessing outsourced products and services is basically a function of vendor management.  And the vendor management process actually begins well before the vendor actually becomes a vendor, i.e. before the contract is signed.  Again, the FFIEC provides guidance in this area:

Financial institutions should exercise their security responsibilities for outsourced operations through:

  • Appropriate due diligence in service provider research and selection,
  • Contractual assurances regarding security responsibilities, controls, and reporting,
  • Nondisclosure agreements regarding the institution’s systems and data,
  • Independent review of the service provider’s security through appropriate audits and tests, and
  • Coordination of incident response policies and contractual notification requirements.*

So how do you comply (and demonstrate compliance) with this guidance?  For starters, begin your vendor management process early, right after the decision is made to implement cloud-based backup.  Determine your requirements and priorities (usually listed in a formal request for proposal), such as availability, capacity, privacy/security, and price…and perform due diligence on your short list of potential providers to narrow the choice.  Non-disclosure agreements would typically be exchanged at this point (or before).

Challenges & Solutions

This is where the challenges begin when considering large cloud-based providers.  They aren’t likely to respond to a request for proposal (RFP), nor are they going to provide a non-disclosure agreement (NDA) beyond their standard posted privacy policy.  This does not, however, relieve you of your responsibility to satisfy yourself, any way you can, that the vendor will still meet all of your requirements.  One more challenge (and this is a big one): since large providers may store data simultaneously in multiple locations, you don’t really know where your data is physically located.  How do you satisfy the requirement to secure data in transit and storage if you don’t know where it’s going or how it gets there?  Also, what happens if you decide to terminate the service?  How will you validate that your data is completely removed?  And what happens if the vendor sells itself to someone else?  Chances are your data was considered an asset for the purposes of valuing the transaction, and now that asset (your data) is in the hands of someone else, someone that may have a different privacy policy or may even be located in a different country.

The only possible answer to these challenges is bullet #4 above: you request, receive and review the provider’s financials and other third-party reviews (SOC reports, SAS 70, etc.).  Here again, large providers may not be willing to share information beyond what is already public.  So the answer actually presents an additional challenge.

Practically speaking, perhaps the best way to approach this is to have a policy that classifies and restricts data stored in the cloud.  Providers that can meet your privacy, security, confidentiality, availability and data integrity requirements would be approved for all data types; providers that could NOT satisfactorily meet your requirements would be restricted to storing only non-critical, non-sensitive information.  Of course, enforcing that policy is the final challenge…and the topic of a future post!  In the meantime, if your institution is using cloud-based data storage, how are you addressing these challenges?
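As a rough illustration of how such a “classify and restrict” policy might be expressed, here’s a sketch. The requirement names and classification labels are assumptions for the example, not regulatory terms:

```python
# Illustrative sketch of a "classify and restrict" cloud-storage policy check.
# Requirement names and data classification labels are assumptions for the
# example, not terms from FFIEC guidance.

# The five requirements a provider must document (e.g. via a SOC 2 report)
# before it may hold anything beyond low-sensitivity data.
REQUIREMENTS = {"privacy", "security", "confidentiality", "availability", "integrity"}

def allowed_classifications(vendor_controls: set) -> set:
    """Vendors meeting every requirement may hold any data class;
    all others are restricted to non-critical, non-sensitive data."""
    if REQUIREMENTS <= vendor_controls:  # subset test: all requirements met
        return {"low", "internal", "confidential", "NPI"}
    return {"low"}

# A vendor that documented all five requirements:
full = allowed_classifications({"privacy", "security", "confidentiality",
                                "availability", "integrity"})
# A vendor that could not document availability or integrity:
restricted = allowed_classifications({"privacy", "security"})
print(full, restricted)
```

The design point is that the default is restrictive: a provider earns access to sensitive data classes only by documenting every requirement, never the other way around.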

* Information Security Booklet – July 2006, Service Provider Oversight

23 May 2012

Patch deployment – now or later? (with interactive poll!)

We recently saw an examination finding that recommended that “Critical Patches be deployed within 24 hours of notice (of patch release)”.  This would seem to contradict the FFIEC guidance in the Information Security Handbook, which states that the institution should:

“Apply the patch to an isolated test system and verify that the patch…

(1) is compatible with other software used on systems to which the patch will be applied,

(2) does not alter the system’s security posture in unexpected ways, such as altering log settings, and

(3) corrects the pertinent vulnerability.”

If this testing process is followed correctly, it is highly unlikely that it will be completed within 24 hours of patch release.  The rationale behind immediate patch deployment is that the risk of “zero-day exploits” is greater than the risk of installing an untested patch that may cause problems with your existing applications.  So the question is: which approach does your institution take?
Regardless of your approach, you’ll have to document the risk and how you plan to mitigate it.  A “test first” approach might choose to increase end-user training and emphasize other controls such as firewall firmware, IPS/IDS, and Anti-virus/Anti-malware.  If you take a “patch first” approach you may want to leave one un-patched machine in each critical department to allow at least minimal functionality in case something goes wrong.  You should also test the “roll-back” capabilities of the particular patch prior to full deployment.
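The “test first” workflow from the quoted guidance boils down to three pass/fail checks before production deployment. Here’s a minimal sketch; the TestSystem class and functions are hypothetical stand-ins for whatever deployment and verification tooling your environment actually uses:

```python
# Minimal sketch of the "test first" patch workflow from the quoted guidance.
# TestSystem and all functions are hypothetical placeholders, not real tooling.

from dataclasses import dataclass

@dataclass
class TestSystem:
    """State of the isolated test box after patching."""
    software_compatible: bool = True   # check (1): compatible with other software
    log_settings_intact: bool = True   # check (2): security posture unchanged
    vulnerability_closed: bool = False # check (3): set by applying the patch

def apply_patch(system: TestSystem) -> None:
    # Stand-in for your real deployment tooling.
    system.vulnerability_closed = True

def patch_is_ready(system: TestSystem) -> bool:
    """Apply the patch to the isolated system, then run the three
    verifications the Handbook describes before production rollout."""
    apply_patch(system)
    return (system.software_compatible
            and system.log_settings_intact
            and system.vulnerability_closed)

print(patch_is_ready(TestSystem()))  # happy path on an isolated test box
```

Even in sketch form, the structure makes the timing issue plain: each of the three checks takes real effort against real applications, which is why the full cycle rarely fits inside a 24-hour window.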

I’ll be watching to see if this finding appears in other examinations, and also to see if the guidance is updated online.  Until then, because of the criticality of your applications and the required up-time of your processes, I believe a “test-first” approach that adheres to the guidance is the most prudent approach…for now.  However you manage it, be prepared to explain the why and how to the Board and senior management.  Not only are the results expected to be included in your annual Board report, but documenting your reasoning may also help explain repeat findings in future examinations if your approach differs from examiner expectations.

12 Mar 2012

Risk Managing BYOD (bring your own device)

Thanks in part to social media, users today often don’t differentiate between work and non-work activities, and they certainly don’t want to have to carry multiple work/non-work devices to keep them connected.    As a result, new multi-function, multi-purpose mobile devices are constantly being added to your secure financial institution network…and often in violation of your policies.

Most institutions have an IT Acquisition Policy, or something similar, that defines exactly how (and why) new technology is requested, acquired, implemented and maintained.  The scope of the policy extends to all personnel who are approved to use network resources within the institution, and the IT Committee (or equivalent) is usually tasked with making the final purchasing decision.   And although older policies may use language like “microcomputers”, and “PC’s”, the policy effectively covers all network connected devices, including the new generation of devices like smartphones and iPads.  And managing risk always begins with the acquisition policy…before the devices are acquired.

Your policy may differ in the specific language, but it should contain the following basic elements required of all requests for new technology:

  • Description of the specific hardware and software requested, along with an estimate of costs (note what type of vendor support is available).
  • Description of the intended use or need for the item(s).
  • Description of the cost vs. benefits of acquiring the requested item(s).
  • Analysis of information security ramifications of requested item(s).
  • Time frame required for purchase.

Most of these are pretty straightforward to answer, but what about bullet #4?  Are you able to apply the same level of information security standards to these multifunctional devices as you do to your PCs and laptops?  Or does convenience trump security?  This is where the provisions of your information security policy take over.

The usefulness of these always-on mobile devices is undeniable, and they have really bent the cost/benefit curve, but they have also re-drawn the security profile in many cases.  The old adage is that a chain is only as strong as its weakest link, and in today’s IT infrastructure environment these devices are often the weak links in the security chain.  So while your users have their feet on the accelerator of new technology adoption, the ISO (and the committee managing information security) needs to have both feet firmly on the brake unless they are willing to declare these devices as an exception to their security policy…which is definitely not recommended.

So how can you effectively manage these devices within the provisions your existing information security program, without compromising your overall security profile?  It might be worth reviewing what the FFIEC has to say about security strategy:

Security strategies include prevention, detection, and response, and all three are needed for a comprehensive and robust security framework. Typically, security strategies focus most resources on prevention. Prevention addresses the likelihood of harm. Detection and response are generally used to limit damage once a security breach has occurred. Weaknesses in prevention may be offset by strengths in detection and response.

Regulators expect you to treat all network devices the same, and clearly preventive risk management controls are preferred, but the fact is that many of the same well-established tools and techniques that are used for servers, PCs and laptops are either not available, or not as mature, in the smartphone/iPad world.  Traditional tools such as patch management, anti-virus and remote access event log monitoring, and techniques such as multi-factor authentication and least permissions, are difficult if not impossible to apply to these devices.  However, there are still preventive controls you can, and should, implement.

First of all, only deploy remote devices to approved users (as required by your remote access policy), and require connectivity via approved, secure connections (i.e. 3G/4G, SSL, secure WiFi, etc.).  Require both power-on and wake pass codes.  Require approval for all applications and utilize some form of patch management (manual or automated) for the operating system and the applications.  Encrypt all sensitive data in storage, and utilize anti-virus/ anti-spyware if available.

Because preventive controls are limited on these devices, maintaining your information security profile will likely rest on your compensating detective and corrective controls.  Controls are somewhat limited in these areas as well, but include maintaining an up-to-date device inventory, and having tracking and remote-wipe capabilities to limit damage if a security breach does occur.
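One simple way to operationalize the inventory control is a periodic sweep for devices missing required protections. Here’s a sketch; the device records and field names are assumptions for the example, not tied to any particular MDM product:

```python
# Illustrative sketch of a detective control: sweep a hypothetical mobile
# device inventory for devices missing required protections. Field names
# and device records are assumptions for the example.

devices = [
    {"id": "phone-017",  "encrypted": True,  "remote_wipe": True,  "approved_user": True},
    {"id": "tablet-042", "encrypted": False, "remote_wipe": True,  "approved_user": True},
    {"id": "phone-063",  "encrypted": True,  "remote_wipe": False, "approved_user": True},
]

# Every device must satisfy all of these controls.
REQUIRED = ("encrypted", "remote_wipe", "approved_user")

def noncompliant(inventory):
    """Return the IDs of devices failing any required control, for follow-up."""
    return [d["id"] for d in inventory
            if not all(d[ctrl] for ctrl in REQUIRED)]

print(noncompliant(devices))  # → ['tablet-042', 'phone-063']
```

Run against a current inventory on a schedule, a check like this turns the policy requirement into a repeatable spot check rather than a one-time exercise.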

But there is one more very important preventive control you can use, and this one is widely available, mature, and highly effective…employee training.  Require your remote device users to undergo initial, and periodic, training on what your policies say they are (and aren’t) allowed to do with their devices.  You should still conduct testing of the remote wipe capability, and spot check for unencrypted data and unauthorized applications, but most of all train (and retrain) your employees.