Tag: cloud storage

17 Sep 2013

Data Classification and the Cloud

UPDATE – In response to the reluctance of financial institutions to adopt cloud storage, vendors such as Microsoft and HP have announced that they are building “hybrid” clouds. These new models are designed to let institutions store and process some data in the cloud while keeping a portion of the processing or storage local, on premises. For example, the application may reside in the cloud, but the customer data is stored locally. This may make the decision easier, but it makes data classification even more important, because the decision to utilize a “hybrid” cloud must be justified by your assessment of the privacy and criticality of the data.

I get “should-we-or-shouldn’t-we” questions about the Cloud all the time, and because of the high standards for financial institution data protection, I always advise caution.  In fact, I recently outlined 7 cloud deal-breakers for financial institutions.  But could financial institutions still justify using a cloud vendor even if they don’t seem to meet all of the regulatory requirements?  Yes…if you’ve first classified your data.

The concept of “data classification” is not new; it is mentioned several times in the FFIEC Information Security Handbook:

“Institutions may* establish an information data classification program to identify and rank data, systems, and applications in order of importance. Classifying data allows the institution to ensure consistent protection of information and other critical data throughout the system.”

“Data classification is the identification and organization of information according to its criticality and sensitivity. The classification is linked to a protection profile. A protection profile is a description of the protections that should be afforded to data in each classification.”

The term is also mentioned several times in the FFIEC Operations Handbook:

“As part of the information security program, management should* implement an information classification strategy appropriate to the complexity of its systems. Generally, financial institutions should classify information according to its sensitivity and implement controls based on the classifications. IT operations staff should know the information classification policy and handle information according to its classification.”

But the most relevant reference for financial institutions looking for guidance about moving data to the Cloud is a single mention in the FFIEC Outsourcing Technology Services Handbook, Tier 1 Examination Procedures section:

“If the institution engages in cloud processing, determine that inherent risks have been comprehensively evaluated, control mechanisms have been clearly identified, and that residual risks are at acceptable levels. Ensure that…(t)he types of data in the cloud have been identified (social security numbers, account numbers, IP addresses, etc.) and have established appropriate data classifications based on the financial institution’s policies.”

So although data classification is a best practice even before you move to the cloud, the truth is that most institutions aren’t doing it (more on that in a moment).  However, examiners are expected to ensure (i.e., verify) that you’ve properly classified your data afterwards…and that, regardless of where the data is located, you’ve protected it consistent with your existing policies.  (To date I have not seen widespread indications that examiners are asking for data classification, but I expect that as cloud utilization increases, they will.  After all, it is required in their examination procedures.)

Most institutions don’t bother to classify data that is processed and stored internally because they treat all data the same; that is, they have a single protection profile that protects all data at the highest level of sensitivity.  And indeed the guidance states that:

“Systems that store or transmit data of different sensitivities should be classified as if all data were at the highest sensitivity.”

But once that data leaves your protected infrastructure, everything changes…and nothing changes.  Your policies still require (and regulators still expect) complete data security, privacy, availability, etc., but since your level of control drops considerably, so should your level of confidence.  And you likely have sensitive data combined with non-sensitive, critical combined with non-critical.  This would suggest that unless the cloud vendor meets the highest standard for your most critical data, it can’t be approved for any data.  Unless…

  1. You’ve clearly defined data sensitivity and criticality categories, and…
  2. You’re able to segregate one data group from another, and…
  3. You’ve established and applied appropriate protection profiles to each one (a rough sketch follows below).
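
As a rough illustration of what those categories and protection profiles might look like in practice, here is a minimal sketch in Python. The category names, controls, and notes are my own assumptions for illustration; none of them are prescribed by the FFIEC guidance.

```python
# A minimal sketch: one protection profile per classification category.
# The category names and controls below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ProtectionProfile:
    """The protections afforded to data in a given classification."""
    encryption_at_rest: bool
    encryption_in_transit: bool
    cloud_storage_allowed: bool
    notes: str = ""

PROFILES = {
    "High": ProtectionProfile(True, True, False,
                              "NPI, Board reports; on-premises or approved hybrid only"),
    "Medium": ProtectionProfile(True, True, True,
                                "Internal operational data; fully vetted cloud vendors only"),
    "Low": ProtectionProfile(False, True, True,
                             "Non-sensitive, non-critical data; any approved vendor"),
}
```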

Classification categories are generally defined in terms of criticality and sensitivity, but the guidance is not prescriptive on how you should label each category.  I’ve seen “High”, “Medium”, and “Low”, as well as “Tier 1”, “Tier 2” and “Tier 3”, and even a scale of 1 to 5…whatever works best for your organization is fine.  Once that is complete, the biggest challenge is making sure you don’t mix data classifications.  This is easier for data like financials or Board reports, but particularly challenging for data like email, which could contain anything from customer information to yesterday’s lunch plans.  Remember, if any part of the data is highly sensitive or critical, all of the data must be treated as such.
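
To make that “highest classification wins” rule concrete, here is a small sketch, continuing the hypothetical labels from the example above, showing how a container of mixed content (such as a mailbox) inherits the most restrictive classification it holds.

```python
# Illustrative only: whatever labels you choose, rank them so that mixed
# content inherits the most restrictive classification it contains.
RANK = {"Low": 1, "Medium": 2, "High": 3}

def classify_container(item_classifications):
    """Return the classification of a container (e.g., a mailbox):
    the highest-ranked classification present wins."""
    return max(item_classifications, key=RANK.get)

# An email thread mixing lunch plans with customer account data is "High".
print(classify_container(["Low", "Low", "High"]))  # -> High
```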

So back to my original question…can you justify utilizing the cloud even if the vendor is less than fully compliant?  Yes, if data is properly classified and segregated, and if cloud vendors are selected based on their ability to adhere to your policies (or protection profiles) for each category of data.


*In “FFIEC-speak”, ‘may’ means ‘should’, and ‘should’ means ‘must’.

03 Aug 2012

Risk Assessing iCloud (and other online backups) – UPDATE 2, DropBox

Update 2 (8/2012) – Cloud-based storage vendor DropBox confirmed recently that a stolen employee password led to the theft of a “project document” that contained user e-mail addresses. Those addresses were then used to spam DropBox users.  The password itself was not stolen directly from the DropBox site, but from another site the employee used.  This reinforces the point I made in a previous post about LinkedIn.  If you have a “go-to” password that you use frequently (and most people do), you should assume that it’s out there in the wild, and you should also assume it is now being used in dictionary attacks.  So change your DropBox password, but also change all other occurrences of that password.

But passwords (and password change policies!) aside, serious questions remain about this and other on-line storage vendors:

  1. Do they hold themselves to the same high information confidentiality, integrity and availability standards required of financial institutions?
  2. If so, can they document adherence to that standard by producing a third-party report, like the SOC 2?
  3. Will they retain and destroy information consistent with your internal data retention policies?
  4. What happens to your data once your relationship with the vendor is terminated?
  5. Do they have a broad and deep familiarity with the regulatory requirements of the financial industry, and are they willing and able to make changes in their service offerings necessitated by those requirements?

Any vendor that cannot address these questions to your satisfaction should not be considered as a service provider for data classified any higher than “low”.

________________________________________________________

Update 1 (3/2012) – A recent article in Data Center Knowledge  estimates that Amazon is using at least 454,400 servers in seven data center hubs around the globe.  This emphasizes my point that large cloud providers with widely distributed data storage make it very difficult for financial institutions to satisfy the requirement to secure data in transit and storage if they don’t know exactly where the data is stored.

________________________________________________________

Apple recently introduced the iCloud service for Apple devices such as the iPhone and iPad.  The free version offers 5GB of storage, and additional storage up to 50GB can be purchased.  The storage can be used for anything from music to documents to email.

Since iPhones and iPads (and other mobile devices) have become ubiquitous among financial institution users, and since it is reasonable to assume that email and other documents stored on these devices (and replicated in iCloud) could contain non-public customer information, the use of this technology must be properly risk managed.  But iCloud is no different than any of the other on-line backup services such as Microsoft SkyDrive, Google Docs, Carbonite, DropBox, Amazon Web Services (AWS) or our own C-Vault…if customer data is transmitted or stored anywhere outside of your protected network, the risk assessment process is always the same.

The FFIEC requires financial institutions to:

  • Establish and ensure compliance with policies for handling and storing information,
  • Ensure safe and secure disposal of sensitive media, and
  • Secure information in transit or transmission to third parties.

These responsibilities don’t go away when all or part of a service is outsourced.  In fact, “…although outsourcing arrangements often provide a cost-effective means to support the institution’s technology needs, the ultimate responsibility and risk rests with the institution.”*  So once you’ve established a strategic basis for cloud-based data storage, risk assessing outsourced products and services is basically a function of vendor management.  And the vendor management process begins well before the vendor actually becomes a vendor, i.e. before the contract is signed.  Again, the FFIEC provides guidance in this area:

Financial institutions should exercise their security responsibilities for outsourced operations through:

  • Appropriate due diligence in service provider research and selection,
  • Contractual assurances regarding security responsibilities, controls, and reporting,
  • Nondisclosure agreements regarding the institution’s systems and data,
  • Independent review of the service provider’s security through appropriate audits and tests, and
  • Coordination of incident response policies and contractual notification requirements.*

So how do you comply (and demonstrate compliance) with this guidance?  For starters, begin your vendor management process early, right after the decision is made to implement cloud-based backup.  Determine your requirements and priorities (usually listed in a formal request for proposal), such as availability, capacity, privacy/security, and price…and perform due diligence on your short list of potential providers to narrow the choice.  Non-disclosure agreements would typically be exchanged at this point (or before).

Challenges & Solutions

This is where the challenges begin when considering large cloud-based providers.  They aren’t likely to respond to a request for proposal (RFP), nor are they going to provide a non-disclosure agreement (NDA) beyond their standard posted privacy policy.  This does not, however, relieve you of your responsibility to satisfy yourself, any way you can, that the vendor will still meet all of your requirements.  One more challenge (and this is a big one)…since large providers may store data simultaneously in multiple locations, you don’t really know where your data is physically located.  How do you satisfy the requirement to secure data in transit and storage if you don’t know where it’s going or how it gets there?  Also, what happens if you decide to terminate the service?  How will you validate that your data is completely removed?  And what happens if the vendor sells itself to someone else?  Chances are your data was considered an asset for the purposes of valuing the transaction, and now that asset (your data) is in the hands of someone else, someone who may have a different privacy policy or may even be located in a different country.

The only possible answer to these challenges is bullet #4 above…you request, receive and review the provider’s financials and other third-party reviews (SOC, SAS 70, etc.).  Here again, large providers may not be willing to share information beyond what is already public.  So the answer actually presents an additional challenge.

Practically speaking, perhaps the best way to approach this is to have a policy that classifies and restricts data stored in the cloud.  Providers that can meet your privacy, security, confidentiality, availability and data integrity requirements would be approved for all data types; providers that could NOT satisfactorily meet your requirements would be restricted to storing only non-critical, non-sensitive information (a rough sketch of that decision follows below).  Of course, enforcing that policy is the final challenge…and the topic of a future post!  In the meantime, if your institution is using cloud-based data storage, how are you addressing these challenges?
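
To make that policy concrete, here is a minimal sketch in Python. The requirement names and classification labels are assumptions carried over from the earlier examples, not terms defined in the guidance.

```python
# A rough sketch of the policy above: vendors that meet every requirement are
# approved for all classifications; vendors that do not are limited to
# non-critical, non-sensitive ("Low") data. Requirement names are assumptions.
REQUIREMENTS = {"privacy", "security", "confidentiality", "availability", "integrity"}

def allowed_classifications(vendor_meets):
    """Return the set of data classifications a cloud vendor may store."""
    if REQUIREMENTS.issubset(vendor_meets):
        return {"Low", "Medium", "High"}
    return {"Low"}

# Example: a vendor that cannot document its availability controls.
print(allowed_classifications({"privacy", "security", "confidentiality", "integrity"}))
# -> {'Low'}
```

In practice the set of requirements a vendor meets would come out of your due diligence and third-party report review rather than code, but the decision rule is the same.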

* Information Security Booklet – July 2006, Service Provider Oversight