Data Classification and the Cloud

UPDATE – In response to the reluctance of financial institutions to adopt cloud storage, vendors such as Microsoft and HP have announced that they are building “hybrid” clouds. These models allow institutions to store and process certain data in the cloud while a portion of the processing or storage is done locally, on premises. For example, the application may reside in the cloud, but the customer data is stored locally. This may make the decision easier, but it only makes classification of data more important, as the decision to utilize a “hybrid” cloud must be justified by your assessment of the privacy and criticality of the data.

I get “should-we-or-shouldn’t-we” questions about the Cloud all the time, and because of the high standards for financial institution data protection, I always advise caution.  In fact, I recently outlined 7 cloud deal-breakers for financial institutions.  But could financial institutions still justify using a cloud vendor even if they don’t seem to meet all of the regulatory requirements?  Yes…if you’ve first classified your data.

The concept of “data classification” is not new; it is mentioned several times in the FFIEC Information Security Handbook:

“Institutions may* establish an information data classification program to identify and rank data, systems, and applications in order of importance. Classifying data allows the institution to ensure consistent protection of information and other critical data throughout the system.”

“Data classification is the identification and organization of information according to its criticality and sensitivity. The classification is linked to a protection profile. A protection profile is a description of the protections that should be afforded to data in each classification.”

The term is also mentioned several times in the FFIEC Operations Handbook:

“As part of the information security program, management should* implement an information classification strategy appropriate to the complexity of its systems. Generally, financial institutions should classify information according to its sensitivity and implement controls based on the classifications. IT operations staff should know the information classification policy and handle information according to its classification.”

But the most relevant reference for financial institutions looking for guidance about moving data to the Cloud is a single mention in the FFIEC Outsourcing Technology Services Handbook, Tier 1 Examination Procedures section:

“If the institution engages in cloud processing, determine that inherent risks have been comprehensively evaluated, control mechanisms have been clearly identified, and that residual risks are at acceptable levels. Ensure that…(t)he types of data in the cloud have been identified (social security numbers, account numbers, IP addresses, etc.) and have established appropriate data classifications based on the financial institution’s policies.”

So although data classification is a best practice even before you move to the cloud, the truth is that most institutions aren’t doing it (more on that in a moment). However, examiners are expected to ensure (i.e. to verify) that you’ve properly classified your data afterwards…and that regardless of where data is located, you’ve protected it consistent with your existing policies. (To date I have not seen widespread indications that examiners are asking for data classification yet, but I expect that as cloud utilization increases, they will. After all, it is required in their examination procedures.)

Most institutions don’t bother to classify data that is processed and stored internally because they treat all data the same, i.e. they have a single protection profile that treats all data at the highest level of sensitivity.  And indeed the guidance states that:

“Systems that store or transmit data of different sensitivities should be classified as if all data were at the highest sensitivity.”

But once that data leaves your protected infrastructure everything changes…and nothing changes.  Your policies still require (and regulators still expect) complete data security, privacy, availability, etc., but since your level of control drops considerably, so should your level of confidence.  And you likely have sensitive data combined with non-sensitive, critical combined with non-critical.  This would suggest that unless the cloud vendor meets the highest standard for your most critical data, they can’t be approved for any data.  Unless…

  1. You’ve clearly defined data sensitivity and criticality categories, and…
  2. You’re able to segregate one data group from another, and…
  3. You’ve established and applied appropriate protection profiles to each one.

Classification categories are generally defined in terms of criticality and sensitivity, but the guidance is not prescriptive on how you should label each category. I’ve seen “High”, “Medium”, and “Low”, as well as “Tier 1”, “Tier 2” and “Tier 3”, and even a scale of 1 to 5…whatever works best for your organization is fine. Once that is complete, the biggest challenge is making sure you don’t mix data classifications. This is easier for data like financials or Board reports, but particularly challenging for data like email, which could contain anything from customer information to yesterday’s lunch plans. Remember, if any part of the data is highly sensitive or critical, all of it must be treated as such.
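The “highest sensitivity wins” rule can be sketched in a few lines of code. This is only an illustration; the category labels and the mailbox example are hypothetical, and a real program would classify individual items against your own policy definitions:

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    """Hypothetical three-level scale; use whatever labels fit your policy."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def required_profile(item_classifications):
    """A container of mixed data must be protected at the level of its
    most sensitive item, per the guidance quoted above."""
    return max(item_classifications)

# An email mailbox mixing routine chatter with customer information:
mailbox = [Sensitivity.LOW, Sensitivity.LOW, Sensitivity.HIGH]
print(required_profile(mailbox).name)  # HIGH
```

The point of the sketch is the `max()` call: one highly sensitive item drags the whole container up to the highest protection profile, which is exactly why unsegregated data defeats any attempt at tiered treatment.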

So back to my original question…can you justify utilizing the cloud even if the vendor is less than fully compliant?  Yes, if data is properly classified and segregated, and if cloud vendors are selected based on their ability to adhere to your policies (or protection profiles) for each category of data.
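That vendor-selection step can be expressed as a simple comparison of a vendor’s controls against each category’s protection profile. The tier labels and control names below are invented for illustration; your protection profiles would define the actual required controls:

```python
# Hypothetical protection profiles: each data category maps to the set
# of controls a vendor must provide before it may hold that data.
PROFILES = {
    "Tier 1": {"encryption_at_rest", "dedicated_tenancy", "onshore_storage"},
    "Tier 2": {"encryption_at_rest"},
    "Tier 3": set(),  # public or non-sensitive data
}

def approved_categories(vendor_controls):
    """Return the data categories this vendor may hold: those whose
    required controls are all present in the vendor's control set."""
    return {tier for tier, required in PROFILES.items()
            if required <= vendor_controls}

# A vendor offering only encryption at rest could still be approved
# for Tier 2 and Tier 3 data, but not for Tier 1:
print(sorted(approved_categories({"encryption_at_rest"})))
```

This mirrors the argument above: a less-than-fully-compliant vendor need not be rejected outright, provided the data it receives is segregated and limited to the categories whose profiles it can actually meet.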



*In “FFIEC-speak”, ‘may’ means ‘should’, and ‘should’ means ‘must’.

Tom Hinkel
As author of the Compliance Guru website, Hinkel shares easy-to-digest information security tidbits with financial institutions across the country. With almost twenty years’ experience, Hinkel’s areas of expertise span the entire spectrum of information technology. He is also the VP of Compliance Services at Safe Systems, a community banking tech company, where he ensures that their services incorporate the appropriate financial industry regulations and best practices.
