Author: Tom Hinkel

As author of the Compliance Guru website, Hinkel shares easy-to-digest information security tidbits with financial institutions across the country. With almost twenty years’ experience, Hinkel’s areas of expertise span the entire spectrum of information technology. He is also the VP of Compliance Services at Safe Systems, a community banking technology company, where he ensures that their services incorporate the appropriate financial industry regulations and best practices.
03 Aug 2012

Risk Assessing iCloud (and other online backups) – UPDATE 2, DropBox

Update 2 (8/2012) – Cloud-based storage vendor DropBox confirmed recently that a stolen employee password led to the theft of a “project document” that contained user e-mail addresses. Those addresses were then used to spam DropBox users. The password itself was not stolen directly from the DropBox site, but from another site the employee used. This reinforces the point I made in a previous post about LinkedIn: if you have a “go-to” password that you use frequently (and most people do), you should assume that it’s out there in the wild, and you should also assume it is now being used in dictionary attacks. So change your DropBox password, but also change all other occurrences of that password.

But passwords (and password change policies!) aside, serious questions remain about this, and other, on-line storage vendors:

  1. Do they hold themselves to the same high information confidentiality, integrity and availability standards required of financial institutions?
  2. If so, can they document adherence to that standard by producing a third-party report, like the SOC 2?
  3. Will they retain and destroy information consistent with your internal data retention policies?
  4. What happens to your data once your relationship with the vendor is terminated?
  5. Do they have a broad and deep familiarity with the regulatory requirements of the financial industry, and are they willing and able to make changes in their service offerings necessitated by those requirements?

Any vendor that cannot address these questions to your satisfaction should not be considered as a service provider for data classified any higher than “low”.

________________________________________________________

Update 1 (3/2012) – A recent article in Data Center Knowledge  estimates that Amazon is using at least 454,400 servers in seven data center hubs around the globe.  This emphasizes my point that large cloud providers with widely distributed data storage make it very difficult for financial institutions to satisfy the requirement to secure data in transit and storage if they don’t know exactly where the data is stored.

________________________________________________________

Apple recently introduced the iCloud service for Apple devices such as the iPhone and iPad.  The free version offers 5GB of storage, and additional storage up to 50GB can be purchased.  The storage can be used for anything from music to documents to email.

Since iPhones and iPads (and other mobile devices) have become ubiquitous among financial institution users, and since it is reasonable to assume that email and other documents stored on these devices (and replicated in iCloud) could contain non-public customer information, the use of this technology must be properly risk managed.  But iCloud is no different than any of the other on-line backup services such as Microsoft SkyDrive, Google Docs, Carbonite, DropBox, Amazon Web Services (AWS) or our own C-Vault…if customer data is transmitted or stored anywhere outside of your protected network, the risk assessment process is always the same.

The FFIEC requires financial institutions to:

  • Establish and ensure compliance with policies for handling and storing information,
  • Ensure safe and secure disposal of sensitive media, and
  • Secure information in transit or transmission to third parties.

These responsibilities don’t go away when all or part of a service is outsourced.  In fact, “…although outsourcing arrangements often provide a cost-effective means to support the institution’s technology needs, the ultimate responsibility and risk rests with the institution.“*  So once you’ve established a strategic basis for cloud-based data storage, risk assessing outsourced products and services is basically a function of vendor management.  And the vendor management process begins well before the vendor actually becomes a vendor, i.e. before the contract is signed.  Again, the FFIEC provides guidance in this area:

Financial institutions should exercise their security responsibilities for outsourced operations through:

  • Appropriate due diligence in service provider research and selection,
  • Contractual assurances regarding security responsibilities, controls, and reporting,
  • Nondisclosure agreements regarding the institution’s systems and data,
  • Independent review of the service provider’s security through appropriate audits and tests, and
  • Coordination of incident response policies and contractual notification requirements.*

So how do you comply (and demonstrate compliance) with this guidance?  For starters, begin your vendor management process early, right after the decision is made to implement cloud-based backup.  Determine your requirements and priorities (usually listed in a formal request for proposal), such as availability, capacity, privacy/security, and price…and perform due diligence on your short list of potential providers to narrow the choice.  Non-disclosure agreements would typically be exchanged at this point (or before).

Challenges & Solutions

This is where the challenges begin when considering large cloud-based providers.  They aren’t likely to respond to a request for proposal (RFP), nor are they going to provide a non-disclosure agreement (NDA) beyond their standard posted privacy policy. This does not, however, relieve you of your responsibility to satisfy yourself, by whatever means you can, that the vendor will still meet all of your requirements.  One more challenge (and this is a big one)…since large providers may store data simultaneously in multiple locations, you don’t really know where your data is physically located.  How do you satisfy the requirement to secure data in transit and storage if you don’t know where it’s going or how it gets there?  Also, what happens if you decide to terminate the service?  How will you validate that your data is completely removed?  And what happens if the vendor sells itself to someone else?  Chances are your data was considered an asset for the purposes of valuing the transaction, and now that asset (your data) is in the hands of someone else, someone that may have a different privacy policy or may even be located in a different country.

The only possible answer to these challenges is bullet #4 above…you request, receive and review the provider’s financials and other third-party reviews (SOC, SAS 70, etc.).  Here again, large providers may not be willing to share information beyond what is already public.  So the answer actually presents an additional challenge.

Practically speaking, perhaps the best way to approach this is to have a policy that classifies and restricts data stored in the cloud.  Providers that can meet your privacy, security, confidentiality, availability and data integrity requirements would be approved for all data types; providers that could NOT satisfactorily meet your requirements would be restricted to storing only non-critical, non-sensitive information.  Of course enforcing that policy is the final challenge…and the topic of a future post!  In the meantime, if your institution is using cloud-based data storage, how are you addressing these challenges?
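To make that kind of policy concrete, here is a minimal sketch of how a classification-based storage check might look in code. The classification levels, provider names, and approval ratings are hypothetical examples, not a prescribed scheme:

    # Minimal sketch of a cloud-storage policy check. Classification levels
    # and provider approvals below are hypothetical examples.
    from enum import IntEnum

    class Sensitivity(IntEnum):
        LOW = 1       # public or non-sensitive information
        MODERATE = 2  # internal business information
        HIGH = 3      # non-public customer information (NPI)

    # Highest classification each provider is approved to store, based on
    # your due diligence (third-party reviews, NDA, privacy policy, etc.).
    # Unknown providers default to "low", consistent with the policy above.
    APPROVED_PROVIDERS = {
        "vendor-a": Sensitivity.HIGH,  # met all of your trust requirements
        "vendor-b": Sensitivity.LOW,   # could not document its controls
    }

    def may_store(provider, data_class):
        """Return True if policy permits storing data of this class with the provider."""
        return data_class <= APPROVED_PROVIDERS.get(provider, Sensitivity.LOW)

    assert may_store("vendor-a", Sensitivity.HIGH)
    assert not may_store("vendor-b", Sensitivity.MODERATE)

The hard part, of course, is not the lookup table; it is the due diligence that decides which providers earn a “high” rating in the first place.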

* Information Security Booklet – July 2006, Service Provider Oversight

16 Jul 2012

Commercially UNreasonable Security

So an appellate court has just reversed the PATCO court ruling, essentially deciding against the financial institution.  They ruled that the bank’s security procedures were commercially UN-reasonable.

To summarize, a commercial e-banking customer (PATCO Construction) experienced a financial loss due to an account take-over.  They sued the bank to recover the loss, claiming the bank used poor security.  The original ruling was in favor of the bank.  This ruling was in favor of the customer, and has major implications for all financial institutions as they navigate their way through the increasing risk and increased regulatory requirements of Internet banking.

The entire ruling is worth a read, but here are a few of the highlights from my perspective:

  • In the end, it wasn’t just a single control failure, but a series of failures on the part of the Bank that led to the ruling.  One example is that the Bank lowered the alert trigger for challenge questions from $100,000 to $1, effectively subjecting all transactions to an additional authentication step.  The Bank undoubtedly felt they were increasing the safety of all transactions by taking this step, but it actually had the opposite effect.  By requiring challenge questions for all transactions they substantially increased the number of chances the criminals had to intercept the correct challenge responses.
  • The on-line banking product (NetTeller Premium) and provider (Jack Henry & Associates) offered adequate options for on-line transaction security, but not all options were enabled by the Bank.  And…
  • …of those security options offered by the Bank, not all were accepted by the customer.  And…
  • …of those offered and accepted, some were ignored.  For example the anomaly detection capabilities worked properly, and automated risk profiling correctly generated abnormally high risk scores for the fraudulent transactions, but no action was taken by the Bank to block them.
  • The definition of “commercially reasonable” has evolved from the initial ruling, which favored the bank, to the most recent one, which favored the customer.  Both rulings make several references to Article 4A of the UCC (Uniform Commercial Code).  The initial ruling stated that because the customer signed the agreement, they implicitly agreed to the security measures, effectively rendering them commercially reasonable.  However, the most recent ruling quotes from UCC 4A and states “[t]he standard is not whether the security procedure is the best available. Rather it is whether the procedure is reasonable for the particular customer and the particular bank.”  Therefore…
  • …a “one-size-fits-all” approach will not work, institutions MUST tailor their controls to the risks of the transaction.

But here is the most significant take-away for me, and the one with the biggest implication for financial institutions.  The judge ruled that based on the UCC 4A official comments on Section (1)(b), if and when the security procedures are deemed commercially reasonable, the burden then shifts to the customer:

“…to supervise its employees to assure compliance with the security procedure and to safeguard confidential security information and access to transmitting facilities so that the security procedure cannot be breached.”

So, all you have to do is risk assess the customer and transactions and employ layered security suitable to the risk, and then the legal and financial liability shifts to the customer, right?  Maybe not.  According to the FFIEC Internet Authentication update, one of the controls an institution may include (translated from ‘FFIEC-speak’ as SHOULD include) in its layered security program is a “Customer Awareness and Education” program.  Which means you are still on the hook unless you can document that you also maintain a customer awareness program AND your customers are actually being trained.  (As I mentioned here, you may also want to add a summary of your customer awareness program to your annual report to the Board of Directors).

I’m certain we’ll see more lawsuits on this matter and future rulings may go either way, but the risk is real and immediate so don’t wait for the courts to sort things out.  Here is what you need to do:

  1. Complete the risk assessment if you haven’t already.  Define high-risk transactions, and identify high-risk customers.
  2. Implement a layered security program.  Make sure you know and understand all of the controls available from your e-banking product vendor.  Vendors are adding controls all the time to address the evolving threat environment.  (A minimal sketch of how such risk-based controls might fit together follows this list.)
  3. Make sure your customers know and understand all of the controls you’ve made available to them.   If they resist or refuse a particular control that you’ve recommended, have them sign off that they understand and accept the increased risk.
  4. Educate your customers, initially and periodically throughout the relationship, and regardless of whether they resist.  Regardless of the quantity and sophistication of your technical controls, the customer is, and will always remain, the weakest link in the security chain.
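To make steps 1 and 2 a little more concrete, here is a minimal sketch of how a risk-based, layered control might gate transactions. The risk factors, weights, and thresholds are illustrative only; per UCC 4A, they must be tailored to the particular customer and the particular bank:

    # Minimal sketch of risk-based ("layered") transaction screening.
    # Factors, weights, and thresholds are illustrative, not prescriptive.
    def risk_score(amount, new_payee, foreign_ip, outside_profile):
        score = 0
        score += 2 if amount > 10000 else 0   # unusually large dollar amount
        score += 2 if new_payee else 0        # payee never used before
        score += 3 if foreign_ip else 0       # anomalous session origin
        score += 3 if outside_profile else 0  # deviates from the customer's risk profile
        return score

    def screen_transaction(**factors):
        score = risk_score(**factors)
        if score >= 6:
            return "block and alert"         # act on the high score; don't ignore it
        if score >= 3:
            return "step-up authentication"  # challenge only the risky transactions
        return "allow"                       # routine traffic passes unchallenged

    # A routine payment passes; an anomalous one is challenged or blocked.
    print(screen_transaction(amount=500, new_payee=False, foreign_ip=False, outside_profile=False))
    print(screen_transaction(amount=50000, new_payee=True, foreign_ip=True, outside_profile=True))

Note what this sketch avoids: challenging every transaction (which, as PATCO showed, just gives criminals more chances to harvest challenge answers) and ignoring high risk scores once the system generates them.
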
10 Jul 2012

FFIEC issues Cloud Computing Guidance

Actually the document is classified as “for informational purposes only”, which is to say that it is not a change or update to any specific Handbook and presumably does not carry the weight of regulatory guidance.  However, it is worth a read by all financial institutions outsourcing services because it provides reinforcement for, and references to, all applicable guidance and best practices surrounding cloud computing.

It is a fairly short document (4 pages) and again does not represent a new approach, but rather reinforces the fact that managing cloud providers is really just a best practices exercise in vendor management.  It makes repeated reference to the existing guidance found in the Information Security and Outsourcing Technology Services Handbooks.  It also introduces a completely new section of the InfoBase called Reference Materials.

The very first statement in the document pretty well sums it up:

“The (FFIEC) Agencies consider cloud computing to be another form of outsourcing with the same basic risk characteristics and risk management requirements as traditional forms of outsourcing.”

It then proceeds to describe basic vendor management best practices such as information security and business continuity, but one big take-away for me was the reference to data classification.  This is not the first time we’ve seen this term; I wrote about examiners asking for it here, and the Information Security Handbook says that:

“Institutions may establish an information data classification program to identify and rank data, systems, and applications in order of importance.”

But when all your sensitive data is stored, transmitted, and processed in a controlled environment (i.e. between you and your core provider), a simple schematic will usually suffice to document data flow.  No need to classify and segregate data; all data is treated equally regardless of sensitivity.  However, once that data enters the cloud you lose that control.  What path did the data take to get to the cloud provider?  Where exactly is the data stored?  Who else has access to the data?  And what about traditional issues such as recoverability and data retention and destruction?

Another important point made in the document, and one that doesn’t appear in any other guidance, is that because of the unique legal and regulatory challenges faced by financial institutions, the cloud vendor should be familiar with the financial industry.  They even suggest that if the vendor is not keeping up with regulatory changes (either because they are unwilling or unable), you may determine on that basis that you cannot employ that vendor.

The document concludes by stating that:

“The fundamentals of risk and risk management defined in the IT Handbook apply to cloud computing as they do to other forms of outsourcing. Cloud computing may require more robust controls due to the nature of the service.”

And…

“Vendor management, information security, audits, legal and regulatory compliance, and business continuity planning are key elements of sound risk management and risk mitigation controls for cloud computing.”

…as they are for all outsourced relationships!

03 Jul 2012

“Operational Risk Increasing”

In a recent speech to the Exchequer Club1, Thomas J. Curry, the new head of the OCC, stated that although asset quality has improved, charge-off rates have fallen, and capital now stands at its highest level in a decade, another type of risk is gaining increasing prominence: operational risk.

“Some of our most seasoned supervisors, people with 30 or more years of experience in some cases, tell me that this is the first time they have seen operational risk eclipse credit risk as a safety and soundness challenge.  Rising operational risk concerns them, it concerns me, and it should concern you.”

In fact, the OCC considers it currently to be at the top of the list of safety and soundness issues for the institutions they supervise.  Earlier this year I wrote about how risk assessments were one of the compliance trends of 2012, and how regulators are now asking about things like strategic risk and reputation risk and operational risk, and expecting that these risks are assessed alongside the more traditional categories like privacy and security.

So the question is:  What exactly is operational risk, and how can financial institutions effectively address it?  The FFIEC defines it this way:

“Operational risk (also referred to as transaction risk) is the risk of loss resulting from inadequate or failed processes, people, or systems. The root cause can be either internal or external events. Operational risk is present across all business lines.”

Furthermore, because the implications of operational risk extend to all other risks…

“Management should distinguish the operational risk component from other risks to enable a stronger focus on operational risk mitigation.”

If you are still a bit confused about exactly what operational risk looks like, you are not alone.  Because it exists in all business lines and manifests itself in every other risk, it is one of the most difficult risks to assess.  In other words, it’s everywhere…and affects everything!

Simply put (and assuming your policies and procedures are adequate), most of the time operational risk can be defined as a failure to adhere to your own internal policies and procedures.  In other words, if you don’t do what you say you will do, or you don’t do it the way you say you’ll do it, something will fail as a result.  Whether it’s a process, a control, a system, or a risk model…if it is in place and operational, but either flawed or not followed, operational risk is the result.2   But here is the kicker: even if your processes/procedures/models, etc. are flawless and followed to the letter, if you can’t document that they are, you may still have a high operational risk finding in your next safety and soundness examination.

The best way to address operational risk is to implement an internal control self-assessment process to assure that risk management controls are adequate, in place, and functioning properly.  Reporting will document that your day-to-day practices follow your written procedures.  Finally, make sure all business decisions reflect the goals and objectives of the strategic plan, and report to the Board on a regular basis.
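As a rough illustration of what a self-assessment record might capture, here is a minimal sketch. The control names and fields are hypothetical examples:

    # Minimal sketch of an internal control self-assessment record.
    # Control names and fields below are hypothetical examples.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ControlAssessment:
        control: str     # the control as written in your procedures
        in_place: bool   # does the control exist?
        operating: bool  # did testing show it functioning as written?
        evidence: str    # where the supporting documentation lives
        tested_on: date

    assessments = [
        ControlAssessment("Daily wire-transfer callback verification",
                          True, True, "wire log binder, ops share", date(2012, 6, 30)),
        ControlAssessment("Quarterly user-access review",
                          True, False, "missing Q2 sign-off", date(2012, 6, 30)),
    ]

    # A control that is in place but not operating as written is, by the
    # definition above, operational risk: you aren't doing what you say you do.
    for a in assessments:
        if a.in_place and not a.operating:
            print("operational risk finding:", a.control, "|", a.evidence)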

In summary, integrate assessment of operational risk into your risk management process, and expect to hear more about it from the regulators in the future.  And don’t think that because you aren’t regulated by the OCC you won’t see this trend.  After all, as Mr. Curry stated:

“As regulators, one of our most important jobs is to identify risk trends and bring them to the industry’s attention in a timely way. No issues loom larger today than operational risk in all its dimensions, the manner in which all risks interact, and the importance of managing those risks in an integrated fashion across the entire enterprise.”


1 The Exchequer Club is comprised of senior professionals from trade associations, federal regulatory agencies, law firms, congressional committees and national press with a primary interest in national economic and financial policy.

2 Business Continuity Planning uses a slightly different definition of operational risk.  Since the basic assumption of a BCP is that your processes and systems have already failed because of a disaster, operational risk manifests itself in the additional overhead that the alternative recovery processes and procedures temporarily impose on your organization.  Of course if your BCP is inadequate, failed processes will be the result.

12 Jun 2012

Managing Social Media Risk – LinkedIn Edition

By now everyone has heard about the breach at LinkedIn, where 6.5 million password hashes were leaked (over half of which have been cracked, or converted into plain text).  Those who read this blog regularly know how I feel about social media in general:

“So managing social media risk boils down to this:  You must be able to justify your decision (both to engage and to not engage) strategically, but to do so requires an accurate cost/benefit analysis.  Both costs (reputation, and other residual risks) and benefits (strategic and financial) are extremely difficult to quantify, which means that in the end you are accepting an unknown level of risk, to achieve an uncertain amount of benefit.”

This is not to say that social media can never be properly risk managed, only that the decision to engage (or not) must be analyzed the same way you analyze any other business decision.  And this is a challenge because social media does not easily lend itself to traditional risk management techniques, and this incident is a good case in point.

So once again, let’s use this latest breach as yet another incident training exercise.  In your initial risk assessment, chances are you classified the site as low risk.  There is no NPI/PII stored there, and it doesn’t offer transactional services beyond account upgrades.  Additionally, regarding the breach itself, only about 5% of all user password hashes were disclosed, and as I said previously, about half of those were converted into the underlying plain text password.  And what exactly is your risk exposure if your password was one that was stolen and cracked?  First of all, they would also need your login name to go with the password.  But if they were able to somehow put the two together, they might change your employment or background information, or post something that could portray you or your company in a negative light.  So there are certainly some risks, but they come with lots of “ifs”.  So low probability + low impact = low risk…change your password and move on, right?

Well maybe, depending on how you answer this question:  Is your LinkedIn password being used anywhere else?  If you have a “go-to” password that you use frequently (and most people do) you should assume that it’s out there in the wild, and you can also assume it is now being used in dictionary attacks.  So yes, if you are an individual user, change your LinkedIn password, but also change all other occurrences of that password.
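To see why a stolen hash so quickly becomes a stolen password, here is a minimal sketch of a dictionary attack against unsalted SHA-1 hashes, the scheme LinkedIn was using. The wordlist and “leaked” hash are toy examples:

    # Minimal sketch: why unsalted SHA-1 hashes fall quickly to a dictionary
    # attack. The wordlist and "leaked" hash below are toy examples.
    import hashlib

    wordlist = ["password", "letmein", "Summer2012", "monkey"]

    # The attacker hashes every dictionary word once, up front...
    table = {hashlib.sha1(w.encode()).hexdigest(): w for w in wordlist}

    # ...then recovers any matching leaked hash with a single lookup each.
    leaked_hashes = [hashlib.sha1(b"letmein").hexdigest()]
    for h in leaked_hashes:
        if h in table:
            print("cracked:", table[h])  # prints: cracked: letmein

    # A random per-user salt defeats the precomputed table, because every
    # user's hash would then have to be attacked individually.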

But back to our training exercise…if you are an institution with an official (or unofficial) LinkedIn presence through one or more employees, even if they’ve changed their password(s), you may still be at risk.  If the employee uses the same password to access your Facebook or Google+ page, or remotely authenticate to your email system, or access anything else that is connected to you, your response procedures should require (and validate) that all affected passwords have been changed.  In fact, since you have no way of knowing if your employee has a personal LinkedIn (or Facebook, etc.) presence,  it might be good practice to have your network administrator force all passwords to change just to be safe.  You may also want to change your policy to state that  internal (or corporate) passwords should never be duplicated or re-used on external or personal sites (although enforcing that may be a challenge).

As far as what you can do to reduce the chance of this type of incident happening again, there isn’t much.  You have to rely on your service providers to properly manage their own security.  You do this in part by obtaining and reviewing third-party reviews (like the SOC reports) if they exist,  but also by reviewing the vendor’s own privacy and security policy.  For example, LinkedIn’s privacy policy says this about the data it collects from you:

  • Security
    • Personal information you provide will be secured in accordance with industry standards and technology. Since the internet is not a 100% secure environment, we cannot ensure or warrant the security of any information you transmit to LinkedIn. There is no guarantee that information may not be accessed, copied, disclosed, altered, or destroyed by breach of any of our physical, technical, or managerial safeguards. (Bold is mine)
    • You are responsible for maintaining the secrecy of your unique password and account information, and for controlling access to your email communications at all times.

Even though they have made public statements that they have taken steps to address the root cause of the breach, given the above policy there is no indication that LinkedIn feels it necessary to obtain a third-party review to validate their enhanced privacy and security measures.  Granted, given the nature of the information they collect and store they may not feel compelled to do so, and you may not require it, but at the very least you should expect the passwords to be secure.

The first step in managing risk is to identify it.  In this case because of the breach, the unanswered questions*, the lack of a third-party review, and their privacy policy, you are accepting a higher level of residual risk with them than you would normally find acceptable in another vendor.  You can still rationalize your decision strategically, but you must quantify the expected return and then document that the return justifies the increased risk.  And then do the same for your other social media efforts!

 

*Indeed there are several issues raised by this breach that are yet to be answered:  How did it occur?  Could the breach be worse than disclosed?  Why did they hash the passwords using the older SHA-1 algorithm?  Why did they not salt the hashes?  Why didn’t they have a CIO?  Did they truly use industry standards to secure your information?  If they did, those standards are clearly inadequate, so will they now exceed industry standards?

04 Jun 2012

5 Keys to Understanding a SOC 2 Report

Although I have written about these relatively new reports frequently, and for some time now, it still remains a topic of great interest to financial institutions.  Fully 20% of all searches on this site over the past 6 months include the terms “SOC” or “SOC 2”, or “SAS 70”.  Some of this increased interest comes from new FFIEC guidance on how financial institutions should manage their service provider relationships, and some of it comes from financial institutions that are just now seeing these new reports from their vendors for the first time.  And because the SOC 2 is designed to focus on organizations that collect, process, transmit, store, organize, maintain or dispose of information on behalf of others, you are likely to see many more of them going forward.

Having just completed our own SOC 2 (transitioning from the SAS 70 in the previous period), I can say unequivocally that not only is it much more detailed, but it has the potential to directly address the risks and controls that should concern you as the recipient of IT-related services.  But not all SOC 2 reports are alike, and you must review the report that your vendor gives you to determine its relevance to you.  Here are the 5 things you must look for in every report:

  1. Products and Services – Does the report address the products and services you’ve contracted for?

  2. Criteria – Which of the 5 Trust Services Criteria (privacy, security, confidentiality, availability and data integrity) are included in the report?

  3. Sub-service Providers – Does the report cover the subcontractors (sub-service providers) of the vendor?

  4. Type I or Type II – Does the report address the effectiveness of the controls (Type II), or only the suitability of controls (Type I)?

  5. Exceptions – Is the report “clean”?  Does it contain any material exceptions?

Before we get into the details of each item, it is important to understand how a SOC 2 report is structured.  There are 3 distinct sections to a SOC 2 (and they generally appear in this order):

  1. The Service Auditor’s Report,
  2. The Management’s Assertion, and
  3. The Description of Systems.

So simply put, what happens in a SOC 2 report is that your service provider’s management prepares a detailed description of the systems and processes they use to deliver their products and services to you, and the controls they have in place to manage the risks.  They then make an assertion that the description is accurate and complete.  Finally, the auditor renders an opinion on whether or not the description is “fair” as to control suitability (Type I) and effectiveness (Type II).

Products and Services

The first thing to look for in a SOC 2 report is generally found in the Management’s Assertion section.   It will state something to the effect that “…the system description is intended to provide users with information about the X, Y and Z services…”  You should be able to identify all of your products and services among the “X”, “Y”, and “Z”.  If you have a product or service with the vendor that is not specifically mentioned, you’ll need to satisfy yourself that the systems and processes in place for your products are the same as they are for the products covered in the report.  (You should also encourage the vendor to include your products in their next report.)

Criteria

The next thing to look for is found in the Service Auditor’s Report section.  Look for the term “Trust Services Principles and Criteria”, and make a note of which of the 5 criteria are listed.  The 5 possible covered criteria are:  Privacy, Security, Confidentiality, Integrity and Availability.  Service provider management is allowed to select which criteria they want included in the report, and once again you should make sure your specific concerns are addressed.

Sub-service Providers

The next item is also found in the Service Auditor’s Report section, and usually in the first paragraph or two.  Look for either “…our examination included controls of the sub-service providers”, or “…our examination did not extend to controls of sub-service providers”.  The report may also use the terms “inclusive” to indicate that they DID look at sub-service providers, or “carve-out” to indicate that the auditor DID NOT look at the controls of any sub-service providers.  These are the service providers to your service provider, and if they store or process your (or your customers’) data you’ll need assurance that they are being held to the same standards as your first-level service provider.  This assurance, if required and not provided in the SOC 2, may be found in a review of the sub-service provider’s own third-party reviews.

Type I or Type II

As with the older SAS 70, the new SOC 1 and SOC 2 reports come in two versions: a Type I, which reports on the adequacy of controls as of a single point in time, and a Type II, which reports on both control adequacy and effectiveness by evaluating the controls over a period of time, typically 6 months.  Clearly the Type II report is preferred, but because the SOC 2 audit guides were just released last year, many service providers may choose to initially release a Type I.  If your concerns about the service provider include whether or not their risk management controls were both adequate AND effective (and in most cases they should), make sure they immediately follow up the Type I with a Type II.

Exceptions

Finally, scan the Service Auditor’s Report section for verbiage such as “except for the matter described in the preceding paragraph…”, or “the controls were not suitably designed…” or “…disclaim an opinion…”, or terms such as “omission” or “misrepresentation” or “inadequate”.  These are an indication that the report could contain important material exceptions that would be cause for concern.
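If you review many of these reports, a simple keyword scan can help flag passages for closer reading. Here is a crude sketch; the file name is hypothetical, and no scan substitutes for actually reading the auditor’s opinion:

    # Crude sketch: flag SOC 2 report passages for closer human review.
    # The phrase list mirrors the verbiage above; the file name is hypothetical.
    RED_FLAGS = [
        "except for the matter",
        "not suitably designed",
        "disclaim an opinion",
        "omission",
        "misrepresentation",
        "inadequate",
    ]

    def flag_exceptions(report_text):
        """Return each red-flag phrase found, with some surrounding context."""
        lowered = report_text.lower()
        hits = []
        for phrase in RED_FLAGS:
            start = lowered.find(phrase)
            if start != -1:
                hits.append(report_text[max(0, start - 60):start + len(phrase) + 60])
        return hits

    with open("soc2_report.txt") as f:  # text extracted from the vendor's report
        for passage in flag_exceptions(f.read()):
            print("REVIEW:", passage)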

One more thing…pay particular attention to a sub-section (usually found in the Description of Systems section) called “Complementary End-User (or User-Entity) Controls”.  This is not new to the SOC reports; the SAS 70 had a similar section, but it is one of the most important parts of the entire report, and one that is often ignored.  This is a list of what the vendor expects from you: things without which some or all of the criteria would not be met.  This is the vendor saying “we’ll do our part to keep your data private, secure, available, etc., but we expect you to do a few things too”.  It’s important that you understand these items, because the entire auditor’s opinion depends on you doing your part, and failure to do so could invalidate some or all of the trust criteria.  By the way, you should be able to find a corresponding list of these end-user controls repeated in your contracts.

The lesson here is that vendor third-party reviews like the SOC 2 are no longer a “check the box and be done” type of exercise.  As part of your vendor management process, you must actually review the reports, understand them (don’t hesitate to enlist the help of your own auditor if necessary), and document that they adequately address your concerns.