Category: Hot Topics

14 Jun 2011

SOC 2 vs. SAS 70 – 5 reasons to embrace the change

The SOC 2 and SOC 3 audit guides have recently been released by the AICPA, and the SAS 70 phase-out becomes effective tomorrow.  The more I learn about these new reports, the more I like them.  First of all, as a service provider to financial institutions, we will have to prepare for this engagement (just as we did for the SAS 70), so it’s certainly important to know what changes to expect from that perspective.  But as a trusted adviser to financial institutions struggling to manage the risks of outsourced services (and the associated vendors), the information provided in the new SOC 2 and SOC 3 reports is a welcome change. In fact, if your vendor provides services such as cloud computing, managed security services, or IT outsourcing, the new SOC reports provide exactly the assurances you need to address your concerns. Here are what I see as the 5 most significant differences between the SAS 70 and the SOC 2 reports, and why you should embrace the change:

Management Assertion – Management must now provide a written description of their organization’s system (software, people, procedures and data) and controls. The auditor then expresses an opinion on whether or not management’s assertion is accurate. Why should this matter to you? For the same reason the regulators want your Board and senior management to approve everything from lending policy to DR test results…accountability.

Relevance – The SAS 70 report was always intended to be a your-auditor-to-their-auditor communication; it was never intended to be used by you (the end-user) to address your institution’s concerns about vendor privacy and security. The SOC reports are intended as a service provider to end-user communication, with the auditor providing verification (or attestation) as to accuracy.

Scope – Although the SAS 70 report addressed some of these, the SOC 2 report directly addresses all of the following 5 concerns:

  1. Security. The service provider’s system is protected against unauthorized access.
  2. Availability. The service provider’s system is available for operation as contractually committed or agreed.
  3. Processing Integrity. The provider’s system processing is complete, accurate, and authorized.
  4. Confidentiality. Information designated as confidential is protected as contractually committed or agreed.
  5. Privacy. Personal information (if collected by the provider) is used, retained, disclosed, and destroyed in accordance with the provider’s privacy policy.

If these sound familiar, they should.  The FFIEC Information Security Booklet lists the following security objectives that all financial institutions should strive to accomplish:

  1. The Privacy and Security elements of GLBA
  2. Availability
  3. Integrity of data or systems
  4. Confidentiality of data or systems
  5. Accountability
  6. Assurance

As you can see, there is considerable overlap between the FFIEC requirements and the scope of a typical SOC 2 engagement.

Testing – Like the SAS 70, the SOC 1 and SOC 2 are available in both a Type I and a Type II format.  A Type I speaks only to the adequacy of vendor controls, but the Type II gives management assurance that the vendor’s controls are not just adequate, but also effective.  The auditor can do this in a Type II engagement because they are expected to not just inquire about control effectiveness, but actually observe the control operating effectively via testing.  And because the scope of the SOC 2 is significantly greater than the SAS 70 (see above), the test results are much more meaningful to you.  In fact, the SOC 2 audit guide itself suggests that because your concerns (particularly as a financial institution) are in both control design and effectiveness, a Type I report is unlikely to provide you with sufficient information to assess the vendor’s risk management controls.  For this reason you should insist on a Type II report from all of your critical service providers.

Vendor Subcontractors – This refers to the subcontractor(s) of your service provider, and again this is another FFIEC requirement that is directly addressed in the new SOC reports.  The FFIEC, in its Outsourcing Technology Services Booklet, states that an institution can select from two techniques to manage a multiple service provider relationship:

  1. Use the lead provider to manage all of its subordinate relationships, or
  2. Use separate contracts and operational agreements for each subcontractor.

The booklet suggests that employing the first technique is the least cumbersome for the institution, but that either way:

“Management should also ensure the service provider’s control environment meets or exceeds the institution’s expectations, including the control environment of organizations that the primary service provider utilizes.”

The audit guidelines of the SOC 2 engagement require the service auditor to obtain an understanding of the significant vendors whose services affect the service provider’s system, and assess whether they should be included in the final report, or “carved-out”.  Given the regulatory requirement for managing service providers you should insist on an “inclusive” report.

In summary, there will be a learning curve as you adjust to the new reports, but in the end I think this is a decisive step in the right direction for the accuracy and effectiveness of your vendor management program.

31 May 2011

Time to re-think the role of the network administrator?

Traditionally, the network administrator needed to operate at “ground-level”. Network maintenance was highly specialized and problematic, requiring a constant hands-on approach. And in the very early days (when the Guru started… “he who speaks of floppy disks”…) there were few formal training classes; most of what you learned was by trial and error…lots of error!

Today’s network administrator still has plenty of trial and error learning, but there is much less of it than there used to be. Consider this:

  • How important is the Internet to your problem resolution process? Can you even imagine doing your job without the Internet?
  • Colleges, universities and technical schools have had formal degree programs for training network administrators for years, ensuring that even the most inexperienced admin has a broad base of knowledge to draw from starting on day one.
  • Although server and desktop operating systems and applications are more complex today, they are also much (somewhat?) more stable than they used to be, with much more mature, feature-rich, administrative interfaces.
  • Largely because of the first three items, there are a lot more resources available for support today. It’s often more cost effective to reach out to an expert in a particular area than it is to spend hours on trial and error.
  • Most importantly though, many of the routine administrative tasks can now be automated and/or outsourced (patch management, AV updates, etc.), removing not only the drudgery of the task, but also removing the uncertainty of the human element as well. And as I’ve written about here, both auditors and examiners prefer automated controls for that very reason.

So the focus of the network administrator’s job has really evolved from a hands-on, high-touch, ground-level role, into more of a higher-level, managerial role. They still have primary responsibility in their traditional role of (as the FFIEC states), “…implementing the policies, standards, and procedures in their day-to-day operational role”, but they now often assist in the development of those policies as well. Most admins also sit on the IT steering committee, and in that capacity they also have the shared responsibility to coordinate the IT strategic plan, and by extension, the overall strategic plan. But it’s difficult to have an enterprise-wide view if you’re stuck in the trenches unlocking a user account or struggling with AV or patch updates.

Given the right tools, today’s network administrator is able to add value in many ways. Furthermore, I don’t know of a single one who wouldn’t jump at the chance to assume a higher profile in their organization (with the associated increase in net value). If you are a network administrator, here are 3 ways you could get the conversation started:

  1. “You know, regulators are increasingly focusing on reporting to verify that we are following our procedures. Give me the tools I need to gather, analyze and report, and I’ll be able to validate that we’re doing what we say we’re doing, the way we say we’re doing it. This should reduce our exposure to future regulatory findings.”
  2. “Recent experience has shown that auditors and examiners really prefer automated tools for routine repetitive tasks. An added advantage is that this will free me up to manage the process from a slightly different perspective, allowing me to not only apply controls, but assess their effectiveness as well.”
  3. “Effectively managing strategic risk means providing management with timely, actionable information that will allow them to rapidly respond and react to changes in the information landscape. Include me in the strategic planning process and I’ll be able to better understand the mission, and deliver the right information in the right format at the right time.”

And if you manage a network administrator and you haven’t had this discussion yet, don’t wait for them to approach you…ask them what you can do to elevate them above the drudgery, and get them more involved in managing the process instead of drowning in it. I’ll bet you’ll find a source of value you never knew you had!

29 Apr 2011

Vendor Management and the SAS 70 Replacement

I’ve previously written about the replacement for the SAS 70, which officially phases out on June 15th.  But because this one report is being replaced with 3 new reports, financial institutions have an additional challenge that they didn’t have before.  Your vendor management program must now determine the most appropriate report to request based on your specific concerns regarding the vendor. Of course, once the correct report is identified, you must then acquire and review it…this step doesn’t change from the old SAS 70 world.

In the past, determining the correct report wasn’t really necessary, as the SAS 70 was the only reporting tool available if you needed to validate the security controls in place at a service provider.  With the SAS 70 being replaced with the SOC 1, SOC 2, and SOC 3, you have 3 options to choose from (and with Type I and Type II versions for the SOC 1 and SOC 2, you really have 5 options!).   So how do you choose?  It might make sense at this point to back up and take a look at the overall vendor management process.

The FFIEC considers risk management of outsourced services to consist of the following components:

  1. Risk Assessment (assessing the risk of outsourcing)
  2. Service Provider Selection (the due diligence process)
  3. Contract Issues (prior to signing the contract)
  4. Ongoing Monitoring (post contract)

Most institutions believe that their vendor management program begins once the contract is signed, i.e. once the vendor becomes a vendor.  But it’s clear that the vendor management process must begin well before that, and in fact third-party reviews like the old SAS 70, and the new SOC reports, should be obtained during the due diligence phase.  This is the proposal phase (step 2 above), well before the decision to engage the vendor.

According to the FFIEC, the due diligence process should determine the following about the vendor:

  • Existence and corporate history;
  • Qualifications, backgrounds, and reputations of company principals, including criminal background checks where appropriate;
  • Other companies using similar services from the provider that may be contacted for reference;
  • Financial status, including reviews of audited financial statements;
  • Strategy and reputation;
  • Service delivery capability, status, and effectiveness;
  • Technology and systems architecture;
  • Internal controls environment, security history, and audit coverage;
  • Legal and regulatory compliance including any complaints, litigation, or regulatory actions;
  • Reliance on and success in dealing with third party service providers;
  • Insurance coverage; and
  • Ability to meet disaster recovery and business continuity requirements.

That is a lot of information to obtain from a non-vendor, but the new SOC reports, and the SOC 2 report in particular, will go a long way towards addressing many of the above concerns.  Specifically, systems architecture, internal controls, any third-party providers, insurance coverage, and business continuity would all be addressed in a SOC 2 Type II* report.

I’ve developed this flowchart to assist you with the correct SOC report selection process, and I encourage you to discuss it with your auditor.  Of course once the correct report for that vendor has been determined, you must then obtain and evaluate it…that is a topic for a future post.
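Until you’ve worked through the flowchart with your auditor, the selection logic boils down to a few questions. Here is a rough sketch in code; the function name and the simplified criteria are my own illustration of the general decision process, not AICPA guidance:

```python
def recommend_soc_report(need_icfr_detail: bool,
                         need_security_detail: bool,
                         general_use_only: bool) -> str:
    """Simplified SOC report selection sketch (illustrative only).

    need_icfr_detail:     the vendor's service affects your financial
                          reporting (the old SAS 70 use case)
    need_security_detail: you need detailed controls and test results for
                          security, availability, processing integrity,
                          confidentiality, or privacy
    general_use_only:     a short, freely distributable report is enough
    """
    if need_icfr_detail:
        # SOC 1 succeeds the SAS 70 for internal control over
        # financial reporting; insist on Type II for test results.
        return "SOC 1 Type II"
    if need_security_detail:
        # Detailed report, restricted distribution.
        return "SOC 2 Type II"
    if general_use_only:
        # General-use report without detailed test results.
        return "SOC 3"
    return "No SOC report needed"
```

In practice a critical vendor may warrant more than one report, so treat this as a starting point for the conversation, not a substitute for it.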

*Note:  We are still waiting for the AICPA to finalize the work program for the SOC 2 and SOC 3 reporting format.  Check with your auditor for additional guidance.


20 Apr 2011

A Recurring Theme in FDIC Consent Orders

If you look at any of the recent FDIC Consent Orders, you will quickly see a common theme.  I randomly pulled a few off the top of the list, and the verbiage was very similar, and in many cases identical:

  • …the Board shall enhance its participation in the affairs of the Bank
  • …the Bank’s board of directors shall increase its participation in the affairs of the Bank
  • …the Board shall participate fully in the oversight of the Bank’s compliance management system
  • …the Board shall participate fully in the oversight of the Bank’s Compliance Management System
  • …the Board shall increase its participation in the affairs of the Bank
  • …the Bank shall have and retain qualified management
  • …Bank’s board of directors shall increase its participation in the affairs of the Bank
  • …the Bank shall have and retain qualified management.
  • …the Board shall increase its participation in the affairs of the Bank
  • …the Bank’s board of directors (“Board”) shall increase its participation in the affairs of the Bank

In almost every case, regardless of the main thrust of the Consent Order, this was usually the first requirement.  In other words, although the Order may have been imposed because of financial weakness, or lending policy non-conformance, or some other reason, the examiners want to establish up front that the Board and Senior Management are at fault for failing to prevent, detect, and/or correct the problem ahead of time.  Furthermore, regardless of their past participation, in every case they are expected to increase their oversight in the future.

Of course, not only must this occur, but it must also be documented.  If recent examination experience has taught us anything, it is that if you don’t have it documented, it didn’t happen.  The challenge is this: typically the Board defines the broad goals and objectives of the institution in the strategic plan, and delegates the day-to-day responsibility of implementing those goals to committees.  In a perfect world, the mandates flow down from the Board to the committees, and status reporting flows back up from the committees to the Board.  In reality, there are multiple points of failure in this top-down, bottom-up model:

  1. Does the Board have a well-defined, 3-5 year Strategic Plan?
  2. Has this plan been communicated to all stakeholders?
  3. Have committees been formed, staffed, and tasked with implementing the details of the plan?
  4. Are there well-defined objectives and benchmarks in place to measure alignment between strategic goals and actual performance?
  5. Does the Board have access to adequate, timely information (reporting), and the necessary expertise, to determine if their strategic goals and objectives are being achieved?

A “No” answer at any point in this process causes the whole process to break down.  And even a “Yes, but we didn’t document it…”, is not enough to satisfy the examiners.  So how best to document each step?  Taken in order from above:

  1. Make sure the institution has a valid, up-to-date, Strategic Plan, and…
  2. …the plan has been communicated to all stakeholders.  This isn’t as onerous as it sounds…the plan shouldn’t change that often.
  3. The mission statement for all committees should reinforce their alignment and coordination with the Strategic Plan, and any risk assessments conducted by the committees must measure strategic risk.
  4. Evaluate each new product, service and vendor against its ability to further the objectives of the Strategic Plan, and…
  5. …make sure this information is summarized and presented to the Board at a frequency commensurate with the pace of change within the institution.

As I’ve mentioned before, the Tech Steering Committee is the ideal committee to report all things IT to the Board.  If you utilize a standard agenda, which includes discussion of on-going or proposed IT initiatives (and their alignment with Strategy), document the meetings, and report progress to the Board periodically, you will satisfy the IT oversight requirement.  Once the top-down and bottom-up process is in place for IT, simply duplicate it across the enterprise!

27 Mar 2011

The RSA breach, and 5 things you should do

For those of us already waiting for the latest update on guidance from the FFIEC on Internet Authentication, the news of the recent RSA SecurID breach complicates things a bit.  One-time password (OTP) hardware devices (tokens and smartcards) are considered one of the most secure forms of the “something you have” element in complying with the multifactor authentication requirement.  So let’s take a look at the RSA breach in the context of authentication guidance, and what you should do to respond.

When the FFIEC released its original guidance on Internet Authentication in 2005, they said this about tokens: 

Password-generating tokens are secure because of the time-sensitive, synchronized nature of the authentication. The randomness, unpredictability, and uniqueness of the OTPs substantially increase the difficulty of a cyber thief capturing and using OTPs gained from keyboard logging.

And 6 years later, in the draft release of the FFIEC updated guidance, they said:

“OTP tokens have been used for several years and have been considered to be one of the stronger authentication technologies in use.”

And they are correct; in the last few years OTP tokens for authentication have proven to be very secure and have become very popular, and arguably the biggest player in that market is RSA.  There are millions of RSA SecurID tokens in use today, many of them in financial institutions, and many of those authenticating Internet based financial transactions…perhaps for your customers.
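For the curious, the “time-sensitive, synchronized” property the FFIEC describes can be illustrated with a minimal time-based OTP sketch in the style of the open HOTP family of algorithms (RFC 4226 and its time-based variant). To be clear, RSA SecurID uses its own proprietary algorithm and seed records; this shows only the general technique:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 60, digits: int = 6, now=None) -> str:
    """Minimal time-based one-time password (illustrative sketch).

    Both sides share the secret seed and a synchronized clock; the code
    changes every `interval` seconds, so a captured OTP is useless once
    the window passes.  This is NOT RSA's proprietary SecurID algorithm.
    """
    timestamp = now if now is not None else time.time()
    counter = int(timestamp // interval)            # shared time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The point of the sketch is where the security lives: entirely in the secrecy of the seed and the synchronized clock. If seed records (or the means to derive them) were among the “certain information” extracted from a vendor, the scheme’s unpredictability is what suffers.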

So what exactly happened?  Well, RSA’s website is (strangely) completely silent on the event, and RSA customers I’ve spoken to say that information has been slow in coming to them, and extremely vague when it does arrive.  But according to what RSA has disclosed, here is what we do know:

“…the attack resulted in certain information being extracted from RSA’s systems. Some of that information is related to RSA SecurID authentication products.”

So…according to the FFIEC, the security of the OTP is based on “randomness, unpredictability, and uniqueness”, but we don’t know if the “certain information” mentioned by RSA included the main algorithm or some other critical information necessary to generate the OTP.

As a financial institution responsible (and liable) for the security of your customers’ Internet based transactions, you must err on the side of caution here if you utilize RSA tokens.  I’ve got to believe that RSA will do the right thing here, and place their customers’ security ahead of their own business interests, but in the meantime it may be prudent to consider some additional measures, such as:

  • Since multi-factor authentication relies on “something you know” in addition to “something you have”, encourage (require?) your customers to change their user names and passwords.
  • Review (and possibly temporarily adjust) your built-in transaction monitoring metrics, such as dollar volumes, transaction frequency, ACH / Wire recipient lists, etc.
  • Implement “Out-of-Band” confirmation for all high-risk transactions.  In other words, temporarily require all transactions to be confirmed via a return phone call, fax, SMS, or similar method.
  • Make sure your customers know exactly who they can contact if they suspect unauthorized activity, and most importantly, let them know under what circumstances (and what methods) you will contact them.
  • Finally, consider an alternate token vendor.  You may be at the mercy of your on-line banking software vendor on this, but there are 2 trust issues in jeopardy here…the one between you (or your vendor) and RSA, and the much more important one between you and your customer.  RSA may be able to fix whatever problems allowed the breach, and thereby repair the trust (or not) with their customers (your vendor), but the trust issue with your customers may not be repairable.  Rightly or not, they may be reluctant to use anything with “RSA” printed on it.
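To make the second and third bullets concrete, the tightened monitoring and out-of-band triggers could be sketched as a simple rule check. All names, limits, and recipient lists here are hypothetical, stand-ins for whatever your on-line banking platform actually exposes:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    recipient: str
    channel: str       # e.g. "ach" or "wire"

# Hypothetical interim policy while the breach is investigated:
# a tightened dollar threshold and a known-recipient allow list.
TIGHTENED_LIMIT = 5_000.00
KNOWN_RECIPIENTS = {"payroll-batch", "utility-co"}

def needs_out_of_band_confirmation(tx: Transaction) -> bool:
    """Flag transactions that should be confirmed by phone, fax, or SMS."""
    if tx.amount > TIGHTENED_LIMIT:
        return True        # over the temporarily lowered dollar threshold
    if tx.recipient not in KNOWN_RECIPIENTS:
        return True        # new ACH / wire recipient -> confirm out-of-band
    return False
```

The specific limits matter less than the posture: while the token’s trustworthiness is in question, the other layers have to carry more of the load.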

All of these items (except the last) are best practices anyway, but the key is that you must be pro-active on this.  Do not wait for RSA to release all the details (we may never know them anyway), because what we do know now is enough to justify additional security measures.

In conclusion, tokens and OTPs are still very effective as one element in one layer of a multi-layer, multi-factor, authentication process, but clearly the lesson here is that there is no fool-proof method.  Indeed as we await the FFIEC update, this line from the draft release is almost prophetic:

“Since virtually every authentication technique can be compromised, financial institutions should not rely on any one authentication method or security technique in authorizing high risk transactions, but rather institute a system of layered security.”

Perhaps the only change necessary to that statement in the final release is to remove the word “virtually”.

16 Mar 2011

Risk Managing Social Media – 4 Challenges

Twitter, LinkedIn, Facebook, Google+…the decision to establish an on-line presence is a very popular topic these days, and it is extremely easy to do, but effectively managing social media risk can be frustratingly complicated.  In many ways, it just doesn’t lend itself to traditional risk management techniques, so the standard pre-entry justification process is much more difficult.  And because you are expected to assess the risks before you jump in, many of you may already be accepting unknown risks.

I see 4 big challenges to managing social media risk:

  1. Strategic Risk – If you determine that engaging in social media would be beneficial to achieving the goals and objectives of your business plan, you’ve made a strategic decision.  But even if you decide NOT to engage, you’ve still made a strategic decision because strategic risk exists if you fail to respond to industry changes.  (“If you choose not to decide, you still have made a choice”*.)  And you are expected to justify your strategy by periodically assessing whether or not you have achieved the goals you anticipated when you made the decision  to engage in social media, which leads to challenge #2:
  2. Cost / Benefit – This is closely related to strategic, but relates to the difficulty of quantifying both the costs (strategic and otherwise) and the tangible benefits.  Most institutions decide to engage in social media as a “me too” reaction, but 1 or 2 years later they can’t go back and validate their decision on business grounds because they didn’t have well defined, quantifiable, expectations going in.  Anchor your decision on a set of specific goals, which could include increased brand or product exposure, but which should ultimately be defined  in terms of an increase in capital and earnings.  And although there is a very small financial barrier to entry, there are other costs which leads to my next challenge;
  3. Reputation Risk – This is where the decision to not engage in social media really manifests itself, because reputation risk exists regardless…it cannot be avoided.  All it takes is one disgruntled employee or customer (or a competitor) to post a negative comment about you or your products or services on-line, and your reputation could suffer.  If you do have an on-line presence, you may be able to quickly respond to counter the comments, but if you decided to stay out you have no recourse.  Also, are your employees blurring the line between their professional lives as official (and controllable) representatives of your institution, and their (un-controlled) personal, on-line lives?  In a traditional risk management model, each risk identified would be accompanied by an off-setting control or set of controls.  In the case of reputation risk, there really is no way to offset, or control, the risk.  This brings me to the final, and perhaps biggest, challenge;
  4. Residual Risk – This is the end result of the risk management process; the amount of risk remaining after the application of controls.  Essentially, this is what you deem “acceptable” risk.  Since social media risk can never be completely avoided (see #3 above), you are already accepting some measure of risk.  The challenge is to quantify it.  Auditors and examiners expect you to have a firm grasp on residual risk, because that is really the only way to validate the effectiveness of your risk management program.  An uncertain or inaccurate level of residual risk implies to examiners an ineffective (or even non-existent) risk assessment.
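To make the residual-risk arithmetic concrete, here is a toy scoring sketch. The 1-5 scales and the reputation-risk numbers are purely illustrative, not a standard methodology:

```python
def residual_risk(likelihood: int, impact: int,
                  control_effectiveness: float) -> float:
    """Toy residual-risk calculation (illustrative scale, not a standard).

    likelihood, impact:     1 (low) .. 5 (high)
    control_effectiveness:  0.0 (no mitigation) .. 1.0 (fully mitigated)
    Inherent risk = likelihood x impact;
    residual risk = inherent x (1 - control effectiveness).
    """
    inherent = likelihood * impact
    return round(inherent * (1.0 - control_effectiveness), 2)

# Reputation-risk example: likely negative post (4), moderate impact (3),
# but few effective off-setting controls (0.2 effectiveness):
# residual_risk(4, 3, 0.2) -> 9.6, i.e. most of the inherent risk remains.
```

The model is trivial on purpose: the hard part isn’t the multiplication, it’s defending the likelihood, impact, and effectiveness numbers you feed into it, which is exactly where social media risk resists quantification.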

So managing social media risk boils down to this:  You must be able to justify your decision (both to engage and to not engage) strategically, but to do so requires an accurate cost/benefit analysis.  Both costs (reputation, and other residual risks) and benefits (strategic) are extremely difficult to quantify, which means that in the end you are accepting an unknown level of risk, to achieve an uncertain amount of benefit. Ordinarily that would be a regulatory red-flag, but clearly many institutions currently have an on-line social media presence.  So at this point the question becomes not so much how did they arrive at that decision, but how will they justify their decision (and manage the risk) going forward?


*Lee, Geddy; Lifeson, Alex; Peart, Neil