Tag: risk management

02 Oct 2012

BYOD Redux – The Policy Solution (Part 2)

In the previous post, I suggested that because mobile devices (smartphones and PDAs) are not that functionally different from other mobile computing devices like laptops in how they process, transmit, and store information, a separate policy wasn’t necessary.  Since the data security, confidentiality, and integrity concerns are the same as for other devices, you should be able to simply extend your existing policy to include them.  But in fact the risks are greater, and often more difficult to control, resulting in substantially higher residual risk (the risk remaining after the application of controls) than other computing devices carry.  Because of this, employee-owned mobile devices really represent an exception to your policies rather than an extension of them.  And because all policy exceptions must be approved by your Board, perhaps separate policies and procedures are appropriate.

The FFIEC is fairly silent on this topic, but fortunately NIST is in the process of formulating several pieces of guidance on managing BYOD risk, and it is always useful to see where they stand on the issue, as we’ve often seen NIST guidelines make their way into other federal regulations.

NIST Special Publication 800-124, entitled “Guidelines for Managing and Securing Mobile Devices in the Enterprise”, is currently in draft status, and is an update to a 2008 document, “Guidelines on Cell Phone and PDA Security”.  The updated guidance recognizes the evolution of the technology over the past few years, as well as the unique security challenges inherent in both corporate-owned and employee-owned mobile computing devices.  It advises institutions to implement the following guidelines to improve the security of their mobile devices:

  1. Develop system threat models for mobile devices and the resources that are accessed through them.  Recognize that these devices are not the same as your other computing devices.  The threats are not the same and the available controls are not the same, so both the probability and the impact of an attack on these devices are likely greater.  Make sure your threat model accounts for how the device will connect to your network, and what data it will transmit and store.  Data-flow diagrams can be very helpful in this modeling process.
  2. Once the threat is understood, deploy only those devices that present the least threat consistent with the job requirements of the employee.  This will be one of the biggest challenges for institutions, as many employees will want the latest devices with all the bells and whistles.  Prior to deploying, make sure you have centralized mobile device management that offers the following minimum capabilities:

•  Ability to enforce enterprise security policies, such as user rights and permissions, as well as the ability to report policy violations.
•  Data communication and storage should be encrypted, with the ability to remotely wipe the device.
•  User authentication should be required before the device can access enterprise resources, with incorrect password lockout periods consistent with your other computing devices.
•  Restrict which applications may be installed, and have procedures in place for updating the applications and the operating system.
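The minimum capabilities above can be sketched as a baseline configuration check. This is a hypothetical illustration only; every field name below is invented, and no real MDM product’s API is implied.

```python
# Hypothetical sketch of the minimum MDM capabilities above, expressed as a
# baseline policy check.  Field names are invented for illustration.

BASELINE_POLICY = {
    "enforce_security_policy": True,  # user rights/permissions, violation reporting
    "encrypt_storage": True,          # data at rest
    "encrypt_transport": True,        # data in transit
    "remote_wipe": True,
    "require_authentication": True,
    "max_failed_logins": 5,           # lockout threshold, consistent with other devices
    "app_whitelist_enforced": True,   # restrict which applications may be installed
    "patch_procedures": True,         # application and OS update procedures
}

def policy_gaps(device_settings: dict) -> list:
    """Return the baseline settings a device fails to meet."""
    gaps = []
    for setting, required in BASELINE_POLICY.items():
        actual = device_settings.get(setting)
        if isinstance(required, bool):
            if required and actual is not True:
                gaps.append(setting)
        elif actual is None or actual > required:  # numeric thresholds
            gaps.append(setting)
    return gaps
```

A device reporting everything compliant except, say, remote wipe would come back with `["remote_wipe"]`, flagging it for remediation before deployment.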

  3. Have a separate mobile device policy.  The policy should define which types of mobile devices are permitted to access the institution’s resources, the degree of access that mobile devices may have, and how they will be managed.  It should differentiate between institution-owned and employee-owned devices, and be as consistent as possible with your policy for non-mobile devices.
  4. Test the policy initially, and periodically thereafter, to verify management capabilities.  Perform either passive (log review) or active (penetration testing) assessments to confirm that the mobile device policies, procedures, and practices are being followed properly.
  5. Secure each device prior to deployment.  This is slightly easier for institution-owned devices, and much harder (but arguably more important) for already deployed, employee-owned devices.

I’m sure you can already hear the howls of protest over this last one, but the guidance actually states that for employee-owned (BYOD) devices, organizations should recover them, restore them to a known good state, and fully secure them before returning them to their users.

So when it comes to BYOD you basically have two choices: you can properly manage the devices and the risks consistent with your other computing devices, or you can recognize that they represent a deviation from your risk management policies and get Board approval for the exception.  And if you choose to classify them as policy exceptions, you should be prepared to explain the potential impact of the higher risk to the organization, and exactly how that higher risk is justified.

13 Sep 2012

BYOD Redux – The Policy Dilemma (Part 1)

Employee-owned mobile devices are everywhere, and they’re being used for everything from email to document storage and editing.  Proper risk management procedures are defined in your policies, but do you need a separate mobile device policy, or can you simply mention them in the same policy sections that address other portable devices?  Or is there another option you need to consider?  Let’s follow the same risk management process for mobile device deployment as you would to deploy any other new technology:

    1. First, before mobile devices are deployed, a decision is made that they should be considered for implementation because they will somehow further the goals and objectives of the strategic plan.
    2. Next, a cost-benefit analysis is done, and the results should reinforce the decision to implement.
    3. Finally, a risk assessment is conducted that identifies potential risk exposure due to unauthorized disclosure of customer, confidential, or sensitive information.

Since most mobile devices can process, store, and transmit information, this looks very similar to your risk assessment for other portable computing devices like laptops.  (Indeed the FFIEC mentions “…laptops and other mobile devices…” together  in their Information Security Handbook, suggesting the risks are similar.)  Except in this case the risk is magnified by the extreme portability of the devices, the “always-on” and “always-remote” nature of them, and the fact that many more people will use mobile devices than will use laptops.

Once the inherent risk is assessed (most likely higher than your other computing devices), controls are identified to reduce the risk.  Again, since the capabilities are similar, the list of potential administrative and technical controls looks very similar to the one for your other computing devices.  Your existing policy probably mandates that there first be a legitimate business reason for the employee to use a portable device.  Once need is established, the employee agrees to a “proper use” policy, i.e. what is allowed and what isn’t.  Finally, technical controls are applied: 8-10 character complex passwords, encrypted storage, patch management, anti-virus/anti-malware software, user rights and permissions restrictions, Active Directory integration, etc.  But even if you’ve followed your risk management procedures to the letter so far, this is where the real challenges begin, because mobile devices simply don’t have the same controls available to them that other portable devices like laptops do.  There are some additional controls available (like remote-wipe capability), but the end result of your risk assessment would most likely be that you have a higher inherent risk and insufficient controls, leading to a higher residual risk.  Under “normal” conditions, this would lead to a decision to NOT deploy mobile devices until risks can be reduced to acceptable levels, right?

And yet they are ubiquitous.
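The residual-risk arithmetic behind that dilemma can be made concrete with a toy scoring model. The 0-10 inherent-risk scale and the effectiveness fractions below are invented for illustration; no FFIEC or NIST formula is implied.

```python
# Toy residual-risk arithmetic: each control removes an estimated fraction of
# the remaining risk.  All numbers here are invented for illustration.

def residual_risk(inherent: float, control_effectiveness: list) -> float:
    """inherent: 0-10 scale; each effectiveness: fraction of risk removed (0-1)."""
    risk = inherent
    for eff in control_effectiveness:
        risk *= (1.0 - eff)
    return risk

# Laptop: moderate inherent risk, several strong controls available
# (encryption, patching, anti-virus, Active Directory integration)
laptop = residual_risk(7.0, [0.5, 0.4, 0.3])

# Mobile device: higher inherent risk, fewer effective controls available
mobile = residual_risk(8.0, [0.3, 0.2])

assert mobile > laptop  # higher residual risk, exactly the dilemma described above
```

Whatever scale you actually use, the point is the same: higher inherent risk plus weaker controls cannot produce lower residual risk.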

So back to the original question.  I’m not a big believer in writing a new policy to accommodate every new piece of technology you decide to implement, unless the technology cannot be accommodated within your existing policy framework.  It is far easier to make a simple policy change by mentioning the new technology, thereby acknowledging that it exists and that it fits within your current policy framework.  But in this case you are not really making a change, you are actually making a policy exception.  You are admitting that the residual risk of BYOD is unacceptably high, but that you are willing to accept the additional risk in return for potential productivity gains.  Since the Board of Directors is responsible for providing “…clear guidance regarding acceptable risk exposure levels…”, and for ensuring that “…appropriate policies, procedures, and practices have been established”, policy exceptions must be approved by the Board as well.  It is your responsibility to make sure they understand exactly what the risks are, and why you feel they are risks worth taking.

Hopefully risk management controls for mobile devices will continue to evolve and mature to the point where they match the controls for the other portable devices you currently manage.  But until they do, until they are capable of being risk-managed consistent with your existing policies, they (and all policy exceptions) represent a net reduction in your existing security profile.  And you cannot rationalize or justify taking short-cuts just because “everyone else is doing it”…or even worse, “we can’t stop it”.

Next, I’ll discuss possible solutions to this risk management challenge.

13 Jul 2011

Interpreting The New FFIEC Authentication Guidance – 5 Steps to Compliance

We’ve all now had a couple of weeks to digest the new guidance, and what has emerged is a clearer understanding of what the guidance requires…and what it doesn’t.  But before we can begin to formulate the specific compliance requirements, we have to interpret what the guidance is actually saying…and what it isn’t.  And along the way I’ll take the liberty of suggesting what it should have required…and what it should have said.

The release consists of 12 pages total, but only a couple of pages are directly relevant to your compliance efforts.  Pages 1 and 2 talk about the “why”, and pages 6 through 12 discuss the effectiveness (or ineffectiveness) of various authentication techniques and controls.  But beginning on page 3 (“Specific Supervisory Expectations”), through the top of page 6, is where the FFIEC details exactly what they expect from you once compliance is required in January of next year.  Since this is the real “meat” of the guidance, let’s take a closer look at what it says, and try to interpret what that means to you and your compliance efforts.

Here are the requirements:

  • Risk Assessments, conducted…
    • …prior to implementing new electronic services
    • …anytime “new” information becomes available, defined as:
      • Changes in the internal and external threat environment, i.e. if you become aware of any new threats
      • If your customer base changes, i.e. you take on a higher risk class of customer
      • Changes in the functionality of your electronic banking product, i.e. your vendor increases the capabilities of the product
      • Your fraud experience, i.e. you experience an account takeover or security breach
    • …at least every 12 months, if none of the preceding occurs

According to the guidance, your risk assessment must distinguish “high-risk” transactions from “low-risk” transactions. This is not as simple as saying that all retail customers are low-risk, and all commercial customers are high-risk. In fact, the FFIEC defines a “high-risk” transaction as “involving access to customer information or the movement of funds to other parties.” By this definition almost ALL electronic transactions are high-risk! A retail customer with bill-pay would qualify as high-risk because they access customer information (their own), and make payments (move funds) to other parties. So perhaps the way to interpret this is that your risk assessment process should distinguish between “high-risk” and “higher-risk”. This actually makes sense, because the frequency and dollar amounts of the transactions are what should really define risk levels and drive your specific layered controls, which is the next requirement.
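That “high-risk vs. higher-risk” reading can be sketched as a simple classifier driven by frequency and dollar amount. The definition of high-risk (access to customer information, or movement of funds to other parties) is from the guidance; the thresholds below are invented for illustration.

```python
# Sketch of the "high-risk vs. higher-risk" distinction.  The FFIEC definition
# is from the guidance; the volume and dollar thresholds are invented.

def transaction_risk(accesses_customer_info: bool, moves_funds: bool,
                     monthly_volume: int = 0, avg_dollar_amount: float = 0.0) -> str:
    if not (accesses_customer_info or moves_funds):
        return "low"
    # Everything past this point is "high-risk" by the FFIEC definition...
    if moves_funds and (monthly_volume > 50 or avg_dollar_amount > 10_000):
        return "higher"  # ...but frequency and dollars drive the layered controls
    return "high"

# Retail customer with bill-pay: high-risk by definition, but not "higher"
assert transaction_risk(True, True, monthly_volume=5, avg_dollar_amount=200) == "high"

# Commercial ACH originator: higher-risk, warranting more controls at each layer
assert transaction_risk(True, True, monthly_volume=200, avg_dollar_amount=25_000) == "higher"
```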

  • Layered Security Programs (also known as “defense-in-depth”)

This concept means that controls should be layered so that gaps or weaknesses in one control (or layer of controls) are compensated for by other controls. Controls are grouped by timing (where they fit into a hierarchy of layers), and also by type.  So taken from the top layer down:

  1. Preventive – This is the first layer, the top of the hierarchy, and for good reason…it’s always best to prevent the problem in the first place. These are by far the most important, and should comprise the majority of your controls. Some examples drawn from the guidance are:
    1. Fraud detection and monitoring (automated and manual)
    2. Dual customer authorization and control
    3. Out-of-Band (OOB) verification of transactions
    4. Multi-factor authentication
    5. “Positive Pay”, or payee white-lists
    6. Transaction restrictions such as time-of-day, dollar volume, and daily volume
    7. IP restrictions, or black lists
    8. Restricting account administrative activity by the customer
    9. Customer education (this was actually listed as a requirement, but it’s really a preventive control)
  2. Detective – Positioned directly below preventive in the hierarchy because detecting a problem is less optimal than preventing it. However, detective controls provide a safety net in the event preventive controls fail. Often a preventive control also has a detective component to it. For example, control #2 above can both require dual control, and also report if authentication fails on either login. Same with #5…if a payment is blocked because the payee is not “white-listed” (or is blocked by IP restrictions as in control #7), the institution can be automatically notified. In fact, most preventive controls also have a detective element to them.
  3. Corrective / Responsive – An important layer for making sure the event doesn’t recur. This typically involves analyzing the incident and trying to determine what control failed, and at what level of the hierarchy. However, just as many preventive controls can detect, the best controls have the ability to self-respond as well. For example, the same control that requires dual control and reports on failed logins can also lock the account. In fact, just as before, many preventive controls also have a responsive capability.

To properly demonstrate compliance with the layered security program requirement, you should have controls at each of the three layers. Simply put, higher risk transactions require more controls at each layer.
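As a sketch, that three-layer coverage requirement can be checked against a control inventory. The control names below reuse examples from the guidance; the data structure and the check itself are my own illustration.

```python
# Hypothetical control inventory keyed to the three layers above.  Control
# names reuse examples from the guidance; the structure is illustrative.

LAYERS = ("preventive", "detective", "corrective")

CONTROLS = {
    "preventive": ["multi-factor authentication", "dual customer authorization",
                   "positive pay", "transaction restrictions"],
    "detective":  ["fraud monitoring", "failed-login reporting"],
    "corrective": ["automatic account lockout", "incident analysis"],
}

def layer_gaps(controls: dict) -> list:
    """Return any layer with no control assigned (a gap in defense-in-depth)."""
    return [layer for layer in LAYERS if not controls.get(layer)]

assert layer_gaps(CONTROLS) == []  # all three layers covered
assert layer_gaps({"preventive": ["MFA"]}) == ["detective", "corrective"]
```

For higher-risk transactions, the same idea extends naturally: require not just one control per layer, but several.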

OK, now that you’ve classified your controls as preventive, detective, and corrective (preferably a combination of all three!), you’ll need to determine the control type: overlapping, compensating, or complementary.

Overlapping controls are simply different controls designed to address the same risk.  Sort of a belt-and-suspenders approach, where if one control misses something, a second (or third, or fourth) will catch it.  Again, higher-risk transactions should have multiple overlapping controls, and they should exist at each layer.  So you should have overlapping preventive controls, as well as overlapping detective and even corrective controls.  The idea is that no single failure of any control at any layer will cause a vulnerability.

Compensating controls are those where one control makes up for a known gap in another control (belt-or-suspenders).  This gap may exist either because it is impossible or impractical to implement the preferred control, or because of a known shortcoming in the control.  An example might be that you feel transaction restrictions (#6 above) are appropriate for a particular account, but the customer requires 24/7 transaction capability.  To compensate for this, you might require that positive pay and IP restrictions be implemented instead.

Complementary controls are often overlooked (indeed they aren’t mentioned at all in the guidance), but in my experience they are among the most effective controls…and the most common to fail.  Complementary controls (sometimes called “complementary user controls”) are those that require action on the part of the customer.  For example, the dual authorization control (#2 above) can be very effective IF the customer doesn’t share passwords internally.  Similarly, customer education (#9 above) requires cooperation from the customer to be effective.  Given the partnership between the institution and the customer (and recent litigation), I’m surprised there isn’t more discussion about the importance of complementary controls.  (I addressed some of these issues here.)

In summary, demonstrating compliance to the spirit and letter of the guidance is simply a matter of following these 5 steps:

  1. Risk rank your on-line banking activities from higher to lower based on account type (consumer or commercial), and by frequency and dollar amount.
  2. Apply overlapping, compensating, and complementary controls (#1–#9 above) in…
  3. …preventive, detective, and corrective layers,…
  4. …commensurate with the risk of the transaction.  Higher risk = more controls.
  5. Periodically repeat the process.

Nothing to it, right?!  Well not exactly, but hopefully this makes the guidance a bit clearer and easier to follow.

By the way, although I have been somewhat critical of the updated guidance, I don’t share the criticism of others that the guidance should have gone further and addressed mobile banking, or other emerging technology.  Frankly, if new guidance were issued every time technology progressed, we’d have a couple of updates every year.  Instead, focus on the fundamentals of good risk management.  After all, there is a good reason why the FFIEC hasn’t had to update their guidance on information security since 2006 (even though the threat landscape has changed dramatically since then)…they got it right the first time!

14 Jun 2011

SOC 2 vs. SAS 70 – 5 reasons to embrace the change

The SOC 2 and SOC 3 audit guides have recently been released by the AICPA, and the SAS 70 phase-out becomes effective tomorrow.  The more I learn about these new reports, the more I like them.  First of all, as a service provider to financial institutions we will have to prepare for this engagement (just as we did for the SAS 70), so it’s certainly important to know what changes to expect from that perspective.  But as a trusted adviser to financial institutions struggling to manage the risks of outsourced services (and the associated vendors), the information provided in the new SOC 2 and SOC 3 reports is a welcome change. In fact, if your vendor provides services such as cloud computing, managed security services, or IT outsourcing, the new SOC reports provide exactly the assurances you need to address your concerns. Here is what I see as the 5 most significant differences between the SAS 70 and the SOC 2 reports, and why you should embrace the change:

Management Assertion – Management must now provide a written description of their organization’s system (software, people, procedures, and data) and controls. The auditor then expresses an opinion on whether or not management’s assertion is accurate. Why should this matter to you? For the same reason the regulators want your Board and senior management to approve everything from lending policy to DR test results…accountability.

Relevance – The SAS 70 report was always intended to be a your-auditor-to-their-auditor communication; it was never intended to be used by you (the end user) to address your institution’s concerns about vendor privacy and security. The SOC reports are intended as service-provider-to-end-user communications, with the auditor providing verification (or attestation) as to accuracy.

Scope – Although the SAS 70 report addressed some of these, the SOC 2 report directly addresses all of the following 5 concerns:

  1. Security. The service provider’s system is protected against unauthorized access.
  2. Availability. The service provider’s system is available for operation as contractually committed or agreed.
  3. Processing Integrity. The provider’s system is accurate, complete, and trustworthy.
  4. Confidentiality. Information designated as confidential is protected as contractually committed or agreed.
  5. Privacy. Personal information (if collected by the provider) is used, retained, disclosed, and destroyed in accordance with the provider’s privacy policy.

If these sound familiar, they should.  The FFIEC Information Security Booklet lists the following security objectives that all financial institutions should strive to accomplish:

  1. The Privacy and Security elements of GLBA
  2. Availability
  3. Integrity of data or systems
  4. Confidentiality of data or systems
  5. Accountability
  6. Assurance

As you can see, there is considerable overlap between the FFIEC requirements and the scope of a typical SOC 2 engagement.

Testing – Like the SAS 70, the SOC 1 and SOC 2 are available in both a Type I and a Type II format.  A Type I speaks only to the adequacy of vendor controls, but the Type II gives management assurance that the vendor’s controls are not just adequate, but also effective.  The auditor can do this in a Type II engagement because they are expected to not just inquire about control effectiveness, but actually observe the control operating effectively via testing.  And because the scope of the SOC 2 is significantly greater than the SAS 70 (see above), the test results are much more meaningful to you.  In fact, the SOC 2 audit guide itself suggests that because your concerns (particularly as a financial institution) are in both control design and effectiveness, a Type I report is unlikely to provide you with sufficient information to assess the vendor’s risk management controls.  For this reason you should insist on a Type II report from all of your critical service providers.

Vendor Subcontractors – This refers to the subcontractor(s) of your service provider, and again this is another FFIEC requirement that is directly addressed in the new SOC reports.  The FFIEC in their Outsourcing Technology Services Booklet states that an institution can select from two techniques to manage a multiple service provider relationship:

  1. Use the lead provider to manage all of their subordinate relationships, or
  2. Use separate contracts and operational agreements for each subcontractor.

The booklet suggests that employing the first technique is the least cumbersome for the institution, but that either way:

“Management should also ensure the service provider’s control environment meets or exceeds the institution’s expectations, including the control environment of organizations that the primary service provider utilizes.”

The audit guidelines of the SOC 2 engagement require the service auditor to obtain an understanding of the significant vendors whose services affect the service provider’s system, and assess whether they should be included in the final report, or “carved-out”.  Given the regulatory requirement for managing service providers you should insist on an “inclusive” report.

In summary, there will be an adaptation curve as you adjust to the new reports, but in the end I think this is a major step in the right direction for the accuracy and effectiveness of your vendor management program.

16 Mar 2011

Risk Managing Social Media – 4 Challenges

Twitter, LinkedIn, Facebook, Google+…the decision to establish an on-line presence is a very popular topic these days, and it is extremely easy to do, but effectively managing social media risk can be frustratingly complicated.  In many ways, it just doesn’t lend itself to traditional risk management techniques, so the standard pre-entry justification process is much more difficult.  And because you are expected to assess the risks before you jump in, many of you may already be accepting unknown risks.

I see 4 big challenges to managing social media risk:

  1. Strategic Risk – If you determine that engaging in social media would be beneficial to achieving the goals and objectives of your business plan, you’ve made a strategic decision.  But even if you decide NOT to engage, you’ve still made a strategic decision because strategic risk exists if you fail to respond to industry changes.  (“If you choose not to decide, you still have made a choice”*.)  And you are expected to justify your strategy by periodically assessing whether or not you have achieved the goals you anticipated when you made the decision  to engage in social media, which leads to challenge #2:
  2. Cost / Benefit – This is closely related to strategic, but relates to the difficulty of quantifying both the costs (strategic and otherwise) and the tangible benefits.  Most institutions decide to engage in social media as a “me too” reaction, but 1 or 2 years later they can’t go back and validate their decision on business grounds because they didn’t have well defined, quantifiable, expectations going in.  Anchor your decision on a set of specific goals, which could include increased brand or product exposure, but which should ultimately be defined  in terms of an increase in capital and earnings.  And although there is a very small financial barrier to entry, there are other costs which leads to my next challenge;
  3. Reputation Risk – This is where the decision to not engage in social media really manifests itself, because reputation risk exists regardless…it cannot be avoided.  All it takes is one disgruntled employee or customer (or a competitor) to post a negative comment about you or your products or services on-line, and your reputation could suffer.  If you do have an on-line presence, you may be able to quickly respond to counter the comments, but if you decided to stay out you have no recourse.  Also, are your employees blurring the line between their professional lives as official (and controllable) representatives of your institution, and their (un-controlled) personal, on-line lives?  In a traditional risk management model, each risk identified would be accompanied by an off-setting control or set of controls.  In the case of reputation risk, there really is no way to off-set, or control, the risk.  This brings me to the final, and perhaps biggest, challenge:
  4. Residual Risk – This is the end result of the risk management process; the amount of risk remaining after the application of controls.  Essentially, this is what you deem “acceptable” risk.  Since social media risk can never be completely avoided (see #3 above), you are already accepting some measure of risk.  The challenge is to quantify it.  Auditors and examiners expect you to have a firm grasp on residual risk, because that is really the only way to validate the effectiveness of your risk management program.  An uncertain or inaccurate level of residual risk implies to examiners an ineffective (or even non-existent) risk assessment.

So managing social media risk boils down to this:  You must be able to justify your decision (both to engage and to not engage) strategically, but to do so requires an accurate cost/benefit analysis.  Both costs (reputation, and other residual risks) and benefits (strategic) are extremely difficult to quantify, which means that in the end you are accepting an unknown level of risk, to achieve an uncertain amount of benefit. Ordinarily that would be a regulatory red-flag, but clearly many institutions currently have an on-line social media presence.  So at this point the question becomes not so much how did they arrive at that decision, but how will they justify their decision (and manage the risk) going forward?

 

*Lee, Geddy; Lifeson, Alex; Peart, Neil

03 Mar 2011

FDIC issues new FIL…

…and pretty much confirms what most of us already knew: regulatory scrutiny has increased across the board.  FIL-13-2011, entitled “Reminder on FDIC Examination Findings”, was just released March 1st and, in spite of the title, is not so much a reminder as a response.  Here is the one-line summary:

“Recently, the FDIC has received some criticism that its examination findings have been overly harsh.”

Make no mistake, this is NOT a reminder, this is a response to a flurry of criticism from financial institutions who feel that:

  1. Their examiners are finding fault with policies, procedures and practices that they have not had problems with in past examinations, and
  2. The examiners are less willing to “work with them” to resolve the findings during the examination…before they appear in the exit letter.

I have heard the same criticism from our customers, and I think it is highly significant that the FDIC has seen fit to issue an FIL to address this.  This confirms that the problem is not sporadic, it is endemic, and it is the new normal.

The FIL goes on to describe the procedures by which an institution might formally express their concerns, but in the end there is little the institution can do to change the findings.  My attitude is that there are really only 3 ways to respond to an examiner finding:

  1. Admit that the finding is valid, and commit to making the recommended change(s). The vast majority are handled this way.
  2. Contest the finding.  This is a viable option only if you can demonstrate that you’ve made a different interpretation of the underlying guidance, and as a result of your risk analysis, you’ve come to a different conclusion.  If properly documented, this can be a very effective response.
  3. Refuse the finding.  This is an adversarial position and NOT really recommended, but I see this more often than you would think.

Given the new normal, the second option makes the most sense IF you’ve implemented an effective risk management process, because in the final analysis all examiner findings are about one thing…they believe you’ve accepted too much risk.  I’ve addressed effective risk management in detail here.

One other thing caught my eye in the FIL, because the fact that the FDIC felt it necessary to address it indicates that it has become an issue: “Prohibition Against Retaliation”. Apparently some institutions feel that not only are the examiners more critical, but that they have experienced “…retaliation, abuse, or retribution by an agency examiner…”.  This may be because institutions are choosing the adversarial option.  Even more reason to make sure that if and when you do decide to push back on an examiner finding, you do so in a logical, dispassionate way.  Make a risk-based case that focuses on the residual, or remaining, risk.  The vast majority of findings revolve around the examiner’s belief that you haven’t properly recognized that residual risk, and that as a result, it’s unacceptably high.  If you can demonstrate that you do in fact understand the risks, and have decided to accept them as a business decision, you will eliminate the vast majority of examination findings.