Author: Tom Hinkel

As author of the Compliance Guru website, Hinkel shares easy-to-digest information security tidbits with financial institutions across the country. With almost twenty years’ experience, Hinkel’s expertise spans the entire spectrum of information technology. He is also the VP of Compliance Services at Safe Systems, a community banking tech company, where he ensures that its services incorporate the appropriate financial industry regulations and best practices.
20 Aug 2013

Ask the Guru: Vendor vs. Service Provider

Hey Guru
I recently had an FDIC examiner tell me that we needed to make a better distinction between a vendor and a service provider.  His point seemed to be that by lumping them together in our vendor management program we were “over-analyzing” them.  He suggested that we should be focused instead only on those few key providers that pose the greatest risk of identity theft.  Our approach has always been to assess each and every vendor.  Is this a new approach?


I don’t think so, although I think I know where the examiner is coming from on the vendor vs. service provider distinction.  First of all, let’s understand what is meant by a “service provider”.  Traditionally, a service provider was one that provided services subject to the Bank Service Company Act (BSCA), which dates back to 1962.  As defined in Section 3 of the Act, these services include:

“…check and deposit sorting and posting, computation and posting of interest and other credits and charges, preparation and mailing of checks, statements, notices, and similar items, or any other clerical, bookkeeping, accounting, statistical, or similar functions performed for a depository institution.”

But lately the definition has expanded way beyond the BSCA, and today almost anything you can outsource can conceivably be provided by a “service provider”.  In fact, according to the FDIC, the products and services provided can vary widely:

“…core processing; information and transaction processing and settlement activities that support banking functions such as lending, deposit-taking, funds transfer, fiduciary, or trading activities; Internet-related services; security monitoring; systems development and maintenance; aggregation services; digital certification services, and call centers.”

Furthermore, in a 2010 interview with BankInfoSecurity, Don Saxinger (Team Lead – IT and Operations Risk at FDIC) said this regarding what constitutes a service provider:

“We are not always so sure ourselves, to be quite honest…but, in general, I would look at it from a banking function perspective. If this is a function of the bank, where somebody is performing some service for you that is a banking function or a decision-making function, including your operations and your technology and you have outsourced it, then yes, that would be a technology service that is (BSCA) reportable.”

Finally, the Federal Reserve defines a service provider as:

“… any party, whether affiliated or not, that is permitted access to a financial institution’s customer information through the provision of services directly to the institution.   For example, a processor that directly obtains, processes, stores, or transmits customer information on an institution’s behalf is its service provider.  Similarly, an attorney, accountant, or consultant who performs services for a financial institution and has access to customer information is a service provider for the institution.”

And in its Guidance on Managing Outsourcing Risk:

“Service provider is broadly defined to include all entities that have entered into a contractual relationship with a financial institution to provide business functions or activities.”

So access to customer information seems to be the common thread, not necessarily the services provided.  Clearly the regulators have an expanded view of a “service provider”, and so should you.  Keep doing what you’re doing.  Run all providers through the same risk-ranking formula, and go from there!

One last thought…don’t get confused by different terms.  According to the FDIC as far back as 2001, other terms synonymous with “service provider” include vendors, subcontractors, external service providers (ESPs), and outsourcers.

05 Aug 2013

Critical Controls for Effective Cyber Defense – Converging Standards?

Earlier this year the SANS Institute issued a document titled “Critical Controls for Effective Cyber Defense”.  Although not specific to financial institutions, it provides a useful prescriptive framework for any institution looking to defend its networks and systems from internal and external threats.  The document lists the top 20 controls institutions should use to prevent and detect cyber attacks.

This document actually preceded the announcement by the FFIEC in June that it was forming a working group to “promote coordination across the federal and state banking regulatory agencies on critical infrastructure and cybersecurity issues”.  I mentioned this announcement here in relation to its possible effect on future regulatory guidance.  So I was particularly interested in any overlap, any common thread, between this initiative and the SANS document.  If there was any overlap between the organizations contributing to the SANS list and the FFIEC cybersecurity working group, we might have the basis for a common, consistent set of prescriptive guidance.  Could a single “checklist”-type information security standard be in the works?

For example, the Information Security Handbook requires financial institutions to have “…numerous controls to safeguard and limit access to key information system assets at all layers in the network stack.”  It then goes on to suggest general best practices in various categories for achieving that goal, leaving the specifics up to the institution.

Contrast that to the much more specific SANS Critical Control list.  Here are the first 5:

  • Critical Control 1:  Inventory of Authorized and Unauthorized Devices
  • Critical Control 2:  Inventory of Authorized and Unauthorized Software
  • Critical Control 3:  Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
  • Critical Control 4:  Continuous Vulnerability Assessment and Remediation
  • Critical Control 5:  Malware Defenses

As you can see, although the goal of protecting information assets is the same in each case, the SANS list is much more specific.  Could we possibly see a convergence of the general guidance of the FFIEC with the more specific control requirements of SANS, with cybersecurity as the common goal?  Again, a look at the common contributors to each group might provide a clue (but first, a quick illustration of just how concrete these controls are).

The SANS group credits input from multiple agencies of the U.S. government: the Department of Defense, Homeland Security, NIST, the FBI, the NSA, the Department of Energy, and others.  The FFIEC working group coordinates with groups such as the FFIEC’s Information Technology Subcommittee of the Task Force on Supervision, the Financial and Banking Information Infrastructure Committee, the Financial Services Sector Coordinating Council, and the Financial Services Information Sharing and Analysis Center (FS-ISAC).  So no direct common thread there, unfortunately.  However, the FS-ISAC group does share many partners with the SANS group, including the Departments of Defense, Energy, and Homeland Security, so we may yet see the FFIEC Information Security guidance evolve, particularly since the Handbook was published back in 2006 and is overdue for a major update.  In the meantime, financial institutions would be well advised to use the SANS Critical Controls as a de facto checklist to measure their own security posture.*

By the way, the document also lists 5 critical tenets of an effective cyber defense system, 2 of which are ‘Continuous Monitoring’ and ‘Automation’.  More on those in a future post (although I already addressed the advantages of automation here).

* There is nothing in the SANS list that is inconsistent with FFIEC requirements; in fact, we’ve already seen at least one company servicing the credit union industry adopt this list as its framework.  However, keep in mind that although the controls listed are necessary for cyber defense, they are not sufficient.  A fully compliant information security program must also address management and oversight…an area conspicuously absent from the SANS list.

23 Jul 2013

Ask the Guru: Fedline in the lobby

Hey Guru,

I have a question about Fedline.  Will regulators write us up for having Fedline on a PC in the lobby of the bank?

Possibly; I have seen that happen.  The issue is the extreme sensitivity of the data processed on that device, so if you want to leave it where it is, your response should focus on the physical and administrative controls in place.  For example, how is the device physically secured?  Is it completely in the open, or behind a barrier of some sort?  Could anyone simply walk up to it and sit down?  Is it clearly identified as the Fedline machine?  What about passwords and authentication devices?  Is it left logged in?  Is dual authentication required to access Fedline: one login to the network and another to the application?  What about dual control for transactions?  Who reviews activity reports?  How often?

So they may say something, but if you have a response ready that addresses these questions, they probably won’t write you up for it.  On the other hand, if you just don’t want to deal with the hassle, you can put it behind the teller line.  Oh, and one more thing…wherever you decide to put the Fedline PC, don’t use a wireless keyboard!

04 Jun 2013

Incident Response in an Outsourced World

UPDATE – On June 6th the FFIEC formed the Cybersecurity and Critical Infrastructure Working Group, designed to enhance communication between and among the FFIEC member agencies as well as other key financial industry committees and councils.  The goal of this group will undoubtedly be to increase the defense and resiliency of financial institutions against cyber attacks, but the question is: what effect will this have on new regulatory requirements and best practices?  Will annual testing of your incident response plan become a requirement, just as testing your BCP is now?  I think you can count on it…

I’ve asked the following question at several recent speaking engagements:  “Can you remember the last time you heard about a financial institution being hacked, and having its information stolen?”  No responses.  I then ask a second question:  “Can anyone remember the last time a service provider was hacked, and financial institution data stolen?”  Heartland…TJX…FIS…almost every hand goes up.

As financial institutions have gotten pretty good at hardening and protecting data, cyber criminals are focusing more and more on service providers as the weak link in the information security chain.  And wherever there are incidents making the news, the regulators are sure to follow with new regulations and increased enforcement of existing ones.

The regulators make no distinction between your responsibilities for data within your direct control and data outside your direct control:

“Management is responsible for ensuring the protection of institution and customer data, even when that data is transmitted, processed, stored, or disposed of by a service provider.” (Emphasis added)

In other words, you have 100% of the responsibility, and zero control.  All you have is oversight, which is at best predictive and reactive, and NOT preventive.  So you use the vendor’s past history and third-party audit reports to try to predict their ability to prevent security incidents, but in the end you must have a robust incident response plan to effectively react to the inevitable vendor incident.

The FFIEC last issued guidance on incident response plans in 2005 (actually just an interpretation of GLBA 501(b) provisions), stating that…

“…every financial institution should develop and implement a response program designed to address incidents of unauthorized access to sensitive customer information maintained by the financial institution or its service provider.” (Emphasis added)

The guidance specified certain minimum components for an incident response plan, including:

  • Assessing the nature and scope of an incident and identifying what customer information systems and types of customer information have been accessed or misused;
  • Notifying its primary federal regulator as soon as possible when the institution becomes aware of an incident involving unauthorized access to or use of sensitive customer information;
  • If required, filing a timely SAR, and in situations involving federal criminal violations requiring immediate attention, such as when a reportable violation is ongoing, promptly notifying appropriate law enforcement authorities;
  • Taking appropriate steps to contain and control the incident to prevent further unauthorized access to or use of customer information; and
  • Notifying customers when warranted in a manner designed to ensure that a customer can reasonably be expected to receive it.

The guidance goes on to state that even if the incident originated with a service provider, the institution is still responsible for notifying its customers and regulator.  Although the institution may contract that duty back to the service provider, I have personally not seen notification outsourcing become commonplace, and in fact I would not recommend it.  An incident could carry reputation risk, but a mishandled regulator or customer notification could carry significant regulatory and financial risks.  In other words, while the former could be embarrassing and costly, the latter could shut you down.

So to summarize the challenges:

  • Financial institutions are outsourcing more and more critical products and services.
  • Service providers must be held to the same data security standards as the institution, but…
  • …the regulators are only slowly catching up, resulting in a mismatch between the FI’s security and the service provider’s.
  • Cyber criminals are exploiting that mismatch to increasingly, and successfully, target institutions via their service providers.

What can be done to address these challenges?  Vendor selection due diligence and ongoing oversight are still very important, but because of the lack of control, an effective incident response plan is the best, and perhaps only, defense.  Yes, preventive controls are always best, but lacking those, being able to quickly react to a service provider incident is essential to minimizing the damage.  When was the last time you reviewed your incident response plan?  Does it contain all of the minimum elements listed above?  Better yet, when was the last time you tested it?

Just as with disaster recovery, the only truly effective plan is one that is periodically updated and tested.  But unlike DR plans, most institutions don’t even update their incident response plans, let alone test them.  And while there are no specific indications that regulators have increased scrutiny of incident response plans just yet, I would not be at all surprised if they do so in the near future.  Get ahead of this issue now by updating your plan and testing it.  Use a scenario from recent events; there are certainly plenty of real-world examples to choose from.  Gather the members of your incident response team together and walk through your response; the entire test shouldn’t take more than an hour or so.  Answer the following questions:

  1. What went wrong to cause the incident?  Why (times 5…root cause)?  If this is a vendor incident, full immediate disclosure of all of the facts to get to the root cause may be difficult, but request them anyway…in writing.
  2. Was our customer or other confidential data exposed?  If so, can it be classified as “sensitive customer information”?
  3. Is this a reportable incident to our regulators?  If so, do we notify them or does the vendor?  (Check your contract)
  4. Is this a reportable incident to our customers?  How do we decide if “misuse of the information has occurred or it is reasonably possible that misuse will occur”?
  5. Is this a reportable incident to law enforcement?
  6. What if the incident was a denial of service attack, and no customer information was involved?  A response may not be required, but should you respond anyway?
  7. What can we do to prevent this from happening again (see #1), and if we can’t prevent it, are there steps we should take to reduce the possibility?  Can the residual risk be transferred?

Make sure to document the test, and then test again the next time an incident makes the news.  It may not prevent the next incident from involving you, but it could definitely minimize the impact!

 

NOTE:  For more on this topic, Safe Systems will be hosting the webinar “How to Conduct an Incident Response Test” on 6/27.  The presentation will be open to both customers and non-customers and is free of charge, but registration is required.  Sign up here.

 

24 Apr 2013

The Financial Institutions Examination Fairness and Reform Act – Redux

This new bill (H.R. 1553), introduced on April 15th, is actually a word-for-word duplicate of H.R. 3461, which I wrote about here.   The previous bill died in committee, but H.R. 1553 has a few more sponsors.  Now, I know what you are thinking…that there is no such thing as “good” regulation.   But bear with me, because this bill actually is good for the industry, banks and credit unions alike.

The full text of the bill is here, and I encourage everyone to read it and consider throwing your support behind it, but in summary it…

  1. …requires that examiners issue their final examination reports in a timely manner, no later than 60 days following the exit interview.  This is important for FIs because all examination findings must be reported to the Board and assigned to responsible parties for remediation.  I have heard stories of institutions waiting 6+ months for a final report, and this just isn’t fair.  Since most institutions are on a 12-month examination cycle, this only leaves a few months for remediation in order to avoid repeat findings.
  2. …requires that the examiner make available all factual information relied upon in support of their findings.  This levels the playing field, allowing institutions to see exactly why findings occurred and better preparing them to push back if they think a finding is incorrect.
  3. …changes the treatment of commercial loans.  Currently, if the value of the underlying collateral declines, the loan may be forced into non-accrual status regardless of the repayment capacity of the borrower.  The bill would prevent that from happening.  It would also prevent a new appraisal on a performing loan unless new funds were involved.  It stands to reason that tying non-accrual status to non-performing status should result in higher asset quality assessments, fairer reserve requirements, fewer enforcement actions, and fewer CAMELS score downgrades.
  4. …requires a standard definition of “non-accrual” along with consistent reporting requirements.  The less institutions are subject to examiner interpretation, the more predictable the examination experience will be.  Lack of consistency, resulting in unpredictable outcomes, is the single biggest complaint about the examination process.  I don’t know of a single institution that wants to “get away” with anything during an exam; they only want to know what to expect.
  5. …establishes an “examination ombudsman” in the office of the FFIEC, independent of any regulatory agency.  In addition to providing a more impartial forum for the presentation of grievances regarding the examination process, the ombudsman would also be responsible for ensuring that all examinations adhere to the same standards of consistency.  In the survey conducted with my previous post on this topic, 60% of respondents said that they would be more likely to appeal an exam finding if the appeal process was with the FFIEC as opposed to the regulator that conducted the exam.
  6. …would prohibit retaliation against the FI for exercising its rights under the appeals process, and would delay any further agency action until the appeals process is complete.

Of course it’s a long road from a bill to a law, but I think you would agree that all these things are good for the industry.  At the very least, any regulation that gives bankers more control and less uncertainty is a welcome change from recent events!  You can track it here.  A companion Senate bill was also just introduced, S. 727.  Track it here.

05 Apr 2013

The Problem with PEN Tests

This is a true story; the names have been changed to protect the guilty.  Al Akazam (not his real name) is an IT consultant with a solid background in technology who wants to expand his practice into network penetration (PEN) testing.  So he downloaded a copy of Nessus, a powerful, open-source vulnerability scanner…and just like that, Al Akazam was a PEN tester!  Armed with this new tool, Al secured his first client, a financial institution.  The institution was aware of the FFIEC guidance to periodically validate the effectiveness of its security controls through testing, and although Al possessed neither audit credentials nor much experience with financial institutions, he seemed to know what he was talking about, and the institution engaged him.

Al got the institution to allow him to connect to the internal trusted network, where he activated his scanner and sat back to let it do its magic.  An hour or 2 later the scan was complete, and Al had a couple hundred pages of results, some of which (according to his magic scanning tool) were very severe indeed.  Confident that he had uncovered serious and immediate threats to the network, Al rushed the 200-page report to management, who were understandably very alarmed.  Al completed the engagement secure in his belief that he had performed a valuable service…but in fact he had done just the opposite.  He had done the institution a disservice.  By not evaluating the threats in the context of the institution’s entire security environment, Al misrepresented the actual severity of the threats and unnecessarily alarmed management.

A vulnerability’s true threat impact, its exploitation factor, is best expressed in a formula:

Threat impact = (vulnerability * exploitation probability) – mitigating controls

Al simply took the list of potential vulnerabilities the scanner spit out, and without factoring in the exploitation probability, or factoring out the existing controls, changed the equation to:

Threat = vulnerability

What he should have done was take the threats he found and evaluate them in the context of the institution’s specific environment: what preventive measures were in place, and how effective are they, i.e., how likely is it that the vulnerability would be exploited?  And if preventive measures failed, what detective and corrective measures are in place to minimize the impact?  The question Al should be addressing is not “what does my magic scanner say about the risk?” but “what is the actual risk?”  Simply put, Al got lazy (more on that later).

What else did Al do wrong?

  • He didn’t start with an external scan.  Since the external interface(s) are the ones getting the most attention from the hackers, they should also get more preventive, detective and corrective resources directed towards them.  A risk-based approach demands that testing should always start at the outside, and work its way in.
  • The institution gave him privileged access to the internal network, which is not realistic and does not simulate a real attack.  Sure, it’s possible that malware could allow an attacker access, and privilege-elevation exploits can theoretically allow the attacker to gain privileged access, but is it likely?  How many layers of controls would have to fail for that to happen?
  • Again, he got lazy.  He should have gone further in his testing by taking one of the most severe vulnerabilities and trying to exploit it.  Only then would management understand the true risk to the institution, and be able to cost-justify the allocation of resources to address it.
  • He didn’t understand financial institutions.  Bankers understand the concept of “layered security”, and how having multiple controls at various control points reduces the risk that any one failed control will result in an exploit.  The vast majority of today’s financial institution networks are built using a layered security concept, and have been for some time.  Shame on Al for not recognizing that.
  • He presented management with a meaningless report.  Instead of simply regurgitating the scanner severity ratings in the management report, he should have adjusted them for the control environment.  In other words, if the scanner said a particular vulnerability was a 10 on a scale of 1–10, but the probability of exploit was 50% and other overlapping and compensating controls were present, the actual threat might be closer to a 3 or 4.

I’ve seen this scenario several times over the last few years, and in most (but not all) cases, when the PEN tester is presented with the flaws in their methodology, they adjust accordingly.  This is important, because a bad PEN test result has a ripple effect…you now have to expend resources to address issues that may not actually need addressing when placed in proper context.  You have to present the report to management with an explanation of why it’s really not as bad as it looks, and you have to make the report available to your examiner during your next safety and soundness examination.  So for all these reasons, if you are a banker facing a similar situation, push back as hard as you can.  And get outside help from an auditor or IT consultant to make your case if necessary.

Are you a PEN tester or auditor?  What is your approach to automated scanners and test results?  Do you adjust for the overall control environment?