Category: From the Field

09 Oct 2012

FDIC Institutions still getting UIGEA (Reg GG) findings – UPDATE

Update 1 – 12/5/2011 to add examination procedures*.

Update 2 – 2/13/2012 to emphasize policy requirements.

Update 3 – 10/8/2012 to add specific courses of action if the FI has “actual knowledge” of restricted transactions.

We first saw this trend back in July 2011, and continue to see it, so I’m calling this a definite trend as opposed to an anomaly.  Here is the background:  The Unlawful Internet Gambling Enforcement Act of 2006 (“UIGEA”) prohibits any person, including a business, engaged in the business of betting or wagering from knowingly accepting payments in connection with the participation of another person in unlawful Internet gambling.  As a result, the Agencies (FDIC, OCC, NCUA, Federal Reserve) issued Reg GG, requiring financial institutions to establish policies and procedures “reasonably designed to identify and block, or otherwise prevent or prohibit, restricted (gambling) transactions” with compliance required as of June 1, 2010.

Most institutions have measures, built into their account-opening procedures by their core vendor, to comply with this Reg, but the recent examination findings seem to address the lack of a specific UIGEA policy.  This would indicate that procedures alone may no longer be enough to demonstrate compliance (i.e., “we’re doing it even though we don’t say we are” isn’t enough).  So what are you supposed to do?  Make sure you have a specific written UIGEA policy, and that it is designed to address the following:

  • Don’t assume that just because you have no (or few) commercial customers you aren’t required to have a policy.  The implementation burden is lessened, but a policy is still required.
  • Designate a person responsible for UIGEA compliance (this was a specific finding in one of the recent examinations).
  • Focus on establishing a due diligence process when initiating a commercial customer relationship.
  • Communicate to your commercial customers contractually up front (and periodically throughout the relationship) that restricted transactions are prohibited.  Your policy should state that the commercial customer agrees not to originate or receive restricted transactions throughout the customer relationship.  If the risk warrants, a certification from the customer is recommended.
  • Your due diligence obligations do not end once the account is opened.
  • Specify a course of action to be followed in case you have “actual knowledge” that a customer has violated the policy (a rough sketch in code follows this list).  For example:
    • Perform an account review
    • Suspend activity on the account
    • Contact the customer
    • Contact legal counsel (if appropriate)
    • Close the account
    • File a SAR, if warranted
    • Contact regulatory authorities
    • Contact law enforcement
    • If cooperating with law enforcement, and so advised by same, continue processing
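
To make that course of action concrete, here is a minimal sketch of an escalation workflow in code. It is purely illustrative: the case structure, step names, and the law-enforcement hold flag are my own assumptions, not anything prescribed by Reg GG, and your policy should name the actual steps and the responsible officer.

```python
# Hypothetical sketch of an "actual knowledge" escalation workflow.
# All names and steps are illustrative, not prescribed by Reg GG.
from dataclasses import dataclass, field

@dataclass
class EscalationCase:
    account_id: str
    law_enforcement_hold: bool = False  # True if law enforcement advises continued processing
    steps_taken: list = field(default_factory=list)

    def log(self, step: str) -> None:
        self.steps_taken.append(step)  # every step is documented

def handle_actual_knowledge(case: EscalationCase) -> None:
    """Walk the policy's course of action in order, documenting each step."""
    case.log("Performed account review")
    if not case.law_enforcement_hold:
        case.log("Suspended activity on the account")
    case.log("Contacted the customer")
    case.log("Contacted legal counsel (as appropriate)")
    # Closure, SAR filing, and regulator/law-enforcement contact are judgment
    # calls the policy should assign to the designated compliance officer.
    case.log("Evaluated account closure, SAR filing, and notifications")

case = EscalationCase(account_id="example-0001")
handle_actual_knowledge(case)
print("\n".join(case.steps_taken))
```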

There are additional regulatory expectations if you actually have customers that are legally allowed to engage in an Internet gambling business, i.e., through U.S. State or Tribal authority.  In fact, when I started getting reports of UIGEA policy deficiencies, my first thought was that all the institutions shared that common denominator…they had customers legally engaging in Internet gambling.  That was not the case, however.  It would appear that this is just the latest regulatory “hot button”.

* Download the full Act; examination procedures are in Attachment C.

03 Jul 2012

“Operational Risk Increasing”

In a recent speech to the Exchequer Club1, Thomas J. Curry, the new head of the OCC, stated that although asset quality has improved, charge-off rates have fallen, and capital now stands at its highest level in a decade, another type of risk is gaining increasing prominence: operational risk.

“Some of our most seasoned supervisors, people with 30 or more years of experience in some cases, tell me that this is the first time they have seen operational risk eclipse credit risk as a safety and soundness challenge.  Rising operational risk concerns them, it concerns me, and it should concern you.”

In fact, the OCC currently considers it to be at the top of the list of safety and soundness issues for the institutions it supervises.  Earlier this year I wrote about how risk assessments were one of the compliance trends of 2012, and how regulators are now asking about things like strategic risk, reputation risk, and operational risk, and expecting these risks to be assessed alongside more traditional categories like privacy and security.

So the question is:  What exactly is operational risk, and how can financial institutions effectively address it?  The FFIEC defines it this way:

“Operational risk (also referred to as transaction risk) is the risk of loss resulting from inadequate or failed processes, people, or systems. The root cause can be either internal or external events. Operational risk is present across all business lines.”

Furthermore, because the implications of operational risk extend to all other risks…

“Management should distinguish the operational risk component from other risks to enable a stronger focus on operational risk mitigation.”

If you are still a bit confused about exactly what operational risk looks like, you are not alone.  Because it exists in all business lines and manifests itself in every other risk, it is one of the most difficult risks to assess.  In other words, it’s everywhere…and affects everything!

Simply put (and assuming your policies and procedures are adequate), most of the time operational risk can be defined as a failure to adhere to your own internal policies and procedures.  In other words, if you don’t do what you say you will do, or you don’t do it the way you say you’ll do it, something will fail as a result.  Whether it’s a process, a control, a system, or a risk model…if it is in place and operational, but either flawed or not followed, operational risk is the result.2   But here is the kicker: even if your processes, procedures, models, etc. are flawless and followed to the letter, if you can’t document that they are, you may still have a high operational risk finding in your next safety and soundness examination.

The best way to address operational risk is to implement an internal control self-assessment process to assure that risk management controls are adequate, in place, and functioning properly.  Reporting will document that your day-to-day practices follow your written procedures.  Finally, make sure all business decisions reflect the goals and objectives of the strategic plan, and report to the Board on a regular basis.
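
For illustration only, the self-assessment can be as simple as recording, for each control, whether it is documented, in place, and actually followed, and flagging any divergence between practice and policy. The control names and fields in this sketch are hypothetical, not a regulatory format.

```python
# Hypothetical control self-assessment: flag any control where day-to-day
# practice diverges from written procedure (i.e., operational risk).
controls = [
    {"name": "Daily backup verification", "documented": True, "in_place": True, "followed": True},
    {"name": "Quarterly access review",   "documented": True, "in_place": True, "followed": False},
    {"name": "Patch testing procedure",   "documented": True, "in_place": False, "followed": False},
]

def assess(control: dict) -> str:
    if control["documented"] and control["in_place"] and control["followed"]:
        return "OK"
    if control["documented"]:
        return "GAP: documented but not performed as written"
    return "GAP: undocumented"

# The resulting report is the documentation examiners will want to see.
for c in controls:
    print(f'{c["name"]}: {assess(c)}')
```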

In summary, integrate assessment of operational risk into your risk management process, and expect to hear more about it from the regulators in the future.  And don’t think that because you aren’t regulated by the OCC you won’t see this trend.  After all, as Mr. Curry stated:

“As regulators, one of our most important jobs is to identify risk trends and bring them to the industry’s attention in a timely way. No issues loom larger today than operational risk in all its dimensions, the manner in which all risks interact, and the importance of managing those risks in an integrated fashion across the entire enterprise.”


1 The Exchequer Club is comprised of senior professionals from trade associations, federal regulatory agencies, law firms, congressional committees and national press with a primary interest in national economic and financial policy.

2 Business Continuity Planning uses a slightly different definition of operational risk.  Since the basic assumption of a BCP is that your processes and systems have already failed because of a disaster, operational risk manifests itself in the additional overhead that the alternative recovery processes and procedures temporarily impose on your organization.  Of course, if your BCP is inadequate, failed processes will be the result.

23 May 2012

Patch deployment – now or later? (with interactive poll!)

We recently saw an examination finding that recommended that “Critical Patches be deployed within 24 hours of notice (of patch release).”  This would seem to contradict the FFIEC guidance in the Information Security Handbook, which states that the institution should:

“Apply the patch to an isolated test system and verify that the patch…

(1) is compatible with other software used on systems to which the patch will be applied,

(2) does not alter the system’s security posture in unexpected ways, such as altering log settings, and

(3) corrects the pertinent vulnerability.”

If this testing process is followed correctly, it is highly unlikely that it can be completed within 24 hours of patch release.  The rationale behind immediate deployment is that the risk of “zero-day exploits” is greater than the risk of installing an untested patch that may cause problems with your existing applications.  So the poll question is:
[poll id=”2″]
Regardless of your approach, you’ll have to document the risk and how you plan to mitigate it.  A “test first” approach might choose to increase end-user training and emphasize other controls such as firewall firmware, IPS/IDS, and Anti-virus/Anti-malware.  If you take a “patch first” approach you may want to leave one un-patched machine in each critical department to allow at least minimal functionality in case something goes wrong.  You should also test the “roll-back” capabilities of the particular patch prior to full deployment.
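
If you take the “test first” route, the three verification steps quoted above translate naturally into a deployment gate. The sketch below is hypothetical; the check functions are stubs standing in for whatever tooling your environment actually uses, and each result should be documented.

```python
# Hypothetical "test first" deployment gate built on the three FFIEC
# verification steps quoted above. The checks are stubs; real tests
# depend entirely on your environment and tooling.
def compatible_with_existing_software(patch: str) -> bool:
    return True  # stub: apply the patch on an isolated test system

def security_posture_unchanged(patch: str) -> bool:
    return True  # stub: diff log settings, services, and policies pre/post

def vulnerability_corrected(patch: str) -> bool:
    return True  # stub: re-run the vulnerability scan against the test system

def approve_for_deployment(patch: str) -> bool:
    """Approve only after all three checks pass, documenting each result."""
    checks = [
        ("compatibility", compatible_with_existing_software(patch)),
        ("security posture", security_posture_unchanged(patch)),
        ("vulnerability corrected", vulnerability_corrected(patch)),
    ]
    for name, passed in checks:
        print(f"{patch}: {name} -> {'pass' if passed else 'fail'}")
    return all(passed for _, passed in checks)

if approve_for_deployment("example-patch"):
    print("Approved for staged deployment (roll-back tested first).")
```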

I’ll be watching to see if this finding appears in other examinations, and also to see if the guidance is updated online.  Until then, because of the criticality of your applications and the required up-time of your processes, I believe a “test-first” approach that adheres to the guidance is the most prudent…for now.  However you manage it, be prepared to explain the why and the how to the Board and senior management.  Not only are the results expected to be included in your annual Board report, it may also help explain repeat findings in future examinations if your current approach differs from examiner expectations.

05 Apr 2012

5 “random” facts

Fact 1 – According to the U.S. Bureau of Labor Statistics, the increasing complexity of financial regulations will spur employment growth of financial examiners.  In fact, the occupation is expected to experience the third-largest growth of all career paths through 2018.

Fact 2 – According to Rep. Shelley Moore Capito (R-W.Va.), author of H.R. 3461, “The Dodd-Frank Act has added so many new regulations to financial institutions, it has helped boost a 31% projected growth in job opportunities for Compliance Officers.”

Fact 3 – Speaking of H.R. 3461…it is also called the Financial Institution Examination Fairness and Reform Act, and aims to provide “more transparent, timely and fair examinations” by reducing the disconnect between exam results in the field and the regulating agencies.  It now has 154 co-sponsors.

Fact 4 – A related bill (S. 2160) has just been introduced in the Senate.

Fact 5 – The provision in both bills that is getting the greatest push-back from regulators is the one that grants a financial institution the right to appeal an examination finding to an ombudsman at the FFIEC rather than to the regulator that made the finding.

I’ll let you connect the dots of these “random” facts.

28 Mar 2012

CFPB Examinations Are Coming – UPDATE 2

UPDATE 2 – June 2012:  Memorandum of Understanding issued on CFPB examinations

Examinations are coming, but hopefully they won’t impose too much of an additional burden on you.  At least that is the intent of an MOU recently signed between the CFPB and the other federal regulators (Federal Reserve, NCUA, FDIC and OCC).  The MOU provides for information sharing among the agencies in order to minimize unnecessary duplication of examination efforts, and provides guidelines for “Simultaneous and Coordinated Examinations” between the agencies.  So expect additional visitors during future examinations, but if they truly expect to achieve the stated objective to “minimize unnecessary regulatory burden on Covered Institutions,” they could start by doing away with CFPB examinations entirely.

UPDATE 1  –  May 2012:  Ramping Up…

Coming soon to your financial institution –

Dear Board of Directors:

Pursuant to the authority of the Dodd-Frank Wall Street Reform and Consumer Protection Act, the Consumer Financial Protection Bureau (CFPB) performed a risk-focused examination of your institution.  The examination began on April 1, 2012.  The following report summarizes the findings of our examination.

Any matters of criticism, violations of laws or regulations, and other matters of concern identified within this Examination Report require the Board of Directors’ and management’s prompt attention and corrective action….

Although by law the CFPB will individually examine only large depository institutions (assets greater than $10B), Section 1026 extends coverage to smaller institutions on a sampling basis.  This means all institutions can eventually expect a visit from CFPB examiners (either with or without your primary federal regulator) at some point in the future.  And it is my opinion that the influence of the CFPB will continue to expand to all financial institutions regardless of size.  Consider the following:

  1. The CFPB is now one of the agencies comprising the inter-agency council of the FFIEC (replacing the OTS).  This means the CFPB will have input into all FFIEC guidance going forward.
  2. The head of the CFPB sits on the FDIC Board of Directors.
  3. So far, 19 of the 39 Regulations (Regs. B – P, V, X, Z & DD) have been turned over to the CFPB for enforcement.  (I wonder whether including Reg E will affect all electronic funds transfers, or only those initiated by non-business customers?  I find it hard to believe that there would be two sets of standards.)

So they are coming, but believe it or not there is good news.  Not only are they telling you what they are looking for ahead of time, they are giving you lots of helpful templates to fill out in preparation.  True, the templates are for their examiners, but there is no reason why you can’t use them too.  Particularly helpful is the Consumer Risk Assessment Template, which CFPB examiners will use to determine inherent risk; that risk is then reduced by the appropriate controls to arrive at the overall risk (also called residual risk).  The template includes a table summarizing the consumer risk assessment process.

In that table, if the inherent risk is high, the residual risk can be no lower than moderate, regardless of the strength of the controls.  I think this is significant because of the potential implications for all risk assessments going forward.  Remember, the CFPB now has a seat at the FFIEC (and FDIC) table.

But consider this…could we be looking at a fundamental change in how all risk assessments are conducted, and examined, in the future?  One single standardized risk assessment template for all risks?  Inherent risk levels are pre-defined, and control strength is pre-determined, making residual risk a purely objective calculation.  The complete lack of subjectivity means that all examiners evaluate all institutions against the exact same set of standards.  No exit meeting surprises, no unexpected CAMELS score downgrades, no spending hours and hours preparing for one area of compliance, only to have the examiners focus on something else.
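
To make the idea concrete, here is a sketch of what such a purely objective residual risk calculation might look like, including the floor noted above. The three-level scale and the control-strength reduction mapping are my own assumptions, not the CFPB’s actual matrix.

```python
# Hypothetical objective residual risk calculation: pre-defined inherent
# risk, pre-determined control strength, and a floor of Moderate whenever
# inherent risk is High. Scales and mapping are illustrative only.
LEVELS = ["Low", "Moderate", "High"]

def residual_risk(inherent: str, control_strength: str) -> str:
    reduction = {"Strong": 2, "Adequate": 1, "Weak": 0}[control_strength]
    level = max(LEVELS.index(inherent) - reduction, 0)
    if inherent == "High":  # strong controls can never drive High below Moderate
        level = max(level, LEVELS.index("Moderate"))
    return LEVELS[level]

print(residual_risk("High", "Strong"))      # Moderate, never Low
print(residual_risk("Moderate", "Strong"))  # Low
```

Because every input is pre-defined, two examiners running the same facts through a calculation like this get the same answer, which is exactly the predictability described above.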

So could the influence of the CFPB be a smoother, more predictable examination experience overall?  Or am I dreaming?

02 Mar 2012

“Data-flow diagrams”

This request appeared in a recent State examiner’s pre-examination questionnaire, and although I usually like to see a request a couple of times from different examiners before identifying it as a legitimate trend, this one could prove so potentially problematic that I thought I needed to get ahead of it.

Before we go much further, it’s important to distinguish between the “data-flow diagram” and the “work-flow analysis”, which is a requirement of the business continuity planning process.  They are completely separate endeavors designed to address two very different objectives.  The BCP “work-flow analysis” is designed to identify the interdependencies between critical processes in order to determine the order of recovery for those processes.  The “data-flow diagram” is designed to:

Supplement (management’s) understanding of information flow within and between network segments as well as across the institution’s perimeter to external parties.

It’s important to note here that what the examiner asked for actually wasn’t unreasonable, in fact it appears word-for-word in the FFIEC AIO Handbook:

Common types of documentation maintained to represent the entity’s IT and business environments are network diagrams, data flow diagrams, business process flow diagrams, and business process narratives. Management should document and maintain accurate representations of the current IT and business environments and should employ processes to update representations after the implementation of significant changes.

And although this particular examiner quoted from the Operations Handbook, the term “data flow” (in combination with “maps”, “charts” and “analysis”) actually appears 15 times in 5 different Handbooks: Development and Acquisition, Information Security, Operations, Retail Payment Systems, and Wholesale Payment Systems.

So this concept is certainly not unheard of, but previously this “understanding of information flow” objective was achieved via a network topology map, or schematic.  Sufficiently detailed, a network schematic will identify all internal and external connectivity, types of data circuits and bandwidth, routers, switches and servers.  Some may even include workstations and printers.  In the past this diagram, in combination with a hardware and software inventory, was always sufficient to document management’s understanding of information flow to examiners.  But in this particular case the examiner highlighted (in bold) this section of the guidance (and this is what troubled me most):

Data flow diagrams should identify:

  • Data sets and subsets shared between systems;
  • Applications sharing data; and
  • Classification of data (public, private, confidential, or other) being transmitted.

…and…

Data flow diagrams are also useful for identifying the volume and type of data stored on various media.  In addition, the diagrams should identify and differentiate between data in electronic format, and in other media, such as hard copy or optical images.

Data classification?  Differentiation between electronic, printed and optical data?  This seems to go way beyond what the typical network schematic is designed to do, way beyond what examiners have asked for in the past, and possibly beyond what most institutions can produce without significant effort.  Of course, the excuse of “unreasonable resource requirements” will usually not fly with examiners, so what is the proper response to a request of this nature?
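
To appreciate the effort involved, consider what even a minimal machine-readable flow inventory would have to capture to answer those bulleted requirements. Everything in this sketch (system names, classifications, fields) is hypothetical.

```python
# Hypothetical data-flow inventory covering the quoted requirements:
# shared data sets, the applications sharing them, classification,
# and differentiation between electronic and other media.
flows = [
    {"source": "Core banking system", "destination": "Internet banking provider",
     "data_set": "Account balances", "classification": "Confidential",
     "media": "electronic", "crosses_perimeter": True},
    {"source": "Loan platform", "destination": "Records retention (file room)",
     "data_set": "Signed loan documents", "classification": "Private",
     "media": "hard copy", "crosses_perimeter": False},
]

for f in flows:
    print(f'{f["source"]} -> {f["destination"]}: {f["data_set"]} '
          f'({f["classification"]}, {f["media"]})')
```

Multiply that by every system, vendor connection, and paper process in the institution, and keep it current after every change, and the resource question answers itself.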

Fortunately there may be a loophole here, at least for serviced institutions, and it’s found in the fact that the requirement is predicated on “size and complexity”.   The guidance initially states:

Effective IT operations management requires knowledge and understanding of the institution’s IT environment.

This is the underlying requirement, and the core issue to be addressed.  The guidance then goes on to state that documentation of management’s “knowledge and understanding” should be “commensurate with the complexity of the institution’s technology operations”.  And depending on size and complexity, this may include “data-flow diagrams”.  So the examiner is effectively saying that, in this case, a “data-flow diagram” is the most appropriate way for the management of this outsourced institution to document adequate knowledge and understanding of its technology operations.  I suggested that the institution respectfully disagree, and state:

Our management believes, based on our size and complexity as a serviced institution, that an updated detailed schematic and current hardware and software inventories adequately demonstrate sufficient knowledge and understanding of our technology operations.

This directly addresses the core issue, and I’m pretty sure the examiner will agree, but I’ll let you know.  In any case it’s worth pushing back, given the potentially enormous resources it would take to comply with the request, both now and going forward.

Now here is the real question…should you require the same level of documentation (i.e. data classification and data type differentiation) from your core vendor?  And if so, are you even likely to get it from them?