Tag: CAMELS Ratings

25 Feb 2013

Examination Downgrades Correlated with Poor Vendor Management

According to Donald Saxinger, a senior examination specialist in the FDIC’s Technology Supervision Branch, almost half of all CAMELS score downgrades in 2012 were related to poor vendor management.  Speaking in a December telephone briefing to the ABA titled “Vendor Management: Unlocking the Value beyond Regulatory Compliance”, Mr. Saxinger noted that inadequate vendor management was cited as a causal factor in 46% of the FDIC IT examinations in which bank ratings were downgraded.  He went on to say that although poor vendor management may not have been the primary cause, it was frequently cited as a contributing factor in the downgrade.

Mr. Saxinger recommends that banks request, receive, and review not just financials and third-party audits (such as SOC reports and validation of disaster recovery capabilities), but also any examination reports on the provider.  Federal examiners have a responsibility to monitor financial institution service providers using the same standards required of the institutions themselves, and they are doing so with increasing frequency.

In addition, consider that all of the FFIEC regulatory updates and releases issued last year were either directly or indirectly related to vendor management:

  • Changes to the Outsourcing Handbook to add references to cloud computing vendors and managed security service providers.
  • Updates to the Information Security Handbook to accommodate the recently released Internet Authentication Guidance (with its strong reliance on third parties).
  • Changes to all Handbooks to accommodate the phase-out of the SAS 70 and its replacement with the term “third-party review”.
  • Updated guidance on the URSIT program for the supervision and scoring of Technology Service Providers.
  • A completely revised and updated Supervision of Technology Service Providers Handbook.

So regulators see inadequate vendor management as a contributing factor in examination downgrades, and virtually all new regulations issued by the FFIEC are related to it as well.  As a service provider to financial institutions we are prepared for, and expecting, added scrutiny.  As a financial institution looking to optimize examination results and stay ahead of the regulators, you should be too.

Here is a link to all vendor-management-related blog posts.

12 Nov 2012

The Financial Institutions Examination Fairness and Reform Act (and why you should care)

Although it’s currently stuck in committee, financial institutions should be aware of this bill and track it closely in the next congressional session.  There are actually two bills, a House version (H.R. 3461) and a Senate version (S. 2160), both containing similar provisions.  The House bill has 192 sponsors and the Senate version has 14, and both have supporters from both political parties.  Here is a summary of the bill, and why you might want to support it as well:

What it does:

  • Amends the Federal Financial Institutions Examination Council (FFIEC) Act of 1978 to require a federal financial institutions regulatory agency to make a final examination report to a financial institution within 60 days of the later of:
(1) the exit interview for an examination of the institution, or
(2) the provision of additional information by the institution relating to the examination.
  • Sets a deadline for the exit interview if a financial institution is not subject to a resident examiner program.
  • Sets forth examination standards for financial institutions.
  • Prohibits federal financial institutions regulatory agencies from requiring a well capitalized financial institution to raise additional capital in lieu of an action prohibited by the examination standards.
  • Establishes in the Federal Financial Institutions Examination Council an Office of Examination Ombudsman. Grants a financial institution the right to appeal a material supervisory determination contained in a final report of examination.
  • Requires the Ombudsman to determine the merits of the appeal on the record, after an opportunity for a hearing before an independent administrative law judge.
  • Declares the decision by the Ombudsman on an appeal to:
(1) be the final agency action, and
(2) bind the agency whose supervisory determination was the subject of the appeal and the financial institution making the appeal.
  • Amends the Riegle Community Development and Regulatory Improvement Act of 1994 to require:
(1) the Consumer Financial Protection Bureau (CFPB) to establish an independent intra-agency appellate process in connection with the regulatory appeals process; and
(2) appropriate safeguards to protect an insured depository institution or insured credit union from retaliation by the CFPB, the National Credit Union Administration (NCUA) Board, or any other federal banking agency for exercising its rights.

Why you should care:

In addition to the provisions for more expeditious exit interviews and final reports, the bills provide for certain changes to “examination standards”.  The standards pertain primarily to the non-accrual treatment of commercial loans and their effect on capital, and they also redefine “material supervisory determination” as “any matter requiring attention by the institution’s management or board of directors”.  These are all generally good things for financial institutions, but I think the most significant provisions (and the ones with the biggest positive impact) are those that establish the Office of Examination Ombudsman within the FFIEC.

The current appeal process for contested examination findings was recently re-addressed by the FDIC here (and I reacted to it here).  In summary, if you currently have a disagreement with the FDIC about any “material supervisory determination”, which includes anything that affects CAMELS ratings and IT ratings (the full list is here; search for “D. Determinations Subject to Appeal”), you must stay within the FDIC for resolution.  And that includes the current Office of the Ombudsman, which is also part of the FDIC.

The agency makes it clear that it believes the appeals process is “independent of the examination function and free of retribution or other retaliation”, but whether it is or isn’t, the fact that the process never leaves the FDIC deters many financial institutions from pursuing an appeal in the first place.  I believe moving the process to the FFIEC at least improves the perception of independence and objectivity, which may make more institutions willing to challenge examination findings.  What are your thoughts?

[poll id=”6″]

Again, I encourage you to learn about these bills for yourself and take a position.  To support the Senate bill, go HERE.  To support the House bill, go HERE.  And feel free to share this post.  If enough people support it, perhaps we’ll see some progress in the next congressional session!

18 Oct 2012

“2 is the new 1”…or is it? (with poll)

UPDATED – October 2012 – In the past ten days, two institutions have told me that they were assigned a CAMELS score of “1” in their latest examination.  One institution regained its 1 after slipping to a 2 in the previous exam cycle, and the other earned a 1 for the first time.  The FDIC is the primary federal regulator for both institutions.  What is your experience?  (Original post below the polls.)

[poll id=”4″]

And while we’re asking for your input…

[poll id=”5″]

During a recent panel discussion at our annual user conference, we heard this from a banker quoting an examiner from their last examination.  The bank had slipped from a CAMELS 1 rating to a 2, and when they discussed the reasoning with the Examiner in Charge, they were told they should be satisfied with a 2, because “2 is the new 1”.

Just three years ago Tony Plath, a finance professor at the University of North Carolina at Charlotte, said that (at least for large banks) a CAMELS score of anything worse than a “1” was cause for concern.  These days it almost seems that examiners are digging for anything they can find to justify NOT assigning the highest rating.  Indeed, I recently had a conversation with an FDIC examiner who said (off the record), “if we find anything at all to document during our examination, that is enough to disqualify them from a ‘1’ rating”.

Unlike the comparatively significant difference between a “2” and a “3”, the difference between a “1” (defined as “sound in every respect”) and a “2” (defined as “fundamentally sound”) is extremely subtle, and there is no clear line of demarcation between them.  Often it comes down to examiner opinion.

So pick your battles and push back where you can, but understand that although you should be familiar with the criteria for a “1” rating, and strive to achieve it, you should be quite satisfied with a “2”…at least for now.

 

19 Dec 2011

2012 Compliance Trends, Part 3 – Management

I’ve written about the importance of this before, and from many different angles, but I want to recap and explain why I think management (both IT and enterprise) will be an area of increased regulatory focus in the year ahead.  To recap my criteria for inclusion in the “2012 Trends” list, it must have a basis in:

  1. Recent audit and examination experience,
  2. Regulatory changes, and/or
  3. Recent events.

Management, or governance as it is sometimes called, is defined by the FFIEC in the IT Examination Management Handbook as:

“…an integral part of enterprise governance and consists of the leadership and organizational structures and processes that ensure that the organization’s IT sustains and extends the organization’s strategies and objectives.”

And…

“Due to the reliance on technology, effective IT management practices play an integral role in achieving many goals related to corporate governance.”

So regulators have always considered IT management critical, and most institutions address that obligation by assigning responsibility for day-to-day management of IT to a committee, such as a technology or IT Committee.  In recent examinations we have seen regulators ask specifically to see committee minutes, looking for things such as discussion of vendors before they are approved, and discussion of new technology before it is implemented.  They want to know that the institution considered the strategic value of the vendor and the new technology prior to approval.  Was the decision to approve consistent with (in alignment with) the overall goals and objectives of the strategic plan?  Can you document that?

Effective management of IT has significance well beyond IT and strategic alignment, though; after all…

“…IT management is an essential component of effective corporate governance and operational risk management.”

An institution that fails to demonstrate that it can adequately manage technology (and do so at all levels of management, from the Board of Directors down) may have fundamental management issues enterprise-wide.  I further explained this here, and examiners agree.  Consider this: the two statements most often repeated in FDIC enforcement orders this year are directives for the institution to “have and retain qualified management”, and for the Board of Directors to “increase its participation in the affairs of the Bank”.

For all these reasons I believe the CAMELS “M” will be on the minds of examiners.  So how can you prepare?  In a word: reporting.  Take a look at the following illustration:

[Illustration: strategy is communicated top-down on the left side; reporting is documented bottom-up on the right side.]

Once the overall strategy has been communicated top-down (left side), reporting (right side) will document that the strategy has been successfully incorporated into the policies and procedures of the organization, and (most importantly) that day-to-day practices abide by those policies and procedures.  Implementing an internal self-assessment program can be a very effective way of both communicating strategy and documenting compliance.  Use automated controls and monitoring (like this for example), and employ outside expertise whenever possible.

06 Oct 2011

Material Loss Reviews: Does responsibility = liability?

I asked in my previous post whether the regulators should share any of the blame when institutions fail and, if so, whether they should shoulder any of the liability.  The thought occurred to me as I was reviewing some recent Material Loss Reviews.

A Material Loss Review (MLR) is a post-mortem written by the Office of Inspector General of the federal regulator with oversight responsibility after an institution fails, if the loss to the deposit insurance fund is considered “material”.  (The threshold for determining whether a loss is material was recently increased by the Dodd-Frank Act from $25 million to $200 million, so we are likely to see fewer of these MLRs going forward.)  All MLRs have a similar structure: an executive summary up front, followed by a breakdown of capital and assets by type and concentration.  But there is also a section that analyzes the regulator’s supervision of the financial institution, and I noticed a recurring theme in this section:

  • …(regulator) failed to adequately assess or timely identify key risks…until it was too late.
  • …(regulator) did not timely communicate key risks…
  • …Regulator should have taken “a more conservative supervisory approach”, and used “forward-looking supervision”.
  • …examiners identified key weaknesses…but…they did not act on opportunities to take earlier and more forceful supervisory action.
  • …serious lapse in (regulator’s) supervision
  • …(regulator), in its supervision…did not identify problems with the thrift
  • …(regulator) should have acted more forcefully and sooner to address the unsafe and unsound practices
  • …(regulator) did not fully comply with supervisory guidance

There were also many references to the responsibilities of the Board, which I addressed here, but in almost every case the regulator was found at least partially responsible for the failure of the institution.

Here is where you can find the reports for each regulator:

I encourage you to take a look at these and draw your own conclusions on the issues of responsibility and liability.  But clearly there are lessons to learn from any failure, and one lesson I think we should all take from this is that regulators will be pressured to be much more critical going forward (i.e., quicker to apply “Prompt Corrective Action”).  After all, no one likes to be called out for doing a bad job.

One other part I found interesting (in the sense that it perfectly fits the narrative) is where the review lists all examination CAMELS ratings in the periods immediately prior to the failure.  What struck me was how many times institutions scored 1’s and 2’s just prior to the failure, and then dropped immediately to 4’s and 5’s in a single examination cycle.  Again, the lesson is that there will be tremendous downward pressure on CAMELS scores.  And don’t think that just because you are healthy you’re immune from the additional scrutiny.  As one MLR stated “…a forceful supervisory response is warranted, even in the presence of strong financial performance.”

08 Sep 2011

Exam preparation – less equals more?

One of the more surprising findings from my recent examination-experience survey (thanks again to all who participated!) is that there doesn’t seem to be a direct relationship between the amount of time spent preparing and examination results. I’ll elaborate in a moment, but first, here are the final survey demographics:

  • There were 80 total respondents.
  • The FDIC was the most prominent regulator (80%), but institutions representing all the other PFRs (OTS, OCC, Federal Reserve, and NCUA) also responded.
  • Institutions from 20 different states responded, resulting in a pretty good geographic cross-section.
  • The majority of respondents were under $500M in assets, but we also got a handful over $1B.
  • 25% were de novo (less than 5 years old).

So what we found was that most institutions spent quite a bit of time preparing for their last IT examination: 57% of you spent more than 5 hours.  Interestingly enough, that didn’t translate into better results.  Although 73% of that group felt they were very prepared for the exam, less than half felt the exam went pretty much as expected, and 9% described their last examination as a “nightmare”!  By contrast, only 5% of those who spent less than 5 hours preparing felt the same way.

But perhaps the most significant statistic is the average IT composite score.  Those who spent more than 5 hours preparing averaged a score of 1.85, as opposed to 1.76 for those who spent less than 5 hours.  So is the conclusion that, as far as preparation goes, less equals more?  I think a better way to interpret the data is that it’s better to work smarter than harder.

Consider this: those of you who used an outside consultant to assist with the pre-examination questionnaire seemed to have a much more favorable experience overall.  90% of you felt that the examination experience was either not bad or pretty much as expected.  But more significantly, those who used outside help also got better IT composite scores, averaging 1.69 versus 1.82 for all respondents!