04 Apr 2011

The Control Self-Assessment (CSA)

If there were a process mentioned 43 times in 7 of the 12 FFIEC IT Examination Handbooks (including 12 times in the Information Security Handbook alone!), would you consider implementing it?  How about if it virtually assured better audits and examinations?  OK, you’re interested, but the last thing you need is to implement another complicated process, right?  What if the framework is probably already in place at your institution, and all you need to do is fine-tune it a bit?

I’m referring to the Control Self-Assessment (CSA), and let’s first make the regulatory case for it.  The FFIEC Operations Handbook says:

Periodic control self-assessments allow management to gauge performance, as well as the criticality of systems and emerging risks.
And…
Senior management should require periodic self-assessments to provide an ongoing assessment of policy adequacy and compliance and ensure prompt corrective action of significant deficiencies.

If you’re familiar with “FFIEC-speak”, then you know that “should” really translates to “must”.  But the Information Security Handbook makes the most compelling argument for utilizing the CSA in your risk management program:

Control self-assessments validate the adequacy and effectiveness of the control environment. They also facilitate early identification of emerging or changing risks.

So there is plenty of regulatory support for the CSA process, but what about the audit and exam benefits?  All of the major auditing standards bodies (IIA, AICPA, ISACA) address the importance of internal control reviews.  Indeed, most auditors say that institutions with an internal CSA process in place generally demonstrate a much more evolved risk management process, resulting in fewer, and less severe, audit findings.  This stands to reason, as those institutions tend to identify and correct control weaknesses prior to the audit, rather than waiting for the auditor to identify them.  And since one of the first things the examiner wants to see when they come in is your most recent audit, this often results in fewer examination findings as well.

One more reason to implement a CSA process from the examination perspective is something I touched on here…for those institutions trying to achieve the best possible CAMELS IT composite ratings, one of the biggest differentiators between a “1” and a “2” is that in institutions rated a “1”, “…management identifies weaknesses promptly (i.e. internally) and takes appropriate corrective action to resolve audit and regulatory concerns”.  Conversely, in those institutions rated a “2”, “…greater reliance is placed on audit and regulatory intervention to identify and resolve concerns”.  A CAMELS “3” rating speaks directly to the CSA, stating that “…self-assessment practices are weak…”.

OK, so there are certainly lots of very good reasons to implement a CSA process in your institution.  How can this be done with minimal disruption and the least amount of resource overhead?  Chances are you already have a Tech Steering Committee, right?  If the committee consists of members representative of all functional units within the organization, it has the support of senior management, and is empowered to report on all risk management controls, all that’s needed is a standardized agenda to follow.  The agenda should address the following concerns:

  • Identification of risks and exposures
  • Assessment of the controls in place to reduce risks to acceptable levels
  • Analysis of the gap between how well the controls are working, and how well management expects them to work

As you can see, this is not substantially different from what you are probably already doing in your current Tech Steering Committee meetings.  In fact, this list is really only a subset of your larger agenda…the only real difference is that any and all findings from the gap analysis must be assigned to a responsible party for remediation (a rough sketch of what that tracking might look like follows).
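To make that remediation step concrete, here is a minimal, hypothetical sketch in Python of a findings register that could sit behind the agenda; the field names, owners, and dates are purely illustrative assumptions, and a spreadsheet or committee minutes would serve the same purpose:

```python
# Hypothetical sketch of a CSA findings register. Names, owners, and dates
# are illustrative only; any tool that tracks the same fields works.
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    risk: str          # the risk or exposure identified
    control_gap: str   # how the control fell short of management's expectation
    owner: str         # responsible party assigned for remediation
    due: date          # target remediation date
    resolved: bool = False

def open_items(findings):
    """Return unresolved findings, earliest due date first, for the next agenda."""
    return sorted((f for f in findings if not f.resolved), key=lambda f: f.due)

register = [
    Finding("Unpatched servers", "Patch status not reported monthly",
            owner="IT Manager", due=date(2011, 5, 2)),
    Finding("Stale user accounts", "Quarterly access review not documented",
            owner="ISO", due=date(2011, 4, 15)),
]

for f in open_items(register):
    print(f"{f.due}  {f.owner:12}  {f.risk}: {f.control_gap}")
```

The tool matters far less than the discipline: every finding gets a named owner and a date, and open items come back to the committee until they are resolved.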

In summary: the FFIEC strongly encourages it, the auditors and examiners love it, and for most institutions it’s not too difficult to implement and administer.  But if you only need one good reason to consider the CSA process, it should be this:

Improved audit and examination ratings!

23 Mar 2011

IT Composite Ratings: 1 vs. 2

In a recent survey conducted with our customers, we asked them to tell us (anonymously) what their FDIC IT composite scores were after their last IT examination, and whether those scores had increased (gotten worse) or decreased (gotten better).  The average score was 1.8 on the 5-point scale.  Of course, the results could be attributed to the fact that, by virtue of their relationship with us, these institutions demonstrate a higher level of awareness of IT and IT risks, resulting in a kind of reverse “adverse selection”, but regardless, anything better than a 2 is considered much better than average.  And slightly more institutions saw their score increase (get worse) than stay the same…almost none saw their scores decrease.

So is the FDIC issuing any 1’s in IT anymore?  Not many, as far as I can see.  But for those institutions looking to maintain, or even enhance, their IT scores, it’s critical to review the components in each category…particularly the differences…between a 1 and a 2.  And since there are significant similarities between the two, the difference is all in the details.

The full list with all details is here, but this is a condensed version of how the FDIC IT Examination Composite Ratings break out by component:

Risk Management:

One (1) – “Risk Management processes provide a comprehensive program to identify and monitor risk relative to the size, complexity and risk profile of the entity.”
Two (2) – “Risk Management processes adequately identify and monitor risk relative to the size, complexity and risk profile of the entity.”

The difference between a 1 and a 2 in risk management is a “comprehensive program”…very subtle, but using the IT Steering Committee to manage IT could be the difference.

Strategic Planning:

One (1) – “Strategic plans are well defined and fully integrated throughout the organization.  This allows management to quickly adapt to changing market, business and technology needs of the entity”.
Two (2) – “Strategic plans are defined but may require clarification, better coordination or improved communication throughout the organization.  As a result, management anticipates, but responds less quickly to changes in market, business, and technological needs of the entity”.

This distinction is the most significant difference between the two ratings, and in my opinion, seems to be the critical factor.  I addressed the IT Strategic Plan in detail here.  Often the difference between a 1 and a 2 in IT is in how well you manage, and communicate, your strategic plan.

Self Assessment:

One (1) – “Management identifies weaknesses promptly and takes appropriate corrective action to resolve audit and regulatory concerns”.
Two (2) – “Management normally identifies weaknesses and takes appropriate corrective action.  However, greater reliance is placed on audit and regulatory intervention to identify and resolve concerns”.

Institutions in both categories identify and correct weaknesses, but the key difference is that the stronger organization handles it internally.  The key to this is the control self-assessment process.  The FFIEC mentions “control self-assessment” 43 times across 7 of the 12 IT Examination Handbooks.  This is not a new concept, nor is it particularly difficult to implement, but for some reason it is under-utilized by most financial institutions.

I intend to address the self-assessment process more completely in a future post, but until then here are some of the benefits:

  • Early detection of risks
  • Improved internal controls
  • Assurance to top management that you are doing what you say you’re doing, and of course
  • Improved audit and examination ratings!

08 Mar 2011

Auditor rotation – pro and con

The practice of periodically changing, or rotating, your external auditor has been a topic of interest with our customers lately, and there are two schools of thought on this.  The pro-rotation side takes the position that a different set of eyes looking at the same system might see something the other missed.  This is certainly a valid position, and probably originated in the post-Enron/Arthur Andersen days.  In fact, Section 203 of Sarbanes-Oxley (SOX) does require audit partner rotation every 5 years for publicly held companies, but this provision applies only to the lead auditor and the auditor responsible for reviewing the audit, not to the auditing firm.

Indeed, in interviews conducted in 2003 by the General Accounting Office (now the Government Accountability Office) among Fortune 1000 companies, the majority surveyed indicated that audit partner rotation (using different individuals within an audit firm) would achieve the same benefits as audit firm rotation (using different audit firms).

Changing audit firms can also be somewhat disruptive, as the new firm must get up to speed on the particularities of the institution’s control environment.  There is evidence that maintaining the same auditor may actually improve the quality of subsequent audits, as the auditor’s store of institutional knowledge increases.  Additionally, changing auditors too frequently may create the appearance of “auditor shopping”, or shopping around for better results.

For its part, the FFIEC is silent on the practice of auditor rotation, stating only that:

“…management should ensure that there are no conflicts of interest and that the use of these (external auditor) services does not compromise independence”

Bank examiners are instructed to assess “whether the structure, scope, and management of an internal audit outsourcing (or external audit) arrangement adequately evaluate the institution’s system of internal controls”.  In other words, are they doing what they are supposed to do?

In the final analysis, in the absence of a regulatory mandate there is really only one overriding concern for financial institutions…are your examination results satisfactory?  If so, and if there are no conflicts of interest or other independence concerns, there is really no compelling reason to change auditing firms…but periodically bringing in a different set of eyes is definitely a good idea.

17 Jan 2011

Top 5 Compliance Trends for 2011 – Part 2

A recent survey of auditors and examiners asked:

During the past year, in which category would you say MOST of your IT audit/exam findings occurred?

The choices were:

  • Lacking or Insufficient Policies
  • Inadequate Procedures, or
  • Insufficient documentation of actual practices

Two-thirds of the respondents said that insufficient documentation of practices was the most common finding.  In other words, policies and procedures were fine, but the institution could not adequately demonstrate that it was actually following them.  This brings me to the second compliance trend for 2011 (and a carry-over from last year):

Documentation

The regulatory compliance process involves the coordination of 3 intersecting spheres:

  • Policies
  • Procedures, and
  • Practices

All 3 must not only be in alignment with one another, but also with the current interpretation of regulatory guidance.  (Made especially challenging since the latter is a moving target.)  Policy defines what you will do to address regulatory mandates, procedures dictate how you’ll implement policy, and practices document what you actually do.  If policies are off target but you can still demonstrate good practices, you’ll have a minor audit/exam finding.  But if you say you’re doing something and you either didn’t do it, or can’t prove you did, that is generally a more severe finding.

So what recent audit and examination experience demonstrated last year, and what I believe we’ll continue to see in 2011, is increased scrutiny in the sphere of documented practices.  Simply put…if you didn’t document it, you didn’t do it.

There are many ways to document your actual practices, but perhaps the best way is to take your procedures and convert them into a checklist.  The checklist is then discussed in committee (Tech or IT) as a regular agenda item.  For example, if your written procedures state that you will implement a patch management process to keep all devices fully patched, be able to produce a report showing device patch status, and present it to the committee assigned responsibility for validating the effectiveness of your procedures.  A rough sketch of this idea follows.
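As a purely illustrative example, here is a short Python sketch of turning that patch management procedure into a checklist item backed by evidence; the device names, counts, and the fully-patched expectation are assumptions, not output from any particular patch management tool:

```python
# Hypothetical sketch: back a written procedure ("keep all devices fully
# patched") with evidence. The device data below is made up for illustration.
devices = [
    {"name": "DC01",   "missing_patches": 0},
    {"name": "FS01",   "missing_patches": 2},
    {"name": "WS-117", "missing_patches": 0},
]

fully_patched = [d for d in devices if d["missing_patches"] == 0]
pct = 100 * len(fully_patched) / len(devices)

# The summary line becomes the documented "practice" reviewed by the committee
# as a standing agenda item, with the per-device detail kept as evidence.
print(f"Patch management checklist: {len(fully_patched)} of {len(devices)} "
      f"devices fully patched ({pct:.0f}%)")
for d in devices:
    status = "OK" if d["missing_patches"] == 0 else f"{d['missing_patches']} patches missing"
    print(f"  {d['name']:8} {status}")
```

The report itself, together with committee minutes showing it was reviewed, is exactly the kind of documented practice the auditor or examiner is looking for.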

Remember, if you can’t document it, then for regulatory purposes, you aren’t doing it.

07 Jan 2011

Top 5 Compliance Trends for 2011 – Part 1

I recently looked back at 2010, and the predictions I made a year ago.  This post begins a series of the top regulatory compliance trends for the current year.  I’m going to focus on the top 5, and my sources for these are the following:

  • Recent audit and examination experience from our customers
  • Recently released regulatory guidance
  • Discussions with my compliance advisory committee (consisting of a policy consultant and three IT field auditors)
  • A recent survey conducted among bank auditors and examiners.

For a topic to be included in this list, it had to have been validated in at least two of the four sources.  My first trend was validated in all four:

Enterprise-Wide Risk Assessments

If this one sounds familiar, it was on last year’s list as well.  And I would have left it out this year except for the fact that just last week an institution had a finding from a State examiner that moved it from off the list to the top of the list.

My original motivation for this was an article that appeared in the FDIC Supervisory Insights newsletter in November 2009.  The article was titled:  From the Examiner’s Desk:  Customer Information Risk Assessments: Moving Toward Enterprise-wide Assessments of Business Risk. (The article is excerpted here.)  As you can tell from the title, it’s pretty clear that enterprise-wide risk assessments are the future.  The only question was how quickly the new standard would be adopted by the regulators.  I thought it would have been in 2010, and apparently it just made it.

According to the State examiner’s finding:

“…the bank’s internal auditor, in conjunction with department heads and the Board, should develop an enterprise-wide risk assessment that identifies and assigns a risk grade to every major function of bank operations.”

I’m not surprised that this new standard found its way into examinations, but I am a bit surprised that we first saw it in a State exam.  Nevertheless, the fact that the guidance is out there, and that we are now seeing it reflected in examiner expectations, means this is trend #1.

And just to underscore the point, the survey (more on that in a future post) had the following responses when asked:  What is the current regulatory expectation and standard for documenting the assessment of risk?

  • Customer Information Risk Assessment: 0.0%
  • Information Security Risk Assessment: 30.0%
  • Enterprise-wide Assessment of Risk: 70.0%

AND my advisory committee agrees, so it’s a clean sweep of all sources.  So how do you document adherence to this enterprise-wide standard of risk assessment?  The full answer is too complicated to adequately address in this post (I promise to do it justice in a future post), but in short, make sure you include the following risk categories in your risk assessment:

  • Strategic Risk
  • Operational/Transactional Risk
  • Reputation Risk, and
  • Legal/Regulatory Risk

Also, make sure you document both the inherent risk (prior to the application of control measures) and the residual risk (after controls).  A minimal sketch of how such an entry might be recorded follows.
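Here is a hypothetical sketch in Python of a single risk assessment entry that records inherent and residual risk for each of the four categories above; the 1–5 scale, the business function, the scores, and the controls are all assumptions used only for illustration:

```python
# Hypothetical sketch of one line item from an enterprise-wide risk assessment.
# The scale, function, scores, and controls are illustrative only.
CATEGORIES = [
    "Strategic",
    "Operational/Transactional",
    "Reputation",
    "Legal/Regulatory",
]

assessment = {
    "function": "Retail payment systems",
    "risks": {
        "Strategic":                 {"inherent": 3, "residual": 2, "controls": "Board-approved strategy review"},
        "Operational/Transactional": {"inherent": 4, "residual": 2, "controls": "Dual control, daily reconcilement"},
        "Reputation":                {"inherent": 4, "residual": 3, "controls": "Incident response and customer notification procedures"},
        "Legal/Regulatory":          {"inherent": 3, "residual": 2, "controls": "Compliance review of new products"},
    },
}

print(f"Risk assessment: {assessment['function']} (1 = low risk, 5 = high risk)")
for category in CATEGORIES:
    entry = assessment["risks"][category]
    print(f"  {category:26} inherent={entry['inherent']}  "
          f"residual={entry['residual']}  controls: {entry['controls']}")
```

Repeat an entry like this for every major function of bank operations, and the gap between the inherent and residual columns documents exactly what your controls are accomplishing.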

28 Dec 2010

Looking back – 2010 compliance hits & misses

Every year about this time, I’m asked to look ahead to the upcoming year and prognosticate on regulatory compliance trends.  I intend to do just that in a future post, but today I wanted to do something very few other prognosticators do…look back at last year’s predictions and see which ones hit and which missed (and why).

Here was the list of 2010 trends as I saw them early last year:

  • Risk Assessments – New standards and expectations
  • Documentation – Who, What, How and Why
  • Disaster Recovery – Compliant and Recoverable
  • Vendor Management – Trust but Verify

Overall I scored 2 hits and 2 misses, although to be fair the misses are more along the line of “not yet hits”.  Here is how 2010 actually shaped up:

  • Risk Assessments – miss.  This prediction was taken from the Winter 2009 FDIC Supervisory Insights newsletter article entitled “Customer Information Risk Assessments: Moving Toward Enterprise-wide Assessments of Business Risk”.  It described how examiners should start to evaluate risk on an enterprise-wide basis instead of simply focusing on information security risks.  I predicted that examiners would start to adjust their examination procedures for the new criteria in 2010, but it hasn’t manifested itself in examination work papers yet.  However, some of the enterprise-wide risk criteria have made their way into various risk assessment best practices.  Criteria such as strategic risk, operational/transactional risk, reputation risk and legal/regulatory risk are now part of the vernacular for disaster recovery, retail payment systems and new technology risk assessments.  We’ll call this a miss…for now.
  • Documentation – hit.  The vast majority of audit and examination findings I’ve seen this year were not related to missing or insufficient policies or procedures; they were due to the institution’s inability to document (prove) that it was following its own procedures.  Expect this trend to continue in 2011.
  • Disaster Recovery – hit.  Both auditors and examiners are finding fault with DR plans that do not strictly conform to the FFIEC guidance.  Specifically, plans must contain business impact analysis, risk assessment, risk management and testing sections, in that order.  A non-compliant plan will still be written up, even if it can demonstrate recoverability through testing.  (More here.)
  • Vendor Management – miss.  With the increasing reliance of financial institutions on third-party vendors, I predicted that 2010 would be the year that the examiners started scrutinizing vendor management programs more closely.  It hasn’t happened…yet.  It may be because of the continued overwhelming emphasis on asset quality during the safety and soundness examination, but I’m leaving this on the list for 2011.  Asset quality will undoubtedly still dominate in 2011, but there are indications that the pendulum is starting to swing back around.  (More on that later.)

My next post will be my predictions for 2011.  I’m also collecting survey responses from auditors and examiners on where they think the areas of focus will be, and I’ll report that in early 2011 as well.

All the best for a Happy and Compliant New Year!!