Tag: Examination

02 Mar 2012

“Data-flow diagrams”

This request appeared in a recent State examiner’s pre-examination questionnaire, and although I usually like to see a request a couple of times from different examiners before identifying it as a legitimate trend, this one could prove so problematic that I thought I needed to get ahead of it.

Before we go much further, it’s important to distinguish between “data-flow diagrams” and the “work-flow analysis” that is required as part of the business continuity planning process. They are completely separate endeavors designed to address two very different objectives. The BCP “work-flow analysis” is designed to identify the interdependencies between critical processes in order to determine the order in which those processes should be recovered. The “data-flow diagram” is designed to:

Supplement (management’s) understanding of information flow within and between network segments as well as across the institution’s perimeter to external parties.

It’s important to note here that what the examiner asked for actually wasn’t unreasonable; in fact, it appears word-for-word in the FFIEC Operations Handbook:

Common types of documentation maintained to represent the entity’s IT and business environments are network diagrams, data flow diagrams, business process flow diagrams, and business process narratives. Management should document and maintain accurate representations of the current IT and business environments and should employ processes to update representations after the implementation of significant changes.

And although this particular examiner quoted from the Operations Handbook, the term “data flow” (in combination with “maps”, “charts” and “analysis”) actually appears 15 times in 5 different Handbooks: Development and Acquisition, Information Security, Operations, Retail Payment Systems, and Wholesale Payment Systems.

So this concept is certainly not unheard of, but previously this “understanding of information flow” objective was achieved via a network topology map, or schematic.  Sufficiently detailed, a network schematic will identify all internal and external connectivity, types of data circuits and bandwidth, routers, switches and servers.  Some may even include workstations and printers.  In the past this diagram, in combination with a hardware and software inventory, was always sufficient to document management’s understanding of information flow to examiners.  But in this particular case the examiner highlighted (in bold) the following section of the guidance, and this was the part I found most troublesome:

Data flow diagrams should identify:

  • Data sets and subsets shared between systems;
  • Applications sharing data; and
  • Classification of data (public, private, confidential, or other) being transmitted.

…and…

Data flow diagrams are also useful for identifying the volume and type of data stored on various media.  In addition, the diagrams should identify and differentiate between data in electronic format, and in other media, such as hard copy or optical images.

Data classification?  Differentiation between electronic, printed and optical data?  This seems to go way beyond what the typical network schematic is designed to do, way beyond what examiners have asked for in the past, and possibly even beyond the ability of most institutions to produce, at least not without significant effort.  Of course the excuse of “unreasonable resource requirements” will usually not fly with examiners, so what is the proper response to a request of this nature?
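To appreciate the scale of what’s being asked, consider what a single entry in such an inventory would have to capture. Here is a minimal sketch in Python; the structure and every field name are my own invention, not anything prescribed by the FFIEC, and the sample flow is hypothetical:

    from dataclasses import dataclass

    # One edge in a data-flow inventory, per the guidance quoted above.
    # All field names are illustrative; the FFIEC prescribes no format.
    @dataclass
    class DataFlow:
        source_system: str        # application or system sending the data
        destination_system: str   # application or system receiving it
        data_sets: list           # data sets/subsets shared between the systems
        classification: str       # "public", "private", "confidential", or other
        medium: str               # "electronic", "hard copy", "optical", etc.
        est_volume: str           # volume of data stored or transmitted
        crosses_perimeter: bool   # does the flow leave the institution?

    # A single hypothetical flow, to show the level of detail implied:
    flow = DataFlow(
        source_system="core banking system",
        destination_system="statement rendering service",
        data_sets=["customer master", "account balances"],
        classification="confidential",
        medium="electronic",
        est_volume="~500 MB/month",
        crosses_perimeter=True,   # an external service provider
    )

Now multiply that by every pair of systems that share data, plus every paper and imaging process, and the ongoing maintenance burden becomes obvious.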

Fortunately there may be a loophole here, at least for serviced institutions, and it’s found in the “size and complexity” qualifier.  The guidance initially states:

Effective IT operations management requires knowledge and understanding of the institution’s IT environment.

This is the underlying requirement, and the core issue to be addressed.  The guidance then goes on to state that documentation of management’s “knowledge and understanding” should be “commensurate with the complexity of the institution’s technology operations”, and that depending on size and complexity, this may include “data-flow diagrams”.  So the examiner is effectively saying that, in this case, a “data-flow diagram” is the most appropriate way for the management of this outsourced institution to document adequate knowledge and understanding of its technology operations.  I suggested that the institution respectfully disagree, and state:

Our management believes, based on our size and complexity as a serviced institution, that an updated, detailed schematic and current hardware and software inventories adequately demonstrate sufficient knowledge and understanding of our technology operations.

This directly addresses the core issue, and I’m pretty sure the examiner will agree, but I’ll let you know.  In any case it’s worth pushing back on this, because of the potentially enormous resources it would take to comply with the request, both now and going forward.

Now here is the real question: should you require the same level of documentation (i.e., data classification and data-type differentiation) from your core vendor?  And if so, are you even likely to get it from them?

16 Feb 2012

FDIC changing annual IT report to Board?

Based on recent examination findings, it would appear that the FDIC is changing what it expects to see in the annual information security report to the Board of Directors.  The requirement for the report is established in the FFIEC Information Security Handbook, which states that a written report to the board should describe the overall status of the information security program, and that at a minimum the report should address:

  • The results of the risk assessment process
  • Risk management and control decisions
  • Service provider arrangements
  • Results of security monitoring and testing
  • Security breaches or violations, and management’s responses
  • Recommendations for changes to the information security program

However, in a recent examination an institution was written up because the FDIC did not believe the report contained enough detail.  The examiners stated that “Board reporting should be expanded and include detail at a minimum for the following areas:

  • The information security risk assessment
  • Service provider agreements
  • Results of testing, audits, examinations or other reviews of the program
  • Any security breaches, violations, or other incidents since the previous report and management responses
  • A summary of information security training provided to employees since the last report
  • Status of the patch management program
  • Status of the Business Continuity Plan and testing results
  • Customer Awareness Program efforts and plans
  • Any recommendations for changes to the information security program”

Note the changes between the original guidance and the examination finding: the new items are the training summary, the patch management status, the BCP status and testing results, and the Customer Awareness Program efforts.  I’m not surprised at the training findings, as I have previously identified both employee and customer training as likely 2012 trends.  Nor am I particularly surprised by the inclusion of the status of the BCP and testing results; this has been a requirement, and an area of increased regulatory focus, for a couple of years.  However, it would appear that examiners may now prefer the BCP status update to be a part of the information security report to the Board.

The inclusion of a patch management status report was a bit surprising, though, as in the past this was not reported separately but simply included as one of many risk management controls.  Perhaps they are looking for more control detail now?  (I plan to address patch management in a future post.)

I was also a bit baffled by the exclusion of “Risk management and control decisions” from the expanded list.  I had also identified the “Management” element as a probable area of increased regulatory scrutiny in 2012, so I’ll keep an eye on future examination findings to see whether this actually represents a shift in focus or simply an oversight by the examiners this time.  (Of course a third possibility is that the examiner felt the “risk management and control decisions” were present and properly documented, but given the other findings I doubt that was the case.)

08 Nov 2011

Access Rights a frequent finding

In reviewing recent audit and examination findings, the issue of access rights and permissions comes up with increasing regularity.  Making sure that end users have no more access rights than absolutely necessary to do their jobs is one of the best information security controls.  According to the FFIEC, formal access rights administration for users consists of four processes:

  • An enrollment process to add new users to the system;
  • An authorization process to add, delete, or modify authorized user access to operating systems, applications, directories, files, and specific types of information;
  • An authentication process to identify the user during subsequent activities; and
  • A monitoring process to oversee and manage the access rights granted to each user on the system.

One best practice for simplifying the management of access rights is to assign them via group membership based on the employee’s role.  Most financial institution employees are easily categorized by job duties (lending, deposit ops, senior management, IT, etc.).  Job duties logically translate to responsibilities, and network file and folder access flows from that.  When an employee’s job duties change, changing group membership is much easier than changing individual file and folder permissions.
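To see how little machinery the group-based approach actually requires, here is a minimal sketch; the roles and group names are hypothetical, so substitute your own:

    # A hypothetical role-to-groups mapping for group-based access rights.
    ROLE_GROUPS = {
        "lending":     ["GRP_Loan_Docs_RW", "GRP_Credit_Reports_RO"],
        "deposit_ops": ["GRP_Deposit_Ops_RW", "GRP_Signature_Cards_RO"],
        "it":          ["GRP_IT_Admin", "GRP_Helpdesk"],
    }

    def groups_for(role):
        """Return the network groups an employee should belong to, by role."""
        return set(ROLE_GROUPS.get(role, []))

    # A job change becomes a group-membership change, not dozens of
    # individual file and folder permission edits:
    old, new = groups_for("lending"), groups_for("deposit_ops")
    print("Remove from:", old - new)
    print("Add to:", new - old)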

One of the main reasons for audit and exam findings in this area is not necessarily that users and groups aren’t maintained, but that there is a disconnect between access rights on the Core system and the local network (Active Directory).  Unfortunately, until the Core providers implement Active Directory integration, keeping the two in sync is a manual, two-step process.
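Until that integration exists, the practical control is to reconcile the two user lists on a regular schedule. Here is a rough sketch of the idea, using made-up account names; in practice the inputs would be a user report from the Core system and an export from Active Directory:

    # Hypothetical exports: the set of active user IDs from each system.
    core_users = {"jsmith", "mbrown", "tlee"}    # from the Core user report
    ad_users   = {"jsmith", "mbrown", "kdavis"}  # from an AD export

    # Accounts in one system but not the other are exactly the "gaps"
    # that auditors and examiners keep finding:
    print("On Core but not in AD:", core_users - ad_users)
    print("In AD but not on Core:", ad_users - core_users)

Anything that shows up in either difference belongs on the committee’s agenda.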

The key to addressing the FFIEC guidance (and preventing rights gaps in the process) is to manage all four of the above steps at the same time in the IT Steering Committee (or its functional equivalent).  Each time the committee meets it should approve all access rights adds, changes and deletes, and review activity logs.  Periodically review the existing users (by group if possible) to validate the appropriateness of their (and the groups’) access rights, on a schedule commensurate with the risk.  Properly restricted regular users may only require semi-annual reviews, but privileged access (administrative) accounts should be reviewed for activity and re-approved at each committee meeting.
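To make that cadence concrete, here is one hypothetical way to flag accounts due for re-approval, assuming semi-annual reviews for regular users and review at every (quarterly, say) committee meeting for privileged accounts; the intervals and account data are illustrative only:

    from datetime import date, timedelta

    # Hypothetical review intervals, commensurate with risk:
    REVIEW_INTERVAL = {
        "regular":    timedelta(days=182),  # semi-annual
        "privileged": timedelta(days=91),   # every quarterly meeting
    }

    accounts = [  # illustrative data only
        {"user": "jsmith", "tier": "regular",    "last_reviewed": date(2011, 3, 1)},
        {"user": "admin1", "tier": "privileged", "last_reviewed": date(2011, 9, 1)},
    ]

    today = date(2011, 11, 8)
    due = [a["user"] for a in accounts
           if today - a["last_reviewed"] >= REVIEW_INTERVAL[a["tier"]]]
    print("Due for re-approval at this meeting:", due)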

06 Oct 2011

Material Loss Reviews: Does responsibility = liability?

I asked in my previous post whether the regulators should share any of the blame when institutions fail and, if so, whether they should shoulder any of the liability.  The thought occurred to me as I was reviewing some recent Material Loss Reviews.

A Material Loss Review (MLR) is a post-mortem written by the Office of Inspector General of the federal regulator with oversight responsibility whenever an institution fails and the loss to the deposit insurance fund is considered “material”.  (The threshold for determining whether the loss is material was recently increased by the Dodd-Frank Act from $25 million to $200 million, so we are likely to see fewer of these MLRs going forward.)  All MLRs have a similar structure: an executive summary up front, followed by a breakdown of capital and assets by type and concentration.  But there is also a section that analyzes the regulator’s supervision of the financial institution, and I noticed a recurring theme there:

  • …(regulator) failed to adequately assess or timely identify key risks…until it was too late.
  • …(regulator) did not timely communicate key risks…
  • …Regulator should have taken “a more conservative supervisory approach”, and used “forward-looking supervision”.
  • …examiners identified key weaknesses…but…they did not act on opportunities to take earlier and more forceful supervisory action.
  • …serious lapse in (regulator’s) supervision
  • …(regulator), in its supervision…did not identify problems with the thrift
  • …(regulator) should have acted more forcefully and sooner to address the unsafe and unsound practices
  • …(regulator) did not fully comply with supervisory guidance

There were also many references to the responsibilities of the Board, which I addressed in a previous post, but in almost every case the regulator was found at least partially responsible for the failure of the institution.

The reports are published by each regulator’s Office of Inspector General: the FDIC OIG, the Treasury OIG (covering the OCC and the OTS), the Federal Reserve Board OIG, and the NCUA OIG.

I encourage you to take a look at these and draw your own conclusions as to the issues of responsibility and liability.  But clearly there are lessons to learn from any failure, and one lesson I think we should all learn from this is that regulators will be pressured to be much more critical going forward (i.e., quicker to apply “Prompt Corrective Action”).  After all, no one likes to be called out for doing a bad job.

One other part I found interesting (in the sense that it perfectly fits the narrative) is where each review lists the examination CAMELS ratings from the periods immediately prior to the failure.  What struck me was how many institutions scored 1s and 2s, and then dropped immediately to 4s and 5s in a single examination cycle.  Again, the lesson is that there will be tremendous downward pressure on CAMELS scores.  And don’t think that just because you are healthy you’re immune from the additional scrutiny.  As one MLR stated, “…a forceful supervisory response is warranted, even in the presence of strong financial performance.”

08 Sep 2011

Exam preparation – less equals more?

One of the more surprising findings from my recent examination experience survey (thanks again to all who participated!) is that there doesn’t seem to be a direct relationship between the amount of time spent preparing and examination results. I’ll elaborate in a moment, but first, here are the final survey demographics:

  • There were 80 total respondents
  • FDIC was the most prominent regulator (80%), but institutions representing all the other PFRs (OTS, OCC, Federal Reserve and NCUA) also responded.
  • Institutions from 20 different states responded, resulting in a pretty good geographic cross-section.
  • The majority of respondents were under $500M in assets, but we also got a handful over $1B.
  • 25% were de novo (less than 5 years old).

So what we found was that most institutions spent quite a bit of time preparing for their last IT examination: 57% of you spent more than 5 hours.  Interestingly enough, that really didn’t translate into better results.  Although 73% of that group felt they were very prepared for the exam, less than half felt that the exam went pretty much as expected, and 9% described their last examination as a “nightmare”!  By contrast, only 5% of those who spent less than 5 hours preparing felt the same way.

But perhaps the most significant statistic is the average IT composite score (where lower is better).  Those who spent more than 5 hours preparing averaged 1.85, as opposed to 1.76 for those who spent less than 5 hours.  So is the conclusion that, as far as preparation goes, less equals more?  I think a better way to interpret the data is that it’s better to work smarter than harder.  Consider this: those of you who used an outside consultant to assist with the pre-examination questionnaire seemed to have a much more favorable experience overall.  90% of you felt that the examination experience was either not bad or pretty much as expected.  But more significantly, those who used outside help also got better IT composite scores, averaging 1.69 versus 1.82 for all respondents!