
Auditing Your RCA Program

Posted by Jessica Peel on Thu, Aug 27, 2015 @ 03:08 AM

Author: Kevin Stewart

I recently wrote an article about auditing root cause analysis (RCA) investigations, and it only seemed appropriate to follow up with advice on auditing your overall RCA program. Let’s go back to the dictionary definition of “audit”: a methodical examination and review. In my mind, this definition has two parts: 1) the methodical examination and 2) the review.

It might help to compare this process to a medical examination. There, the doctor examines the patient, looking for anything he can find, either good or bad. This would include blood work, reflex tests, blood pressure, and so on. After that examination, he reviews his findings against some standard to determine whether any action should be taken. Auditing an RCA program is no different: first we must examine it and find out as much as we can about it, then we must review it against some standard or measure.

In my other article I discussed at length the measures against which an RCA investigation can be judged. Those still apply, and the quality of individual RCA investigations can and should be one of the program audit items.

Now we are faced with determining the characteristics of a good program. A list of characteristics is given below:

  • Quality of RCA investigations
  • Trigger levels are set and adhered to
  • Triggers are reviewed on a regular basis and adjusted as required to drive improvement
  • A program champion has been designated, trained and is functioning
  • Upper management has been trained and provides involved sponsorship of the program
  • Middle management has been trained and provides involved sponsorship of the program
  • The floor employees have been trained and are involved in the process
  • The solutions are implemented and tracked for completion 
  • RCA effectiveness is tracked by looking for repeat incidents
  • Dedicated investigators / facilitators are in place 
    • Investigators are qualified and certified on an ongoing basis
  • All program characteristics are reviewed / defined / agreed to by management, including:
    • An audit system that is defined, funded, and adhered to
    • Resource requirements
    • Triggers
    • Training requirements that are in place and funded
    • Sponsorship statements and support
  • The RCA program is incorporated into the onboarding and continuous review training for new and existing employees

The next step in developing an audit is to generate the set of items your program will be gauged against. This list can come from the items above, your own list, or a combination of the two. Once you have a final list, you need a ratings scale. This can be simple pass/fail, or a scale that rates each item from 0 to 5, which lets you give partial credit for items that don’t quite meet the full standard. You can also apply a weighting scale if appropriate, giving some items more importance in the scoring based on the priorities or culture of your facility. The scale can be anything you wish, but be cautious about making it too large. Can you really tell the difference between a 7 and an 8 on a 10-point scale? Perhaps a 1 to 4 scale would be better.
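To make the arithmetic concrete, here is a minimal sketch in Python of how a weighted 0 to 5 scheme could be tallied. The items, weights, and scores are illustrative assumptions, not a prescribed standard; an unweighted audit is simply the special case where every weight is 1.

    # A minimal sketch of weighted audit scoring. The items, weights, and
    # scores below are illustrative placeholders -- use your own list.
    items = {
        # item: (weight, score on the 0-5 scale)
        "Quality of RCA investigations":     (3, 4),
        "Trigger levels set and adhered to": (2, 3),
        "Solutions implemented and tracked": (2, 5),
        "Upper management sponsorship":      (1, 2),
    }

    earned = sum(weight * score for weight, score in items.values())
    maximum = sum(weight * 5 for weight, _ in items.values())
    print(f"Program score: {earned}/{maximum} ({100 * earned / maximum:.0f}%)")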

Next, develop a score sheet with each item listed and a place to put a score for each one. It’s handy to add guidelines to each item to give the reviewer a gauge on how to score it. A sample set of guidelines might look like this:

0    Does not exist
1    Some are in place but not correct
2    Many are in place and some are correct
3    All are in place but only some are correct
4    All are in place and most are correct
5    All are in place and correct

Don’t forget to leave a space for notes from the reviewer to explain the reasons for partial credit. Keeping these guidelines in place, either next to each item or easily available as a reference, helps ensure consistency in the scoring, especially if multiple people will be scoring your RCA program.
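As a sketch of what such a score sheet might look like in code, the structure below pairs each item with its score and the reviewer’s notes; the guideline text mirrors the sample scale above, and the entries are hypothetical.

    # One possible score-sheet structure: each entry pairs an audit item
    # with a 0-5 score and the reviewer's notes explaining partial credit.
    GUIDELINES = {
        0: "Does not exist",
        1: "Some are in place but not correct",
        2: "Many are in place and some are correct",
        3: "All are in place but only some are correct",
        4: "All are in place and most are correct",
        5: "All are in place and correct",
    }

    score_sheet = [
        # (item, score, reviewer notes) -- hypothetical entries
        ("Program champion designated and trained", 4,
         "Champion is active, but refresher training is overdue"),
        ("Triggers reviewed on a regular basis", 2,
         "Reviews happen, but no adjustments are documented"),
    ]

    for item, score, notes in score_sheet:
        print(f"{item}: {score} ({GUIDELINES[score]})")
        print(f"  Notes: {notes}")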

The goal of a standardized audit process is that several different people could independently review and score a program and come up with essentially the same score. This may seem like a simple thing, but it turns out to be the biggest challenge, because everyone interprets the questions slightly differently. There are several things you can do to minimize discrepancies (a simple consistency check is sketched after this list):

  1. Provide the scoring guidelines described above to every auditor.
  2. Require the auditors to be trained and certified by the same process / people, then have each of them perform a sample audit and check it against the standard. Review and adjust any discrepancies until you are sure they will apply the same thinking to the real audit.
  3. Always ensure that if multiple auditors are used in a program review, at least one has significant experience to provide continuity. In other words, don’t allow an audit to be done with all first-time auditors.
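One way to verify that auditors really do apply the same thinking is to compare their independent scores item by item and flag large spreads for a calibration discussion. Below is a minimal sketch, assuming each auditor’s scores are keyed by item name; the names, scores, and one-point tolerance are all illustrative.

    # Sketch of an inter-auditor consistency check: flag any item whose
    # independent scores spread by more than one point on the 0-5 scale.
    auditor_scores = {
        "Auditor A": {"Quality of RCA investigations": 4, "Triggers reviewed": 2},
        "Auditor B": {"Quality of RCA investigations": 4, "Triggers reviewed": 4},
        "Auditor C": {"Quality of RCA investigations": 3, "Triggers reviewed": 2},
    }

    items = next(iter(auditor_scores.values())).keys()
    for item in items:
        scores = [sheet[item] for sheet in auditor_scores.values()]
        spread = max(scores) - min(scores)
        if spread > 1:  # tolerance is a judgment call; tighten as auditors calibrate
            print(f"Calibrate: {item!r} scored {scores}, spread of {spread}")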

With these measures in place, all you have to do is review the RCA program against your list, score it, and set some minimum score for passing. Likewise, you’ll want some sort of findings report where the auditor can provide improvement opportunities against the individual items instead of simply saying “did not pass.”
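The pass/fail decision and the findings report can come from the same score sheet: compare the total against your minimum, and record an improvement opportunity for each low-scoring item rather than a bare “did not pass.” The thresholds below are placeholders to be set by your own program.

    # Placeholder thresholds -- set these to whatever your program agrees on.
    PASSING_PERCENT = 70   # minimum overall percentage to pass
    FINDING_CUTOFF = 3     # items scoring below this get an improvement finding

    score_sheet = [
        ("Quality of RCA investigations", 4),
        ("Triggers reviewed on a regular basis", 2),
        ("Solutions implemented and tracked", 5),
    ]

    percent = 100 * sum(score for _, score in score_sheet) / (5 * len(score_sheet))
    verdict = "PASS" if percent >= PASSING_PERCENT else "FAIL"
    print(f"Overall: {percent:.0f}% -- {verdict}")

    for item, score in score_sheet:
        if score < FINDING_CUTOFF:
            print(f"Improvement opportunity: {item} (scored {score})")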

These measures ensure that the program is gauged against a consistent standard and that the audit can be repeated by multiple auditors. There will always be differences when multiple people audit an RCA program, but the steps above minimize those differences and give the audit the highest level of credibility.
