No Quick Fix: Streamlining reliability-centred maintenance
John Moubray wrote the book on RCM — literally.
His book entitled Reliability-Centred Maintenance (now in its second edition) lays out the essentials of this all-encompassing maintenance philosophy. In this article, he takes aim at the growing popularity of ‘streamlining’ the RCM process. There are no shortcuts to RCM, Moubray argues, and attempts to abridge the strategy to save time or money are fundamentally flawed. Not only does the author dismantle some of the leading methods of implementing a reduced version of RCM, he argues that doing so is fraught with legal, even ethical, peril. True RCM is designed to identify the absolute, safe minimum of what must be done to preserve the functions of physical assets.
The importance of this quality of RCM is manifest, argues Moubray. "Society is growing increasingly intolerant of industrial accidents and seeks to hold individuals, as well as corporations, to account," he writes. "Under these circumstances, everyone involved in the management of physical assets needs to take greater care than ever to ensure that every step they take in executing their official duties is beyond reproach. It is becoming professionally suicidal to do otherwise."
True RCM demands that all information and decisions made by maintenance managers be documented in such a way as to make the information and the decisions fully available to any third party. In the event of an accident, true RCM, if properly implemented, could be the maintenance managers’ best alibi in the face of legal scrutiny, even criminal charges. (Ed.)
Reliability-centred maintenance (RCM) is a process used to ensure that any physical asset or system continues to function properly. It was first used by the commercial aviation industry as early as the 1960s. Driven by the need to improve reliability while containing the cost of maintenance, the industry developed a comprehensive process for deciding what maintenance work was needed to keep aircraft airborne.
In 1978, the U.S. Department of Defense took notice of RCM after F. Stanley Nowlan and Howard Heap of United Airlines wrote a report for the Pentagon entitled Reliability-Centered Maintenance. It formed the basis of the maintenance strategy formulation process named for the Air Transport Association of America’s Maintenance Steering Group – 3 Task Force (MSG3). MSG3 is used to this day by the international commercial aviation industry. In the years following their landmark report, Nowlan and Heap’s RCM began to influence industries outside of aviation. Its growing popularity among maintenance professionals was based on its ability to identify the true, safe minimum of what must be done to preserve the functions of physical assets.
Today, RCM is used by thousands of organizations in nearly every major industrial field.
Throughout its rise to acceptance, RCM has spawned a series of derivatives. Some of these variations are refinements and enhancements of Nowlan and Heap’s original RCM process. However, less rigorous derivatives have also emerged, most of which are attempts to ‘streamline’ the maintenance strategy formulation process.
Towards a standard
Many of these abridged processes either omit key steps of the process described by Nowlan and Heap, or change their sequence, or both. Consequently, despite claims to the contrary made by the proponents of these processes, the output differs markedly from what would be obtained by conducting a full, rigorous RCM analysis.
A growing awareness of these differences led to demands for a standard that set out the criteria any process must comply with in order to be called RCM. The Society of Automotive Engineers (SAE) responded to these calls in 1999 with the formulation of an RCM standard.
Section 5 of the standard, JA1011 – Evaluation Criteria for Reliability-Centred Maintenance, summarizes the key attributes of any RCM process as follows:
"Any RCM process shall ensure that all of the following seven questions are answered satisfactorily and are answered in the sequence shown below:
a. What are the functions and associated desired standards of performance of the asset in its present operating context (functions)?
b. In what ways can it fail to fulfill its functions (functional failures)?
c. What causes each functional failure (failure modes)?
d. What happens when each failure occurs (failure effects)?
e. In what way does each failure matter (failure consequences)?
f. What should be done to predict or prevent each failure (proactive tasks and task intervals)?
g. What should be done if a suitable proactive task cannot be found (default actions)?
All information and decisions shall be documented in a way which makes the information and the decisions fully available to and acceptable to the owner or user of the asset."
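The standard's sequencing requirement can be sketched as a simple ordered checklist. This is a minimal illustration of the logic, not part of the standard itself; the step names and data structure are my own:

```python
# Sketch: the seven RCM questions as an ordered sequence of analysis steps.
# Step names are illustrative; SAE JA1011 defines the questions, not this code.
RCM_STEPS = [
    ("functions", "What are the functions and desired performance standards?"),
    ("functional_failures", "In what ways can it fail to fulfil its functions?"),
    ("failure_modes", "What causes each functional failure?"),
    ("failure_effects", "What happens when each failure occurs?"),
    ("failure_consequences", "In what way does each failure matter?"),
    ("proactive_tasks", "What should be done to predict or prevent each failure?"),
    ("default_actions", "What if no suitable proactive task can be found?"),
]

def is_valid_rcm_sequence(steps_performed):
    """A process qualifies only if ALL seven steps appear, in this order."""
    expected = [name for name, _ in RCM_STEPS]
    return steps_performed == expected

# A 'streamlined' process that skips function definition fails the check:
streamlined = ["failure_modes", "failure_effects", "failure_consequences",
               "proactive_tasks", "default_actions"]
```

Note that the check is all-or-nothing: omitting a step or reordering steps is enough to disqualify a process, which mirrors the 'any', 'all', and 'in the sequence shown' wording of Section 5.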
Subsequent sections of the standard list the issues that any true RCM process must address in order to answer each of the seven questions "satisfactorily". However, the key words in Section 5 of the standard are in the first sentence. They are: ‘any’, ‘all’ and ‘in the sequence shown below’. They mean that if a process does not answer all the questions in the sequence shown (and does not answer them satisfactorily in compliance with the rest of the standard), then that process is not RCM.
None of the streamlined "RCM" processes comply fully with the requirements of Section 5 of the SAE standard.
The reaction of society as a whole to equipment failures is an aspect of physical asset management that is changing rapidly as we move into the 21st century. Society increasingly demands transparency from industry, especially following high-profile equipment failures that lead to injuries or fatalities. Yet this trend has attracted surprisingly little comment within the maintenance community.
The changes began with sweeping legislation governing industrial safety, mainly in the 1970s. Among the best known examples of such legislation are the Occupational Safety and Health Act of 1970 in the United States and the Health and Safety at Work Act of 1974 in the United Kingdom. These Acts are fairly general in nature, and similar laws have been passed in nearly all the major industrialized countries. Their intent is to ensure that employers provide a generally safe working environment for their employees.
These Acts were followed by a series of more specific safety-oriented laws and regulations such as OSHA Regulation No. 1910.119: "Process Safety Management of Highly Hazardous Chemicals" in the United States and the "Control of Substances Hazardous to Health Regulations" in the United Kingdom. Both of these regulations were first enacted in the early to mid-1990s. They are noteworthy examples of a then-new requirement for the users of hazardous materials to perform formal analyses or assessments of the associated systems, and to document the analyses for subsequent inspection if necessary by regulators.
As a result, physical asset managers are subject to a steady increase in legal requirements to demonstrate responsible custodianship of the assets under their control. These laws place heavy burdens on the managers of the assets concerned. But they reflect the steadily rising expectations of society in terms of industrial safety, and industry has no choice but to comply as best it can.
The late 1990s have seen even more changes, this time concerning the sanctions that society now wishes to impose if things go wrong. Until the mid-90s, if a failure occurred whose consequences were serious enough to warrant criminal proceedings, these proceedings usually ended with a substantial fine imposed on the organization found to be at fault. Occasionally, the organization’s permit to operate was withdrawn, as in the case of the ValuJet airline after the crash in Florida on 11 May 1996, effectively putting the airline out of business.
Following recent industrial disasters, however, a movement is now developing not only to punish the organizations concerned, but also to impose criminal sanctions on individual managers. In other words, under certain circumstances, individual managers can be sent to prison in connection with equipment failures that result in injury or death.
In the wake of a fatal 1999 rail crash in the UK, British lawmakers introduced a new homicide designation: "corporate killing". Executives found guilty of such offenses can be imprisoned. In the U.S., following recent SUV accidents allegedly caused by faulty tires, national laws were amended to include prison sentences of up to 15 years for vehicle manufacturer executives presiding over companies that commit specified offenses in connection with vehicle failures that cause injuries or death.
There is considerable controversy about the reasonableness of these initiatives, and even some doubt about their ultimate enforceability. However, from the point of view of people involved in the management of physical assets, the issue is not what is reasonable, but that we are increasingly being held personally accountable for actions that we take on behalf of our employers. Not only that, but if we are called to account in the event of a serious incident, it will be in circumstances that could culminate in jail sentences.
Perhaps the most startling legislative developments of all were triggered by a deadly gas plant explosion in Longford, Australia. Following the disaster, legislators amended the criminal code in the case of industrial disasters to suspend attorney/client confidentiality for the purposes of the Longford, and subsequent, official inquiries.
Furthermore, the state governments of Victoria and Queensland are also considering legislation to deal with "industrial manslaughter" and "corporate culpability" respectively, as both governments believe that their current legislation does not deal adequately with industrial incidents causing death or serious injury. These proposed laws go further than the laws in the UK and the US, in that the concept of "aggregation of negligence" is introduced. This allows the aggregation of actions and omissions of a group of employees and managers to establish that an organization is negligent. Both governments have made it clear that if managers and/or a management system fails to prevent workplace death or serious injury, then the responsible manager and/or management team is likely to face criminal prosecution. If the legislation proceeds, penalties of over $500,000 and seven years imprisonment are proposed.
The message to us all is that society is intolerant of industrial accidents and seeks to hold individuals, as well as corporations, to account. It is prepared to alter well-established principles of jurisprudence to do so. Under these circumstances, everyone involved in the management of physical assets needs to take greater care than ever to ensure that every step they take in executing their official duties is beyond reproach. It is becoming professionally suicidal to do otherwise.
Associates and I have helped companies to apply true RCM on more than 1,200 sites spanning 41 countries and nearly every form of organized human endeavour. We’ve found that when true RCM has been correctly applied by well-trained individuals working on clearly defined and properly managed projects, the analyses have usually paid for themselves in between two weeks and two months. This is a very rapid payback indeed.
However, despite this rapid payback, some individuals and organizations have expended a great deal of energy on attempts to reduce the time and resources needed to apply the RCM process. The results of these attempts are generally known as ‘streamlined’ RCM techniques.
In all cases, the proponents of these techniques claim their principal advantage is that they achieve similar results to something which they call ‘classical’ RCM, but that they do so in much less time and at much lower cost. However, not only is this claim questionable, but all of the streamlined techniques have other drawbacks, some quite serious.
An article by C. Bookless and M. Sharkey published in the UK magazine Maintenance in 2000 described the process of ‘streamlining’ RCM in the British nuclear industry. The article signaled a growing trend towards implementing abbreviated versions of RCM. The most popular method of ‘streamlining’ RCM often starts not by defining the functions of the asset (as specified in the SAE Standard), but with the existing maintenance tasks. Users of this approach try to identify the failure mode that each task is supposed to be preventing, and then work forward again through the last three steps of the RCM decision process to re-examine the consequences of each failure. This supposedly helps to identify a more cost-effective failure management policy. K.S. Jacobs, in his 1997 presentation at the ASNE Fleet Maintenance Symposium in San Diego, CA, also described this as "backfit" RCM; others use the term "RCM in reverse".
Retroactive approaches are superficially very appealing, so much so that I tried them myself on numerous occasions when I was new to RCM. However, in reality they are also among the most dangerous of the streamlined methodologies, for the following reasons:
– They assume that existing maintenance programs cover just about all the failure modes that are reasonably likely to require some sort of preventive maintenance. In the case of every maintenance program that I have encountered to date, this assumption is simply not valid. If RCM is applied correctly, it transpires that nowhere near all of the failure modes that actually require PM are covered by existing maintenance tasks. As a result, a considerable number of tasks have to be added. Most of the tasks that are added apply to protective devices, as discussed below. (Other tasks are eliminated because they are found to be unnecessary, or the type of task is changed, or the frequency is changed. The net effect is usually a reduction in perceived overall PM workloads, typically by between 40 and 70 percent.)
– When applying retroactive RCM, it is often very difficult to identify exactly what failure cause motivated the selection of a particular task, so much so that either inordinate amounts of time are wasted trying to establish the real connection, or sweeping assumptions are made that very often prove to be wrong. These two problems alone make this approach an extremely shaky foundation upon which to build a maintenance program.
– In reassessing the consequences of each failure mode, it is still necessary to ask whether "the loss of function caused by the failure mode will become evident to the operating crew under normal circumstances". This question can only be answered by establishing what function is actually lost when the failure occurs. This in turn means that the people doing the analysis have to start identifying functions anyway, but they are now trying to do so on an ad hoc basis halfway through the analysis (and they are not usually trained in how to identify functions correctly in the first place because this approach considers the function identification step to be unnecessary). If they do not, they start making even more sweeping — and hence often incorrect — assumptions that add to the shakiness of the results.
– Retroactive approaches are particularly weak on specifying appropriate maintenance for protective devices. As I stated in my book entitled Reliability-Centred Maintenance: "…at the time of writing, many existing maintenance programs provide for fewer than one third of protective devices to receive any attention at all (and then usually at inappropriate intervals). The people who operate and maintain the plant covered by these programs are aware that another third of these devices exist but pay them no attention, while it is not unusual to find that nobody even knows that the final third exist. This lack of awareness and attention means that most of the protective devices in industry — our last line of protection when things go wrong — are maintained poorly or not at all."
So if one uses a retroactive approach to RCM, in most cases a great many protective devices will continue to receive no attention in the future because no tasks were specified for them in the past. Given the magnitude of the risks associated with unmaintained protective devices, this weakness of retroactive RCM alone makes it completely indefensible. Some variants of this approach address this problem by specifying that protective systems should be analyzed separately, often outside the RCM framework. This gives rise to the absurd situation that two analytical processes have to be applied in order to compensate for the deficiencies created by attempts to streamline one of them.
– More so than any of the other streamlined versions of RCM, retroactive approaches focus on maintenance workload reduction rather than plant performance improvement (which is the primary goal of function-oriented true RCM). Since the returns generated by using RCM purely as a tool to reduce maintenance costs are usually lower than the returns generated by using it to improve reliability, the use of the ostensibly cheaper retroactive approach becomes self-defeating on economic grounds, in that it virtually guarantees much lower returns than true RCM.
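The core structural flaw of the retroactive approach can be made concrete with a toy example. The task and failure-mode names below are invented for illustration; the point is the direction of the analysis, not the data:

```python
# Sketch: why 'retroactive' RCM misses failure modes. Illustrative data only.
# True RCM works forward from functions, so every reasonably likely failure
# mode gets examined; retroactive RCM works backward from existing tasks, so
# any failure mode with no existing task is never seen at all.
existing_tasks = {
    "check coupling alignment": "coupling misaligned",
    "replace bearing annually": "bearing wears out",
}
all_failure_modes = {
    "coupling misaligned",
    "bearing wears out",
    "relief valve stuck shut",   # protective device: no existing task
    "level switch fails",        # protective device: no existing task
}

# Retroactive RCM only ever examines what the old program already covered:
covered = set(existing_tasks.values())
missed_by_retroactive = all_failure_modes - covered
# The missed set is exactly the unmaintained protective devices that the
# article identifies as the last line of protection when things go wrong.
```

Because the backward pass starts from the task list, the blind spot is invisible from inside the analysis: nothing in `existing_tasks` can ever point at the failure modes that were never given a task.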
Use of generic analyses
A fairly widely-used shortcut in the application of RCM entails applying an analysis performed on one system to technically identical systems. In fact, one or two organizations even sell such generic analyses, on the grounds that it is cheaper to buy an analysis that has already been performed by someone else than it is to perform your own. The following paragraphs explain why generic analyses should be treated with great caution:
– Operating context: In reality, technically identical systems often require completely different maintenance programs if the operating context is different. For example, consider three pumps A, B and C that are technically identical (same make, model, drives, pipework, valvegear, switchgear, and pumping the same liquid against the same head). The generic mind-set suggests that a maintenance program developed for one pump should apply to the other two.
However, pump A stands alone, so if it fails, operations will be affected sooner or later. As a result the users and/or maintainers of Pump A are likely to make some effort to anticipate or prevent its failure. (How hard they try will be governed both by the effect on operations and by the severity and frequency of the failures of the pump.)
However, if pump B fails, the operators simply switch to pump C, so the only consequence of the failure of pump B is that it must be repaired. As a result, it is likely that the operators of B would at least consider letting it run to failure (especially if the failure of B does not cause significant secondary damage.) On the other hand, if pump C fails while pump B is still working (for instance if someone cannibalizes a part from C), it is likely that the operators will not even know that C has failed unless or until B also fails. To guard against this possibility, a sensible maintenance strategy might be to run C from time to time to find out whether it has failed. This example shows how three identical assets can have three totally different maintenance policies because the operating context is different in each case. In the case of the pumps, a generic program would only have specified one policy for all three pumps.
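The pump example can be sketched as a small decision rule. The policy names and the two context flags are my own simplification of the reasoning above, not a prescribed RCM procedure:

```python
# Sketch: how operating context changes the failure-management policy for
# three technically identical pumps. Policy wording is illustrative.
def pump_policy(has_standby, is_standby):
    if is_standby:
        # Failure is hidden: nobody notices until the duty pump also fails,
        # so run the pump from time to time to find out whether it has failed.
        return "failure-finding (run periodically to detect hidden failure)"
    if has_standby:
        # Failure only means a repair; running to failure may be acceptable
        # if it causes no significant secondary damage.
        return "consider run-to-failure"
    # Stand-alone pump: failure affects operations, so try to anticipate it.
    return "proactive maintenance (predict or prevent failure)"

policies = {
    "A": pump_policy(has_standby=False, is_standby=False),  # stands alone
    "B": pump_policy(has_standby=True,  is_standby=False),  # duty, C on standby
    "C": pump_policy(has_standby=False, is_standby=True),   # standby for B
}
```

Three identical inputs on the technical side, three different policies on the context side: a generic analysis collapses all three into one.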
Apart from redundancy, many other factors affect the operating context and hence affect the maintenance programs that could be applied to technically identical assets. These include whether the asset is part of a peak load or base load operation, cyclic fluctuations in market demand and/or raw material supplies, the availability of spares, quality and other performance standards that apply to the asset, the skills of the operators and maintainers, and so on.
– Maintenance tasks: Different organizations — or even different parts of the same organization — seldom employ people with identical skill-sets. This means that people working on one asset may prefer to use one type of proactive technology (say high-tech condition monitoring), while another group working on an identical asset may be more comfortable using another (say a combination of performance monitoring and the human senses). It is surprising how often this difference does not matter, as long as the techniques chosen are cost-effective. In fact, many maintenance organizations are starting to realize that there is often more to be gained from ensuring that the people doing the work are comfortable with what they are doing than from compelling everyone to do the same thing. (The validity of different tasks is also affected by the operating context of each asset. For instance, think how background noise levels affect checks for noise.) Because generic analyses necessarily incorporate a "one size fits all" approach to maintenance tasks, they do not cater to these differences and hence have a significantly reduced chance of acceptance by the people who have to do the tasks.
These two points mean that special care must be taken to ensure that the operating context, functions and desired standards of performance, failure modes, failure consequences and the skills of the operators and maintainers are all effectively identical before applying a maintenance policy designed for one asset to another. They also mean that an RCM analysis performed on one system should never be applied to another without any further thought just because the two systems happen to be technically identical.
Use of generic lists of failure modes
Generic lists of failure modes are lists of failure modes — or sometimes entire failure mode effect analyses (FMEA) — prepared by third parties. They may cover entire systems, but more often cover individual assets or even single components. These ‘generic’ lists are touted as another method of speeding up or ‘streamlining’ this part of the maintenance program development process. In fact, they should also be approached with great caution, for all the reasons discussed in the previous section of this paper, and for the following additional reasons:
– The level of analysis may be inappropriate: It is possible to ‘drill down’ almost any number of levels when seeking to identify failure modes (or causes of failure). The point at which this process should stop is the level at which it is possible to identify an appropriate failure management policy, and this can vary enormously depending on the operating context of the system. In other words, when establishing causes of failure for technically identical assets, it may be appropriate in one context to ask why it fails once, and in another it may be necessary to ask why it has failed seven or eight times. However, if a generic list is used, this decision will already have been made in advance of the RCM analysis. For instance, all the failure modes in the generic list may have been identified as a result of asking why four or five times, when all that may be needed is level 1. This means that far from streamlining the process, the generic list would condemn the user to analyzing far more failure modes than necessary. Conversely, the generic list may focus on level 3 or 4 in a situation where some of the failure modes really ought to be analyzed at level 5 or 6. This would result in an analysis that is too superficial and possibly dangerous.
– The operating context may be different: The operating context of your asset may have features which make it susceptible to failure modes that do not appear in the generic list. Conversely, some of the modes in the generic list might be extremely improbable (if not impossible) in your context.
– Performance standards may differ: Assets may operate to standards of performance that differ so much that the entire definition of failure may be completely different from that used to develop the generic FMEA.
These three points mean that if a generic list of failure modes is used at all, it should only ever be used to supplement a context-specific FMEA, and never used on its own as a definitive list.
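The level-of-analysis point above lends itself to a worked illustration. The causal chain below is hypothetical; the principle it demonstrates is that the right stopping level is the one at which a failure management policy can be identified, and that level depends on context rather than on a pre-built list:

```python
# Sketch: drilling down through levels of causation for one failure mode.
# The chain is invented for illustration; each step answers one more "why?".
why_chain = [
    "pump fails to deliver water",          # level 0: functional failure
    "impeller jammed",                      # level 1
    "foreign object entered pump",          # level 2
    "suction strainer missing",             # level 3
    "strainer not refitted after overhaul", # level 4
]

def failure_mode_at(chain, level):
    """Return the cause at the requested level, capped at the chain depth."""
    return chain[min(level, len(chain) - 1)]
```

In one operating context, level 1 ("impeller jammed") may be enough to select a sensible task; in another, the analysis must reach level 4 to address the procedural cause. A generic list fixes the level in advance for every user, which is precisely the decision the standard leaves to the analysis.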
Skipping elements of the process
Another common way in which the RCM process is "streamlined" is by skipping various elements of the process altogether. The step most often omitted is the definition of functions. Proponents of this methodology start immediately by listing the failure modes that might affect each asset, rather than by defining the functions of the asset under consideration. They do so either because they claim that, especially in the case of a "non-safety-critical" plant, identifying functions does not contribute enough relative to the amount of time it takes (see M Dixey and J Gallimore’s 2000 article in Maintenance Vol. 15 No. 1 entitled "Fast-tracking RCM — Getting results from RCM"), or because they simply appear not to be aware that defining all the functions and the associated desired standards of performance of the assets under review is an integral part of the RCM process (See S.D. Mundy’s article in Vol. 7 No. 3 edition of Reliability entitled "Completing the Reliability-Centred Maintenance loop at a new process facility").
In fact, it is generally accepted by all the proponents of true RCM that in terms of improved plant performance, by far the greatest benefits of true RCM flow from the extent to which the function definition step transforms general levels of understanding of how the equipment is supposed to work. So cutting out this step costs far more in terms of benefits foregone than it saves in reduced analysis time.
From a purely technical point of view, the identification of functions and associated desired performance also makes it far easier to identify the surprisingly common situations (failure modes) where the asset is simply incapable of doing what the user wants it to do, and therefore fails too soon or too often. For this reason, eliminating the function definition step further reduces the power of the process.
The comments in the earlier discussion on retroactive approaches also apply here.
Analyze only "critical" functions or "critical" failures
The SAE Standard stipulates inter alia that a true RCM analysis should define all functions, and that all reasonably likely failure modes should be subjected to the formal consequence evaluation and task selection steps. The shortcuts embodied in some of the streamlined RCM processes try to analyze ‘critical’ functions only, or to subject only ‘critical’ failure modes to detailed analysis. These approaches have two main flaws, as follows:
– The process of dismissing functions and/or failure modes as being ‘non-critical’ necessarily entails making assumptions about what a more detailed analysis might reveal. In the personal experience of the author, such assumptions are frequently wrong. It is surprising how often apparently innocuous functions or failure modes are found on closer examination to embody elements that are highly critical in terms of safety and/or environmental integrity. As a result, the practice of prematurely dismissing functions or failure modes results in much riskier analyses, but because the analysis is incomplete, no-one knows where or what these risks are.
– Many of the streamlined processes that adopt this approach incorporate elaborate additional steps designed to ‘help’ identify what functions and/or failure modes are critical or non-critical. In a great many cases, applying these additional steps takes longer and costs more than it would take to conduct a rigorous analysis of every function and every reasonably likely failure mode using true RCM, yet the output is considerably less robust.
Analyze only "critical" equipment
An approach to maintenance strategy formulation that is often presented as a ‘streamlined’ form of RCM suggests that the RCM process should be applied to ‘critical’ equipment only. This issue does not fall within the scope of the SAE standard, because the standard does not deal with the selection of equipment for analysis. It defines RCM as a process that can be applied to any asset, and it assumes that decisions about what equipment is to be analyzed and about system boundaries have already been made when the time comes to apply the RCM process defined in the SAE standard. There were two reasons why the equipment selection process was omitted from the standard:
– Different industries use widely differing criteria to judge what is ‘critical’. For instance, the ability of assets to produce products within given quality limits is a major issue in manufacturing operations, and hence features prominently in assessments of criticality. However, this issue barely figures at all with respect to equipment used by military undertakings. This means that there is an equally wide range of techniques used to assess criticality — so wide that it is impossible to encompass this issue in one universal standard.
– There is a growing school of thought (with which I have some sympathy) that there is no such thing as an item of plant — at least in an industrial context — that is ‘non-critical’ or ‘non-significant’ to the extent that it does not justify analysis using RCM. Two of the main reasons for believing that systems or items of plant should not be dismissed as ‘non-critical’ prior to rigorous analysis are exactly the same as the reasons given above for not dismissing functions and failure modes in the same way. (In fact, many organizations that choose to start with a formal, across-the-board equipment criticality assessment seem to spend as much time deciding what assessment methodology they will use and then applying it as they would have spent using true RCM to analyze all the equipment in their facility.)
There is a great deal more that could be said both in favour of and against the idea of using equipment criticality assessments as a means of deciding whether to perform rigorous analyses using techniques such as RCM. Since criticality assessment techniques are not an integral part of the RCM process, such a discussion is beyond the scope of this article. It is incorrect to present such techniques as streamlined forms of RCM because they do not form part of the RCM process as defined by the SAE standard.
In nearly all cases, the proponents of the streamlined approaches to RCM claim that these approaches can produce much the same results as true RCM in about a half to a third of the time. However, the above discussion indicates that not only do they not produce the same results as true RCM, but that they contain logical or procedural flaws which increase risk to an extent that overwhelms any small advantage they might offer in reduced application costs. It also transpires that many of these ‘streamlined’ techniques actually take longer and cost more to apply than true RCM, so even this small advantage is lost. As a result, the business case for applying streamlined RCM is suspect at best.
However, a rather more serious point needs to be kept in mind when considering these techniques. The very word ‘streamline’ suggests that something is being omitted, and this article indicates that this is indeed so for the streamlined techniques described. In other words, there is to a greater or lesser extent a degree of sub-optimization embodied in all of these techniques.
Leaving things out inevitably increases risk. More specifically, it increases the probability that an unanticipated failure, possibly one with very serious consequences, could occur. If this does happen, managers of the organization involved are increasingly likely to find themselves called personally to account. In the worst case, they will have to explain, often in an emotionally charged courtroom, why they deliberately chose a sub-optimal decision-making process to establish their asset management strategies, rather than using one which complies fully with a standard set by an internationally recognized standards-setting organization.
One rationale often advanced for using the streamlined methods is that it is better to do something than to do nothing. However, this rationale misses the point that all the analytical processes described above, streamlined or otherwise, require users to document the analyses. This generates a clear audit trail showing all the key information and decisions underlying the asset management strategy, in most cases where none has existed before. If a sub-optimal approach is used to formulate these strategies, the existence of written records makes every shortcut much clearer to any investigators.
A further rationale for streamlining says something like "we have been using this approach for a few years now and we haven’t had any accidents, so it must be all right." This rationale betrays a complete misunderstanding of the basic principles of risk. Specifically, no analytical methodology can completely eliminate risk. However, the difference between using a more rigorous methodology and a less rigorous methodology may be the difference between a probability of a catastrophic event of one in a million versus one in ten thousand. In both cases, the event may happen next year or it may not happen for thousands of years, but in the second case, it is a hundred times more likely. If such an event were to happen, the user of true RCM would be able to claim that he or she exercised prudent, responsible custodianship by applying a rigorous process that complies with an internationally recognized standard, and as such would be in a highly defensible position. Under the same circumstances, the user of streamlined RCM is on much, much shakier ground.
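The arithmetic behind this comparison can be made concrete. The probability figures are the illustrative numbers from the paragraph above, not measured data:

```python
# Sketch: annual probabilities of a catastrophic event under a rigorous
# versus a less rigorous methodology (illustrative figures from the text).
p_rigorous = 1e-6     # one in a million per year
p_streamlined = 1e-4  # one in ten thousand per year

# The less rigorous process leaves the event a hundred times more likely:
ratio = p_streamlined / p_rigorous  # 100

def prob_at_least_one_event(p_annual, years):
    """Probability of at least one event over a given number of years."""
    return 1 - (1 - p_annual) ** years

# Over a hypothetical 30-year plant life the gap compounds:
risk_rigorous = prob_at_least_one_event(p_rigorous, 30)        # ~0.00003
risk_streamlined = prob_at_least_one_event(p_streamlined, 30)  # ~0.003
```

In both cases the event may happen next year or not for thousands of years; the point is that the absence of an accident so far says almost nothing about which probability the organization is actually living with.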
Author’s Note: When discussing streamlined RCM it’s worth asking what exactly it is that is being streamlined. Nearly all the advocates of streamlined processes compare their offerings to something they call ‘classical’ RCM. However, closer study of what they mean by ‘classical’ RCM reveals that it is often a monstrously complicated process or collection of processes that bears little or no resemblance to RCM as defined in the SAE standard. In these cases, it is hardly surprising that streamlined RCM is cheaper and quicker than these so-called ‘classical’ fantasies. In reality, if true RCM is applied as explained earlier, it is nearly always quicker and cheaper than the streamlined versions, in addition to being far more defensible and producing far greater returns.
John Moubray is the president and founder of Aladon LLC, a provider of RCM training, consulting and software. He can be reached at 828-277-2780 or at firstname.lastname@example.org.