A little over 50 years ago, reacting to the Soviet lead in the space race, President Kennedy decided to rally the nation with the moonshot. He justified his decision to the American public by stating, “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.”1
While landing a man on the moon did not in itself yield many tangible benefits aside from a psychological win for the American public, the moonshot brought an unprecedented advance in science and technology, as well as economic growth. Many of the technologies we consider indispensable today can be traced back to the moonshot, including solid-state electronics, satellite communications, and GPS, as well as most medical monitoring, telemedicine, and imaging technologies.
Today, the clinical engineering (CE) community faces a similar challenge. After decades of effort spent dispelling the initial false alarm over electric shock hazards and developing rational methods to improve equipment maintenance at lower cost, CE professionals attained a maintenance-related failure rate far below the Six Sigma quality level sought by world-class manufacturing companies.2 Yet in 2011 the Centers for Medicare & Medicaid Services (CMS) released a new set of maintenance requirements with little, if any, rationale, demanding blind adherence to manufacturers’ recommendations.3
As a token of recognition of the CE community’s achievements, and after intense lobbying, CMS agreed in 2013 to allow us to adopt an alternate equipment management (AEM) program for certain equipment (except lasers, imaging, and “new” equipment) if the hospital can provide evidence that it is “safe and effective.”4 Ironically, no such requirement exists for equipment manufacturers. While the Food and Drug Administration (FDA) does require manufacturers to prove their products are safe and effective before marketing, it does not require that their maintenance recommendations be proven “safe and effective.” FDA only requires that “[w]here servicing is a specified requirement, each manufacturer shall establish and maintain instructions and procedures for performing and verifying that the servicing meets the specified requirements.”5
Since the publication of the CMS mandates and the subsequent revision of standards by its accreditation organizations, including The Joint Commission (TJC), many organizations have adapted their “risk-based criteria” to fit the new requirements, redefining risk to conform to the terms “critical” (or “high-risk,” per TJC). In doing so, some CE departments have even continued to exclude “low-risk” equipment from their maintenance inventory, in spite of the explicit requirement from CMS to include all equipment.4 Few have thought about how to collect evidence to prove that their AEM programs are safe and effective. Two data-based methodologies have been developed that can prove alternative maintenance strategies are as safe and effective as those recommended by manufacturers. They are reviewed below.
Reliability-Centered Maintenance
Well before the publication of the first CMS mandate in 2011, some CE professionals investigated data-based methodologies that could prove certain maintenance strategies are as safe and effective as those recommended by manufacturers or the ones they traditionally adopted. One such methodology is reliability-centered maintenance (RCM), which has proven to be very effective in industries such as aviation and energy generation.6 RCM is based on the premise that it is possible to determine the failure modes of each piece of equipment and then find ways to eliminate or reduce the associated risks.7
Unfortunately, unlike aircraft and generator manufacturers, medical equipment producers are often unwilling to provide detailed information about their products, alleging that confidentiality is needed to protect their intellectual property. Since most medical equipment nowadays incorporates embedded software, there is no possibility of determining failure modes without access to the software code. Even if the manufacturers did provide failure modes, RCM would still pose two major challenges.
First, the large variety of brands and models of medical equipment makes implementation very difficult, since each piece of equipment is likely to have dozens of failure modes. More challenging yet, RCM requires modifying equipment to reduce or eliminate failure modes when maintenance changes alone are not enough. However, modification of medical devices is strictly prohibited by FDA regulations.8 So even if CE professionals could find ways to implement RCM, legally only the manufacturers can make design changes. In essence, while successful elsewhere, RCM is not practical for CE professionals.
Introducing EBM
Another methodology that was investigated is evidence-based maintenance (EBM), which I have defined elsewhere as “a continual improvement process that analyzes the effectiveness of maintenance resources deployed in comparison to outcomes achieved previously or elsewhere, and makes necessary adjustments to maintenance planning and implementation.”9 The main difference between RCM and EBM is that the latter does not require any knowledge of failure modes. Each piece of medical equipment is treated as if it were a “black box,” and the objective is to determine whether different maintenance strategies (inputs) would change the number of maintenance-related equipment failures found during repairs and scheduled maintenance (outputs).10-15
Under EBM, a maintenance strategy is considered better than another if it produces fewer maintenance-related failures, analogous to randomized clinical trials of drugs. EBM differs from classical “risk-based criteria” in that it analyzes maintenance outcomes after the implementation of a maintenance plan and then uses the results of that analysis to improve the original plan. This is actually the original intent of the method proposed by Fennigkoh and Smith.16 However, most CE professionals focused their efforts on the initial scheduled maintenance (SM) planning and ignored the review of failures found during SM (and repairs), thus missing the opportunity to improve their SM plans.
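To make this comparison concrete, here is a minimal sketch of how two maintenance strategies might be compared by the share of service records that reveal maintenance-related failures. The failure counts, strategy labels, and test choice are hypothetical illustrations, not data or methods taken from the cited studies.

```python
# Minimal sketch (hypothetical data): compare two maintenance strategies by the
# proportion of service records showing maintenance-related failures.
from math import sqrt, erf

def two_proportion_test(fail_a, n_a, fail_b, n_b):
    """Return the z statistic and two-sided p-value for the difference in proportions."""
    p_a, p_b = fail_a / n_a, fail_b / n_b
    p_pool = (fail_a + fail_b) / (n_a + n_b)               # pooled rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal approximation
    return z, p_value

# Strategy A: OEM-recommended SM; strategy B: reduced-frequency AEM (made-up counts).
z, p = two_proportion_test(fail_a=6, n_a=400, fail_b=5, n_b=380)
print(f"z = {z:.2f}, p = {p:.2f}")  # a large p-value means no detectable difference in outcomes
```

A comparison of this kind is what allows an alternate strategy to be defended with observed outcomes rather than assumptions.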
Some EBM critics have said that EBM is nothing but the well-known engineering concept of “negative feedback amplifier” patented by Harold Black.17 This is absolutely correct. However, the choice of the term “evidence-based” is important because it helps our clinical colleagues understand that we are using the same scientific principles they are using in evidence-based medicine. The only difference is that our “patients” are pieces of medical equipment. It is not enough to have an excellent plan (for example, using risk-based criteria), as previously unknown factors and unforeseen changes may make the plan entirely useless or at least not as effective as it could be. The feedback concept allows well-founded adjustments of the initial plan.
Implementation Challenges
Last year, participants in a roundtable organized by AAMI, including several CE experts, a manufacturer representative, and George Mills, director of TJC’s Department of Engineering, agreed that EBM is the best approach for establishing proper equipment SM.18 Likewise, the AAMI Medical Equipment Management Committee stated in the ANSI/AAMI EQ89:2015 standard19 that “[t]he implementation of evidence-based maintenance may lead to appreciable reductions in labor and parts costs without compromising the equipment safety and availability.”
Despite these advantages, few CE professionals have implemented EBM on a broad scale, apparently due to some presumed challenges.
One of the challenges often mentioned in discussions of EBM is the large variety of failures found during repairs and SM, making analyses difficult and not very conclusive. This happens because most failure classifications are based on the parts replaced or corrective actions taken. For example, one study found that the most frequent failure involves the printed circuit board (PCB) and the corresponding action is PCB replacement.20 While this result could be helpful to the respective equipment manufacturers (to revise the design of or change components on the PCB), this kind of analysis is not conducive to maintenance strategy improvement. PCB failures are unpredictable and preventive PCB replacement is costly. Instead, EBM studies13-15 have shown that it is possible to focus on a small, limited set of failure causes that can be used to distinguish maintenance-related failures from those caused by normal wear and tear, abuse or accidents, batteries, accessories, or random unpreventable failures (such as PCB failures). By analyzing failure causes and limiting them to a small number, it is possible to focus on maintenance-related failures, discern patterns, and thus identify possible solutions.
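As an illustration of how a small failure-cause vocabulary keeps the analysis manageable, the sketch below tags each service record with one of a handful of cause codes and computes the share that is maintenance-related. The code names and records are assumptions for illustration, not the code set used in the cited studies.

```python
# Minimal sketch (hypothetical code set): tally failure causes and isolate those
# attributable to the maintenance program.
from collections import Counter

# True marks causes considered maintenance-related; False marks causes SM cannot prevent.
MAINTENANCE_RELATED = {
    "hidden_failure": True,         # found only during SM; invisible to users
    "potential_failure": True,      # deterioration that SM should catch before failure
    "service_induced": True,        # failure introduced by earlier service work
    "wear_and_tear": False,
    "abuse_or_accident": False,
    "battery": False,
    "accessory": False,
    "random_unpreventable": False,  # e.g., spontaneous PCB failure
}

def maintenance_related_share(records):
    """records: list of (device_id, cause_code). Return the related share and per-cause counts."""
    counts = Counter(code for _, code in records)
    related = sum(n for code, n in counts.items() if MAINTENANCE_RELATED.get(code, False))
    return related / len(records), counts

share, counts = maintenance_related_share(
    [("PUMP-01", "battery"), ("PUMP-02", "hidden_failure"), ("PUMP-03", "wear_and_tear")]
)
print(f"maintenance-related share: {share:.0%}", dict(counts))
```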
Another challenge commonly raised against EBM implementation is the difficulty of obtaining data with suitable quality and accuracy.20, 21 It is indeed difficult to train CE staff and ensure that they are correctly and consistently using failure-cause codes (FCCs). Prior experience12-15 has shown that it is necessary not only to provide good training, but also to show CE staff that the FCCs they assign are actually analyzed and used to help them reduce, if not eliminate, meaningless tasks, thus allowing them to spend more time on tasks (such as user assistance, equipment planning, and purchasing) that have more impact on patient safety and equipment reliability. Furthermore, it has been found necessary for CE leadership to periodically review the FCCs assigned by staff to detect misuse and misunderstanding. With a little practice, it is easy to discern patterns such as repeated use of the same FCC on all service records or the assignment of repair FCCs to SM work orders.
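The leadership review described above lends itself to partial automation. Here is a minimal sketch that flags the two misuse patterns just mentioned: a technician assigning the same FCC to nearly every record, and repair-only FCCs appearing on SM work orders. The field names, codes, and threshold are assumptions for illustration.

```python
# Minimal sketch (hypothetical fields and threshold): flag suspicious FCC usage for review.
from collections import Counter, defaultdict

REPAIR_ONLY_FCCS = {"service_induced", "abuse_or_accident"}  # assumed invalid on SM work orders

def review_fcc_usage(work_orders, dominance=0.9, min_records=10):
    """work_orders: list of dicts with 'tech', 'type' ('repair' or 'sm'), and 'fcc'."""
    flags = []
    by_tech = defaultdict(list)
    for wo in work_orders:
        by_tech[wo["tech"]].append(wo["fcc"])
    # Pattern 1: one FCC used on almost all of a technician's records.
    for tech, codes in by_tech.items():
        code, count = Counter(codes).most_common(1)[0]
        if len(codes) >= min_records and count / len(codes) >= dominance:
            flags.append(f"{tech}: '{code}' assigned on {count} of {len(codes)} records")
    # Pattern 2: repair-only FCCs assigned to SM work orders.
    for wo in work_orders:
        if wo["type"] == "sm" and wo["fcc"] in REPAIR_ONLY_FCCS:
            flags.append(f"SM work order by {wo['tech']} carries repair FCC '{wo['fcc']}'")
    return flags
```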
The third common challenge raised against EBM implementation is the need to collect enough data to perform statistically meaningful analyses. Since most medical equipment is repaired about once a year and receives one SM per year,21 each device generates on average two service records annually. If a hospital has 100 pieces of equipment of the same brand and model that are used in a similar manner, it is not difficult to accumulate enough data in a single year for EBM analyses. However, if the hospital has only a dozen units of a given model, gathering enough data would take several years. This challenge can only be addressed if several hospitals agree to use the same set of FCCs and share the data they collect. Alternatively, these smaller hospitals may have to adopt strategies developed by large hospital systems or independent service organizations22 and verify the applicability of those strategies using the maintenance outcomes achieved.
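The arithmetic behind “enough data” can be sketched with a simple confidence-interval calculation. The fleet sizes and the two-records-per-device-year figure follow the text above; the assumed 5% maintenance-related failure share is purely illustrative.

```python
# Minimal sketch: width of a 95% confidence interval on the maintenance-related
# failure share for different fleet sizes, assuming ~2 service records per device-year.
from math import sqrt

def ci_half_width(p, n, z=1.96):
    """Normal-approximation half-width of a 95% confidence interval for a proportion."""
    return z * sqrt(p * (1 - p) / n)

p = 0.05  # assumed maintenance-related failure share (illustrative only)
for devices, years in [(100, 1), (12, 1), (12, 5)]:
    n = devices * 2 * years
    print(f"{devices} devices x {years} yr: n={n}, estimate = {p:.2f} +/- {ci_half_width(p, n):.3f}")
```

With only a dozen devices in a single year, the interval is wider than the assumed rate itself, which is why pooling data across hospitals, or borrowing strategies verified elsewhere, becomes necessary.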
There are other alleged impediments to EBM implementation. For example, ANSI/AAMI EQ89:201519 states that “[e]vidence-based maintenance relies not only on the observed history of failures, but the theoretical probability of certain types of failures. This second type of analysis is typically beyond the capabilities of most service organizations.” This language is contradictory, as EBM by definition cannot rely on theoretical values. ANSI/AAMI’s interpretation would reverse the tenet that EBM is based on evidence, which must be factual rather than theoretical. Consequently, no skill in theoretical probability analysis is required; only elementary statistical proficiency is needed for EBM analyses.
Finally, another allegation often raised is the uniqueness of each hospital. According to this reasoning, it is not possible to use the EBM results obtained elsewhere20 or even to adopt the same methodology. While there are, and always will be, differences among hospitals, clinical users, and service staff, they share more commonalities than differences, and those commonalities can shed light on the best approaches to keeping equipment safe and reliable. Similar arguments were made against benchmarking for over two decades until about 2006,23 when three entities began offering CE benchmarking services. They proved that commonality does exist and that CE performance can be compared among hospitals.
It is not surprising that CE professionals have not immediately and widely accepted EBM. Many scientific theories and concepts took years to be embraced by the scientific community. Our clinical colleagues also took some time to abandon their traditional methods and enthusiastically embrace evidence-based medicine. How can we now justify to them, and to the patients who depend on the equipment we manage, that we don’t want to adopt EBM because it is too difficult and time-consuming to implement?
If we, as a profession, do not face head-on the challenge posed by CMS, we will miss our greatest opportunity to prove to society, to both the healthcare and medical equipment industries, and—above all—to ourselves that we are worthy managers of medical technology and deserve respect. This is our opportunity to have our moonshot. If we do not seize it, we will forever regret it.
Binseng Wang, ScD, CCE, fAIMBE, fACCE, is vice president, Quality & Regulatory Affairs with Greenwood Marketing LLC. The views expressed in this article are solely those of the author. For more information, contact chief editor Jenny Lower at [email protected].
References and Footnotes
1. Kennedy JF. Address at Rice University on the Nation’s Space Effort. September 12, 1962.
2. Wang B, Rui T, Balar S. An estimate of patient incidents caused by medical equipment maintenance omissions. Biomed Instrum Technol. 2013;47:84-91.
3. Centers for Medicare & Medicaid Services. Clarification of Hospital Equipment Maintenance Requirements. Survey and Certification Letter 12-07-Hospital, issued December 2, 2011. Available at: https://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/SurveyCertificationGenInfo/downloads/SCLetter12_07.pdf. Accessed January 29, 2016.
4. Centers for Medicare & Medicaid Services. Hospital Equipment Maintenance Requirements. Survey and Certification Letter 14-07-Hospital, issued December 20, 2013. Available at: http://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/SurveyCertificationGenInfo/Downloads/Survey-and-Cert-Letter-14-07.pdf. Accessed January 29, 2016.
5. 21 CFR 820.200, Servicing. Available at: https://www.gpo.gov/fdsys/granule/CFR-2011-title21-vol8/CFR-2011-title21-vol8-sec820-200. Accessed January 29, 2016.
6. Nowlan FS, Heap H. Reliability-Centered Maintenance. Report AD-A066579. Washington, DC: Department of Defense; 1978.
7. Moubray J. Reliability-Centered Maintenance. 2nd ed. New York, NY: Industrial Press; 1997.
8. 21 USC §351, Adulterated Drugs and Devices. See the explanation of “adulteration” at http://www.fda.gov/medicaldevices/deviceregulationandguidance/overview/generalandspecialcontrols/ucm055910.htm#adulteration. Accessed January 29, 2016.
9. Fedele J, Wang B. Evidence-based maintenance: comparison of OEM versus hospital-developed maintenance procedures and schedules. Presented at: MD Expo; April 3, 2013; Washington, DC.
10. Wang B. Evidence-based maintenance? 24×7 Magazine. April 2007:56.
11. Ridgway M, Atles LR, Subhan A. Reducing equipment downtime: a new line of attack. J Clin Eng. 2009;34:200-204.
12. Wang B, Fedele J, Pridgen B, Rui T, Barnett L, Granade C, Helfrich R, Stephenson B, Lesueur D, Huffman T, Wakefield JR, Hertzler LW, Poplin B. Evidence-based maintenance: part I, measuring maintenance effectiveness with failure codes. J Clin Eng. 2010;35:132-144.
13. Wang B, Fedele J, Pridgen B, Rui T, Barnett L, Granade C, Helfrich R, Stephenson B, Lesueur D, Huffman T, Wakefield JR, Hertzler LW, Poplin B. Evidence-based maintenance: part II, comparing maintenance strategies using failure codes. J Clin Eng. 2010;35:223-230.
14. Wang B, Fedele J, Pridgen B, Rui T, Barnett L, Granade C, Helfrich R, Stephenson B, Lesueur D, Huffman T, Wakefield JR, Hertzler LW, Poplin B. Evidence-based maintenance: part III, enhancing patient safety using failure code analysis. J Clin Eng. 2011;36:72-84.
15. Wang B, Rui T, Koslosky J, Fedele J, Balar S, Hertzler LW, Poplin B. Evidence-based maintenance: part IV, comparison of scheduled inspection procedures. J Clin Eng. 2013;38:116-.
16. Fennigkoh L, Smith B. Clinical equipment management. JCAHO PTSM Series. 1989;2:5-14.
17. Brittain JE. Scanning the past: Harold S. Black and the negative feedback amplifier. Proc IEEE. 1997;85:1335-1336.
18. A roundtable discussion: getting to the heart of the PM debate. Biomed Instrum Technol. 2015;49:108-119.
19. ANSI/AAMI EQ89:2015, Guidance for the Use of Medical Equipment Maintenance Strategies and Procedures. Arlington, VA: AAMI; 2015.
20. Collins JT. Work Histories in a Medical Equipment Management Program: An Analysis of Parts Replaced. Chicago, IL: American Society for Healthcare Engineering; 2008.
21. Wang B, Eliason RW, Richards SM, Hertzler LW, Koenigshof S. Clinical engineering benchmarking: an analysis of American acute care hospitals. J Clin Eng. 2008;33:24-37.
22. The use of data from other organizations is allowed by CMS in S&C 14-07, which refers to “…maintenance history…available publicly from nationally recognized sources.”
23. Maddock KE. (Benchmarking) Glass is half full. Biomed Instrum Technol. 2006;40:328.
I hate to use my son’s post-millennialesque verbiage, but this was an epic article.
Thank you for scaling this mountain of a topic; it definitely requires someone with your breadth of knowledge and scope of expertise to explain it in its entirety.
While some Biomed (CE) departments still fully document all numbers as part of the PM process, during the mid-2000s the idea of documenting pass/fail (“documentation by exception”) started gaining traction. With the CMS-driven evolution of OEM recommendations overriding traditional risk rankings, and now the availability of AEM, I got to thinking:
Combining those items, how can any department that currently follows the pass/fail method of PM documentation have any data to justify any position, let alone decreasing PM frequencies under AEM?
A reply to that question could be, “If an adjustment is necessary, we fail the PM and open a corrective work order.” In this instance a “fail” PM has been documented, and any quantitative data is left to the corrective work order. Is the (hypothetical) Biomed department’s policy specific about what data is recorded in the corrective? Does it include starting as well as ending values? Does the corrective simply state something similar to “Adjusted to OEM specifications and returned to service”? Is it possible to create trending data based on actual recorded numbers from work orders? If there is data, how much data is “enough”?
A department that fully documents quantitative PM data would be able to trend a change and reasonably predict when adjustments will be required. A department that doesn’t document quantitative data at all would not even be able to make an educated guess.
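(To illustrate the trending idea: a minimal sketch, with made-up readings and a made-up specification limit, of how recorded PM values could be extrapolated to estimate when a parameter will drift out of spec.)

```python
# Minimal sketch (made-up data): extrapolate recorded PM readings with a linear trend
# to estimate when a measured parameter will cross its specification limit.
def years_until_out_of_spec(readings, spec_limit):
    """readings: list of (years_since_baseline, measured_value) pairs."""
    n = len(readings)
    mean_t = sum(t for t, _ in readings) / n
    mean_v = sum(v for _, v in readings) / n
    slope = (sum((t - mean_t) * (v - mean_v) for t, v in readings)
             / sum((t - mean_t) ** 2 for t, _ in readings))
    if slope == 0:
        return None                                    # no drift detected
    return mean_t + (spec_limit - mean_v) / slope      # time when the trend line hits the limit

# Hypothetical annual readings of a delivered-volume error (%) against a 5% limit.
print(years_until_out_of_spec([(0, 1.0), (1, 1.8), (2, 2.5), (3, 3.4)], spec_limit=5.0))
```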
Without data, how does AEM justification work?
Dear Alan:
If you had the opportunity to read the published articles cited or attend one of the presentations that Jim Fedele and I made at the MD Expo in the last three years, you would have seen how the safety and effectiveness of AEM can be evaluated and proven without reversing the “exception reporting” rule allowed by NFPA 99-1999. If you provide me with your email address, I will be happy to send you the slide deck.
In short, you only need to establish a small list of “failure cause codes” (FCCs) to characterize the type of failure cause (in our case, we used only 12 codes, some for repairs and others for scheduled maintenance, or SM). For safety evaluation, you need to find the failures related to maintenance omission (i.e., hidden failures, preventable and predictable failures, potential failures, and service-induced failures) that resulted in serious injury or death. If your incidence of maintenance omission with respect to service performed (repairs and SM) is comparable to or lower than the incidence obtained by OEMs or other service organizations, then you have proven that your AEM has not caused safety issues.
For effectiveness evaluation, you focus again on those four FCCs and look for the equipment groups with the highest counts of them. For each of those equipment groups, you determine whether the underlying cause is an “active failure” committed by individuals or a “latent condition” created by your AEM (e.g., a change of SM frequency or procedure). If you find any failures due to SM changes, then your AEM needs to be revised. Otherwise, you may need to retrain, or even discipline, those individuals who are making mistakes due to ignorance or memory lapses.
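If it helps to see those two evaluations in concrete form, here is a minimal sketch with hypothetical field names, a hypothetical benchmark rate, and made-up record structures; it is not the code used in the cited studies.

```python
# Minimal sketch (hypothetical data and benchmark): safety and effectiveness checks for an AEM program.
from collections import Counter

OMISSION_FCCS = {"hidden_failure", "preventable_predictable", "potential_failure", "service_induced"}

def safety_check(records, benchmark_rate):
    """records: dicts with 'fcc' and 'severe_harm'. Compare omission incidence against a benchmark."""
    omissions = [r for r in records if r["fcc"] in OMISSION_FCCS]
    rate = len(omissions) / len(records)
    return {
        "omission_rate": rate,
        "severe_harm_events": sum(1 for r in omissions if r["severe_harm"]),
        "within_benchmark": rate <= benchmark_rate,
    }

def effectiveness_targets(records, top_n=3):
    """Rank equipment groups by their count of omission-related FCCs for root-cause review."""
    counts = Counter(r["group"] for r in records if r["fcc"] in OMISSION_FCCS)
    return counts.most_common(top_n)  # review each: active failure vs latent condition (e.g., SM change)
```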
Indeed, you need data not only to prove that your AEM is safe and effective, but also to determine how to improve your AEM (and even improve on the OEM recommendations). However, going back to recording every reading is unjustified; the old days of monitoring drifts and adjusting trimpots are long gone since the advent of digital electronics.
Hope this is helpful.