Ever since the publications of the Fennigkoh and Smith model for equipment classification1 and ECRI Institute’s risk classification criteria,2 the clinical engineering (CE) community has been indoctrinated to believe that there is a strong relationship between equipment “risk” and its need for maintenance, especially so-called “preventive” maintenance (PM). This misconception has, unfortunately, persisted so long that the director of engineering at The Joint Commission (TJC), George Mills, felt compelled to issue this public warning at the recent annual conference of the Association for the Advancement of Medical Instrumentation: “Just because [a device is] high-risk doesn’t automatically mean it’s high PM.”3
Actually, both classification proposals noted above were aimed at demonstrating that not all equipment needs the same scheduled maintenance. At the time they were issued, hospitals were required by TJC to perform semiannual checks on all AC-powered equipment, regardless of function or risk.
The root cause of this misconception seems to be confusion between the concepts of risk and severity. The International Organization for Standardization defines risk as “[t]he combination of the probability of occurrence of harm and the severity of that harm.” Severity, in turn, is defined as “a measure of the possible harmful consequences that a hazard could potentially cause.”4 The relationship between these two notions can be represented by the equation below:
risk = probability × severity [of harm] (1)
In other words, the fact that a particular piece of equipment could cause severe harm to a patient does not mean it also poses high risk if its failure probability is low. Conversely, if a second piece of equipment would cause little harm to a patient but its failure probability is high, that device could pose an equal or higher risk than the first piece of equipment.
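The comparison above can be illustrated with a short sketch of equation (1). The device labels, failure probabilities, and severity scores below are invented for illustration only; they are not published figures:

```python
# Illustrative sketch of equation (1): risk = probability × severity.
# All numbers here are hypothetical, chosen only to show the comparison.

def risk(probability, severity):
    """Equation (1): risk as the product of failure probability and harm severity."""
    return probability * severity

# Device A: could cause severe harm (severity 5) but rarely fails (1% per year).
risk_a = risk(0.01, 5)

# Device B: would cause only minor harm (severity 1) but fails often (10% per year).
risk_b = risk(0.10, 1)

# Despite its lower severity, Device B poses the higher risk.
print(risk_a, risk_b, risk_b > risk_a)
```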
Using this equation, it is easy to understand why nuclear reactors and commercial airliners present lower risk to the average person than motor vehicles do. The probability of harm is much lower for the first two technologies due to strict controls imposed by the government and implemented by those industries. In contrast, accident investigation statistics5 show that driver-related issues make the probability of traffic accidents much higher, despite the existence of driver licensing requirements and traffic laws.
Because the severity of harm cannot be manipulated easily (eg, a ventilator failure will likely lead to patient injury or death in a matter of minutes), the only recourse is to decrease the probability of device failure. And since it is impossible to build a perfect defense mechanism, the best approach is to erect a series of barriers. While each protective measure might be inherently imperfect, taken together they provide an acceptably low probability. This method can be visualized by looking at the famous Swiss-cheese model proposed by James Reason, PhD.6
Alternatively, one can appreciate the redundancy principle by rewriting equation (1) as:
risk = (Πᵢ probabilityᵢ) × severity [of harm] (2)
In this equation, the single-barrier probability from equation (1) has been replaced by Πᵢ probabilityᵢ, the product of the failure probabilities of multiple barriers. For example, two sequential barriers with an effectiveness level of 99% each (allowing only 1% of errors to get through) will provide a combined effectiveness of 99.99% (allowing only 0.01% of errors to get through).
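The barrier arithmetic of equation (2) can be sketched in a few lines, assuming (as in the Swiss-cheese model) that barrier failures are independent; the 99% figures are the ones from the example above:

```python
# Sketch of the Π term in equation (2): the chance of an error passing
# every barrier is the product of the individual pass-through probabilities,
# assuming the barriers fail independently of one another.
from math import prod

def combined_failure_probability(barrier_failure_probs):
    """Probability that an error slips through all barriers in the list."""
    return prod(barrier_failure_probs)

# Two sequential barriers, each 99% effective (each lets 1% of errors through):
p = combined_failure_probability([0.01, 0.01])
print(f"{p:.4%} of errors get through; combined effectiveness {1 - p:.2%}")
```

Adding a third 99% barrier would shrink the pass-through probability by another factor of 100, which is the redundancy principle in numerical form.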
Since 2013, TJC has adopted the international definition of risk and expanded it to include two other criteria: 1) proximity to patient, and 2) number of patients at risk.7 The first item highlights the need to consider the connection between the equipment (and other care resources) and the patient. (Due to the accelerated networking of hospitals and deployment of telehealth, the concept of proximity should not be interpreted rigidly as physical distance. Patients can be operated on remotely by surgeons on another continent, as well as diagnosed and monitored from almost anywhere.) Equipment that applies energy to or introduces substances into a patient could have a more immediate negative impact than those pieces that are more removed from the patient. The second criterion, also known as “mission criticality,”8 serves to emphasize the importance of major diagnostic systems (eg, automated clinical laboratory analyzer systems and imaging systems) that produce essential data for physicians to determine proper care and need for changes. Failure of such systems could jeopardize dozens of patients within a short period of time.
In light of the distinction between risk and severity, CE professionals should now be able to clearly detach maintenance needs from equipment risk classification (called “critical” by the Centers for Medicare & Medicaid Services9). While there is no question that numerous pieces of high-risk equipment exist in any hospital, not every one of them requires frequent, detailed scheduled maintenance. Physiological monitors deployed in intensive care units must be classified as high-risk equipment. However, having been built with solid-state electronics, they are very dependable. More importantly, no scheduled maintenance can prevent failures. On the other hand, low-risk equipment based on mechanical, pneumatic, and chemical components often needs periodic lubrication, carbon dust removal, belt and sensor replacement, removal of chemical deposits, and other procedures. Good examples of this type of equipment include almost-obsolete x-ray film processors, hydrocollators, and continuous passive motion machines.
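The point that risk classification and maintenance need are independent axes can be sketched in a few lines. The device names follow the article’s own examples, but the category labels are illustrative assumptions, not an actual inventory scheme:

```python
# Illustrative sketch: equipment risk class and scheduled-maintenance need
# are independent attributes. Labels are assumptions based on the article's
# examples, not a real hospital inventory.
devices = {
    "ICU physiological monitor": {"risk": "high", "scheduled_maintenance": "low"},
    "X-ray film processor":      {"risk": "low",  "scheduled_maintenance": "high"},
    "Hydrocollator":             {"risk": "low",  "scheduled_maintenance": "high"},
}

# High risk does not imply high maintenance:
high_risk_low_pm = [name for name, attrs in devices.items()
                    if attrs["risk"] == "high"
                    and attrs["scheduled_maintenance"] == "low"]
print(high_risk_low_pm)  # ['ICU physiological monitor']
```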
The combination of ever-increasing deployment of technology in healthcare, pressure to reduce costs, and higher patient demand all conspire to force CE professionals to be very judicious in how they spend their limited resources, particularly their time and attention, on equipment maintenance and management. Sentinel event statistics accumulated by TJC10 and other patient safety organizations have shown that most patient incidents related to medical equipment are primarily caused by human-factors engineering deficiencies and inadequate training of clinical users, instead of maintenance omissions.11
Therefore, it is unwise to waste time performing unnecessary maintenance on high-risk equipment simply because of its risk classification. Instead, more attention should be given to assisting the users in improving the planning, selection, and use of medical equipment. This approach would not only reduce risks but also expenses.
Binseng Wang is vice president, Quality & Regulatory Affairs with Sundance Enterprises. For more information, contact chief editor Jenny Lower at email@example.com.
1. Fennigkoh L & Smith B. Clinical equipment management. JCAHO PTSM Series, 2:5-14, 1989
2. ECRI Institute (formerly ECRI). Types of services, their advantages and disadvantages, in: Special Report on Service Contracts. Health Technology, 3:9-21, 1989
3. 24×7 Magazine. Joint Commission’s Mills to AAMI: Use Common Sense for Alternative Equipment Maintenance, June 9, 2015. Available at https://24x7mag.com/2015/06/joint-commission-mills-common-sense-alternative-equipment-maintenance-still-top-issue-for-biomeds/?ref=cl-title. Accessed Aug. 22, 2015.
4. International Organization for Standardization. ISO 14971:2007 Medical devices — Application of risk management to medical devices, Geneva, Switzerland, 2007
5. National Highway Traffic Safety Administration. National Motor Vehicle Crash Causation Survey: Report to Congress. Available at http://www-nrd.nhtsa.dot.gov/Pubs/811059.pdf. Accessed Aug. 22, 2015.
6. Reason J. Managing the Risks of Organizational Accidents. Ashgate Publishing Ltd, Surrey, UK, 1997.
7. Maurer J. The Healthcare Environment Update. Presentation made at the MD Expo, Washington DC, April 3, 2013.
8. Wang B & Levenson A. Equipment inclusion criteria: a new interpretation of JCAHO’s medical equipment management standard. J Clin Eng., 25:26–35, 2000
9. Centers for Medicare & Medicaid Services, Memorandum S&C 14-07-Hospital, issued on December 20, 2013, available at https://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/SurveyCertificationGenInfo/Downloads/Survey-and-Cert-Letter-14-07.pdf. Accessed Oct. 9, 2015
10. The Joint Commission. Sentinel Event Data – Root Causes by Event Type. Available at http://www.jointcommission.org/sentinel_event.aspx. Accessed Aug. 22, 2015.
11. Wang B, Rui T & Balar S. An estimate of patient incidents caused by medical equipment maintenance omissions, Biomed Instrum & Techn., 47:84-91, 2013
Thank you for a concise overview of this issue. Sadly, the design of medical technology maintenance programs lags far behind programs in other domains, many of which have similar potentials for harm. Believing that healthcare is unique, we have not learned from our colleagues in other fields. It’s long past time to apply these principles in our work.
Agreed re: high risk does not necessarily imply high maintenance. However, I would like to point out that there is a competing school of thought regarding the risk = severity × probability model. A short version of the rationale for the newer model is that probability cannot be estimated sufficiently well for increasingly complex systems. One example of an argument for the newer model, as well as where PRA still holds, can be found at http://sunnyday.mit.edu/papers/Making-Safety-Decisions.pdf
As always, Binseng provides great insight to issues facing the biomedical/clinical engineering community. In light of what was said, how does the rationale from CMS of inventorying and safety inspecting relocatable power taps on a regular basis play into this logic? In my 35 years as a biomed, I have never encountered an issue with these devices as long as you use a quality product and don’t use them for critical equipment.
I am not privy to the evidence or rationale used by CMS to mandate the inventory and inspection of RPTs. Unfortunately, I no longer have access to data that may prove or disprove this mandate.
On the other hand, I must admit that I am appalled that after collecting several decades of maintenance data, so few CE departments have bothered to look back and try to use their own data to estimate probabilities of failures caused by various factors (e.g., human-factors issues, unpreventable failure, preventable and predictable failure, failures due to intense use or abuse, etc.). In 2012, when CMS questioned our collective pushback on blindly following OEM maintenance recommendations, very few CE departments or service organizations were able to provide data to prove that patient harm caused by maintenance omissions was extremely rare (> 6 sigma). Without data, it was difficult for us to counter political pressure and the myth that higher maintenance => higher safety.
As was noted, risk = severity × probability is well known but lacks scientific rigor. There is no basis for multiplying these variables, and the scales used for each have a great effect on the outcome. With equal scales there is false symmetry: a 2×3 is the same as a 3×2, but this has no actual justification. Similarly, is a 4×2 really twice as bad as a 2×2? What does it mean to be twice as bad? It is also the case that you don’t have to use the equation very much in order to know what result you are going to get, and then, if so inclined, manipulate the inputs to get the desired output.
Other models have been used, and the cybersecurity world currently suggests several different multi-factorial “equations” with addition, multiplication and weighting factors. I have also seen one factor raised to the power of the other–which has no more basis than multiplication.
A “formula” does have some value, if you remember its limited and arbitrary basis.
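The false symmetry and scale sensitivity raised in this comment can be shown numerically. All scores and the severity re-mapping below are arbitrary illustrations, not a proposed scoring scheme:

```python
# Sketch of the comment's critique: multiplying ordinal scores yields false
# symmetry, and re-mapping a scale can reverse rankings. Numbers are arbitrary.

def risk_score(severity, probability):
    # Equation (1) applied to ordinal scores rather than measured quantities.
    return severity * probability

# False symmetry: severity 2 / probability 3 scores the same as
# severity 3 / probability 2, though these are very different hazards.
symmetric = risk_score(2, 3) == risk_score(3, 2)

# Scale dependence: stretch a 1-4 severity scale nonlinearly onto 1-10.
remap = {1: 1, 2: 3, 3: 6, 4: 10}  # arbitrary, assumed stretch

hazard_a = (4, 1)  # catastrophic but rare:   score 4 on the original scale
hazard_b = (2, 3)  # moderate but frequent:   score 6 on the original scale

b_ranks_higher_before = risk_score(*hazard_a) < risk_score(*hazard_b)   # 4 < 6
b_ranks_higher_after = risk_score(remap[4], 1) < risk_score(remap[2], 3)  # 10 < 9

# The ranking reverses under the re-mapped scale.
print(symmetric, b_ranks_higher_before, b_ranks_higher_after)
```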
It is probably my fault (most likely due to my poor Brazilian-Chinese English) that Professor Hyman is missing the big picture that I was trying to sketch, i.e., that high risk is not caused solely by high severity and, furthermore, that high risk does not mean high maintenance.
As I quoted, ISO 14971 defines risk as “[t]he combination of the probability of occurrence of harm and the severity of that harm.” Instead of proceeding with the nebulous concept of “combination,” I chose to use multiplication to call the reader’s attention to the fact that risk is not determined solely by severity but also by probability. Furthermore, by introducing the product of probabilities in equation (2), I hoped to help readers visualize more easily Professor Reason’s Swiss-cheese model. In essence, CE professionals must not focus only on their own slice of Swiss cheese (i.e., equipment maintenance) but also help strengthen other slices (e.g., better equipment selection, improved operator proficiency, reduced false alarms and alarm fatigue, etc.) in order to reduce risks and improve care.
There are definitely many ways to compute risk using different ways to combine probability and severity, as well as to estimate probability using both quantitative and qualitative approaches. However, unless you are actually performing risk assessment, it is probably not necessary to understand those intricate details to grasp the big picture I tried to sketch.
My question is, how do we look at this in terms of “due diligence”? Regardless of how we may use statistics and calculations to analyze the question of frequency and tasks, or whether to PM or not to PM, where are the “real” legal, moral, and ethical boundaries? IMHO, anything a caregiver uses that impacts the care of patients should be seen by a biomed at least once a year, more so if the device/system by its nature requires it. Also, in today’s technology environment, a group of individual components by themselves may risk-rank as “no PM required,” but the system the devices create in concert may well have a completely different and much more acute risk ranking, which calls for the “larger system” to be considered for a so-called “PM.”
What are your thoughts on this?
Interconnected systems must be analyzed in the same way as individual pieces of equipment, just like individual pieces of equipment are required by risk management standards to be evaluated by their respective manufacturers down to the level of sub-assemblies and even components.
I wish we had the resources (time, money, energy, etc.) to do everything we wish to do in life. The essence of being an engineering professional is, IMHO, to find reasonable compromises that allow me to “sleep peacefully at night” knowing that I have done what is right with the resources at my disposal. Until proven, all opinions and theories are just nice thoughts. For example, Professor Peter Higgs had to wait almost 50 years to earn the Nobel Prize in Physics, until the Large Hadron Collider was made available to prove the existence of the boson (aka the “God particle”) he predicted in 1964. Without data, we will never attain Evidence-Based Maintenance and will keep arguing based solely on beliefs.