By William A. Hyman, ScD
Repair activities (service on demand) can be disruptive to both the provision of clinical care and the operation of the HTM department. From a clinical perspective, such activities result in the device being out of service from the time it malfunctions until the time the repair is completed. From an HTM standpoint, diverting resources from scheduled activities to repair services can also be problematic.
Moreover, repair can be costly in terms of parts and service time—and it can point to further problems that need to be addressed, such as use error and equipment degradation. When repair rates are higher than they should be, these effects are amplified. Repair rates can be expressed as “mean time between failures” (MTBF) or as “repairs per unit of time” (e.g., per year). Someone in-house may discover that repair rates are higher than “normal,” or they may determine that local rates are higher than “benchmark” values.
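As a minimal sketch of the MTBF metric mentioned above, the gaps between successive failure dates can be averaged per device. The device IDs and timestamps here are invented for illustration, standing in for a CMMS export.

```python
# Illustrative MTBF calculation from a hypothetical repair log.
# Device names and dates are invented for this sketch.
from datetime import datetime

# (device_id, failure timestamp) entries from a hypothetical CMMS export
failures = [
    ("pump-01", datetime(2023, 1, 10)),
    ("pump-01", datetime(2023, 4, 2)),
    ("pump-01", datetime(2023, 7, 19)),
]

def mtbf_days(failure_times):
    """Mean time between failures, in days, for one device's failure dates."""
    times = sorted(failure_times)
    gaps = [(b - a).days for a, b in zip(times, times[1:])]
    # At least two failures are needed to form a gap
    return sum(gaps) / len(gaps) if gaps else None

times = [t for _, t in failures]
print(mtbf_days(times))  # → 95.0
```

A falling MTBF for a device (or model) is the numeric signal that repair rates are drifting above “normal,” which is where the questions in this article begin.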
Unfortunately, despite varying efforts, the systems for collecting and sharing such information among HTM departments continue to be weak. Case in point: AAMI’s Benchmarking Solutions was decommissioned in 2016, and ECRI Institute has also made various attempts in this arena. Further, some consider manufacturer feedback on repair rates unreliable, even when it is available.
Asking the Right Questions
In addition to completing a repair and getting a device back into service, determining why a repair was needed can be an important management tool with respect to rates, costs, and related issues. Categorizing repairs is a good place to start. One type of repair is the replacement of routinely worn-out items such as lights, batteries, and filters. Such a repair scenario applies if no maintenance was scheduled, the scheduled maintenance did not include those parts, or the need for repair arose despite the item being addressed during maintenance. Even so, routine repairs may become excessive due to some of the factors addressed below.
Technician error during routine service could also impact maintenance and repair. This error could be direct in terms of what was done and how it was done, or indirect in terms of failing to notice something that needed attention, even if it was not on the maintenance task list. In this regard, I advocate that every maintenance task list should include “inspect for obvious abnormalities,” or similar verbiage.
Of course, there are practical limits to how thoroughly a device can be inspected—and what is obvious to one person may not be obvious to another. Where appropriate, potential abnormalities could be enumerated, such as “evidence of mechanical damage,” “evidence of spills,” “unauthorized field repairs” (e.g., tape), etc. Still, after an adverse event takes place, some people may assert that obvious issues were overlooked. That could be a case of hindsight being 20/20, however.
Repairs and maintenance also interact in terms of maintenance intervals and procedures. Therefore, for each repair, we should ask whether maintenance addressed the issue—and, if so, whether the interval and maintenance procedures were appropriate. If the repair is unrelated to current planned maintenance, however, then we must determine whether maintenance procedures should be expanded to address it.
Problems with Design
Lack of durability can also lead to excessive repairs. Durability is often a design problem: the device was not designed for reasonable and expected use patterns. Although it’s common to cite “user carelessness” in the case of an adverse event, perhaps the device components are just too fragile. If durability is a local problem (which is hard to determine, given the lack of comparative data), this can be attributable to the use profile, which includes the type and volume of patients, fixed versus mobile use, and users’ actions.
Patient type may go beyond disease patterns. For example, I recently heard an anecdote about a facility that treated prisoners who were particularly hard on the equipment they encountered. Accounting for such use variations when benchmarking or making other comparisons is often tough.
Hard use may also be attributed to the device users who are individually or collectively rough on the equipment—possibly because they disregarded instructions. Consider, for example, equipment being stored on the base of a moveable OR table so that the telescoping shroud interacts with the equipment on the base, thus harming the shroud, internal table parts, or the stored equipment.
Other classic use issues include users placing drinks on the flat surface of a device—and the resultant spills that might occur—as well as using inappropriate parts of portable devices as “handles.” This, too, can be a design problem—since equipment parts shouldn’t “invite” the user to apply force to them if they can’t sustain such force. These types of repair situations should be documented and reported, with additional emphasis put on complying with instructions.
Another durability issue might be poorly performing replacement parts—whether from the OEM or a third party—which could result in a shorter life span and a higher rate of follow-up service than normal. In this regard, if an original part lacks sufficient durability, then using the same part in a repair will likely result in a repeat failure.
I’ve heard it said that some technicians like doing repeat repairs because they are familiar and guarantee short-term success. Some contracted or fee-for-service repairs may also result in less-than-earnest efforts to prevent failures—given that the hospital’s cost is the other party’s income.
Normalization of excessive repairs must be countered with repair analysis to prevent unnecessary costs, as well as equipment downtime, disruption, and repetition. (Note that analysis follows data collection, which should weed out irrelevant records.) Further, analysis must go beyond “It was broken and now it’s fixed.” The important questions are: “Why did it break?” and “Was this failure preventable?” If the failure was preventable, it then raises the question: “Should it be prevented, rather than repeatedly fixed?”
These inquiries might benefit from a version of the “five whys,” which is sometimes used in root cause analysis. This method requires asking “why” repeatedly—typically about five times—each time someone gives you an explanation: “It broke. Why did it break?” “The part failed. Why did it fail?”
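As a starting point for that kind of analysis, repeat failures can be surfaced by a simple tally of repair records by device and reported cause. This is only a sketch with invented device IDs and causes, not a prescribed tool.

```python
# Hypothetical sketch: tally repair events per (device, cause) pair so that
# repeat failures stand out as candidates for "why did it break?" analysis.
from collections import Counter

# (device_id, reported cause) rows from a hypothetical repair log
repairs = [
    ("infusion-07", "battery"),
    ("infusion-07", "battery"),
    ("infusion-07", "display"),
    ("monitor-12", "battery"),
]

counts = Counter(repairs)

# Any device/cause pair seen more than once is flagged as a repeat failure
repeats = {pair: n for pair, n in counts.items() if n > 1}
print(repeats)  # → {('infusion-07', 'battery'): 2}
```

Each flagged pair is where the “five whys” questioning would begin, since a single failure may be noise but a repeat suggests a preventable cause.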
In conclusion, an effective repair program requires that devices are returned to service in an expeditious manner and that repetition is prevented. The latter requires focused efforts in analyzing repair events—and in using that analysis to effect necessary changes.
William A. Hyman, ScD, is professor emeritus, biomedical engineering, at Texas A&M University, College Station, Texas, and adjunct professor of biomedical engineering at The Cooper Union, New York. Questions and comments can be directed to 24×7 Magazine chief editor Keri Forsythe-Stephens at email@example.com.
In my experience, sad to say, one of the things most often overlooked when repairing a device is looking at its service history. Was this very same device down three weeks ago for the same thing? I can’t tell you how many times I have seen devices bounced back twice only to land on my bench the third time. One look at the service history and you can see what has been going on. A little more in-depth troubleshooting can reveal the real etiology of the difficulty, such that the device returns to its historically reliable state. Also, let us not forget that MIT (Maintenance Injected Troubles) can be a source of higher-than-normal failure rates, indicating a need for remedial training. When chronic, this becomes a management issue to identify and remedy.
My three cents!