By Philip Levine, CBET-E

The topic I would like to discuss is the evolving technology of the biomedical/clinical engineering profession as it impacts patient safety. Specifically, I’ll focus on “state-of-the-art” technology and how it can affect attitudes about the culture of patient safety. I believe complex equipment (hardware and software) can sometimes become an insulating barrier, a hindrance to fulfilling our “prime directive”: that no patient is harmed due to compromised equipment function.

During my years as a member of a healthcare support team, I observed firsthand some trends regarding patient safety, which I believe deserve more attention. Technological advances in the field of healthcare have been dramatic—for instance, more reliable patient equipment and software algorithms.

What concerns me is the potential for “state-of-the-art” technology to be given a “bye” in certain situations. By potential to be given a “bye,” I mean that we may become so enamored with the increased functionalities of medical equipment that we ignore dangerous trends.

There are numerous examples. Miscues do occur between the biomedical/clinical engineering and information systems (IS) departments. There have been instances where network servers that support clinical systems unintentionally went offline during “routine” IS upgrades and maintenance. This is sometimes too easily written off as the result of unpredictable software conflicts, the price we pay for the networked convenience of the patient monitoring system.

Although complex network devices and software certainly afford increased functionality for clinical monitoring, they also create the potential for “single point of failure” scenarios, far-reaching and analogous to a cascade of falling dominoes. A single key network component going offline can adversely impact the functionality of the entire patient network. Another example: a “network storm” of data collisions on the patient network can impact multiple care units, leading to significant patient monitoring downtime. The root cause of the network storm may be difficult to pinpoint and resolve, potentially placing patient safety at risk.
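
To make the failure mode concrete, here is a minimal sketch, in Python, of the kind of proactive watchdog a department might run against the handful of components whose loss would cascade. The host names, addresses, and ports are hypothetical placeholders, not any vendor’s actual topology.

    # A minimal, hypothetical sketch of a watchdog that checks key network
    # components supporting a patient monitoring network. Hosts and ports
    # below are placeholders, not any vendor's actual topology.
    import socket

    # Hypothetical critical components; any one going offline can cascade.
    CRITICAL_NODES = {
        "central-monitor-server": ("10.0.1.10", 443),
        "telemetry-gateway": ("10.0.1.20", 2575),  # e.g., an HL7 listener port
        "core-switch-mgmt": ("10.0.1.1", 22),
    }

    def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for name, (host, port) in CRITICAL_NODES.items():
            status = "UP" if is_reachable(host, port) else "DOWN -- escalate now"
            print(f"{name}: {status}")

The point is not the specific tool; it is that a scheduled check can flag the loss of a single critical component before the dominoes start to fall.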

Further, software upgrades from the monitoring equipment manufacturer can introduce unforeseen problems, i.e., “software bugs,” which actually make the upgrade less reliable than the previous software revision. This could be due to a lack of quality assurance testing on the part of the manufacturer, or perhaps a consequence of manufacturers trying to be cost-competitive by cutting corners in the rush to get the product out to the customer. But how many times have you felt your institution was a “beta test site” without actually being one? It is almost as if the manufacturer is utilizing the customer to do the final testing of the product, in order to obtain feedback so that shortcomings can be fixed in the next software release.

Take another example: telemetry monitoring systems. Often, telemetry is the only equipment on patient care units alerting staff to potentially life-threatening events. I believe most clinical/biomedical engineering departments do not have telemetry transmitters on a preventive maintenance (PM) program. Testing a teletransmitter by simply hooking it up to a patient simulator is not an adequate test: a simulator presents a near-ideal impedance, unlike variable patient skin impedance, and therefore may not reveal a transmitter flaw.

A PM program, especially for “state-of-the-art” teletransmitters, can proactively detect when a transmitter’s center frequency has drifted out of specification. Significant patient monitoring signal “dropout” can often be attributed to such drift. In order to squeeze more available frequencies into the medical radiofrequency band, newer “state-of-the-art” teletransmitters have much tighter center frequency spacing between units.

So, do we wait until a patient care unit calls to report that a transmitter on a patient is displaying signal dropout, or until a “sentinel” event occurs, before checking transmitter function? A proactive PM program for teletransmitters can detect an off-center transmitter and sometimes return it to its optimal center frequency via software.
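
As a rough illustration of what such a PM check amounts to, the sketch below compares each transmitter’s measured center frequency against its assigned channel. The channel assignments, measured values, and drift tolerance here are hypothetical illustrations; in practice the limits come from the manufacturer’s specification and the hospital’s telemetry channel plan.

    # A minimal sketch, assuming a bench analyzer (or the vendor's service
    # software) reports each transmitter's measured center frequency.
    # The tolerance and channel values are illustrative, not actual
    # medical telemetry channel assignments.
    TOLERANCE_HZ = 2_000  # hypothetical allowable drift from channel center

    # Hypothetical inventory: transmitter ID -> (assigned center Hz, measured Hz)
    measurements = {
        "TX-0142": (608_050_000, 608_051_200),
        "TX-0187": (608_075_000, 608_079_500),  # drifted toward the next channel
    }

    for tx_id, (assigned_hz, measured_hz) in measurements.items():
        drift_hz = measured_hz - assigned_hz
        if abs(drift_hz) > TOLERANCE_HZ:
            # With tighter channel spacing, out-of-spec drift risks dropout
            # and interference with an adjacent transmitter's channel.
            print(f"{tx_id}: drift {drift_hz:+d} Hz -- out of spec, recalibrate")
        else:
            print(f"{tx_id}: drift {drift_hz:+d} Hz -- within tolerance")

Run on a schedule, a check like this catches the drifting transmitter during PM rather than after a care unit reports dropout.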

Returning once more to “state-of-the-art” technology, how many biomed departments feel locked in, or forced to stay with one manufacturer? Once a particular manufacturer has established itself within a hospital, with extensive hardware/software/network infrastructure, it is difficult—if not impossible—for other manufacturers to get their foot in the door.

Due to incompatibilities between different manufacturers’ systems, perhaps a consequence of what I perceive as a lack of industry standards, it is cost- and labor-prohibitive to switch horses midstream. Some would say that once a manufacturer is established in a hospital, it has a virtual monopoly tantamount to restraint of trade. But what if a better, safer product from another manufacturer arrives on the scene?

I believe advances in “state-of-the-art” medical equipment technology have had some unintended consequences in the realm of patient safety. Perhaps these consequences are sometimes too easily and too quickly written off as the price we pay for network convenience and increased equipment function. Of course, we are better off because of advances in medical equipment technology. But perhaps there are consequences we need to be more aware of, consequences that impinge upon the very patient safety the technology was meant to improve.

Philip Levine, CBET-E, is retired after a prolific biomedical engineering career at Boston-based Brigham and Women’s Hospital (formerly Peter Bent Brigham Hospital). Questions and comments can be directed to 24×7 Magazine chief editor Keri Forsythe-Stephens at [email protected]