Tired of following manufacturer recommendations for preventive maintenance? Here’s how to develop, implement, and monitor your own in-house guidelines

In February 2014, the Centers for Medicare and Medicaid Services (CMS) finalized changes to 42 CFR 482.41 (c)(2), the federal regulation that provides guidance on the performance of hospital maintenance activities to ensure “an acceptable level of safety and quality.” The revisions, initially issued in a December 2013 memo known as S&C 14-07-Hospital, were implemented in collaboration with The Joint Commission (TJC) and introduced significant flexibility to the existing maintenance requirements.

Historically, there has been no one-size-fits-all approach to ensuring medical devices are maintained properly. The implementation of risk-based, evidence-based, or time-based maintenance, or of procedures derived from some combination of these strategies, has been the norm for some time. Unfortunately, many organizations have either failed to develop these programs properly or simply fallen short in executing them, with less than satisfactory outcomes.

The changes made by CMS, if implemented in accordance with the agency’s intent, are significant. While stating plainly that “hospitals comply with this regulation when they follow the manufacturer-recommended maintenance activities and schedule,” 42 CFR 482.41 (c)(2) goes on to say that “A hospital may, under certain conditions, use equipment maintenance activities and frequencies that differ from those recommended by the manufacturer.” So, what are the “conditions” under which we can deviate from published recommendations?

The simplest solution would be to maintain your entire inventory according to manufacturer recommendations, either through contract or through your in-house program. However, this is both impractical and unnecessary. It is fairly safe to assume that many hospitals that claim to be maintaining their equipment in accordance with manufacturer recommendations aren’t. They may have simply overlooked a procedure they feel is unnecessary, unwittingly adopted an incorrect inspection frequency, or routinely used test equipment that is not specifically identified in the manufacturer’s literature. For each of us to maintain every device in our respective inventories while adhering strictly to manufacturers’ recommendations would be prohibitively expensive, according to published research on the subject. Let’s face it: Manufacturers’ requirements are not always that easy to achieve. In many cases they are instituted simply to absolve manufacturers of some form of liability.

The most efficient use of your time and resources is to trim the fat where safe and practical. CMS has given us this opportunity in the alternate equipment management (AEM) program. So, what is the AEM and how do we use it effectively?

As the name implies, the AEM is an alternative means (differing from that of the manufacturer) by which to maintain your equipment. This alternative does, however, come with a fairly significant level of responsibility and documentation, both necessary to ensure an effective program and to meet the intent of the standard. From here, we will discuss many aspects of implementing an AEM program, using the following document (hereafter referred to as A-0724) for reference: CMS Revised State Operations Manual Appendix A for standard §482.41(c)(2), A-0724 (Rev. 103, issued: February 21, 2014; effective: February 21, 2014; implementation: February 21, 2014). The following insights are based on policies we have successfully put in place at WakeMed Health and Hospitals in Raleigh, NC.

Inventory Management

Having a clean inventory will pay dividends when making critical decisions regarding your AEM. By “clean,” we are referring to its accuracy and consistency. Do you have several device categories for the same device type (for example, monitor, physiological; monitor, vital signs; monitor VSM; etc)? Do you have model inconsistencies (such as 12-345b; 12 345b; 12 345 B; etc)? Do you have categories or models that need to be archived? If so, now is the time to make changes.
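
To give a sense of what this cleanup can look like in practice, here is a minimal, hypothetical sketch; the model numbers and normalization rules are illustrative assumptions, not our actual data standards. It simply collapses the kind of model-string variants described above into one canonical key so duplicates can be found and merged.

```python
import re

def normalize_model(model: str) -> str:
    """Collapse spacing/hyphen/case variants of a model number
    (e.g., '12-345b', '12 345b', '12 345 B') into one canonical key."""
    return re.sub(r"[\s\-]+", "", model).upper()

variants = ["12-345b", "12 345b", "12 345 B"]
# All three variants resolve to '12345B', so duplicate model records
# can be identified and merged before any AEM decisions are made.
print({v: normalize_model(v) for v in variants})
```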

TJC EC.02.04.01 EP7 stipulates that “the hospital identif[y] medical equipment on its inventory that is included in an alternative equipment maintenance program.” Simply put, your inventory must identify in some fashion any device that is being maintained outside of parameters established under the manufacturer’s recommendations. We found the easiest method, based on our unique circumstances, was to add the prefix “AEM” to the description field of the asset record. The field is searchable, and we can run reports on it if necessary. You may choose to add a unique identifier to the device category or control number field instead, but you need some method of identifying AEM equipment on your inventory.
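
As a simple illustration of why a searchable prefix is convenient, the sketch below pulls AEM-flagged assets out of a small record set. The field names and records are hypothetical, not those of any particular maintenance management system.

```python
# Hypothetical asset records; field names are illustrative only.
inventory = [
    {"control_no": "100234", "description": "AEM MONITOR, PHYSIOLOGICAL"},
    {"control_no": "100987", "description": "VENTILATOR, INTENSIVE CARE"},
]

def aem_devices(records):
    """Return every asset whose description carries the 'AEM' prefix."""
    return [r for r in records if r["description"].startswith("AEM ")]

for asset in aem_devices(inventory):
    print(asset["control_no"], asset["description"])
```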

So how do you determine what equipment will be moved to the AEM program? A better first question is, how do you determine what makes it into your inventory at all? Some devices don’t even need to be on your inventory, and therefore will not be subject to inspection at all. Think of this as a prefilter for the AEM program. We found that by developing a flow chart using simple true/false logic and questions based on risk and patient impact, we could quickly establish if a device met the criteria to be inventoried.

Expanding on this, we developed a second flow chart to determine if a device should be categorized as “high-risk” or “nonhigh-risk,” which incidentally is another requirement called out in TJC EC.02.04.01 EP3. Finally, a third flow chart establishes the AEM or non-AEM status of the device, once again using true/false logic and simple questions based on service history, regulatory requirements, and data analysis.
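
Our actual flow charts are more detailed, but the first two decisions can be sketched as the same kind of true/false logic. The specific questions below are illustrative assumptions, not the exact criteria in our policies.

```python
from dataclasses import dataclass

@dataclass
class Device:
    used_in_diagnosis_or_treatment: bool
    failure_could_cause_serious_harm: bool

def inventory_required(d: Device) -> bool:
    """Flow chart 1 (illustrative): does the device belong on the inventory?"""
    return d.used_in_diagnosis_or_treatment or d.failure_could_cause_serious_harm

def risk_class(d: Device) -> str:
    """Flow chart 2 (illustrative): 'high-risk' vs. 'nonhigh-risk'."""
    return "high-risk" if d.failure_could_cause_serious_harm else "nonhigh-risk"

pump = Device(used_in_diagnosis_or_treatment=True,
              failure_could_cause_serious_harm=True)
print(inventory_required(pump), risk_class(pump))  # True high-risk
```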

These three flow charts were incorporated into two departmental policies named “inventory inclusion” and “inspection criteria,” both of which are now referenced in our medical equipment management plan. They help us satisfy several aspects of TJC and CMS record-keeping requirements, but the most important thing we gained from this approach is consistency. When a device hits the inventory, there is no question about what its status will be or how we arrived at that determination.

AEM Inclusion

To deviate from manufacturer procedures and frequencies, you must document your reasoning for concluding that changing how and when you perform a PM on a particular device will not adversely affect patient outcomes. There are some exclusions. Devices that cannot be placed on an AEM program include: those that have a specific maintenance requirement imposed by a federal, state, or local law; imaging/radiological equipment; medical lasers; and equipment for which you do not have sufficient maintenance history to support inclusion in an AEM program.

Essentially, regarding a specific piece of equipment, you either cannot deviate from the manufacturer’s recommendation, or you must have data to support your decision to do so. As stated in A-0724 regarding new equipment, “If a hospital later transitions the equipment to a risk-based maintenance regimen different than the manufacturers’ recommendations, the hospital must maintain evidence that it has first evaluated the maintenance track record, risks, and tested the alternate regimen.”

In an effort to establish a “track record,” we adopted a policy stipulating that a device new to our facility, or one for which we have no data, must have 2 years of maintenance history in order to be considered for the AEM program. During this period, the device will be maintained in accordance with the manufacturer’s recommendations. This requirement is also one of the decision points on our AEM flow chart, along with questions about federal, state, and local regulatory requirements and device type (imaging and lasers); together, these decision points help tie the program together.
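
Combining the A-0724 exclusions with our 2-year track record rule, the logic of that third flow chart might be sketched as follows. This is a simplified illustration rather than our exact policy, and the parameter names are assumptions.

```python
def aem_eligible(
    regulated_maintenance: bool,    # maintenance dictated by federal/state/local law
    imaging_or_laser: bool,         # imaging/radiologic equipment or medical lasers
    years_of_history: float,        # maintenance history at our facility
    history_supports_change: bool,  # data analysis supports deviating from the OEM
) -> bool:
    """Illustrative AEM-eligibility check based on the exclusions in A-0724
    plus a 2-year track record requirement."""
    if regulated_maintenance or imaging_or_laser:
        return False
    if years_of_history < 2:
        return False  # keep following manufacturer recommendations for now
    return history_supports_change

# A new device with only 1 year of history stays on the OEM schedule.
print(aem_eligible(False, False, 1.0, True))  # False
```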

As for existing equipment, we developed a grandfather clause in our departmental policy for devices already in the inventory whose PM protocol deviated from manufacturers’ recommendations. Since they had previously been evaluated using an evidence-based program, these devices were transitioned directly to the AEM program without the 2-year waiting period. In every other respect, they are subjected to the same level of scrutiny as new devices. In conjunction with automated maintenance reporting from our maintenance management system, this clause gives us confidence that the AEM strategy employed is not detrimentally affecting device operation.

Alternate Maintenance Activities and Frequency

Once you have made a decision to include a particular device in your AEM program, you will be required to determine to what extent you will deviate from the manufacturer’s recommendations. This is where you have some autonomy as to how you establish your maintenance strategy, but you also will be burdened with accountability. There are three statements contained within A-0724 that are very important regarding your maintenance strategy:

1) “In developing AEM maintenance strategies hospitals may rely upon information from a variety of sources, including, but not limited to: manufacturer recommendations and other materials, nationally recognized expert associations, and/or the hospital’s own (or its third party contractor’s) own experience. Maintenance strategies may be applied to groups or to individual pieces of equipment.”

2) “The risk to patient health and safety that is considered in developing alternative maintenance strategies must be explained and documented in the AEM program.”

3) “The hospital is expected to adhere strictly to the AEM activities or strategies it has developed.”

The first statement is fairly benign: simply base the maintenance you choose to perform on best practices and your own experience. One caveat I would add is that how much wear and tear equipment suffers depends on how and where it is used. You may find it necessary to have differing AEM strategies for the same device type based on application and environment. This also suggests that a third-party vendor’s established AEM program, built on a 300,000-piece inventory spread across five states, may not meet your specific needs. Therefore, continuous monitoring of your program, which will be discussed shortly, is imperative.

Your knowledge of how and where the data you’re using was acquired also plays into the second statement. If you have chosen a strategy, what type of data did you use to make that determination and where did it come from? Was the data provided to you from an external source? If so, how can you be certain that you do not need to adjust PM frequency to accommodate your specific maintenance requirements? How and where are you maintaining the data for future reference?

The data used needs to be as thorough and accurate as possible. If the service history on a particular device suggests a minimal failure rate between PMs, it may be a candidate for a decrease in PM frequency. But what is the impact of the failure? Does the device employ a built-in self-test that would prevent its use on a patient? Are there consumable components that require replacement at the manufacturer’s suggested PM interval that, if allowed to expire, would be detrimental to the operation of the device? What about omitting certain steps from the PM process? What are the long-term effects? Are you unknowingly inducing a condition that would allow a parameter to drift? If the device fails, does it fail in a safe mode or in a manner that prevents its use on a patient? What about run-to-failure devices?

There is a plethora of questions you can ask to justify changes to PM frequency or content, but the bottom line is determining the risk to the patient and having documented data to support your decision. So, where do you get the data? Your documentation.
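
As a deliberately simplified example of turning that documentation into evidence, the sketch below estimates how often PM-preventable failures occur relative to completed PMs for one device type. The field names, dates, and the "pm_preventable" flag are assumptions made for illustration, not features of any specific system.

```python
from datetime import date

# Hypothetical work order history pulled from the maintenance management system.
work_orders = [
    {"type": "PM", "date": date(2023, 1, 10)},
    {"type": "REPAIR", "date": date(2023, 5, 2), "pm_preventable": False},
    {"type": "PM", "date": date(2024, 1, 8)},
]

pm_count = sum(1 for w in work_orders if w["type"] == "PM")
preventable = sum(
    1 for w in work_orders if w["type"] == "REPAIR" and w.get("pm_preventable")
)

# A low ratio of PM-preventable failures to completed PMs is one piece of
# evidence (not proof) that the PM interval or content could be relaxed.
rate = preventable / pm_count if pm_count else float("nan")
print(f"PM-preventable failures per PM: {rate:.2f}")
```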

The third statement is something you should have learned over the years from TJC inspections. If you have something in your policy regarding an AEM program, do what you say you are going to do. There is nothing worse than an inspector calling you out for not following your own program. It should also be standard practice to routinely review and update your policies and to train your staff on what they include.

Documentation and Evaluation

A-0724 lists a number of must-haves related to documentation of equipment placed on your AEM program. (Yes, the guideline specifically states, “…there must be documentation indicating…”) For the most part, these requirements are fairly intuitive: knowing the risk to the patient, rationale for the new PM activity or frequency, the dates when maintenance activities were performed, etc. However, one of them bears further discussion: “Documentation of any equipment failures (not including failures due to operator error), including whether there was resulting harm to an individual.”

This statement sounds like standard work order documentation, but consider what CMS is trying to accomplish here. You have taken a device and, based on data, changed what the manufacturer has suggested be done during a PM in favor of your own recommendations. How do you know you’re right? Unfortunately, we can never be 100% sure, but we hope that through continuous monitoring we will make the proper adjustments to ensure patient and staff safety.

Work order documentation can be a tricky thing. In many cases, we are not able to generate a report based on the text of the repair, so trending the data becomes difficult. Standardizing the text your technical staff uses would also be difficult. Let’s say you have a particular device on the AEM program whose sensor is subject to a high rate of failure, and you happened to omit that part replacement from the manufacturer’s PM. How can you determine whether you need to add that step back into the PM process? It is unrealistic to create work order action codes for every possible failure type for every device in your inventory. For the most part, maintenance management systems can be difficult, if not impossible, to configure to report specific events like this one.

The solution we have found most effective is incorporating user-defined fields into any failure-related work order within our maintenance management system. By attaching three simple, mandatory questions, we can gather enough data to at least flag whether an event may jeopardize the integrity of a device on the AEM program. The work order cannot be closed until all three questions are answered via a yes/no drop-down menu.

The questions are quite simple for your technical staff to answer with a little training:

  • Was the failure predictable? For example, was it caused by PM-related calibration drift, PM-related battery failure, or PM-related component failure (excluding user-replaceable accessories)?
  • Would a change in the PM frequency or an addition to the PM schedule have prevented this failure?
  • Would a change to the PM procedure or the addition of a procedural step have prevented this failure?

By answering these questions (which takes less than 10 seconds), the technical staff gives us a searchable database with details specific to PM-related failures. Any “yes” answer finds its way onto an automated report that details the make, model, control number, date, and work order text surrounding the failure. This report is then reviewed by leadership staff and compared with trended data to determine the need for a PM procedure or frequency change, and it is reported to the Environment of Care committee as quality data. One interesting aspect of this approach is that it’s not limited to equipment on the AEM program. We report on all repairs, which also serves to evaluate the effectiveness of manufacturers’ recommended PM processes.
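
In our case the maintenance management system handles this automatically, but the underlying logic is simple enough to sketch. The record layout and field names below are hypothetical, used only to show how any “yes” answer gets flagged for review.

```python
# Hypothetical closed work orders carrying the three mandatory yes/no answers.
work_orders = [
    {
        "make": "Acme", "model": "12345B", "control_no": "100234",
        "date": "2024-03-02", "text": "Flow sensor failed; replaced and recalibrated.",
        "predictable": "yes", "frequency_change": "no", "procedure_change": "yes",
    },
    {
        "make": "Acme", "model": "12345B", "control_no": "100301",
        "date": "2024-03-15", "text": "Cracked housing from a drop; repaired.",
        "predictable": "no", "frequency_change": "no", "procedure_change": "no",
    },
]

FLAG_FIELDS = ("predictable", "frequency_change", "procedure_change")

def pm_review_report(orders):
    """Flag any work order with at least one 'yes' answer for leadership review."""
    return [
        {k: o[k] for k in ("make", "model", "control_no", "date", "text")}
        for o in orders
        if any(o[f] == "yes" for f in FLAG_FIELDS)
    ]

for row in pm_review_report(work_orders):
    print(row)
```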

Not all maintenance management systems are created equal. You may not have the capabilities described above, but hopefully you can adapt your current workflow to attain similar results. This is a snapshot of the program we have implemented, and its success can be directly attributed to the collaboration and support of every member of our clinical engineering team. The development, evaluation, and revision of the numerous policies and processes that make up the AEM program would not have been possible without their knowledge, flexibility, and tenacity. For that, the credit goes to them.

An AEM program may not be for everyone, and it will involve a little work to implement. But the long-term benefits gained in flexibility and efficiency far outweigh the initial investment in time. Those who choose to institute an AEM program may also benefit by taking the opportunity to review and revitalize departmental policies and procedures that have been neglected over the years.

Good luck, and remember: If it’s not written down, it didn’t happen. More importantly, from a process improvement perspective, if it’s not written down, you won’t know what happened or how to fix it.

Dallas T. Sutton, Jr, CRES, is Supervisor, Imaging Engineering at WakeMed Health and Hospitals in Raleigh, NC. For more information, contact chief editor Jenny Lower at [email protected].

Photo credit: Copyright Michael Beer | Dreamstime.com