A new AAMI standard and hard‑won lessons from the field are bringing clarity—and confidence—to AEM programs.

By Alyx Arnett

Alternative equipment maintenance (AEM) in US hospitals has had a circuitous path. Guidance has shifted over the years across federal, state, and accrediting bodies, often leaving much open to interpretation. The result has been ongoing uncertainty about how healthcare technology management (HTM) teams can depart from manufacturers’ recommendations without risking noncompliance.

That uncertainty has begun to find direction with the publication of AAMI’s ANSI/AAMI EQ103:2024, which establishes the first formal AEM standard. It offers what Matt Baretich, PE, PhD, president of Baretich Engineering, calls “one source of truth” for how to design, document, operate, and evaluate a compliant program.

The new standard aims to clarify many long-standing gray areas, including how to justify deviations from manufacturers’ recommendations, how often to reassess, which data to track, who is qualified to make decisions, and how to satisfy surveyors.

“AEM is always a hot topic,” says GE HealthCare’s Colleen Haugen-Ortiz, CBET, AAMIF, an HTM quality specialist who co-chaired the EQ103 working group. “We’re hoping that, with the standard, we provide at least a little bit more clarity to some of those questions.”

From Memos to a Standard

The field’s push for clarity goes back more than a decade, to when CMS memos first opened the door to deviations from original equipment manufacturer maintenance guidance. That initial allowance created flexibility but left hospitals without clear direction on how to structure and document AEM programs.

In 2018, a New Work Item Proposal for an AEM standard moved through AAMI’s process and reached Haugen‑Ortiz and co‑chair Maggie Berkey, CBET, a BMET III at Bio‑Electronics, in 2022. They pulled together a broad group of experts—including long‑time standards contributors and HTM leaders with hands‑on AEM experience—to craft a consensus document.

Berkey says the goal was to “find the middle ground to make everyone feel confident that we’re doing the right thing,” translating regulatory language into practical, defensible expectations. EQ103 was published in November 2024, and a companion Technical Information Report (TIR) is in development to illustrate best practices. “The standard is going to tell you the rule,” Haugen‑Ortiz says. “With the TIR, we can show you how to utilize that rule.” The working group hopes to publish the TIR in 2026.

Early reception has been positive. Haugen‑Ortiz and Berkey recently taught an AAMI course anchored in EQ103 and say attendees appreciated the clarity. “Everybody has been looking for guidance on AEM for more than 15 years,” Berkey says. “Now we’re finally starting to put the boundaries on it.”

What’s Changed, What Hasn’t, and Where to Start

For teams already operating disciplined, well-documented AEM programs, EQ103 will feel familiar. “It’s surprisingly steady,” Baretich says. “There’s no change in what defines a good program. If you’ve got a good one, keep doing it.”

Where the standard adds value is in consolidation and specificity, says Haugen‑Ortiz. Rather than chasing a patchwork of CMS memos and accreditor manuals, HTM leaders can now align their policies, procedures, and monitoring practices to a single, consensus framework. That framework centers program defensibility around three pillars: clear policies, decision documentation, and ongoing performance monitoring.

Start with policy, Baretich says. An AEM policy should define scope, roles, decision criteria, documentation methods, and monitoring plans. Each asset-class decision then requires its own evidence trail—including the rationale, the data reviewed, the specific changes to preventive maintenance (PM) interval and task content, and a baseline for subsequent comparison.

“You have to talk about what changed and what you’re going to do,” Berkey says. “Leave your trail of crumbs so you can show that change.” If a PM procedure had 10 steps and the team drops steps seven and nine while adding a modified task, she says, that should be explicit in the documentation package.
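
For teams that track these decisions in their own tooling, that evidence trail can be captured as a structured record. The sketch below (in Python) is purely illustrative: the field names are hypothetical, and EQ103 specifies what to document, not a schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AEMDecisionRecord:
    """Illustrative evidence trail for one asset-class AEM decision."""
    asset_class: str                 # e.g., "syringe pump, Model Y"
    decision_date: date
    approved_by: str                 # qualified approver (CE, senior BMET, manager)
    rationale: str                   # why the deviation is justified
    data_sources: list[str]          # internal history, peer data, etc.
    old_pm_interval_months: int
    new_pm_interval_months: int
    pm_steps_removed: list[int] = field(default_factory=list)
    pm_tasks_modified: list[str] = field(default_factory=list)
    baseline_failure_rate: float = 0.0  # failures per device-year, pre-change

# Berkey's example: drop steps 7 and 9 and add a modified task,
# leaving an explicit trail for surveyors.
record = AEMDecisionRecord(
    asset_class="syringe pump, Model Y",
    decision_date=date(2025, 1, 15),
    approved_by="Clinical engineer",
    rationale="Three years of history show no PM-preventable failures.",
    data_sources=["CMMS work orders 2022-2024"],
    old_pm_interval_months=12,
    new_pm_interval_months=24,
    pm_steps_removed=[7, 9],
    pm_tasks_modified=["Step 4 replaced with abbreviated output check"],
    baseline_failure_rate=0.04,
)
```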

Finally, teams must monitor post-implementation performance. Baretich cautions that too many teams make a justified change and then move on without checking whether the change achieved the intended outcome. “If we decide to extend the PM interval from one year to two years, we need to monitor whether that was a good decision,” he says. “If the failure rate increases after we made that change, maybe that wasn’t a good decision, and we might go back.”
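
That before-and-after check can be as simple as comparing failure rates across equivalent observation windows. Here is a minimal sketch with made-up numbers; the 1.5x review threshold is an arbitrary local policy choice, not something EQ103 prescribes.

```python
def failure_rate(failures: int, device_years: float) -> float:
    """Failures per device-year over an observation window."""
    return failures / device_years

# Hypothetical fleet of 50 devices, observed for one year before and
# one year after extending the PM interval from one year to two.
before = failure_rate(failures=2, device_years=50.0)
after = failure_rate(failures=6, device_years=50.0)

# Arbitrary review threshold: flag if the rate rises by more than half.
if after > 1.5 * before:
    print(f"Failure rate rose from {before:.3f} to {after:.3f} per "
          "device-year; revisit the interval change.")
```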

Let the CMMS Do the Watching

Monitoring AEM performance at scale requires disciplined use of failure coding and computerized maintenance management system (CMMS) reporting, says Baretich. He points to AAMI’s free 2020 white paper, Optimizing the CMMS Failure Code Field, which he co-authored. The paper outlines a practical failure coding taxonomy that, when used consistently, helps enable statistically meaningful trend analysis across asset classes.

With the right codes in place, HTM leaders can set up automated CMMS reports that flag early signs of trouble, he says. Examples include a drop in mean time between failure after an interval change, an uptick in corrective work orders tied to specific failure modes, or increases in no‑problem‑found calls that suggest changes in user checks are needed.
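
One simple variant of such a report flags asset classes whose corrective work-order rate rose after an AEM change (a rising corrective rate is the flip side of a falling mean time between failure). The sketch below assumes work orders exported from a CMMS with consistent failure codes; the codes, periods, and exposure figures are hypothetical.

```python
from collections import defaultdict

# Hypothetical CMMS export: (asset_class, failure_code, period), where
# period is "pre" or "post" relative to the AEM change for that class.
work_orders = [
    ("defibrillator", "BATTERY", "pre"),
    ("defibrillator", "BATTERY", "post"),
    ("defibrillator", "NO_PROBLEM_FOUND", "post"),
    ("defibrillator", "BATTERY", "post"),
    ("infusion_pump", "OCCLUSION_SENSOR", "post"),
]

DEVICE_YEARS = {"pre": 100.0, "post": 100.0}  # assumed fleet exposure per period

def corrective_rates(orders):
    """Corrective work orders per device-year, by asset class and period."""
    counts = defaultdict(lambda: {"pre": 0, "post": 0})
    for asset_class, _code, period in orders:
        counts[asset_class][period] += 1
    return {ac: {p: n / DEVICE_YEARS[p] for p, n in c.items()}
            for ac, c in counts.items()}

# Flag any asset class whose corrective rate rose after the change.
for asset_class, rates in corrective_rates(work_orders).items():
    if rates["post"] > rates["pre"]:
        print(f"FLAG: {asset_class} corrective rate rose "
              f"{rates['pre']:.2f} -> {rates['post']:.2f} per device-year")
```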

“Just have a report in your CMMS that flags problems,” Baretich says. “If you find that failures are coming more often for some type of equipment you put in the AEM program, you can catch that and make a correction.”

According to EQ103, continuous monitoring should be paired with an annual AEM program review that rolls up performance against defined metrics, highlights changes made and their outcomes, and recommends adjustments for the coming year.

What to Include and What Not to Include

Risk remains the central lens for AEM inclusion. “Risk is a combination of the probability of failure and the severity of a failure,” Baretich says. If a device fails often and the consequence is severe, risk is high. If failures are rare and consequences are trivial, risk is low. In practice, that means AEM programs focus on lower-risk devices, not those whose failure could seriously affect patient safety. That lens helps prioritize which assets to examine for interval or task changes and which to leave untouched, Baretich says.
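
One common way to operationalize that combination is a simple scoring matrix. The scales and thresholds below are illustrative only; EQ103 does not prescribe particular scores, and an actual program would define its own.

```python
# Illustrative 1-5 scales for failure probability and failure severity.
PROBABILITY = {"rare": 1, "occasional": 3, "frequent": 5}
SEVERITY = {"negligible": 1, "moderate": 3, "life_threatening": 5}

def risk_score(probability: str, severity: str) -> int:
    """Risk as the product of failure probability and failure severity."""
    return PROBABILITY[probability] * SEVERITY[severity]

print(risk_score("rare", "negligible"))              # 1: AEM candidate
print(risk_score("occasional", "life_threatening"))  # 15: leave on OEM schedule
```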

Still, certain categories of equipment cannot be placed on AEM. These include imaging and radiologic equipment, medical laser devices, and “new equipment without a sufficient amount of maintenance history.” While long marked as excluded, the last category has been a gray area as to what constitutes “new,” Berkey says. The EQ103 standard aims to clear that up.

Berkey says many teams historically interpreted “new” as “new out of the box,” when the intent was new technology to the industry with little operating history. That distinction matters when justifying AEM decisions for recently procured models that are not technologically novel. “You might not have data on that new syringe pump,” Haugen‑Ortiz says, “but you may be able to gain data from partners on how well these are holding up and justify putting them on an AEM program, as long as you can get the data to back it up.”

Some departments elect to exclude particularly critical devices as a matter of policy. “Some HTM programs say, ‘I don’t feel confident changing some very critical device like a defibrillator,’” Baretich says. “That’s OK. You can have a policy and just say, ‘I choose not to put it into an AEM program.’” Still, with robust data and care, he says, many asset classes are candidates for AEM.

AEM Is More Than Skipping PMs

A common misstep, Baretich says, is reducing AEM to a binary decision about whether to perform PM. “It’s really more than that,” he says. AEM can include adjusting PM intervals, altering task content, and redefining test points. He offers a defibrillator example: A manufacturer might specify checking output at 10 different levels, while a hospital’s AEM program may justify checking only five. With solid supporting data, such changes can save significant time without compromising safety.

The key is evidence, not intuition. “It’s not OK to just say, ‘I don’t think we need to do that so often,’” Baretich says. Surveyors will ask to see the data underpinning each deviation. That data can come from a hospital’s own history or from external sources—such as peer institutions or evidence repositories—as long as the HTM team has access to the underlying information, not just a summary.

“A lot of people knew you could use other people’s data, but not everybody knew you had to have access to that actual data,” Berkey says.

Haugen-Ortiz advises starting with “low-hanging fruit”—assets that consume outsized PM time but carry low risk and high redundancy. Thermometers and otoscopes often fit the bill. “Those items take a lot of time, but they rarely fail,” she says. Many are inexpensive enough that hospitals simply replace them when they fail, and clinicians functionally test them with every use.

More Gray Areas, Cleared Up

Beyond those updates, EQ103 also tackles several remaining ambiguities, including who can make AEM decisions and which authority to follow when guidance conflicts.

As for who’s qualified to make AEM decisions, the standard emphasizes experience and role. Berkey lists clinical engineers, seasoned technicians, and managers with relevant experience among appropriate approvers. Haugen-Ortiz cautions against delegating AEM authority to newly minted BMETs.

Those in charge of AEM decisions should also ensure multidisciplinary awareness. Changes to PM content or intervals often require coordination with clinical leaders, infection prevention, and supply chain. Clear communication helps set expectations—for example, ensuring frontline staff understand when a functional user check replaces a former bench step and how to escalate issues if problems arise.

Another gray area involves navigating overlapping regulations. While EQ103 provides a single framework for how AEM programs should be structured, Haugen-Ortiz notes that hospitals must still align with the most stringent requirements across federal, state, and accrediting bodies. In some states, like California, that can mean pursuing AEM-related waivers or meeting additional oversight. “Follow whatever is more strict,” she says.

AEM’s Workforce Reality and ROI

AEM helps HTM teams offset workforce shortages. With retirements accelerating and vacancies growing, Berkey says, freeing technician time has become essential.

Baretich has heard estimates that well‑designed AEM programs can reduce PM labor by 10% or more. In an understaffed facility, that reclaimed time can be redirected to high‑acuity devices, urgent repairs, cybersecurity patching, or backlogs that directly affect patient throughput.

“By putting those assets on AEM, you’re freeing up manpower,” Haugen‑Ortiz says.

Today, surveyors are not just accepting AEM—they’re increasingly expecting it, Baretich says. “A few years ago, people were reluctant to use an AEM program,” he says. “Now, if you say, ‘I follow manufacturers’ recommendations entirely,’ the surveyor is going to say, ‘Why are you doing that? You should be using AEM. You should save your energy for other work.’”

AEM’s Next Chapter

The release of AAMI EQ103 signals refinement, not reinvention, for AEM programs in 2025. Berkey and Haugen-Ortiz hope the new clarity will translate into less stress and more confidence during surveys.

And, while today’s AEM practice remains grounded in human judgment, emerging tools may soon lend a hand. “I expect that AI will help us in some of this,” Baretich says. “It’s not ready yet, but I expect it to change.” As CMMS platforms, device logs, and failure codes become more standardized, he suggests AI could support faster pattern recognition and earlier detection of unintended consequences after AEM changes.

The core message for 2025 is simple and steady: AEM is mature, defensible, and expected when done well. As Baretich puts it, “We require data, we monitor the output…and we keep the quality up.”
