As AI becomes embedded in medical equipment, HTM professionals must navigate new questions of accuracy, accountability, and risk.
By Stacey B. Lee, JD
The call came in on Tuesday afternoon. A BMET at a community hospital was checking infusion pumps when he noticed the artificial intelligence (AI) monitoring system had thrown three “occlusion risk” alerts in two hours. The problem was that all three pumps were working fine.
“Nurses are starting to ignore these warnings,” he told his supervisor. “What happens when we miss a real problem because everyone’s tuned out the false alarms?”
Good question. And one that gets to the heart of liability in healthcare. When these systems get it wrong, who’s responsible?
The Biggest Liability Blind Spots in Healthcare AI Adoption
The biggest blind spots aren’t where you’d expect. They’re in the everyday intersections between smart systems and the equipment that healthcare technology management (HTM) teams already maintain.
One is predictive maintenance algorithms that miss the mark. Sounds great in theory—software that predicts ventilator failures before they happen. But what if the prediction is wrong? Teams pull perfectly good ventilators from service based on faulty recommendations. Now you’re short on equipment during a busy night in the ICU. Who covers the downstream costs? Diverted patients? Emergency equipment rental? Delayed procedures? Traditional service contracts don’t address this scenario.
Integration with existing systems is another potential blind spot. Many smart systems automatically generate work orders in the CMMS. This sounds efficient until the recommendation conflicts with your preventive maintenance schedule. Follow the algorithm, and you might miss required PMs. Follow your schedule, and the administration asks why you’re ignoring the “smart” system.
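To make that conflict concrete, here is a minimal sketch of the kind of reconciliation check an HTM team could run before letting algorithm-generated work orders flow straight into the schedule. Everything here is hypothetical: the field names, the export format, and the seven-day review window. Real CMMS platforms expose this data differently.

```python
from datetime import date

# Hypothetical records; a real CMMS export will use different field names.
ai_work_orders = [
    {"asset_id": "VENT-042", "action": "pull_from_service", "due": date(2025, 3, 10)},
    {"asset_id": "PUMP-117", "action": "inspect", "due": date(2025, 3, 12)},
]

pm_schedule = {
    "VENT-042": date(2025, 3, 11),  # required PM already booked
    "PUMP-117": date(2025, 4, 1),
}

def flag_conflicts(work_orders, schedule, window_days=7):
    """Flag AI-generated work orders that land inside a required PM window
    so a technician reviews them instead of the CMMS auto-accepting."""
    conflicts = []
    for wo in work_orders:
        pm_date = schedule.get(wo["asset_id"])
        if pm_date and abs((pm_date - wo["due"]).days) <= window_days:
            conflicts.append({**wo, "pm_date": pm_date})
    return conflicts

for c in flag_conflicts(ai_work_orders, pm_schedule):
    print(f"REVIEW: {c['asset_id']} AI order due {c['due']} overlaps PM on {c['pm_date']}")
```

The design choice that matters is the last line: conflicts go to a person, not to whichever system happened to write its recommendation first.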
Documentation gaps are a third blind spot. Maintenance records capture what you did and when, but they might not show why you made certain decisions if algorithms were involved. Should something go wrong and lawyers start asking questions, incomplete records become a problem.
HTM can help by treating algorithmic recommendations like any other input—useful data, not gospel. Keep documenting your decision-making process. When software suggests something that conflicts with your protocols, document why you did or didn’t follow the recommendation.
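What that documentation could look like in practice: a minimal sketch of a structured override log. The field names are illustrative, not a standard; the point is capturing what the algorithm recommended, what you actually did, and why, in a form that survives a later records request.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(path, asset_id, recommendation, action_taken, rationale, tech_id):
    """Append one structured record per algorithm recommendation, whether
    followed or overridden, so the 'why' survives alongside the 'what'."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "asset_id": asset_id,
        "ai_recommendation": recommendation,
        "action_taken": action_taken,
        "rationale": rationale,
        "technician": tech_id,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: declining a pull-from-service recommendation after manual inspection.
log_ai_decision(
    "ai_decision_log.jsonl",
    asset_id="VENT-042",
    recommendation="pull_from_service: predicted valve failure",
    action_taken="kept_in_service",
    rationale="Manual inspection passed; PM completed two weeks ago per protocol.",
    tech_id="BMET-307",
)
```

An append-only log like this keeps the reasoning next to the action, which is exactly the gap traditional maintenance records leave.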
Where HTM Fits in AI Contract Talks
Most contracts are signed long before HTM professionals see them—and that’s backwards. You’re the ones who will live with these systems every day.
Start by redefining service level agreements. Traditional contracts focus on uptime and response times, but smart system contracts require different metrics. After all, what good is 99% uptime if the system delivers inaccurate recommendations half the time?
Service level agreements should include accuracy thresholds. If a system monitoring cardiac devices drops below 95% accuracy, what happens next? Who’s responsible for fixing it—and how quickly? Standard contract language rarely accounts for these scenarios.
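As a rough illustration, here is how a team might track an accuracy SLA on its own side of the contract rather than relying solely on vendor reports. The 95% threshold, the 200-alert window, and the minimum sample size are all assumptions to negotiate, not fixed values.

```python
from collections import deque

class AccuracySLAMonitor:
    """Track whether recent AI alerts were confirmed by technicians and
    flag when rolling accuracy drops below the contracted threshold."""

    def __init__(self, threshold=0.95, window=200):
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # True = confirmed, False = false alarm

    def record(self, alert_confirmed: bool):
        self.outcomes.append(alert_confirmed)

    def in_breach(self):
        if len(self.outcomes) < 20:  # too few samples to judge fairly
            return False
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.threshold

monitor = AccuracySLAMonitor(threshold=0.95)
for confirmed in [True] * 18 + [False] * 4:  # 18 true alerts, 4 false alarms
    monitor.record(confirmed)
if monitor.in_breach():
    print("SLA breach: open the escalation ticket the contract calls for")
```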
Contract Red Flags for HTM
Quick screen for clauses that increase risk or reduce transparency.
- Vague maintenance language (“reasonable upkeep”)
- Liability caps limited to software license costs
- No accuracy performance thresholds
- Training limited to initial deployment only
- Vendor-only access to performance logs
- No escalation procedures for system failures
Liability allocation is another must. Vendors often cap their liability at the cost of the software license—maybe a few thousand dollars. But when faulty recommendations cause patient harm or compliance violations, the financial impact can reach six figures.
Training requirements also deserve attention. AI-enabled systems aren’t like traditional equipment. They demand specialized training that goes well beyond “click here to run the program.” And because staff turnover is inevitable, contracts should spell out who provides training and how often it will be refreshed.
Finally, clarify maintenance responsibilities. With conventional equipment, it’s clear who handles what. Smart systems blur those boundaries. Is algorithm performance the vendor’s responsibility or yours? Who ensures data quality? Who fixes integration issues with existing equipment?
HTM professionals should advocate for performance guarantees with real teeth—accuracy-based SLAs, clear escalation paths when metrics slip, ongoing training commitments, and liability terms that reflect the real-world consequences when things go wrong.
Create Internal Protocols or Cross-Disciplinary Teams to Manage AI Risk
HTM teams already understand how to manage equipment risk. The same mindset applies to smart systems.
Start by integrating smart systems into your existing risk assessments. High-risk applications—such as those connected to life support or critical monitoring—require more oversight than tools used for scheduling or inventory management.
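One lightweight way to operationalize that tiering is sketched below. The categories, tiers, and oversight levels are illustrative; your risk committee would set the real ones.

```python
# Hypothetical risk tiers for AI applications; categories are illustrative.
RISK_TIERS = {
    "life_support": "high",         # e.g., ventilator failure prediction
    "critical_monitoring": "high",  # e.g., cardiac telemetry AI
    "maintenance_prediction": "medium",
    "scheduling": "low",
    "inventory": "low",
}

OVERSIGHT = {
    "high":   {"validation": "monthly",   "human_review": "every recommendation"},
    "medium": {"validation": "quarterly", "human_review": "sampled"},
    "low":    {"validation": "annually",  "human_review": "exception only"},
}

def oversight_for(application: str) -> dict:
    # Default to the highest tier when an application hasn't been classified.
    return OVERSIGHT[RISK_TIERS.get(application, "high")]

print(oversight_for("life_support"))
```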
Next, incorporate preventive maintenance for algorithms. Fold system upkeep into your existing PM schedules, including software updates, data quality checks, and performance validation. Document these activities just as you would for any other critical system.
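As a sketch, a performance-validation PM task could replay a small set of cases with known outcomes through the system and compare the result against the accuracy documented at acceptance testing. The toy model, case set, and 5% tolerance below are stand-ins, not vendor specifics.

```python
def validate_algorithm_pm(predict, validation_cases, baseline_accuracy, tolerance=0.05):
    """Periodic PM check: run known cases through the AI system and compare
    accuracy against the baseline recorded at acceptance testing."""
    correct = sum(1 for inputs, expected in validation_cases if predict(inputs) == expected)
    accuracy = correct / len(validation_cases)
    passed = accuracy >= baseline_accuracy - tolerance
    # Document the result like any other PM task.
    print(f"PM validation: accuracy={accuracy:.2%}, baseline={baseline_accuracy:.2%}, "
          f"{'PASS' if passed else 'FAIL - escalate to vendor'}")
    return passed

# Toy stand-in for the vendor's model and a small labeled case set.
mock_predict = lambda vitals: "occlusion_risk" if vitals["pressure"] > 80 else "normal"
cases = [({"pressure": 90}, "occlusion_risk"), ({"pressure": 40}, "normal"),
         ({"pressure": 85}, "occlusion_risk"), ({"pressure": 60}, "normal")]
validate_algorithm_pm(mock_predict, cases, baseline_accuracy=0.95)
```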
Develop staff training programs. BMETs don’t need to become programmers, but they do need a practical understanding of how these systems behave—what “normal” looks like, when to escalate concerns, and how AI fits into existing troubleshooting workflows.
Foster cross-departmental communication. Schedule regular check-ins with IT, clinical, and administrative teams. Problems with smart systems often cross boundaries—a missed alarm, for instance, might look like an IT issue until it becomes clear it’s disrupting clinical workflow.
Finally, update your incident response procedures. When equipment problems could involve AI, traditional root cause analysis may overlook key factors. Train your team to consider whether algorithms might be contributing to equipment issues and document those findings accordingly.
Best Practices to Stay Ahead of Evolving AI Liability Expectations
Several regulatory changes directly affect HTM departments managing equipment with integrated smart systems:
- FDA Software as a Medical Device (SaMD) guidance — Evolving requirements for post-market surveillance and performance tracking mean more documentation work for HTM departments.
- Quality system regulations — Apply to systems that affect medical device performance, impacting documentation procedures, change control processes, and training records.
- Cybersecurity requirements — Get more complex with smart systems. When you’re responsible for networked medical equipment, algorithms create new vulnerabilities you need to understand.
- Professional organization standards — AAMI and ECRI are developing maintenance standards specific to these systems. Getting ahead of standards shows you’re managing risks proactively.
- State-level regulations — Some states are implementing transparency requirements. Most focus on consumer applications now, but healthcare applications will likely face similar scrutiny.
Making It Work
Start with existing processes. You already have procedures for equipment evaluation, maintenance scheduling, staff training, and vendor management. Extend these to include smart systems rather than creating separate procedures.
Update your documentation systems. Add these systems to your equipment database: track version numbers, update schedules, and performance metrics alongside traditional equipment data. Treat algorithm updates like firmware updates—document when they happen and test performance afterward.
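A sketch of what "treat algorithm updates like firmware updates" could look like as an equipment-database record. The fields are illustrative rather than drawn from any particular CMMS.

```python
from dataclasses import dataclass, field

@dataclass
class SmartSystemRecord:
    """Equipment-database entry extended with algorithm-specific fields,
    tracked alongside the usual asset data."""
    asset_id: str
    model_version: str                # algorithm version, like a firmware rev
    last_update: str                  # when the vendor pushed this version
    post_update_validation: str       # result of performance test after update
    accuracy_threshold: float = 0.95  # contracted SLA floor
    update_history: list = field(default_factory=list)

    def apply_update(self, new_version: str, update_date: str, validation_result: str):
        # Log the old version before replacing it, as with firmware changes.
        self.update_history.append((self.model_version, self.last_update))
        self.model_version = new_version
        self.last_update = update_date
        self.post_update_validation = validation_result

record = SmartSystemRecord("PUMP-117", "v2.3.1", "2025-01-15", "PASS")
record.apply_update("v2.4.0", "2025-03-02", "PASS")
print(record.model_version, record.update_history)
```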
Apply proven vendor criteria. Use existing vendor evaluation standards for smart system suppliers. Performance history, training quality, technical support, and contract compliance—the same factors that matter for traditional equipment vendors. Finally, round out the transition with a few operational steps:
- Budget for ongoing costs (training, documentation updates, additional staff time)
- Include smart systems in existing risk assessment protocols
- Establish accuracy thresholds for performance monitoring
- Create escalation procedures for system performance issues
The Bottom Line
Smart systems aren’t going away. The question is whether HTM departments will shape how they’re implemented or get left out of decisions that affect daily work.
HTM teams have skills these systems need: equipment lifecycle management, vendor relations, regulatory compliance, and risk assessment. These apply directly to algorithmic system oversight.
Hospitals that integrate smart system management into existing HTM processes will handle risks better than those treating this as purely an IT issue. Professionals who adapt their existing skills to include algorithmic oversight will become indispensable.
The bottom line: HTM’s expertise in preventing equipment problems before they happen is exactly what healthcare needs to manage smart system liability risks.
About the author: Stacey B. Lee, JD, is a professor at Johns Hopkins Carey Business School and Bloomberg School of Public Health and an award-winning author of “Transforming Healthcare Through Negotiation.” She provides legal analysis on healthcare technology and policy for national media.