The hospital association urges the FDA to adopt risk-based post-market oversight for AI-enabled medical devices while aligning new requirements with existing regulatory frameworks.
The American Hospital Association (AHA) has called on the US Food and Drug Administration (FDA) to adopt a risk-based approach to measuring and evaluating the real-world performance of artificial intelligence (AI)–enabled medical devices, citing both the rapid growth of AI tools in healthcare and the unique challenges they pose once deployed.
In comments submitted in response to an FDA request for information, the AHA urged the agency to focus post-deployment oversight on higher-risk AI-enabled devices, align new requirements with existing regulatory frameworks, and address infrastructure and resource barriers that could limit adoption—particularly in rural and safety-net hospitals.
“AI-enabled medical devices offer tremendous promise for improved patient outcomes and quality of life,” the AHA wrote. “At the same time, they also pose novel challenges—including model bias, hallucinations, and model drift—that are not yet fully accounted for in existing medical device frameworks.”
Rapid Growth, New Risks
According to the AHA, more than 1,240 AI-enabled medical devices have been cleared or approved by the FDA to date, with the majority receiving clearance within the past three years. While many AI tools used by hospitals remain administrative—such as scheduling or revenue cycle applications—the association notes that clinical use is expanding, particularly in diagnostic imaging and radiology.
AI-enabled imaging tools, the AHA says, can identify patterns and anomalies in X-rays, MRIs, and CT scans that may be difficult for human clinicians to detect, supporting earlier diagnosis and care decisions. At the same time, the adaptive nature of AI systems introduces risks that may emerge only after deployment.
“AI tools are inherently designed to be agile and adaptive, taking in new data points, discerning patterns, and continually updating to improve model accuracy,” the AHA writes, noting that this is “especially true for generative AI.”
Call for Risk-Based Post-Deployment Monitoring
While the FDA currently regulates AI-enabled medical devices through existing pathways—including 510(k), de novo, and premarket approval—the AHA argues that gaps remain when it comes to monitoring device performance in real-world clinical settings.
The association encourages the FDA to update adverse event reporting mechanisms to better capture AI-specific issues such as algorithmic instability and mismatches between the data a model was trained on and the real-world patient populations it encounters.
“The potential for bias, hallucinations, and model drift demonstrates the need for measurement and evaluation after deployment,” the AHA writes, adding that current reporting tools do not adequately account for these risks.
Rather than applying uniform monitoring requirements, the AHA recommends a tiered approach in which higher-risk devices are subject to more intensive oversight, while lower-risk applications face fewer burdens. The goal, the association says, should be to focus limited resources—time, personnel, and cost—on applications with the greatest potential impact on patient safety.
Aligning With Existing Frameworks
The AHA also cautions against creating entirely new regulatory structures for post-market AI evaluation, urging the FDA to build on its existing total product lifecycle framework. Aligning post-deployment monitoring with current clearance pathways could help reduce redundancy and inefficiency, the association says.
The AHA notes that more than 96% of AI-enabled medical devices are cleared through the 510(k) process, which “caps the number of indications for which applicants can seek approval at a given time.” Because AI systems can evolve and support new clinical use cases, the AHA argues that this structure may slow adoption and increase costs for both vendors and providers.
The association suggests that allowing manufacturers to submit detailed post-market monitoring plans—paired with additional controls where appropriate—could streamline the clearance process while maintaining safety.
Clarifying Scope and Reducing Burden
In its comments, the AHA also urges the FDA to clarify that any new measurement and evaluation standards should apply only to AI-enabled medical devices and not to certain clinical decision support or administrative AI tools that are excluded from the definition of a medical device under the 21st Century Cures Act.
The association warns that extending monitoring requirements to lower-risk or short-term tools could create unnecessary barriers and divert attention from higher-risk applications tied directly to diagnosis or treatment.
“At the same time, evaluation and monitoring activities should not be overly burdensome and resource-intensive,” the AHA writes.
Infrastructure and Workforce Considerations
Finally, the AHA highlights disparities in hospitals’ ability to support AI governance and monitoring activities. Smaller, rural, and safety-net hospitals may lack the staffing and technical resources needed to manage complex AI systems, potentially widening the digital divide.
While acknowledging that the FDA alone cannot address these challenges, the AHA encourages cross-agency collaboration to support training, technical assistance, and potential funding opportunities related to AI deployment and oversight.
The association says it looks forward to working with the FDA as the agency considers future policies for evaluating AI-enabled medical device performance in real-world settings.