New framework calls for clear regulatory pathways, data access reform, and reimbursement strategies to advance AI-enabled medical devices.


By Alyx Arnett

The Advanced Medical Technology Association (AdvaMed) has released its first artificial intelligence (AI) policy roadmap, offering lawmakers and regulators a framework for advancing the safe and effective use of AI-enabled medical devices. 

Developed by AdvaMed’s Digital Health Tech division, the roadmap outlines five key policy areas: privacy and data access, FDA AI regulatory framework, reimbursement and coverage, AI assurance labs, and generative AI.

“[A]ll of us in industry and medical devices are going after AI, and there’s such commonality that we could either play defense and wait for regulations, or we could partner with the FDA, legislators, CMS and help them understand the journey of something that sometimes is not very understandable,” says Robert Cohen, Digital Health Tech board member and president of digital, robotics, and enabling technology at Stryker, during a press call. 

AdvaMed’s roadmap, which draws input from a board that includes representatives from companies such as Siemens, Medtronic, Philips, Microsoft, Google, and Apple, arrives at a time when more than 1,000 AI-enabled medical devices have already received FDA authorization, and many more are in development. The organization hopes the roadmap will guide policymakers in ensuring that regulation, reimbursement, and infrastructure evolve alongside innovation.

“We wanted to cut through some of the noise and be able to put something in front of policymakers to get a better sense where medtech and big tech are coming together [and] where those primary issues are,” says Shaye Mandle, executive director of the Digital Health Tech division, during the press call.

1. Privacy and Data Access

AI-enabled medical devices rely on access to large, high-quality datasets to deliver accurate, personalized insights in diagnostics and treatment planning. However, fragmented data systems, inconsistent privacy standards, and siloed aggregation processes pose major barriers to innovation.

AdvaMed’s roadmap highlights the tension between protecting patient privacy and enabling developers to train robust, bias-mitigated algorithms. Current HIPAA regulations, while essential for safeguarding health information, were not designed for the scale and complexity of AI development. These limitations can make it difficult for companies to gather the longitudinal, demographically diverse data needed to meet FDA expectations and conduct meaningful bias analyses.

To address these challenges, AdvaMed calls for a modernized approach that supports responsible data use without weakening privacy protections.

Policy recommendations:

  • Ensure data protection without stifling innovation.
  • Evaluate the need to update HIPAA for the AI era and create clear guidelines specifically for data use in AI development.
  • Develop appropriate guidelines around patient notice and authorization for the data used to develop AI. 

2. FDA AI Regulatory Framework

AdvaMed’s roadmap positions the FDA as the lead regulator for AI-enabled medical devices. The organization states that FDA’s risk-based framework—used to assess safety, effectiveness, and post-market performance—is already equipped to handle the unique challenges of AI in medical technology.

Rather than creating new oversight structures, AdvaMed recommends that FDA maintain its central role while continuing to adapt existing tools such as the Predetermined Change Control Plan (PCCP), which allows pre-approved software updates without requiring a full resubmission. The roadmap also calls for greater consistency in implementation and international alignment to avoid regulatory fragmentation.

Cohen notes during the call that “as long as we stick to some guiding principles and processes, I think we can work with the FDA to keep us moving at a pace that keeps us competitive, not just in the United States, but in the world.”

Board members also emphasized the need for global alignment on AI regulatory standards, warning that varying rules across countries—such as the US, EU, and Japan—could slow innovation and increase time to market. AdvaMed supports harmonized criteria for data validation, privacy, and bias mitigation to ensure consistent oversight without duplicative requirements.

“We could have a five-year regulatory process in Europe, an [eight]-year process in the United States, and a six-month [process] in Australia…It creates complexity with software, and we don’t have consistency,” Cohen says during the press call. 

Policy recommendations:

  • FDA should remain the lead regulator responsible for overseeing the safety and effectiveness of AI-enabled medical devices.
  • FDA should implement the existing PCCP authority so that it achieves its intended purpose: giving patients timely access to beneficial product updates.
  • FDA should issue timely and current guidance documents related to AI-enabled devices and prioritize the development and recognition of voluntary international consensus standards.
  • FDA should establish a globally harmonized approach to regulatory oversight of AI-enabled devices.

3. Reimbursement and Coverage

AdvaMed’s roadmap identifies reimbursement as a critical factor in determining whether patients can access the benefits of AI-enabled medical devices. Although FDA clearance establishes safety and effectiveness, many technologies face delays in adoption due to unclear or nonexistent payment pathways—especially within Medicare, which influences broader reimbursement policy across private insurers and Medicaid programs.

The roadmap states that CMS has the regulatory authority to expand access, but its current framework lacks the specificity needed to consistently support AI technologies and software-driven tools. AdvaMed urges legislative and regulatory action to ensure payment models evolve alongside innovation, particularly for emerging categories such as algorithm-based health care services (ABHS) and digital therapeutics.

Policy recommendations:

  • Consider legislative solutions to address the impact of budget neutrality constraints on the coverage and adoption of AI technologies.
  • CMS should develop a formalized payment pathway for algorithm-based health care services (ABHS) to ensure future innovation and to protect access to this subset of AI technologies for Medicare beneficiaries.
  • Facilitate the adoption and reimbursement of digital therapeutics through legislation and regulation.
  • CMS should leverage its model authority to test the ability of AI technologies to improve patient care and/or lower costs.

4. AI Assurance Labs

As AI-enabled medical devices evolve, some stakeholders have proposed the use of third-party quality assurance labs to manage ongoing performance evaluation. However, AdvaMed’s roadmap raises concerns about this approach, questioning whether these labs would add value beyond existing FDA oversight.

The organization warns that introducing external labs could create redundant regulatory layers, increase costs, and introduce new risks related to data security and intellectual property. The roadmap also notes that third-party evaluators may lack the clinical context or device-specific understanding needed for accurate performance assessments, which could affect reliability and transparency.

Instead of relying on third-party labs, AdvaMed recommends building on FDA’s existing risk-based framework and encouraging alignment around accredited, consensus-based standards.

Policy recommendation:

  • Policymakers should encourage FDA to participate in the development of and timely recognition of accredited and consensus-based standards for quality assurance processes rather than rely on third-party assurance labs.

5. Generative AI

AdvaMed’s roadmap addresses the growing interest in generative AI (GenAI) and its potential role in medical technology. While no GenAI-enabled devices have yet been authorized by the FDA, early use cases—such as generating synthetic medical images for training or producing preliminary radiology reports—point to future clinical applications. Cohen says that generative AI may be best suited for improving efficiency in hospital operations, such as operating room scheduling or perioperative staffing—applications he describes as having a lower entry bar while the technology continues to evolve.

Given the evolving nature of this technology, AdvaMed urges regulators to avoid premature or overly rigid frameworks. Instead, the organization recommends applying the FDA’s existing risk-based approach, which evaluates devices based on intended use, risk profile, and performance, rather than focusing solely on the underlying AI method.

The roadmap also stresses the importance of continued communication between the FDA and stakeholders to ensure thoughtful, transparent policy development as GenAI applications emerge.

Policy recommendations:

  • Ensure FDA reviews and considers GenAI-enabled devices using the existing risk-based frameworks.
  • Encourage FDA to maintain ongoing dialogue with stakeholders in the health care sector and regular information-sharing on generative AI applications in medical devices.

ID 119392112 © Roberto Scandola | Dreamstime.com