The Coalition for Health AI (CHAI) has published a new guide, Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare, that addresses the quickly evolving landscape of health AI tools by outlining specific recommendations to increase the trustworthiness of AI within the healthcare community.

The 24-page guide reflects a unified effort among subject matter experts from leading academic medical centers and the healthcare, technology, and other industry sectors, who collaborated under the observation of several federal agencies over the past year.

“Transparency and trust in AI tools that will be influencing medical decisions is absolutely paramount for patients and clinicians,” says Brian Anderson, MD, a co-founder of the coalition and chief digital health physician at MITRE. “The CHAI Blueprint seeks to align health AI standards and reporting to enable patients and clinicians to better evaluate the algorithms that may be contributing to their care.”

Members of the CHAI Steering Group will discuss the Blueprint during an upcoming webinar.

“The successful implementation and impact of AI technology in healthcare hinges on our commitment to responsible development and deployment,” says Eric Horvitz, chief scientific officer at Microsoft and CHAI co-founder. “I am truly inspired by the incredible dedication, intelligence, and teamwork that led to the creation of the Blueprint.”


CHAI’s Collaboration with the National Academy of Medicine

The National Academy of Medicine’s (NAM) AI Code of Conduct effort is designed to align health, healthcare, and biomedical science around a broadly adopted AI code of conduct that ensures responsible AI and equitable benefit for all. The NAM effort will inform CHAI’s future work, which will provide robust best-practice technical guidance, including assurance labs and implementation guides, to enable clinical systems to apply the Code of Conduct.

CHAI’s technical focus will help to inform and clarify areas that will need to be addressed in NAM’s Code of Conduct. The work and final deliverables of these projects are mutually reinforcing and coordinated to establish a code of conduct and technical framework for health AI assurance.

“We have a rare window of opportunity in this early phase of AI development and deployment to act in harmony—honoring, reinforcing, and aligning our efforts nationwide to assure responsible AI. The challenge is so formidable and the potential so unprecedented. Nothing less will do,” says Laura L. Adams, senior advisor, National Academy of Medicine.

Following Patient-Centered Policy Approaches

The Blueprint builds upon the White House Office of Science and Technology Policy’s (OSTP) “Blueprint for an AI Bill of Rights” and the AI Risk Management Framework (AI RMF 1.0) from the U.S. Department of Commerce’s National Institute of Standards and Technology. OSTP acts as a federal observer to CHAI, as do the Agency for Healthcare Research and Quality, Centers for Medicare & Medicaid Services, U.S. Food and Drug Administration, Office of the National Coordinator for Health Information Technology, and National Institutes of Health.


“The needs of all patients must be foremost in this effort. In a world with increasing adoption of artificial intelligence for healthcare, we need guidelines and guardrails to ensure ethical, unbiased, appropriate use of the technology. Combating algorithmic bias cannot be done by any one organization, but rather by a diverse group. The Blueprint will follow a patient-centered approach in collaboration with experienced federal agencies, academia, and industry,” says John Halamka, MD, MS, president, Mayo Clinic Platform, and a co-founder of the coalition.

CHAI launched in the spring of 2022 with a mission to identify health AI standards and best practices and to provide guidance where needed. It has since grown to more than 150 organizations across academia, government, healthcare systems, and industry, the organization says.

“The CHAI Blueprint is the result of the kind of collaborative approach that’s essential for achieving diverse perspectives on issues affecting AI in medicine,” says Michael Pencina, PhD, a co-founder of the coalition and director of Duke AI Health. “And given our rapidly evolving understanding of the significant impacts of AI on health, health delivery, and equity, the fact that the Blueprint is designed to be a flexible ‘living document’ will enable us to maintain a continuous focus on these critically important dimensions of algorithmic healthcare.”