New AI cybersecurity guidance is set to roll out in early 2026, as AI becomes more embedded in healthcare technology.


By Alyx Arnett

Artificial intelligence (AI) in healthcare technology management (HTM) feels like it’s at that familiar in-between stage: You can see where things are headed, but the path isn’t fully paved yet. That’s why the Health Sector Coordinating Council’s (HSCC) recent preview of its AI cybersecurity guidance has landed as both useful and unfinished—less a final answer and more a starting point.

Michael W. DeGraff, CISSP, CCSA, CHP, CSCS, chief information security officer at The Joint Commission, a member of the HSCC Cybersecurity Working Group, summed up the tension well. “AI has extraordinary potential to transform healthcare delivery,” he said, but “its adoption also brings new cybersecurity and patient safety challenges that healthcare leaders must address proactively.”

That balance is reflected in how HSCC has structured its guidance, which is expected to roll out starting in January 2026 and continue through the first quarter. The guidance breaks the issue into five workstreams: education, cyber operations, governance, secure-by-design, and third-party risk. Based on my conversations with DeGraff and Soumya Sen, PhD—an associate professor in the Information and Decision Sciences Department at the University of Minnesota’s Carlson School of Management—the structure reflects how AI cybersecurity issues show up across different parts of healthcare organizations. The workstreams align with responsibilities many of you already carry—training, incident response, procurement, and vendor oversight.

Sen emphasized that some of the most immediate AI-related risks are already familiar in form, even if the tools are new. AI-enhanced phishing, for example, is becoming more convincing as synthetic voice, images, and video improve. Sen also pointed to data poisoning and manipulation that can undermine clinical decision-support tools, as well as “agentic” systems—AI that acts with a high degree of autonomy—as areas where governance often lags. DeGraff added that these risks, as well as algorithmic bias and data privacy concerns, “can undermine trust and even lead to patient harm if not managed responsibly.”

Those conversations helped me understand why HSCC is emphasizing secure-by-design principles and an AI bill of materials. Governance gaps, opaque training data, and poorly defined system behavior are often what allow these risks to surface in the first place. These efforts push vendors to address questions that HTM teams often struggle to get clear answers to: What data trained the model? How is drift detected? What happens if a system needs to be rolled back to a known safe state? For those of you managing device fleets over time, those details aren’t abstract—they affect purchasing decisions, maintenance strategies, and risk planning.
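For a sense of what an AI bill of materials could capture in practice, here is a minimal sketch of a single inventory entry, written as a Python data structure. The field names and example values are my own illustration of the questions above; they are not drawn from the HSCC guidance, which has not yet been published.

```python
from dataclasses import dataclass, field

@dataclass
class AIBOMEntry:
    """Illustrative AI bill of materials record for one AI-enabled device.

    All field names here are hypothetical, not the HSCC's format.
    """
    device_model: str           # device or software product
    ai_component: str           # the model or algorithm embedded in it
    training_data_source: str   # provenance of the data used to train the model
    drift_monitoring: str       # how performance drift is detected in the field
    rollback_version: str       # last known-safe version the system can revert to
    third_party_dependencies: list = field(default_factory=list)  # bundled libraries, runtimes

# One hypothetical entry an HTM team might keep alongside its existing device inventory
entry = AIBOMEntry(
    device_model="Infusion pump, model X (hypothetical)",
    ai_component="Dose-anomaly detection model, v2.3 (hypothetical)",
    training_data_source="Vendor-curated clinical dataset, per vendor disclosure",
    drift_monitoring="Quarterly vendor-reported accuracy metrics",
    rollback_version="v2.1",
    third_party_dependencies=["open-source inference runtime"],
)
print(entry)
```

Even a simple record like this turns the provenance, drift, and rollback questions into something answered in writing at procurement time rather than reconstructed after an incident.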

Sen also pointed out that securing AI-enabled medical devices isn’t only about design. As wearables and in-home devices become more common, he said education and training need to extend to users and patients themselves—otherwise, they risk becoming an easy entry point for cyberattacks.

If there’s a challenge ahead, it’s a familiar one. Guidance alone doesn’t guarantee action. The phased rollout through Q1 2026 gives organizations time to digest the material, but it also raises questions about how consistently it will be implemented, especially in smaller or resource-constrained settings.

What I take away from these conversations is that HSCC’s work provides a shared starting point. It creates common language and sets expectations, while leaving room for organizations to adapt the guidance to their own realities. AI will continue to change devices and workflows, but the work of making those systems safe and secure still lands where it always has—with the people managing the technology, the risk, and the care it supports.

Alyx Arnett is chief editor of 24×7. Questions or comments? Email [email protected].
