Lessons from 2025’s breaches point to a tougher reality in 2026, as AI adoption accelerates and cybersecurity becomes inseparable from patient care.
By Skip Sorrels, field CTO and CISO, Claroty
As healthcare leaders look back on 2025, it stands out as a year that offered clear lessons about the state of data security across the industry. Massive breaches at organizations including Aflac, Yale New Haven Health, Blue Shield of California, and DaVita exposed the personal and medical data of more than 35 million individuals. While each incident differed in execution, together they revealed a deeper, more systemic problem: Healthcare is still struggling with shared accountability for cybersecurity at a moment when the attack surface is expanding faster than governance frameworks can keep up.
Healthcare organizations are under immense pressure to do more with less. Budgets remain tight. Staffing shortages persist. And the industry is rapidly embracing artificial intelligence (AI) to relieve operational strain, from automating contract reviews to accelerating vulnerability assessments and analyzing massive volumes of threat intelligence in near real time. Used responsibly, AI has the potential to break healthcare free from the legacy mindset that meaningful change must be slow, manual, and resource-intensive.
But the same acceleration that makes AI so powerful also introduces new risks. Without clear governance, human oversight, and informed consent, AI can undermine patient safety, privacy, and resilience, especially in environments where technology cannot simply be taken offline when something goes wrong.
From Financial Impact to Patient Harm
One of the most important lessons from 2025’s breaches is that healthcare cyberattacks are no longer motivated solely by ransom payments or financial gain. Increasingly, attackers are aiming for disruption.
Healthcare is uniquely vulnerable because of the criticality of its connected systems. A catheterization laboratory filled with patient care devices cannot be powered down for routine patching without risking lives. Downtime windows are narrow, and many medical devices exist in a frustrating limbo—replacement parts are available, but security patches are not. While HHS 405(d) guidance rightly emphasizes network segmentation, traditional approaches often fall short in 24/7 care environments.
As a result, organizations are turning to more advanced techniques, like microsegmentation, to buy time while planning costly lifecycle refreshes. But attackers understand these constraints just as well as defenders do. In 2026, the ripple effects of ransomware and destructive attacks will increasingly be measured not in delayed reimbursements or IT outages, but in patient care delivery, morbidity, and even mortality. Cybersecurity in healthcare can no longer be treated as separate from clinical operations; it is patient safety.
Governance Is the Defining Challenge of 2026
If 2025 exposed the cracks, 2026 will test whether healthcare can close them. The defining challenge ahead is governance: establishing oversight frameworks that keep pace with rapid AI adoption while ensuring accountability across departments.
Cybersecurity can no longer sit solely with IT or security teams. AI systems touch legal, compliance, clinical workflows, supply chains, and third-party relationships. When something fails, responsibility is often diffused, and attackers exploit that ambiguity.
Effective governance means defining clear roles, acceptable levels of friction, and decision-making authority before a crisis occurs. It means implementing human-in-the-loop guardrails so AI augments, rather than replaces, clinical and security judgments. It also means recognizing that AI systems are trained by humans, can inherit bias, and must operate within clearly defined scope and controls.
As a former trauma and ICU nurse, I often think about triage. In the emergency room, you focus first on the chest wound before worrying about the broken wrist. Healthcare cybersecurity leaders in 2026 must take the same approach by using AI to understand where they are weakest right now and remediate the most critical risks first.
Consumer AI and the Question of Trust
These governance challenges extend beyond hospital walls. The rise of consumer-facing AI health tools, including offerings like ChatGPT Health, raises important questions about privacy, consent, and appropriate use.
With any interactive AI platform, especially those that enable web searches or third-party integrations, there is an inherent risk that personal data may be shared beyond the original application. Once that data is passed to a third party, it is governed by their privacy policies, not the platform the user started with. Human nature inclines us to trust without verification, yet few people read or fully understand the fine print in privacy policies and terms and conditions.
Patients and providers alike must ask hard questions: Do I really want to share highly personal health information with this tool? Am I comfortable with the possibility that it could be accessed by other parties or exposed publicly? If the answer is no, then caution is warranted.
At the same time, I strongly believe in empowering individuals to better understand their health. As a clinician, I know how overwhelming medical information can be even for trained professionals. AI has the potential to help patients navigate complex health topics, ask better questions, and become more informed partners in their care. The challenge is ensuring that these tools provide information, not diagnoses, and that they adhere to the principle that has guided medicine for centuries: do no harm.
Regulation, Responsibility, and the Role of Collaboration
Recent signals that regulators may take a lighter-touch approach to certain AI-enabled devices and clinical decision support tools place even greater responsibility on healthcare organizations and technology vendors. When AI systems begin to move toward diagnosing disease rather than presenting data, the risks escalate significantly. Diagnosis should remain in the domain of trained healthcare practitioners, supported rather than supplanted by technology.
Technology providers must meet the moment with greater transparency, particularly in plain language. Patients and providers deserve clear explanations of how data is protected, what security controls are in place, and exactly how information is shared during third-party interactions. Transparency builds trust, and trust is foundational to healthcare.
Looking ahead, no single organization can solve these challenges alone. The threats facing healthcare are increasingly intertwined with national resilience, extending to healthcare-adjacent infrastructure like power, facilities, and supply chains. Public-private collaboration will be essential to share intelligence, align standards, and respond at the speed modern threats demand.
Treating Cybersecurity as Care
In 2026, healthcare leaders face a choice. AI can be a powerful force multiplier, helping organizations assess risk in real time, reduce manual burden, and improve patient outcomes. Or it can become another layer of complexity that attackers exploit.
The difference will come down to governance, accountability, and a clear understanding that cybersecurity is not just a technical discipline. It is a core component of patient care.
When we treat it that way, we stand a better chance of not just being more automated but more resilient.
About the author: Skip Sorrels serves as Field CTO and CISO at Claroty. Skip is a cybersecurity professional known for his leadership in crafting robust cybersecurity programs. He has a master of science in cybersecurity and information assurance.
