Despite high confidence in digital defenses, few healthcare organizations have assessed the physical risks posed by generative AI tools.
As artificial intelligence rapidly transforms the cyber threat landscape in healthcare, new data from Black Book Research reveals a critical and overlooked vulnerability: the physical security of hospitals, clinics, and payer organizations.
While 93% of surveyed cybersecurity leaders say their digital defenses are adequate, fewer than one in five have any strategic plan to address the rise of AI-enabled physical security threats.
Based on Q2 2025 polling of 1,128 provider and payer cybersecurity decision-makers worldwide, Black Book’s findings point to a dangerous disconnect. Healthcare organizations are investing heavily in digital firewalls, endpoint protection, and ransomware defense, yet they remain largely blind to a new class of threats powered by generative AI—threats that can mimic clinician voices, manipulate surveillance footage, bypass building access systems, and compromise smart infrastructure.
“AI is no longer just a digital threat; it is a physical one,” says Doug Brown, founder of Black Book Research, in a release. “We are now seeing threat actors use generative AI to impersonate clinicians, defeat voice authentication, bypass smart locks, and manipulate surveillance systems. These are no longer hypothetical scenarios. Attackers are walking through the front doors of hospitals using tools that outpace the slow churn of healthcare policy, procurement, and security oversight. Any health system that still separates physical and cyber risk is operating on outdated assumptions.”
Respondents described a widening gap between cyber risk awareness and operational readiness. Despite growing headlines about AI-generated phishing, deepfake impersonations, and drone surveillance, the healthcare sector has not meaningfully upgraded its physical security posture in parallel with its digital investments.
Key findings from the Black Book Q2 2025 poll:
- 93% of cybersecurity leaders say their digital protections are adequate, but only 18% report having any strategy to mitigate AI-driven physical threats.
- 71% of hospital executives acknowledge their facility’s physical security systems are unprepared for manipulations such as deepfake badge credentials or sensor spoofing.
- 67% of payer organizations with physical office sites or hybrid call centers were unaware that AI voice cloning could defeat IVR authentication or front-desk verification processes.
- 82% of all respondents reported they had not conducted a cyber-physical risk audit in the past 12 months.
Vendors Recognized for Addressing AI-Driven Cyber-Physical Threats
Survey respondents identified several vendors as having strong capabilities in detecting and mitigating emerging AI-driven threats that cross digital and physical domains. These platforms are used across hospitals, health systems, and payer networks and include tools based on machine learning, behavioral analytics, and autonomous threat detection.
- Armis offers agentless visibility and AI-based monitoring for connected medical devices and operational technologies.
- Bishop Fox provides red teaming services used to expose vulnerabilities in surveillance, badge access systems, and connected care infrastructure.
- Claroty (Medigate) protects IoMT and clinical systems through machine learning that detects manipulation of connected devices and smart facility components.
- Cisco Secure supports Zero Trust architectures and includes AI-powered analytics to monitor digital and physical access behaviors in hybrid clinical environments.
- CrowdStrike offers agent-based AI for detecting behavioral anomalies and sophisticated threat campaigns across clinical endpoints.
- Cynerio secures medical IoT systems by baselining device behavior and flagging manipulation or ransomware infections.
- Darktrace uses self-learning AI to detect impersonation, badge cloning, and network manipulation across thousands of healthcare organizations.
- IBM Security offers platforms that correlate digital and physical access data while automating threat response.
- Okta provides identity and access management with adaptive AI to prevent credential theft and synthetic access.
- Ordr enforces security policies for medical and building systems, isolating unauthorized device activity.
- Palo Alto Networks uses AI-powered platforms to enforce segmentation and detect polymorphic malware.
- SentinelOne delivers autonomous endpoint protection against AI-crafted exploits and real-time threat behaviors.
- Vectra AI flags privilege escalation and behavioral deviations common in AI-generated attacks.
These solutions reflect a growing shift toward integrated cyber-physical risk management, as AI-generated threats increasingly evade traditional rules-based defenses.
What Makes a Tool ‘AI Threat-Ready’ in Healthcare?
According to Black Book, effective tools must detect synthetic behaviors rather than just known malware signatures, identify identity misuse such as voice or video impersonation, monitor IoMT and operational environments, and support red teaming or simulation of AI threats to proactively assess system vulnerabilities.