Artificial intelligence (AI) is rapidly becoming an integral part of the healthcare industry, but as we march forward into a new era of technology there are increasing concerns about its regulation.

President Biden recently issued an executive order mandating new safety and security standards for AI, and Ty Greenhalgh, industry principal for healthcare at Claroty, spoke with 24×7 about the challenges facing the healthcare sector.

24×7: How did the executive order on safety and security standards for AI come about, and why is its timing crucial as we approach 2024?

Ty Greenhalgh: The order was prompted by a number of factors, including the increasing sophistication of AI, the growing use of AI in critical applications, and the potential risks of AI misuse. AI has the potential to cause more harm than good if it is not developed responsibly, and as hospitals' interconnectivity accelerates, so does the speed at which AI misuse can spread.

24×7: What are the implications of the executive order for healthcare delivery organizations, or HDOs, especially with labor shortages increasing reliance on AI support?

Greenhalgh: Despite the healthcare industry historically being slower to adopt emerging technologies, recent surveys have found that almost half of health systems currently use AI to address workforce challenges. Introducing more third-party connected systems like generative AI into hospitals and clinics to alleviate these challenges can make it more complicated for healthcare providers to stay secure.


What is promising about the recent executive order focused on the safety and security of these technologies is that it proactively places an emphasis on the responsible use of AI in healthcare, with the Department of Health and Human Services establishing a safety program that not only follows reports of unsafe healthcare practices involving AI but also acts to remedy any harm done.

24×7: Can you explain why AI isn’t universally applicable to all industries, and what factors determine its suitability for integration in specific sectors?

Greenhalgh: While the benefits of AI can seem promising for many industries, several currently do not have a mature enough security posture to handle the unforeseen consequences that come with AI adoption. For example, a technology company might look to leverage AI to streamline administrative processes within its IT department, and a hospital may have the same idea. However, the technology company most likely uses state-of-the-art cyber technology and practices to keep its IT and OT infrastructure secure.

On the other hand, many hospitals still critically need to prioritize basic cybersecurity hygiene before adopting shiny new AI tools, which have been notorious access points for bad actors to infiltrate. Industries like banking or technology are best positioned for AI integration, as they already have strong security protocols in place that minimize the risks associated with implementation.

24×7: What are some examples of the potential consequences if healthcare providers fail to properly secure third-party connected systems like ChatGPT and generative AI in hospitals and clinics?

Greenhalgh: ChatGPT and generative AI are already being used by hospitals and clinics for medical transcription and patient communications. However, with an increasingly connected landscape and a larger attack surface, healthcare providers are extremely vulnerable to cyberattacks, especially if new technologies are implemented without robust security protocols in place during the adoption stage. If hackers gain access to a hospital's building management system (BMS) or patient care systems through vulnerable AI apps meant to aid workflows, the consequences could be dire, impacting operations, patient care, or worse, potentially putting lives at risk.

24×7: As part of the NIST Generative AI Risk Management work group, what insights can you provide about ongoing efforts to mitigate generative AI risks in healthcare and other sectors?

Greenhalgh: The NIST GAI Public Working Group launched in June 2023. GAI refers to AI models or systems able to generate content (image or text) through plain-language interaction; these systems are capable of tasks beyond their original training and cover adaptable, expansive capabilities. We are focusing on governance, pre-deployment testing, content provenance, and incident disclosure. The Working Group seeks to provide outcome-focused actions that enable dialogue, understanding, and guidelines for managing AI risks and for responsibly developing and using trustworthy GAI systems.

24×7: What recommendations or best practices do you suggest for healthcare providers to boost AI technology security in light of the changing landscape and the executive order’s guidelines?

Greenhalgh: Don’t rush to adopt new technologies just because the benefits speak directly to your immediate needs. While the realm of possibilities AI offers can be exciting to industries, like healthcare, that are in desperate need of labor support, security teams must take the time to identify vulnerabilities, mitigate potential risk, and build resiliency within their XIoT systems to guard against the unseen threats that many of these AI technologies carry. For example, prioritizing network segmentation so clinical devices don’t share a network with external apps, and ensuring security teams have proper visibility into all connected medical devices on their network, can help them identify and mitigate risk early.