From pulse oximeters to CT scanners, the typical hospital averages two medical devices per bed—devices that capture, store, or transmit patient data through the IT network every day. Yet medical devices are often left out of risk assessment, security, and compliance efforts—an automatic “fail” of both HIPAA and Meaningful Use requirements. And that’s just part of the problem. The range of potential security risks posed by medical devices is much broader.
What follows are nine medical device vulnerabilities that put patients and their data at risk, and that could land healthcare providers in hot water with federal regulators. After a quick review, we’ll explore the causes of these vulnerabilities, their ramifications, and possible solutions.
1) Let’s start with the basic security functionality you’ve come to expect from your own personal smart devices. Chances are your smartphone, tablet, and personal computer receive regular updates and patches for known vulnerabilities. Because of FDA restrictions and manufacturer practices, most medical devices do not.
2) Half of computerized medical devices run on Windows operating systems, whose familiarity facilitates both intentional and unintentional breaches. (We’ll expand on this later.)
3) Many operating systems in medical devices are no longer supported by the original designer. In fact, many medical devices like CT scanners, MRI systems, and lab machines carry at least eight versions of Windows that predate XP, as well as several versions of DOS, the iconic black screen of the 1980s.
4) More than 97% of medical devices cannot have antimalware software added to them because the manufacturer has not validated it in accordance with FDA guidelines.
5) Most medical devices do not have supported data encryption capabilities.
6) Similarly, many medical devices do not have supported mechanisms for complying with basic HIPAA requirements (e.g., unique logins and passwords, user log maintenance).
7) Many medical devices are set up with Internet access for the convenience of the manufacturer, without regard to the risks of undue exposure of electronic protected health information (ePHI).
8) Worse yet, between patient studies on medical devices, many clinicians use those same devices to surf the Web, check personal email, and use social media accounts.
9) Finally, because they are viewed as the weakest link in healthcare data security, medical devices are increasingly being targeted for hacking, cyber attacks, and medical ID theft.
What’s At Risk?
As it happens, the risks posed by these vulnerabilities are considerable. Failure to secure connected medical devices leaves healthcare facilities open to a variety of perils, including the following:
- Financial losses averaging $2.4 million per breach, according to Ponemon Institute reports, including federal penalties, corrective action, legal costs, and lost revenue.
- The potential loss of Meaningful Use funds, or fines for incorrect attestation.
- Patient misdiagnosis or harm due to corrupted data or device malfunction.
- The loss or impairment of patient data.
- Disruption of other devices connected to the same network.
- Delayed patient testing, increased work backlogs, and patient diversion.
- Financial and reputational damage following corrective action.
- Criminal and civil penalties.
We know data breaches are bad news. Yet of all the possible negative outcomes, a data breach may be the least of your worries. Much worse is the possibility that the failure to secure medical devices can harm patients. “It’s one thing for a CT scanner to be down,” one CMIO told us. “It’s another thing if that CT scanner is impacted in a way that delivers an abnormally high dose of radiation,” as one hospital actually experienced.
The problem behind the problem is this: medical devices are often misfit devices. Because they are FDA-regulated, we can’t treat them as conventional computers, updating their software, patching operating systems, and slapping on antivirus or encryption capabilities as we wish. Doing so would alter their FDA-approved state and could corrupt the device’s function or the data stored on it.
Instead, the FDA requires that any modification to a medical device first be validated, regression-tested, and cleared by the manufacturer. This requirement isn’t a bad thing; it’s a necessary safeguard designed to ensure the device remains accurate and safe for patients. But these restrictions do contribute to the risks outlined above.
Another obstacle is that the IT department in a typical hospital isn’t equipped (or even permitted, in some instances) to tinker with FDA-regulated equipment. Similarly, the clinical engineers who fix the mechanics of that equipment when it breaks down often aren’t well-versed in discerning how those devices, the IT network, patient data, and federal mandates all affect one another.
An additional challenge, mentioned earlier, is the fact that about half of medical devices run on a Windows system, so their interfaces look much like a home computer’s. Such familiarity has two unintended consequences: It makes those devices more vulnerable to viruses and malware, and it invites clinicians to treat the devices casually, checking emails, updating their pedometer software, or surfing the Web between patient studies. (We’ve seen plenty of that.)
And one more complication is worthy of mention: Try asking which department owns the accountability for medical device security and compliance. You will very likely find what one CIO called a “neverland” of finger-pointing over a bunch of devices that could take down several departments.
What to Do
Fortunately, there are steps you can take to reduce this wide range of risks:
1) When conducting your HIPAA-mandated risk assessment, be sure to include every place where ePHI resides in your facility. Think beyond traditional computers and tablets. Medical equipment capturing patient data includes behemoth devices like MRI systems all the way down to more ordinary patient care tools like pulse oximeters, or IV pumps that regulate medicine dosages. If a device has patient data in it, it is subject to HIPAA.
2) Evaluate whether legacy systems connected to your network really need to be connected. Are there more secure ways to get the data from a device to the network, such as downloading information to an encrypted USB device, then uploading it to the network where or when needed? Not all devices need to “talk” to one another, the Internet, or your network. Limit those capabilities where no real need exists.
3) Determine whether you need to keep information on a medical device once you’ve put it in the patient’s chart. Why leave test results on a laptop-based medical device when they are so easy to steal? If nothing else, remove old exams on a regular schedule—daily, weekly, or monthly—to ensure that they are appropriately transferred to your EHR.
4) Because Windows is such a common operating system, counter the risks associated with its misuse by defining acceptable use policies for medical devices, then conducting routine security awareness training as required by HIPAA.
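For step 3, the routine removal of old exams can be automated with a small cleanup script. The sketch below is illustrative only: the export directory, the `.dcm` file pattern, and the retention window are assumptions, not details of any particular device, and it presumes the exams have already been verified as transferred to your EHR.

```python
from datetime import datetime, timedelta
from pathlib import Path

def purge_old_exams(export_dir: str, retention_days: int = 7) -> list:
    """Delete exam files older than the retention window.

    Assumes exams in export_dir have already been confirmed as
    transferred to the EHR. Path, pattern, and retention period
    are hypothetical examples.
    """
    cutoff = datetime.now() - timedelta(days=retention_days)
    removed = []
    for exam in Path(export_dir).glob("*.dcm"):  # e.g., DICOM exports
        modified = datetime.fromtimestamp(exam.stat().st_mtime)
        if modified < cutoff:
            exam.unlink()                 # delete the stale exam file
            removed.append(exam.name)     # record it for the audit log
    return removed
```

In practice, a script like this would run on a schedule (daily, weekly, or monthly, per your policy), and the list of removed files would be written to a log so the deletions are part of the audit trail HIPAA expects.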
If you do nothing else, please observe our first recommendation: Complete a risk assessment that captures all places where ePHI is stored in your organization. Not only does that assessment fulfill a vital HIPAA requirement, but it is also the foundation for identifying your true risk level and what risk-mitigation steps to take next.
Armed with the findings of the risk assessment, you’ll be able to pinpoint high-impact steps that don’t waste your resources and that result in greater patient safety, data security, and HIPAA compliance.
Derek Brost is the chief security officer for eProtex, Indianapolis, a medical device security and compliance firm. For more information, contact firstname.lastname@example.org.