By Rick Schrenker
In May, 24×7 shared a story about a recent Brookings Institution report discussing health care data breaches. The story ended with the following: “Organizations that suffer from a security breach also undergo an audit by the Office for Civil Rights, a process many say is unduly punitive and discourages health organizations from sharing details about the breach with other hospitals. Many health organizations are also reluctant to circulate their experiences because of the negative publicity associated with breaches.”
If this is an accurate reflection of the current state of data breach reporting, then health care may be taking a step back from what it learned about improving safety nearly two decades ago. Or perhaps it does not see the parallels between safety and security. It has been 17 years since the Institute of Medicine published the landmark “To Err Is Human.” Among the strategies it recommended for improving safety was implementing voluntary reporting systems with confidentiality protections built into law, so health care organizations wouldn’t be discouraged from participating. This was hardly a novel idea even then; after all, the Aviation Safety Reporting System has been around since 1976.
“To Err Is Human” did not recommend voluntary reporting in a contextual vacuum. It was part and parcel of a strategy to introduce safety cultures into health care. Organizations were encouraged to move toward a culture of safety, which was characterized as being informed, just, flexible, and supportive of learning. It’s difficult to imagine how that would be possible without the ability to openly share stories.
And that was not the Institute of Medicine’s final statement on the topic of sharing information in health care. In its 2012 publication “Health IT and Patient Safety: Building Safer Systems for Better Care,” it recommended that the U.S. Department of Health and Human Services (HHS) ensure health IT (HIT) vendors don’t prohibit the sharing of information about HIT problems, and that user-safety-related reports be made more public. The IOM did not make these recommendations in a vacuum, either, as HIT vendor contracts often included prohibitions against such sharing.
We don’t need to look outside of the HTM profession to find examples of freely reported lessons learned. Think about it: How many of us have shared stories about lessons we’ve learned at professional meetings? Early in my career, I learned more about real problems over drinks at the end of the day than I did in most of the formal sessions I attended. And in 2009, I collaborated with 24×7 on an article along these lines titled “Failures and Consequences.” We put out a call for readers to share stories about system or device failures and how they addressed them—and we promised not to reveal their identities. A handful of readers responded, and everyone got to read their stories—and benefited from them.
We lose powerful leverage if we are discouraged from sharing, and I encourage 24×7 readers to do so more often. I would also encourage you to share security stories, but would you feel comfortable doing that in the current environment? It might prove interesting to survey readers about whether they would share safety or security stories.
At this point, it is reasonable to question whether what we’ve learned about developing and managing safe systems applies to making and keeping them secure. There is at least some evidence that it does. For instance, assurance cases focused on security are being developed using the same tools used to develop safety assurance cases. Then again, security may be a sufficiently different domain that its culture won’t map onto that of safety. If so, then now is the time to identify and address where and how they diverge, while we are still relatively early in the integration of medical devices and HIT.
Case in point: Consider how IEC 80001-1 identifies and describes three “key properties” to be managed for risk when IT networks incorporate medical devices: safety, effectiveness, and security—in that order. I served on the committee that developed that standard and its initial set of technical reports, including one on security. Rick Hampton and I summarized those documents in a series of articles for 24×7, and you can look back to that series for details. In short, the success of 80001-based risk management is grounded in the disclosure of enough information to support the identification, assessment, and mitigation of all three arguably related components of risk. Clamping down on the availability of any potentially supporting information can only lessen confidence in any risk management process.
Taking all of this into account, the rush to address security breaches with punitive actions before considering other options suggests a fundamental question: After all these years, and all the stories we’ve shared, what have we really learned?
Rick Schrenker is a systems engineering manager for Massachusetts General Hospital and senior biomedical engineer for the MGH MD PnP Program.