I am very impressed with Carl Jones’ recent Soapbox column, “Five Things That Every Healthcare Professional Should Know.” It is a simply outstanding exposition of key areas for today’s BMETs.

There was one statement, though, that caught this old-timer's eye as concerning, and I'm really not quite sure what to say about it other than to recommend it as worthy of additional discussion:

Every year, more and more equipment becomes network-capable. As a result, the frequency of outages and reboots will also increase. In order to compensate for this increase, we must have a plan for quick response and repair.

I agree that the field needs to develop strategies to address problems with network-capable devices. But should we accept as a given that the frequency of consequential problems will increase? That would be a change from the past, when increases in reliability were among the reasons for introducing new technologies. Indeed, one of the marketing pitches for early microprocessor-based medical devices was the increased reliability microprocessors offered, with its potential for increased safety and availability as well as decreased cost of ownership. There may be good reasons to trade off some degree of reliability, eg, in exchange for increased capabilities. But should that tradeoff be tacitly accepted?

In her book The Challenger Launch Decision, Diane Vaughan described what she termed the “normalization of deviance.”1 Discussing Vaughan’s book, blogger Rob Boe defines the term as follows: “The gradual process through which unacceptable practice or standards become acceptable. As the deviant behavior is repeated without catastrophic results, it becomes the social norm for the organization.”

The normalization of deviance can be regenerative, in that the new norm can later survive deviations that displace it even farther from the original. Is there any hint of that in the expectation that the frequency of outages and reboots will [increase]? Or is this increase a phenomenon of a different sort? For example, reliability and safety are not necessarily synonymous, as the very word “failsafe” suggests. Could that be the case here?

In addressing the impact of design changes in general, Henry Petroski has provided numerous insights that can inform this discussion, among them the following:

Things work because they work in a particular configuration, at a particular scale, and in a particular context and culture.2 

Any design change…can introduce new failure modes or bring into play latent failure modes. Thus it follows that any design change, no matter how seemingly benign or beneficial, must be analyzed with the objectives of the original design in mind.3

Rick Schrenker is a systems engineering manager, department of biomedical engineering, Massachusetts General Hospital, Boston.

References

1. Vaughan D. The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago: University of Chicago Press; 1996.

2. Petroski H. Success through Failure: The Paradox of Design. Princeton, NJ: Princeton University Press; 2006:167.

3. Petroski H. Design Paradigms: Case Histories of Error and Judgment in Engineering. Cambridge, UK: Cambridge University Press; 1994:57.