The ancient Hindu parable tells of six blind men who decided to settle their argument about what an elephant looks like by touching one. The man who touched the leg said it felt like a pillar; the one who touched the tail said it was like a rope; the one who touched the trunk said it was like a tree branch; and the one who touched the ear said it was like a big leather fan. The fifth man said it was a huge wall after touching its belly, and the last said it was a solid pipe after touching a tusk. Obviously, none of them was entirely correct, but none was totally wrong either, because each had an incomplete “view” of the elephant.

Like the elephant in the parable, many clinical engineering (CE) professionals and health care executives apparently misunderstand benchmarking when they focus on a single metric or a few metrics. For example, if a hospital administrator relies solely on the full-time employee (FTE)/bed metric reported in the benchmarking paper that my colleagues and I wrote,1 he or she could conclude that the hospital’s clinical engineering head count is excessive. This apparently motivated Gary Evans to author the March 2009 “Soapbox” article questioning the validity of such “rules of thumb” and proposing an elaborate staffing model.

It is easy to see how one could be totally misled by focusing solely on the FTE metric, regardless of which denominator is used (number of beds or number of equipment items). By this metric alone, the best performer would be a one-person clinical engineering department that relies entirely on service contracts and time-and-materials calls. That approach would almost certainly result in very high costs, as well as poor customer satisfaction due to slow response and turnaround times.
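
To make the point concrete, here is a small sketch comparing two hypothetical departments; every figure below is invented for illustration, not taken from our paper. The contract-only shop “wins” on FTE/bed but loses badly on cost and turnaround.

```python
# Hypothetical illustration: why FTE/bed alone misleads.
# All figures are invented for the sake of the example.

departments = {
    "in-house":      {"ftes": 6, "beds": 300, "annual_cost": 900_000,   "turnaround_hr": 8},
    "contract-only": {"ftes": 1, "beds": 300, "annual_cost": 1_400_000, "turnaround_hr": 48},
}

for name, d in departments.items():
    ftes_per_100_beds = 100 * d["ftes"] / d["beds"]
    cost_per_bed = d["annual_cost"] / d["beds"]
    print(f"{name}: {ftes_per_100_beds:.1f} FTEs/100 beds, "
          f"${cost_per_bed:,.0f}/bed, {d['turnaround_hr']} hr turnaround")

# Ranked on FTE/bed alone, the contract-only shop looks best;
# ranked on cost and turnaround, it clearly is not.
```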

Similarly, if one focuses solely on financial performance, the clinical engineering department may be forced to cut deeply into its staffing and training budgets, thereby providing slow and poor service, even to the point of putting patients and users at risk. Conversely, overstaffing and heavy reliance on OEM contracts can help improve equipment availability and customer satisfaction, but the resulting high expenses may put the hospital at a serious competitive disadvantage, if not in financial peril.

I believe one of the reasons the name of our profession includes the word “engineering” is that, like every other engineering discipline, ours must balance cost and quality in order to keep both the customer and the organization happy. Furthermore, because we are in a service industry, keeping the staff engaged is also essential for long-term success. One well-known performance management method used in many industries, including health care, is the Balanced Scorecard (BSC) pioneered by Kaplan and Norton.2 The BSC suggests that each organization use at least three to four groups of metrics, covering all relevant aspects: internal process (operations), staff (learning and growth), customer, and finance. Only by using a comprehensive set of metrics can one truly measure and, thus, compare the performance of any enterprise.
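
As an illustration of what such a metric set might look like for a CE department, here is a minimal sketch of a BSC-style scorecard. The metric names, values, weights, and targets are hypothetical, not drawn from Kaplan and Norton or from our paper; the point is simply that each perspective is reported against its own target rather than collapsed into one number.

```python
# A minimal sketch of a BSC-style metric set for a CE department.
# All metric names, actuals, and targets below are hypothetical.

scorecard = {
    "finance":  {"cost_of_service_ratio": 0.04},    # CE cost / equipment value
    "customer": {"satisfaction": 0.88},             # survey score, 0-1
    "internal": {"pm_completion_rate": 0.95},       # on-time PMs, 0-1
    "learning": {"training_hours_per_fte": 32},     # hours per year
}

targets = {
    ("finance", "cost_of_service_ratio"): 0.05,     # lower is better
    ("customer", "satisfaction"): 0.90,
    ("internal", "pm_completion_rate"): 0.98,
    ("learning", "training_hours_per_fte"): 40,
}

# Report each perspective against its target instead of
# ranking the department on any single metric.
for group, metrics in scorecard.items():
    for metric, actual in metrics.items():
        target = targets[(group, metric)]
        print(f"{group:8s} {metric:25s} actual={actual} target={target}")
```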

Now, after many years of debate, clinical engineering benchmarking seems to be finally gaining ground—witness the new efforts launched by AAMI and ECRI Institute. The clinical engineering community needs to learn not only how to perform benchmarking properly, but also how to interpret and use the results. The latter is especially important because we also need to educate health care executives and consultants so they will not attempt to judge us by a single, isolated metric, especially when the data are incomplete or inconsistent.


For example, one of the typical comparisons consultants use is a percentile ranking of a particular organization against a peer group with similar characteristics (size, specialty, teaching status, etc). Unfortunately, it is difficult to find truly comparable organizations in sufficient quantity for a statistically meaningful comparison, so such rankings are often meaningless. Furthermore, even when a good peer group can be assembled, the quality of the data is often questionable. As discussed in our benchmarking paper,1 we found a large number of clinical engineering departments reporting an internal labor budget that exceeded 80% of the hospital’s total clinical engineering expenditure, while most others clustered around only 20%. Apparently, the former reported only their own department budgets, ignoring the service contracts and high-cost parts paid for by other departments.
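
The two steps involved—screening out implausible submissions, then ranking within the surviving peer group—can be sketched in a few lines. The labor-share figures and the 60% cutoff below are invented for illustration; any real screen would need a defensible threshold.

```python
# Sketch: screen implausible submissions, then compute a percentile
# rank within the surviving peer group. All figures are invented.

def percentile_rank(value, peers):
    """Percentage of peers at or below `value`."""
    return 100.0 * sum(p <= value for p in peers) / len(peers)

# Internal labor as a share of total CE expenditure, one value per hospital.
reported_shares = [0.18, 0.22, 0.19, 0.25, 0.21, 0.85, 0.92, 0.20]

# Crude screen: drop submissions that apparently counted only the
# department budget (labor share implausibly high). The 0.60 cutoff
# is arbitrary, chosen here only to separate the two clusters.
clean = [s for s in reported_shares if s <= 0.60]

print(f"kept {len(clean)} of {len(reported_shares)} submissions")
print(f"a 0.23 labor share ranks at the "
      f"{percentile_rank(0.23, clean):.0f}th percentile of the clean peers")
```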

So let us take off the blinders we have worn for so long, pretending that benchmarking would never find us, so that we can see what the elephant truly looks like from different perspectives. In doing so, we can hopefully avoid being trampled. Perhaps we will even have a chance to ride the elephant and survey the wonderful health care landscape from a higher vantage point.


Binseng Wang, ScD, CCE, FAIMBE, FACCE, is vice president of performance management and regulatory compliance for ARAMARK Healthcare’s clinical technology services, Charlotte, NC.

References

  1. Wang B, Eliason RW, Richards SM, Hertzler LW, Koenigshof S. Clinical engineering benchmarking: An analysis of American acute care hospitals. J Clin Eng. 2008;33:24-37.
  2. Kaplan RS, Norton DP. The Balanced Scorecard: Translating Strategy Into Action. Boston, Mass: Harvard Business School Press; 1996.