If a biomed thinks the downtime for a particular type of equipment in the hospital seems excessive, how can he/she find out if it actually is excessive? If it does turn out to be excessive, how can the department improve this metric? And how will it know that the measures put into place have actually worked?

The answer is benchmarking. According to Ted Cohen, MS, CCE, manager of clinical engineering for the UC Davis Health System in Sacramento, Calif, benchmarking is the process of comparing quantifiable measurements, such as cost, quality, productivity, or downtime, within one’s own department over time or, more commonly, among similar companies, departments, or institutions for the purpose of quality improvement and/or cost improvement.

For some, the concept has been adapted as metrics, progress reports, or scorecards. Christopher J. Correll, TSGT, USAF, CBET, NCOIC, clinical engineering, assistant facility manager at Pope Air Force Base in North Carolina, thinks of the tool as the department report card. “Basically, benchmarking provides an overall performance picture for our department and helps us gauge where we are in relation to a set standard. The result is often the clinical engineering department’s case for making changes in order to make improvements,” Correll says.

When benchmarks are used correctly and with a positive objective, their application within a clinical engineering department results in quantitative improvements in performance, quality, and/or cost. These measured results can then be used as promotional tools, “where you can show people how you are improving over time,” explains Frank R. Painter, adjunct professor and clinical engineering internship program director at the University of Connecticut in Storrs, Conn.

However, if used incorrectly, such as when comparing dissimilar metrics or institutions, benchmarking can lead to changes that decrease quality or increase expense. “Every situation is different, and it can be misleading since no two hospitals’ metrics, size, mission, and department resources will be exactly the same,” Correll says.

To make accurate comparisons, clinical engineering departments need to have similarities among institutional demographics and metric definitions. “For example, it is not valid to compare cost-of-service ratio if institution one includes imaging equipment and institution two does not,” Correll says. An apples-to-apples exchange of information can yield shared insights into how to improve metrics.

The Measuring Stick

Painter suggests the best way to get started is with high-level objectives. “The way you approach them is identify what your boss, what the hospital, what the organization has in terms of expectations for the department, and try to include benchmarks related to that in your monitoring,” Painter says.

For instance, if the objective is excellent customer service, then the department might want to track user complaints or customer satisfaction. To reduce costs, the department might focus on the cost-of-service ratio.

Benchmarks that Matter

A benchmarking program should reflect the benchmarks that actually matter. As a result, every program should be different, monitoring the metrics that relate to the issues the organization has identified for improvement. Frank R. Painter, adjunct professor and clinical engineering internship program director at the University of Connecticut in Storrs, Conn, recommends biomed departments choose about eight benchmarks to track. Here he provides five of the most common (a rough sketch of how they might be computed follows this sidebar):

Total number of nonmanagement, nonsupervisory full-time employees (FTEs) divided by the number of supervisory and management FTEs;

Number of persons in the clinical/biomedical engineering program assigned to medical device maintenance and repair, per 1,000 medical devices;

Ratio of total annual maintenance hours to total annual hours paid to clinical/biomedical engineering program personnel assigned to maintenance and repair (percentage);

Ratio of maintenance contracts—plus vendor time and materials costs—to the total cost of maintenance; and

Ratio of the total annual maintenance cost to the total acquisition cost for medical devices managed by the clinical/biomedical engineering program (percentage).

—RD
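As a rough illustration, the short Python sketch below shows how those five ratios might be computed. The function names and figures are hypothetical placeholders invented for the example, not data or definitions from any particular department; they stand in for numbers a department would pull from its own records.

```python
# A minimal sketch of how the five ratios above might be computed.
# All names and figures are hypothetical placeholders, not data from any hospital.

def supervision_ratio(staff_ftes, supervisor_ftes):
    """Nonsupervisory FTEs per supervisory/management FTE."""
    return staff_ftes / supervisor_ftes

def staff_per_1000_devices(maintenance_staff, device_count):
    """Maintenance and repair staff per 1,000 managed devices."""
    return maintenance_staff / (device_count / 1000)

def maintenance_hours_pct(maintenance_hours, paid_hours):
    """Annual maintenance hours as a percentage of annual paid hours."""
    return 100 * maintenance_hours / paid_hours

def outsourced_cost_pct(contract_and_vendor_cost, total_maintenance_cost):
    """Contract plus vendor time-and-materials cost as a share of total maintenance cost."""
    return 100 * contract_and_vendor_cost / total_maintenance_cost

def cost_of_service_ratio(total_maintenance_cost, acquisition_cost):
    """Total annual maintenance cost as a percentage of device acquisition cost."""
    return 100 * total_maintenance_cost / acquisition_cost

print(f"{supervision_ratio(12, 2):.1f} nonsupervisory FTEs per supervisor")
print(f"{staff_per_1000_devices(12, 9000):.2f} maintenance staff per 1,000 devices")
print(f"{maintenance_hours_pct(18000, 24960):.1f}% of paid hours spent on maintenance")
print(f"{outsourced_cost_pct(450000, 1200000):.1f}% of maintenance cost outsourced")
print(f"{cost_of_service_ratio(1200000, 25000000):.1f}% cost-of-service ratio")
```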

“Some indicators are appropriate for different settings, but there are some indicators that are appropriate across the board,” Painter says.

Biomeds should be careful not to choose too many metrics to monitor. There are practical limits, particularly in terms of time, to how many benchmarks one team can track accurately in a given period.

“It really requires a lot of effort, so you don’t want to be monitoring 25 parameters when 20 of them really aren’t important,” Painter says. “You want to pick five, six, or seven and make sure that you’re doing a pretty tight job of monitoring those.”

Measurements

Any quantifiable measurement can be benchmarked, but clear objectives can help to narrow the choice. One of the most common clinical engineering benchmarks is the cost-of-service ratio: the total cost a hospital spends to support its medical equipment divided by the acquisition cost of that medical equipment. Maintenance expenses include labor, parts, service contracts, overhead, and vendor service.

The definition is fairly consistent across institutions, and because the measure is a ratio, it lends itself more easily to comparison across facilities. For example, “for a little hospital, we’d have a smaller amount of medical equipment [than in a larger hospital], but the cost would also be smaller. So, theoretically, the ratios are in the same ballpark,” Painter says.
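To make that point concrete, here is a small hypothetical comparison; the dollar figures are assumptions invented for the example, not data from any real facility.

```python
# Hypothetical illustration of why the cost-of-service ratio compares well across
# facility sizes; the dollar figures are invented for the example.
hospitals = {
    "small community hospital": {"maintenance_cost": 300_000, "acquisition_cost": 6_000_000},
    "large teaching hospital": {"maintenance_cost": 4_500_000, "acquisition_cost": 90_000_000},
}

for name, h in hospitals.items():
    cosr = 100 * h["maintenance_cost"] / h["acquisition_cost"]
    print(f"{name}: cost-of-service ratio = {cosr:.1f}%")

# Both work out to 5.0%: the absolute budgets differ by an order of magnitude,
# but the ratio keeps them "in the same ballpark."
```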

There are many options for measurement in the clinical engineering department, and other metrics will tend to show greater variance among institutions, particularly if definitions and demographics do not match.

Potential data to be tracked and analyzed includes annual expenses per unit, such as per bed or per device; scheduled work orders completed per time period; outstanding work orders per time period; contractor response time; technician response times; technician productivity; customer satisfaction; equipment downtime; and equipment that cannot be found.
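As a sketch of what tracking a few of these metrics might look like in practice, the example below counts completed and outstanding work orders and totals downtime from a pair of hypothetical work-order records. The record layout, device names, and dates are assumptions made up for illustration, not the schema of any particular CMMS or CEMS.

```python
# A sketch of pulling a few metrics from work-order records; the record layout
# and dates are hypothetical, not the schema of any particular CMMS or CEMS.
from datetime import datetime

work_orders = [
    {"device": "infusion pump 117",
     "opened": datetime(2024, 3, 1, 8, 0),
     "closed": datetime(2024, 3, 1, 15, 30)},   # completed repair
    {"device": "ventilator 22",
     "opened": datetime(2024, 3, 3, 9, 0),
     "closed": None},                            # still outstanding
]

completed = [w for w in work_orders if w["closed"] is not None]
outstanding = [w for w in work_orders if w["closed"] is None]

# Downtime for completed repairs, expressed as hours out of service
downtime_hours = sum((w["closed"] - w["opened"]).total_seconds() / 3600 for w in completed)

print(f"completed work orders: {len(completed)}")
print(f"outstanding work orders: {len(outstanding)}")
print(f"downtime on completed repairs: {downtime_hours:.1f} hours")
```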

To be useful, the data must be tracked consistently over time. “Statistically speaking, the bigger the sample size (volume), combined with greater accuracy of the data, results in a bigger, brighter, and accurate picture of how the report card will look,” Correll says.

Cohen recommends at least 1 year’s worth of data be used for any analysis. Sometimes, this data is available electronically, often in a database; other times, it must be collected manually.

In the latter instance, obtaining old data can be an expensive venture. “Beginning to collect that data very carefully and very completely from now on is the perfect start to having good benchmarking in the future,” Painter says.

At the very least, it can tell a department where it stands in the present. “Some people don’t know what their costs are,” Cohen says. “Knowing where you are currently is a big step toward improvement.”

Measuring Against

Data collection should be methodical, and the data type should remain consistent over time so that any comparisons made, whether with one’s own department or others, are meaningful. Consistency makes evaluations—of equipment, processes, and people—easier. Thresholds can be established, and improvements can be made.

Results are quantitatively measured, providing a clear indication of whether a goal has been reached. “We really need quantitative measurements that show, ‘Here’s what we were doing before, and we did these things, and here’s what we’re doing now, and you can plainly see that we’ve gotten better,’ ” Painter says.

Improvements—measurable improvements—can be used to justify future initiatives, build support among administrative and clinical staff, and identify new opportunities. “We can use these numbers as justification for adjusting personnel manning, technician training opportunities, certification, contract services, and more,” Correll says.

Further opportunities can be created through comparison with similar institutions, but the institutions really must share like characteristics. “If I’m a teaching hospital in a major city with all sorts of fancy support services, I really pretty much want to compare myself against those kinds of hospitals, not community hospitals,” Painter says. If the institutions are too unlike, the comparison becomes meaningless rather than beneficial.

Once benchmarking partners or resources have been identified, comparisons can reveal best practices, supported by the numbers. “Recall, only standards may be the same. It’s vital that managers keep an open mind by means of comparing their data to take away from those other hospitals key concepts and see what works best in order to improve their departments and, overall, the safety of the patient,” Correll says.

Rulers

The computerized maintenance management system (CMMS) or clinical equipment management system (CEMS) can help with this effort tremendously. For many clinical engineering departments, much of the data they will want to track is already stored in this software.

“The greatest benchmarking weapon in the clinical engineer’s toolbox starts with measuring the department’s performance using the automated CMMS,” Correll says.

Many biomed departments already have electronic databases of some type to store inventory data, but for those considering the acquisition of a new CMMS or CEMS, criteria should include: scalability; full automation; reliability; user-friendliness; real-time and historical report capabilities; integration; upgradeability; and functionality, particularly the ability to generate, prioritize, and track maintenance activities.

For help with choosing, tracking, and comparing data, clinical engineers can turn to industry resources, particularly associations. Both the Association for the Advancement of Medical Instrumentation (AAMI) and the ECRI Institute offer tools designed specifically to aid clinical engineering departments with their benchmarking efforts.

AAMI’s Benchmarking Solution comprises an online survey and an analysis tool intended to help clinical engineering departments measure their practices, policies, and procedures against similar organizations. The survey features close to 120 benchmarking and best practices measurements that address topics such as staffing, budgeting, customer service expectations, and reporting structures. The analysis uses categories that include size (such as number of beds), type (such as teaching hospital), and location to show how hospitals of that demographic perform overall. Individual hospital metrics are not shared.

ECRI Institute offers two benchmarking tools with similar intent: the “Benchmarking: Best Practices for Clinical Engineering” CD toolkit and the BiomedicalBenchmark online tool. The CD captures a 90-minute interactive Web conference addressing the benefits of benchmarking, its useful metrics, meaningful comparison, and “lessons learned” from the perspective of both large and small health care systems. BiomedicalBenchmark features real-time comparative data and a library of inspection and preventive maintenance procedures for more than 80 medical devices. Users can access and share data about equipment acquisition cost, service contract cost, expected life, failure rates, and clinical engineering department composition.

Each offers different types of information and will be suitable for different types of goals. “AAMI’s benchmarking tool has to do with department management and general department performance,” says Painter, who contributed to the development of the AAMI survey. “ECRI’s tool is related to preventive maintenance and service times.”

“It all goes back to the mission,” adds Painter, who believes the choice of a tool should be based on the indicators chosen to meet the mission. “If you need the overall department cost-of-service ratio, then ECRI’s tool is not going to provide that for you, but AAMI’s will. If you need the time to complete various preventive maintenance procedures to know you’re spending the appropriate amount of time doing PMs, AAMI’s benchmarking tool doesn’t collect that, but ECRI does. So it depends on your specific needs.”


If neither tool helps, then a department can act on its own. Painter estimates that a do-it-yourself effort requires about the same amount of time as using the associations’ tools, although those tools may offer easier comparison. Whichever tool is selected for analysis, it should draw on an adequate population.

“The tool should have a large enough pool of participants so that it is likely—and you can determine—that the data set is a valid comparison data set,” Cohen says. Updates will help to maintain this accuracy both for tools and for biomed departments.

“Once you get your benchmarking program going, you need to keep your eyes open to make sure you’re monitoring the right stuff,” Painter says. Otherwise, how will a biomed follow up if he thinks the downtime for a new type of equipment in the hospital seems excessive?


Renee Diiulio is a contributing writer for 24×7.