Evaluate and define procedures to improve your department’s performance
“If your hospital isn’t benchmarking yet, it will be.”
That is what Jack Harmon of Beaumont Services Co, Royal Oak, Mich, believes after looking at measurements for the biomedical and clinical engineering areas. The director of biomedical and facilities management says that as cost concerns increase at facilities across the country, it behooves biomeds who have not yet been involved in benchmarking to be proactive and to understand measurement, from its basic terminology to its implications.
Benchmarking essentially is a means to identify the “best of the best.” The American Productivity and Quality Center further defines it as the process of identifying, learning, and adapting outstanding practices and procedures from any organization to help an organization improve its performance. Manufacturers tend to be thought of as the first organizations to have developed and used benchmarking as a tool for process improvement. But now the concept of benchmarking is fairly common across industries.
Ultimately, benchmarking in the biomedical and clinical engineering areas is simply an evaluation tool to help reduce repair and maintenance expense and increase the quality of services. Understanding the data behind your area can help you, for instance, plan staffing needs: an increase of x pieces of equipment may correlate with a need for y additional technicians.
As another example, benchmarking can help you understand how service contracts can correlate to equipment cost. If you know that contracts in the aggregate should be no more than 5% of equipment cost—with the exception of some equipment, such as CT scanners, because they consume a high number of tubes that have a high replacement cost—then you will have a reference point as you look at this data from year to year. This will also serve as your starting point to look at variations that might arise.
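To make that reference point concrete, here is a minimal Python sketch, using entirely made-up inventory figures, that checks aggregate contract spend against the 5% rule of thumb while setting aside high-consumable equipment such as CT scanners:

```python
# Illustrative only: check aggregate service-contract spend against a
# 5%-of-acquisition-cost rule of thumb, setting aside high-consumable
# equipment such as CT scanners, whose tube costs skew the aggregate.

inventory = [
    # (description, acquisition_cost, annual_contract_cost, high_consumable)
    ("Patient monitor fleet", 1_200_000,  48_000, False),
    ("Ultrasound systems",      900_000,  52_000, False),
    ("CT scanner",            1_800_000, 210_000, True),
]

THRESHOLD = 0.05  # contracts in aggregate should stay within 5% of equipment cost

standard = [item for item in inventory if not item[3]]
equipment_cost = sum(cost for _, cost, _, _ in standard)
contract_cost = sum(contract for _, _, contract, _ in standard)
ratio = contract_cost / equipment_cost

print(f"Aggregate contract ratio (excluding exceptions): {ratio:.1%}")
if ratio > THRESHOLD:
    print("Above the 5% reference point; investigate the variance.")
```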
“You can try to bury your head in the sand, but when a hospital administrator gets a report from a benchmarking firm that says you can save, for example, $9 million—and eventually something like that will happen at a facility—then you’ll have a tough road explaining your position without an understanding of metrics,” Harmon says.
What to Benchmark?
“It’s a very complex topic,” says Ted Cohen, MS, CCE, manager of clinical engineering, University of California Davis Health System, Sacramento, Calif, who worked in the 1990s on a comprehensive project to look at benchmarking activities. The metric that came out of that work was the ratio of total service cost to total equipment cost. Even earlier, in the 1980s, several reports and work by the Association for the Advancement of Medical Instrumentation had already identified this as a suitable metric. All labor, parts, and materials for both scheduled and unscheduled service compose the service cost element of the metric, including maintenance insurance and in-house service along with vendor and prepaid contracts. The total equipment cost portion is the acquisition price of the equipment at the time of purchase.
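Expressed as a quick calculation, the metric is a simple ratio; what matters is that every service dollar, scheduled or unscheduled, in-house or contracted, lands in the numerator. A sketch with hypothetical figures:

```python
# Hypothetical numbers; the point is what belongs in each bucket.
service_costs = {
    "in_house_labor":        310_000,  # scheduled and unscheduled service
    "parts_and_materials":   140_000,
    "vendor_contracts":      260_000,  # including prepaid contracts
    "maintenance_insurance":  40_000,
}
total_service_cost = sum(service_costs.values())  # 750,000

# Denominator: purchase price of the inventory at acquisition time.
total_equipment_cost = 18_500_000

print(f"Service-cost to equipment-cost ratio: "
      f"{total_service_cost / total_equipment_cost:.1%}")  # about 4.1%
```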
Although the ratio does not factor in the age of equipment, the variation of expense by geographic area, or such things as extended warranties, it has the advantages of capturing all service costs, being easily understood, and being commonly used.
“Until recently, the service-cost to acquisitions-cost ratio has been the only consistent and reliable metric,” according to Binseng Wang, ScD, CCE, FAIMBE, senior director of program support and quality assurance for ARAMARK Healthcare, Charlotte, NC. An advocate for benchmarking, Wang says that without it, you just don’t know the best way to do things.
Wang says he and his colleagues have long suspected that the service-to-acquisition-cost ratio tells only part of the story. With this belief as impetus, and with access to data from a large number of hospitals, Wang and his colleagues revisited the topic. Some interesting statistics that may further benchmarking discussions came out of their study.
Among the findings was a ratio of 13 capital devices per staffed bed, with each clinical engineering staff member responsible for some 520 pieces of equipment. Looking at the correlation between the cost of capital equipment at the time of purchase and patient discharges, the study showed an investment of about $3,000 in equipment for each patient discharge.
The study also showed unplanned repairs at about one per year per device for most hospitals, which was in line with an earlier study they published using mostly ARAMARK Healthcare’s data. Further, for every 100 staffed beds, there were 2.6 full-time employees, even though some hospitals used outside vendors to maintain some of the equipment. And the common service-cost to acquisition-cost metric came in at 4%, also corroborated by data from the 1990s.
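Treated as rough sanity checks rather than precise targets, those figures can be compared against a facility’s own numbers. A Python sketch with a hypothetical facility:

```python
# Compare a hypothetical facility against the study's rounded reference
# values. Sanity checks, not precise targets.

reference = {
    "devices_per_staffed_bed":  13,      # capital devices per staffed bed
    "devices_per_tech":         520,     # devices per CE staff member
    "equip_cost_per_discharge": 3_000,   # equipment dollars per discharge
    "fte_per_100_beds":         2.6,     # CE full-time staff per 100 beds
    "service_to_acq_ratio":     0.04,    # service cost over acquisition cost
}

# Hypothetical facility data.
staffed_beds, devices, ce_fte = 300, 4_100, 7.5
discharges, equip_cost, service_cost = 14_000, 41_000_000, 1_850_000

facility = {
    "devices_per_staffed_bed":  devices / staffed_beds,
    "devices_per_tech":         devices / ce_fte,
    "equip_cost_per_discharge": equip_cost / discharges,
    "fte_per_100_beds":         ce_fte / staffed_beds * 100,
    "service_to_acq_ratio":     service_cost / equip_cost,
}

for metric, ref in reference.items():
    print(f"{metric:26} facility={facility[metric]:10.2f}  reference={ref}")
```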
Wang believes it is important to view a study such as this broadly—not for precise benchmarks—because of the lack of consistency in the definitions and the impossibility of data verification. It is also necessary to consider other dimensions, such as customer (especially nursing) satisfaction and employee growth, when developing benchmarks. “No one has the perfect formula, but you have to start somewhere,” Wang says.
Practical Benchmarking
Large studies, such as Wang’s, which looked at 174 facilities, help further the discussion on valid benchmarking metrics across organizations, whereas internal benchmarking can serve practical in-house needs.
Cohen notes that a comparison across institutions is extremely difficult. Even if an organization benchmarks its department, comparing it to another health care system would yield an apples-to-oranges comparison that would not help an individual organization enough to be worth the cost or time involved. For example, a benchmark study may include imaging equipment at a health care system that employs 40 biomeds and has a certain number of beds; if your facility has only 10 biomeds and fewer beds, how does that help you? Rather, Cohen suggests that each facility create its own benchmarking standards and data, using tools other facilities have used successfully.
“While external benchmarking lets us know how we fit into the ‘big picture,’ internal benchmarking reports are one of the best tools we have as managers to produce best practices BMET to BMET,” says John Crissman, BSET, CBET, biomed manager, Beaumont Services Co.
Crissman says that at Beaumont they continue to standardize and refine the procedures for verifying and validating proper equipment function, looking at variances from BMET to BMET and measuring the differences.
“All processes have statistical norms and standard deviations,” Crissman explains. “If the task list of what to check is clearly specified, those variations should be minimized.”
In Crissman’s experience, differences in documentation methods may explain variations. For example, repair time recorded against preventive maintenance (PM) work orders rather than against another type of work order has often explained differences. “In our department, we are careful to teach keeping these issues separate,” Crissman says. “Time for a PM procedure should be tracked separately from time for an emergency repair or equipment failure. This helps the benchmarking effort be more precise and helps evaluate the overall effectiveness of PM efforts.”
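A minimal sketch of that separation, assuming hypothetical work-order records with a type field, might aggregate hours by technician and work-order type:

```python
# Aggregate technician hours by work-order type so PM time and repair
# time are reported separately. Record fields are hypothetical.
from collections import defaultdict

work_orders = [
    {"tech": "BMET-1", "type": "PM",     "hours": 0.75},
    {"tech": "BMET-1", "type": "REPAIR", "hours": 2.50},
    {"tech": "BMET-2", "type": "PM",     "hours": 0.60},
    {"tech": "BMET-2", "type": "PM",     "hours": 0.80},
]

hours = defaultdict(float)
for wo in work_orders:
    hours[(wo["tech"], wo["type"])] += wo["hours"]

# PM hours per BMET can now be compared without repair time muddying them.
for (tech, wo_type), total in sorted(hours.items()):
    print(f"{tech}  {wo_type:6} {total:.2f} h")
```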
Numerous other factors may explain differences. Analyzing the data helps you understand discrepancies and identify ways to minimize recurrence; it can also point to procedural changes. “If we are always checking items that have never had a failure,” Crissman notes, “the reason to keep checking those items has to be really analyzed and understood.”
Create a Clear Checklist
A clear, standardized checklist of what needs to be checked and how is key. Crissman notes that a checklist helps jog the memory of seasoned BMETs and is a great teaching tool when newer BMETs are being trained. Most importantly, patient safety and improved quality are the direct benefits of implementing standardized testing processes, Crissman believes.
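One way to keep such a checklist consistent from BMET to BMET is to store it as structured data rather than in memory or on paper. A hypothetical sketch, with made-up steps and tolerance limits:

```python
# A standardized PM checklist stored as data, so every BMET runs the
# same steps in the same order. Steps and limits are hypothetical.
DEFIB_PM_CHECKLIST = [
    {"step": "Inspect case, cables, and paddles for damage"},
    {"step": "Verify battery charge and run the self-test"},
    {"step": "Measure delivered energy at the 200 J setting",
     "unit": "J", "low": 170, "high": 230},
]

def within_limits(item, value):
    """True if a recorded measurement falls inside the item's limits."""
    return item.get("low", float("-inf")) <= value <= item.get("high", float("inf"))

print(within_limits(DEFIB_PM_CHECKLIST[2], 204))  # True
```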
For Cohen at UC Davis, internal benchmarking is ongoing and part of the department’s routine data collection, and is a very reasonable way to manage cost and equipment properly. Cohen looks at service costs, the service-cost to acquisition-cost ratio, scheduled maintenance, downtime of approximately 75 critical systems, and failure rates by equipment model. Man-hours and technician productivity also are measured and evaluated. Other areas that can be evaluated are uptime and response time. Periodically, UC Davis conducts customer-satisfaction surveys.
If you are looking at doing internal benchmarking, Cohen says it is important to establish definitions and use the same definitions from year to year. For instance, you would want to include both in-house and vendor costs in the definition for total service cost. Another example of a definition would be average turnaround time per repair—length of time from receipt of customer request to repair of device and return to service or, if it is a spare unit, availability for return to customer use.
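Under that definition, turnaround is simply the clock time from receipt of the request to return to service. A sketch, assuming hypothetical timestamp fields:

```python
# Average turnaround per repair: from receipt of the customer request
# to return to service (or a spare made available). Fields hypothetical.
from datetime import datetime
from statistics import mean

repairs = [
    {"requested": datetime(2009, 3, 2, 8, 15),
     "returned":  datetime(2009, 3, 2, 14, 40)},
    {"requested": datetime(2009, 3, 3, 9, 0),
     "returned":  datetime(2009, 3, 4, 11, 30)},
]

hours = [(r["returned"] - r["requested"]).total_seconds() / 3600
         for r in repairs]
print(f"Average turnaround: {mean(hours):.1f} hours")  # about 16.5
```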
Beaumont Services’ Harmon echoes Cohen’s emphasis on definitions. The difficult part in setting up a program is determining the definitions and parameters: what is relevant to a particular metric and what is not. “My personal preference,” Harmon says, “is maintenance cost per adjusted admission.” Two big variables are how intensely the equipment is used and how complex it is. Without adjusting for such differences, a hospital with a 60% occupancy rate would appear to have lower costs than one with a 90% occupancy rate.
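One common definition of adjusted admissions, worth confirming against what your own hospital uses, scales inpatient admissions by the ratio of total gross patient revenue to gross inpatient revenue so that outpatient volume is credited. A sketch under that assumption:

```python
# Maintenance cost per adjusted admission. One common definition scales
# admissions by total gross revenue over inpatient gross revenue so
# outpatient volume is credited; confirm your hospital's definition.
admissions = 12_000
gross_patient_revenue = 420_000_000    # inpatient plus outpatient, dollars
gross_inpatient_revenue = 280_000_000

adjusted_admissions = admissions * gross_patient_revenue / gross_inpatient_revenue
maintenance_cost = 1_850_000           # total service cost, dollars

print(f"Cost per adjusted admission: ${maintenance_cost / adjusted_admissions:,.0f}")
```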
Harmon says Beaumont also looks at incidents involving medical devices. Resources such as the US Food and Drug Administration, ECRI (formerly the Emergency Care Research Institute), the Joint Commission on Accreditation of Healthcare Organizations, the Institute for Safe Medication Practices, and others are used on a regular basis to track incident reports, recalls, and hazard alerts related to medical devices. Like Cohen, Harmon conducts an annual survey that addresses services provided and seeks recommendations; quality, customer service, and safety are evaluated.
Determine the areas in which your hospital excels, be it patient safety, quality, service, or clinical specialties. Then review the processes behind those successes and share the stories with your team to help inspire duplication.
Facilities can control expenses and service quality internally by developing their own benchmarks and evaluating their data from year to year, Harmon believes. But it is still a good idea to look outside your department to see how other departments and other medical facilities are approaching the same issues, and to look for trends and best practices.
Despite the challenges of creating a standard across the industry, discussion has been ongoing. Izabella A. Gieras, CCE, director of technology management at Beaumont Services and past president of the American College of Clinical Engineering, says she is working with the Biomedical Advisory Council, an independent group looking at common goals and strategies for benchmarking and other areas important to clinical and biomedical engineering, including best practices and educational initiatives. Results from these studies will be published in the coming months.
In the spirit of “forewarned is forearmed,” choosing to become conversant with the metrics of benchmarking in your area, by leading the process or at a minimum being part of it, is a good option. “Being a little proactive can go a long way,” Harmon says. So get your benchmarking mojo on, “climb into the driver’s seat,” and be a champion for biomed benchmarking, Harmon recommends.
Maria Fotopoulos is a contributing writer for 24×7. For more information, contact us at [email protected]