Jeff Kabachinski

Virtualization is mainstream and white hot. It is another IT growth trend, and a huge one. It is the underlying structure that allows clouds to exist. According to Nelson and Danielle Ruest in their book, Virtualization: A Beginner's Guide, every data center in the world will make some use of virtualization in the next 5 years. Virtualization is a key IT concept to get plugged into, and anyone interested in a future working in IT will need basic virtualization knowledge and competencies in their skill set.

The concepts of virtualization have been around since the mid-1960s, in the days when "big iron" (mainframes) prevailed. More recently, a convergence of IT developments drove virtualization's popularity, even making it a necessity. By no means is it well understood by all, and with something this hot there is bound to be hype and overblown claims, so arm yourself with the basic concepts and terminology. Fundamentally, it is a pretty simple concept that is made complex by the number of options and "exceptions to the rule."

Full Virtualization

Virtualization is the hot term being applied to all kinds of things. In IT, the term simply means imitating or simulating the function of hardware or software. It is a method of increasing the computer's ability to do work: a virtualization layer decouples and manages the hardware interface instead of the operating system (OS) having direct access to things like memory and storage.

Full virtualization (also known as native virtualization) refers to a concept where a piece of hardware, like a server, is coordinated and controlled to allow multiple guest OSs to share it. None of the guest OSs is aware of the others or that it is sharing hardware. Each guest OS is hosted and mediated by a virtualization software layer called a hypervisor, which interfaces the guest OSs with the hardware without the OSs knowing it. Each guest OS appears to have sole access to the bare hardware.
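To make the idea concrete, here is a minimal sketch that uses the libvirt Python bindings to ask a hypervisor which guest VMs it is currently hosting. The KVM/QEMU setup and the qemu:///system connection URI are assumptions for illustration, not a recommendation of any particular product.

# Minimal sketch: ask a hypervisor which guest VMs it is running.
# Assumes the libvirt Python bindings and a local KVM/QEMU hypervisor
# reachable at qemu:///system (both are assumptions for illustration).
import libvirt

conn = libvirt.open("qemu:///system")      # connect to the hypervisor
try:
    for dom_id in conn.listDomainsID():    # IDs of the running guest OSs
        dom = conn.lookupByID(dom_id)
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print(f"{dom.name()}: {vcpus} vCPUs, {mem // 1024} MiB in use")
finally:
    conn.close()

Each guest listed here believes it owns the hardware outright; only the hypervisor sees the whole picture.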

With full virtualization there is no need to modify the OS code to operate as a virtual machine (VM). It is this ability to run one or more VMs on a single physical host that opened up many of the opportunities for cost savings and carbon footprint reduction. There are four main drivers in the move to virtualization: the need for better server utilization, new and growing data space requirements, rising energy costs, and IT administration staff costs.

Server Sprawl

There are a number of factors that caused what is known as server sprawl. The cumulative effect of Moore's Law can be seen in today's powerful computers, especially servers. CPUs are many times more powerful than they were just 10 years ago. Consider that processors operated at about 1.5 million instructions per second (MIPS) in 1979, and by 2005 they had passed 10,000 MIPS. Today, processing power is in the 159,000 MIPS range, a clear indication of the compounding effect of Moore's Law! In addition, Intel's Hyper-Threading Technology allows each processor core to run two simultaneous independent threads or processes, so a 10-core processor can run 20 threads at the same time.
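As a rough back-of-the-envelope check on that compounding claim, the short sketch below takes the MIPS figures quoted above and estimates the effective doubling time. The end year used for "today" is an assumption for illustration.

# Rough illustration of the compounding growth described above, using the
# MIPS figures quoted in the text; the end year is an assumption.
import math

mips_1979 = 1.5           # ~1.5 MIPS in 1979
mips_today = 159_000      # ~159,000 MIPS today (per the text)
years = 2011 - 1979       # assumed span for "today"
doublings = math.log2(mips_today / mips_1979)
print(f"{doublings:.1f} doublings in {years} years, "
      f"about one every {years / doublings:.1f} years")

That works out to roughly one doubling every 2 years, which is Moore's Law in a nutshell.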

The cores also have a turbo boost mode in which they can run faster than the base operating frequency, dynamically adjusting speed based on load. It is amazing to consider that in 1965 a single transistor, the building block of the digital age, cost more than a dollar. By 1975, manufacturing developments had dropped the cost to less than a penny. In today's CPUs, we are talking costs of less than 1/10,000 of a cent, with a billion of them on a single chip!

In addition, today's software applications have become so efficient that they use less processing power. The dual effect created underutilization: more powerful computers running efficient modern software suites that consumed only a small percentage of the available computing power, freeing CPU cycles that ultimately went unused. Underutilization was also driven by end users who did not want their apps running on the same server as other apps, in case one might cause the Blue Screen of Death and lock up the server for everyone. In addition, some app vendors would only support isolated or "stand-alone" app servers. And the more apps per server, the more "attack surfaces" there are for hackers and malware on that server.

Concerns like these, along with security and regulatory compliance requirements, also played a role in server sprawl. Most of the time the solution was to purchase single-use servers for these apps, causing a proliferation of physical servers. In some cases, IT did not have much of a choice; it had to isolate apps by growing the farm. All this adds up to a bunch of underutilized servers. It is not uncommon to see servers running at 10% to 15% utilization when the IT budget may have 70% earmarked for capital expenditure, an obvious waste needing attention! Keep in mind that the server is not the only place virtualization can occur, just the most obvious.
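A quick, purely hypothetical consolidation estimate shows why those utilization numbers matter. The fleet size and target utilization below are illustrative assumptions, not figures from any particular data center.

# Back-of-the-envelope server consolidation estimate.
# Fleet size and target utilization are illustrative assumptions.
import math

physical_servers = 40         # hypothetical existing fleet
avg_utilization = 0.12        # the 10%-15% utilization cited above
target_utilization = 0.60     # assumed post-consolidation target

hosts_needed = math.ceil(physical_servers * avg_utilization / target_utilization)
print(f"{physical_servers} servers at {avg_utilization:.0%} utilization could "
      f"consolidate onto roughly {hosts_needed} virtualization hosts "
      f"running at {target_utilization:.0%}")

Under those assumptions, 40 lightly loaded boxes shrink to about 8 well-used hosts, and that is the kind of math driving consolidation projects.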

Environmental concerns also play a part in this picture. Overall electrical power costs increased as more servers required more cooling, and an underutilized server uses nearly as much power as a fully utilized one. Energy is not as cheap as it used to be, and it can no longer be assumed that it will always be available, as in the good old days.

Data Space Needed

There are fantastically huge amounts of data that need to be online and shared, and the volume is growing exponentially. One estimate holds that in 2003 the world generated 5 EB (exabytes; one exabyte is a million terabytes) of new data. By 2010 the estimate was closer to 25 EB of new data to add to the existing piles. Where are we going to put all this stuff? Looks like new equipment for your data center! Who is going to manage it all? Looks like added IT staff to maintain, service, and support it.

Dell KACE, regarding its management appliance, quotes Steve Wagner, senior Microsoft engineer, Mercy Medical Center, Des Moines, Iowa: "We estimated the time we put into administering and maintenance of the other system [the old systems management solution], we could have hired another employee." It didn't make sense to add a full-time employee to manage a single system. Juggling legacy systems mixed in with new ones is another call for virtualization.

These are among the factors that began to drive the use of virtualization. In many ways, virtualization is a response to this kind of waste: wasted resources mean wasted bottom-line dollars, both in capital equipment expenditure and in the added staff to manage, service, and support it. Server sprawl made servers a prime target for reduction and consolidation. Only about 7% of x86-based servers are virtualized, and even fewer client PCs (2008 numbers). While servers became the main target area for virtualization, clients, storage, and networks are also candidates. This caused one IT manager visiting his data center to say, "If it moves—virtualize it!"

Other Virtualization Techniques

With full virtualization we have seen how a hypervisor can add capability, but it also adds another layer, or go-between, to deal with. With paravirtualization, some of the guest OS code is altered to cooperate better with the virtualization processes in the VM. For example, since some of the OS's privileged instructions must be trapped and handled by the hypervisor, it can be faster and more efficient to modify how the OS operates so that it works with the hypervisor directly and avoids the time needed to trap and respond.

Even faster capability can be achieved if the guest OS provides its own embedded hypervisor or inherently has the ability to operate in a virtualized environment. Operating system-level virtualization does just that. It requires many changes to the OS's kernel, but the advantages are speed and efficiency. This is the direction we are headed, and it is also happening in hardware: the latest CPUs are being built to accommodate virtualization fully, with hardware-assist features such as Intel's Extended Page Tables and AMD's Rapid Virtualization Indexing.
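As a small illustration of the guest-side awareness these techniques rely on, the Linux-only sketch below checks two common hints that an OS is running as a guest under a hypervisor. The paths are standard Linux interfaces, but whether they appear depends on the hypervisor, so treat this as a sketch rather than a definitive test.

# Linux-only sketch: look for common hints that this OS is a hypervisor guest.
from pathlib import Path

def hypervisor_hints():
    hints = []
    cpuinfo = Path("/proc/cpuinfo")
    if cpuinfo.exists() and "hypervisor" in cpuinfo.read_text().split():
        hints.append("CPUID 'hypervisor' flag is set")
    hv_type = Path("/sys/hypervisor/type")   # typically present on Xen guests
    if hv_type.exists():
        hints.append(f"/sys/hypervisor/type = {hv_type.read_text().strip()}")
    return hints or ["no hypervisor hints found (bare metal, or not exposed)"]

print(hypervisor_hints())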

Summary

Virtualization impacts the whole enchilada, from the network and operations to business processes and the bottom line. It is changing the face of IT, but it is riddled with complexities. It touches almost every aspect of how systems are managed, from storage and networks to security, OSs, and apps, yet its promised benefits are many. Server consolidation increases utilization and reduces server sprawl, lowering hardware, support, and maintenance costs. Costs are also saved through improved software management and security. Virtualization improves system response times, reduces downtime, and simplifies backup and recovery, which also saves money. Additionally, legacy systems can be retained much more easily and reliably as VMs. These are the obvious benefits, but second-level benefits, such as freeing up IT staff to work on other IT projects and to pursue health care business goals, should also be considered.

There are countless virtualization alternatives to choose from; by the time you finish reading this column, there will probably be dozens more. Virtualization tasks can be daunting in that there are so many alternatives, and combining techniques only adds to the complexity. Some call virtualization an unruly undertaking because it requires close attention, especially during the implementation stages, and understanding the nuances is key to implementing and managing these projects well. They are a rare breed of IT project because they actually pay for themselves in a short time, making the effort worthwhile for all. If you intend to be a part of the future of health care IT and there is a virtualization project happening near you, lend a hand and get involved to learn more about virtualization.


Jeff Kabachinski, MS-T, BS-ETE, MCNE, has more than 20 years of experience as an organizational development and training professional. Visit his Web site at kabachinski.vpweb.com. For more information, contact .