Virtualization: Update on the Client Hypervisor
I recently blogged about client hypervisors as the future of desktop virtualization. However, it has become obvious that we will have to wait a bit longer for that evolution than anticipated.
Both Citrix and VMware have delayed the launches of their client hypervisors (Citrix XenClient and VMware Client Virtualization Platform), originally slated for release in late 2009 and early 2010, until at least the end of this year.
I mention only Citrix and VMware because they have emerged as early leaders in the client hypervisor space, but that does not mean they are the only players. Will the Citrix and VMware delays open the door for the others to swoop in and capture the client hypervisor market? Only time will tell.
Why the delay?
It seems that those who are trying to get into the client hypervisor market should take a look at the history of operating systems on x86 desktop computers.
One of the biggest challenges of the PC hardware model, known as "Open Architecture" or "IBM Compatible," has always been the need to support a large number of component manufacturers. Anyone can build devices for this hardware model, which is why it is called "Open Architecture." Each hardware component from each manufacturer requires a unique piece of software called a device driver. Device drivers have long been the biggest problem in the traditional operating system world and, it seems, will be an issue in the client hypervisor world as well.
How much of an impact will this delay have on the desktop virtualization market? Likely, not much. Currently, those who are looking at desktop virtualization are looking at shop floors, computer labs and the like. These are considered the safest and least critical computers in a business environment. You would never use the CEO's computer to test a new technology.
End users with mobile needs will continue to be a part of the decentralized computing model for the next couple of years, and this is fine. Some end users will continue to need local access to a computer with the operating system and software installed directly on it. All new technologies take time to be adopted into the market, and the client hypervisor will be no different.
By the time the client hypervisor is ready for primetime, the need for it will be present as well. The decentralized model that we know today has been stretched as far as possible, and it is no longer scalable or sustainable. There are too few people supporting too many computers in too many different geographic locations. IT departments know this but, until now, there was no other option. Desktop virtualization in general, and the client hypervisor in particular, will give IT staff the ability to centralize the technologies - i.e. bring them into the datacenter - while still allowing for offline access to the virtual desktop. For now, the plethora of desktop virtualization technologies that already exists should be enough to keep everyone busy and organizations supported with what we have at our disposal, so it is business as usual for IT professionals.
The best advice I can offer is to start your strategic roadmapping now, and keep an eye on the progress of client hypervisors and other developments in the desktop virtualization market. Ours is an industry that changes rapidly, and without staying abreast of the latest technologies available, we're only doing ourselves, and our companies or clients, a disservice.
Jason Dell is a Converged Network Solution Consultant at MCPc, and is responsible for developing and programming custom solutions for clients. His expertise includes network security and security for mobile devices in the enterprise. Connect with Jason on LinkedIn.