Is Your Data Covered in a Multi-Hypervisor World? Q&A With Acronis' Sam Roguine
It's every IT pro's job to stay on top of the latest technology trends that could help turn a profit or increase employee productivity. Virtualization is so commonplace — Gartner, for example, estimates that Windows servers are nearly 70 percent virtualized — that IT pros might not give it much thought. And that can create problems since virtual data must be protected, starting with the driving force behind virtualization: the hypervisor.
Here, Sam Roguine, Asia-Pacific engineering director at Acronis, discusses why IT pros neglect to back up the hypervisor, how that puts company data at risk and the need for technology to solve the migration and disaster recovery challenges in a multi-hypervisor world. Are you covered?
What common data protection challenges must IT departments contend with in a virtual environment?
When companies started to get into virtualization, most used one hypervisor: VMware’s vSphere. Now, there are many more options on the market — including Microsoft’s Hyper-V; KVM, which is built into the Linux kernel; Citrix’s XenServer; and a couple from Parallels: Virtuozzo and Parallels Bare Metal — which complicates how IT backs up company data.
There are many choices, and IT departments pick and choose hypervisors because they see advantages in each. It’s common for companies to use hypervisors from VMware, Microsoft, Citrix and other companies. Many IT pros recognize that using multiple hypervisors is inevitable.
For example, IT professionals who buy VMware’s vSphere must pay a licensing fee. Microsoft’s Hyper-V, however, is included with the Windows Server operating system. Companies can now adopt Hyper-V for secondary projects because it’s essentially free, or they can switch away from VMware entirely because of changing economic conditions, buying criteria or other reasons.
What are the challenges in a multi-hypervisor environment?
There are three major challenges that every company will face. The largest challenge is virtualization itself. Companies must move existing workloads from physical servers to the virtual environment (P2V). The easiest way to accomplish that is through migration, though it’s still a difficult task.
The second challenge is migrating across hypervisors, from virtual to virtual (V2V) environments.
The third challenge is backing up this virtual data, especially if the company uses multiple hypervisors. It may require a separate software product to back up data for each hypervisor environment, which is costly in both manpower and money. For one, it’s difficult to find people who know how to operate several different backup products. Also, licensing multiple solutions can be prohibitively expensive.
Companies need software that will handle those three problems: migration from existing environments into virtualized environments, migration between hypervisors and hypervisor backups.
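The three problems above can be sketched as a simple classification. This is an illustrative sketch only — the names, platforms and `classify_job` helper are hypothetical, not part of any vendor's product:

```python
# Hypothetical sketch: classifying data-protection jobs in a mixed
# physical/virtual estate into the three categories described above.
from dataclasses import dataclass

HYPERVISORS = {"vsphere", "hyper-v", "xenserver", "kvm", "virtuozzo"}

@dataclass
class Workload:
    name: str
    platform: str  # "physical" or a hypervisor name

def classify_job(workload: Workload, target_platform: str) -> str:
    """Return which of the three challenges a move/backup falls under."""
    src, dst = workload.platform, target_platform
    if src == "physical" and dst in HYPERVISORS:
        return "P2V migration"      # physical server into a virtual environment
    if src in HYPERVISORS and dst in HYPERVISORS and src != dst:
        return "V2V migration"      # moving between hypervisors
    if src in HYPERVISORS and src == dst:
        return "hypervisor backup"  # protecting data where it already runs
    raise ValueError(f"unsupported combination: {src} -> {dst}")

for wl, target in [
    (Workload("erp-db", "physical"), "vsphere"),
    (Workload("web-01", "vsphere"), "hyper-v"),
    (Workload("file-srv", "hyper-v"), "hyper-v"),
]:
    print(f"{wl.name}: {classify_job(wl, target)}")
```

The point of the sketch is that a single tool must recognize all three job types; with one backup product per hypervisor, each branch becomes a separate product to buy and staff.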
What should IT do to migrate data between hypervisors, or to back up data across multiple hypervisors?
One area is often overlooked: the hypervisor itself. The various hypervisors operate differently and are built on different concepts. It’s important to back up the physical host — the hypervisor itself — not just the virtual machines. If disaster strikes, IT will need to rebuild the entire ecosystem. That’s why it’s important to back up everything, including the hypervisor. If you don’t, your risk and downtime will be higher.
It’s not enough to back up only the virtual machine, just as it’s not enough to back up only the files or databases on a physical server. Every piece of data must be backed up, and the hypervisor is no exception. Hypervisors have settings, extensions and drivers, all of which must be saved. Otherwise, IT risks spending a lot of time reconfiguring — and that creates downtime for the business. If some hypervisor settings cannot be restored, the risks become far greater.
Is the concept of hypervisor backup controversial in the industry?
Some people disagree with the idea of hypervisor backup, but we heard a similar argument 10 years ago about backing up the operating system on physical servers. (There was no virtualization back then.) Some people said that it wasn’t necessary to back up the OS. Well, guess what? Now everybody backs up operating systems. In 10 years, everyone will back up hypervisors, too.
Core disaster recovery planning principles require that you cover all angles, not just a portion of the environment. The company runs on the entire ecosystem. It’s important to make complete recovery plans. These plans always existed, but they were written for physical environments. Many companies, however, have neglected to create proper disaster recovery plans for their virtual environments. Sometimes they have replication, high availability, fault tolerance or even simple RAID [redundant array of independent disks] on their storage area network and think that’s sufficient. It's not.
For example, having two data centers across the road from each other is not sufficient. An earthquake or other regional disaster could destroy both centers, and the company would need to rebuild its entire environment.
What does a sound disaster recovery plan look like in a multi-hypervisor environment?
Build your disaster recovery plans based on the sum of all fears. Design your disaster recovery plan under the assumption that everything will be lost.
You need technology that protects against that worst-case scenario and all contingencies. You need the ability to switch between hypervisors should a problem or disaster occur. Also, make sure you can fail over to and from physical and cloud servers as well, even if your environment is completely virtual.
Any other challenges IT pros should be aware of?
There is one more angle to add: the cross-version hypervisor compatibility problem.
There are different versions of each hypervisor, just as there are updated versions of software or operating systems. The problem for IT is that virtual machine formats are not backward-compatible across hypervisor versions. Virtual machines created for Hyper-V 3.0 (released in 2012) will not run on Hyper-V 2.0 (released in 2008). The problems are similar to migrating between hypervisors from different companies, which I spoke about earlier.
Consider this scenario: IT’s production environment is Hyper-V 3.0, but the standby system is Hyper-V 2.0. The standby machine will not be able to absorb a recovery from the production machine because they are incompatible. This is true for all vendors, including VMware and Microsoft.
You cannot fall back to older hypervisors if your software doesn’t know how to go back. The takeaway for IT is to make sure the company's technology can back up between different hypervisors and different versions of the same hypervisor.
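The version pitfall described above amounts to a one-directional compatibility rule. This is a hedged sketch, not any vendor's API — the `can_restore` function and the informal "2.0"/"3.0" version numbers are illustrative assumptions:

```python
# Hypothetical sketch: before failing over to a standby host, verify that the
# standby hypervisor can accept the VM's format. A VM created on a newer
# hypervisor version generally cannot be restored onto an older one.

def can_restore(vm_version: float, standby_version: float) -> bool:
    """A VM can only be restored onto a host whose version is at least
    the version the VM was created on (no backward compatibility)."""
    return standby_version >= vm_version

# Production runs Hyper-V 3.0, but the standby host is still on 2.0:
print(can_restore(3.0, 2.0))  # False: older standby cannot absorb the newer VM
print(can_restore(2.0, 3.0))  # True: a newer host can accept the older VM
```

A check like this belongs in the disaster recovery runbook: it catches the mismatched-standby scenario before a real disaster does.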
This is another reason why a multi-hypervisor environment is inevitable. Even if you use hypervisors from the same vendor, at some point you will use a newer version of that hypervisor.