This is my second post in a series in which I will compare features of VMware vSphere 5 with Microsoft Windows Server 2012 Hyper-V.
The goal of the series is to give an unbiased overview of the features of the two main players in the server virtualization market: VMware vSphere and Microsoft Hyper-V. I will not use the marketing comparison tables both vendors publish, which promote their own unique features while ignoring those of the competitor (as marketing is wont to do).
Other posts in the series are:
vSphere 5 versus Windows Server 2012 Hyper-V: Resource metering for chargeback
vSphere 5 versus Windows Server 2012 Hyper-V: management
vSphere 5 versus Windows Server 2012 Hyper-V: live migrations
vSphere 5 versus Windows Server 2012 Hyper-V: highly available VMs
A lot can be written about storage. This post gives a global overview of storage integration by both vendors; I have left some features for a future post.
The most important resource for a virtual machine is storage, and storage performance in particular. Because of the nature of server virtualization (virtual servers with different roles), the demand for storage resources is almost impossible to predict. By far the most issues in virtual infrastructures are storage related. Smaller environments with around 100 virtual servers or fewer will likely not run into storage-related performance issues very often, but enterprises are very likely to.
Infamous are the issues with so-called noisy neighbors: virtual machines that all of a sudden demand lots of IOPS and bring other VMs to a halt. Personally I have seen this happen a couple of times. As an administrator you want to have control over storage I/O consumption.
Both vSphere and Windows Server 2012 Hyper-V support block-level storage (iSCSI and Fibre Channel).
For file-level storage, Hyper-V supports Windows shares over SMB and vSphere supports NFS. I would say both solutions are equal on this point.
Cheap shared storage/DIY storage
Storage arrays offer a lot of features but are not cheap. Especially when Fibre Channel is used this can be costly: think of the FC HBAs in each host, the switches, and so on.
Microsoft offers SMB 3.0 as a cheap-to-implement alternative to FC. Using a cluster of Windows Server 2012 hosts and locally attached storage accessed over SMB 3.0, a relatively cheap shared storage solution can be created. Several design options are available.
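As a minimal sketch of what this looks like in practice (cmdlets from the Windows Server 2012 SMB and Hyper-V PowerShell modules; the share path, server and account names are examples, not a tested design):

```powershell
# On the file server: create an SMB 3.0 share and grant the Hyper-V
# hosts' computer accounts full access. Matching NTFS permissions are
# required as well.
New-SmbShare -Name "VMStore" -Path "C:\Shares\VMStore" `
    -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$", "CONTOSO\Hyper-V Admins"

# On a Hyper-V host: place a new VM directly on the SMB 3.0 share.
New-VM -Name "SRV01" -MemoryStartupBytes 2GB -Path "\\FS01\VMStore"
```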
VMware offers vSphere Storage Appliance (VSA) software, which can be used to create redundant storage out of cheap local hard disks. The software runs in a VM on each ESXi host. The VSA is now available for free when a vSphere license is purchased. More info on VSA 5.1 here.
Vendors like StarWind also offer solutions to deliver SAN-alike features out of regular locally attached storage.
Hyper-V is the winner on this aspect. Two Windows Server 2012 Standard edition licenses are far cheaper (less than $2,000) than the VSA software.
Offload to storage
Some disk-related processing is more efficient when done by the storage layer; examples are cloning a VM or moving a virtual disk to another location. Both vSphere (VAAI) and Hyper-V (ODX) integrate with the firmware of the storage array to command it to perform data-copy actions. Without VAAI/ODX, data needs to be copied from the storage layer over the network to the hypervisor and back to the storage layer. This takes time and uses resources on the host.
At this moment Hyper-V's ODX cannot be used to provision a new VM from a template; VMware's VAAI can.
ODX support has been announced by vendors like EMC and Dell. All major storage vendors support VAAI, which has been available for some years now.
VMware is the winner here.
Storage IO Control / Quality of Service
As mentioned before: you do not want a noisy neighbor to disturb other, more business-critical applications. VMware offers several methods to guarantee one or more VMs a certain level of storage performance.
First there is Storage DRS. This feature moves virtual disk files to another storage location if the original location does not deliver the requested performance. SDRS will also move virtual disk files when free storage space falls below a threshold.
When moving the storage is not an option, vSphere Storage IO Control can be used. Basically this is a kind of Quality of Service. Without congestion on the storage layer, each VM can consume as much storage I/O as it wants. When latency reaches a critical level, the more important VMs get more resources. A bit like cars getting access to a dedicated motorway lane during rush hour.
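As a PowerCLI sketch of both settings (datastore and VM names are examples; cmdlet and parameter names are from the vSphere 5-era PowerCLI, so verify against your version):

```powershell
# Enable Storage IO Control on a datastore:
Get-Datastore "GoldDS" | Set-Datastore -StorageIOControlEnabled $true

# Give a business-critical VM a larger share of the I/O during congestion:
Get-VM "SQL01" | Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -Disk (Get-HardDisk -VM "SQL01") `
        -DiskSharesLevel High
```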
Windows Server 2012 Hyper-V does not offer any storage-layer control for individual VMs. On a converged network, where multiple types of traffic (iSCSI storage, Live Migration, cluster heartbeat, CSV redirection and VM traffic) share the same physical wire, QoS can be used. However, this is set per traffic type, not per individual VM. When storage is accessed over FC, no control is possible at all.
While SMBs might not need storage I/O control very often, enterprises will need it sooner or later. There are workarounds once a noisy neighbor has been identified: it can be moved manually to another storage location, for instance. In a large environment this is not what you want as an admin; it takes time to identify the VM, free space to move it to, and so on.
VMware is the winner in the enterprise market. Mind that SDRS and SIOC are available only in the most expensive edition of vSphere.
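To illustrate the per-traffic-type granularity, a sketch using the Windows Server 2012 network QoS cmdlets (the weight values are examples; note these policies apply to a whole traffic class, never to a single VM):

```powershell
# Reserve minimum bandwidth weights on a converged network,
# per traffic type:
New-NetQosPolicy "Live Migration" -LiveMigration -MinBandwidthWeightAction 30
New-NetQosPolicy "SMB storage" -SMB -MinBandwidthWeightAction 50
```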
Virtual disk size
VMware uses VMDK as the native virtual disk format. It is limited to a maximum size of 2 TB.
Hyper-V uses VHD and VHDX. The latter supports a maximum of 64 TB. It is also self-healing, which means it can recover from corruption caused, for example, by a sudden shutdown of the VM.
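Creating a VHDX beyond the 2 TB VHD/VMDK ceiling is a one-liner with the Windows Server 2012 Hyper-V module (path and size are examples):

```powershell
# Create a dynamically expanding 10 TB VHDX:
New-VHD -Path "D:\VHDs\data.vhdx" -SizeBytes 10TB -Dynamic
```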
Hyper-V is a clear winner here.
4K disk support
Hyper-V's VHDX format supports 4K disks; vSphere 5 does not. If you want to know more, read this posting by Aidan Finn.
Virtual disk replication
For disaster recovery purposes, replication of the virtual disk can be a very useful feature. It allows virtual machines to be recovered very quickly, without a lot of manual effort, time, or restores from backup media.
Windows Server 2012 Hyper-V offers a feature named Hyper-V Replica. This free feature can be used to replicate Hyper-V virtual machines to another location.
VMware vSphere does not offer a free replication feature. To replicate virtual machines, either storage replication needs to be purchased or additional software needs to be used, such as Veeam Backup & Replication, VMware Site Recovery Manager, Zerto Virtual Replication or VirtualSharp ReliableDR.
Mind that Hyper-V Replica cannot be compared to VMware Site Recovery Manager, as Microsoft likes to do. While Hyper-V Replica might be a suitable solution for SMBs, for enterprises it probably is not: it does not offer runbooks, automated testing and other enterprise features. Hyper-V Replica is limited to asynchronous replication, while VMware SRM supports synchronous replication executed by the storage layer.
For DR in an SMB environment Hyper-V is the winner; for enterprise environments vSphere is the winner.
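A minimal sketch of setting up Hyper-V Replica between two hosts (server, VM and path names are examples; in production you would restrict the allowed primary servers rather than accept replication from any server):

```powershell
# On the replica server: allow it to receive replication traffic.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replica"

# On the primary server: enable replication for a VM and start the
# initial copy.
Enable-VMReplication -VMName "SRV01" -ReplicaServerName "HVDR01" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "SRV01"
```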
Storage automation
To create new LUNs from System Center Virtual Machine Manager 2012, the SMI-S protocol is used. This is a standard way of communicating with storage arrays, and many vendors support it. CSV volumes can be created without manual intervention by the storage admin.
vSphere 5 does not offer an integrated way to automatically provision new LUNs from vCenter Server. A storage admin needs to create the LUN and make it accessible; only then can the vSphere admin create a datastore on it.
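The vSphere admin's half of that workflow can at least be scripted. A PowerCLI sketch (host name and the LUN's canonical name are examples):

```powershell
# Once the storage admin has presented the LUN to the host, format it
# as a VMFS 5 datastore:
Get-VMHost "esx01.lab.local" |
    New-Datastore -Name "DS01" -Path "naa.60060160a1b21c00" `
        -Vmfs -FileSystemVersion 5
```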
Hyper-V is the winner in storage automation.
Management of storage array
Most vendors offering storage solutions compatible with VMware provide integration with vCenter Server. This makes it possible to manage the storage array from the same console used to manage hosts and virtual machines. Tasks like configuring datastores and configuring replication can be done from a single vCenter console.
SCVMM 2012 does not offer full management of the storage array. Generic tasks can be performed from the Fabric workspace in SCVMM 2012; for more advanced management the IT admin needs to switch to the storage vendor's management console.
VMware is the winner in storage management integration.
Caching for VDI
Virtual Desktop Infrastructures (VDI) demand a lot from the storage layer. Especially in the morning, when employees start their virtual workstations and log on (boot storms), lots of IOPS are requested from the storage. This can lead to performance issues. Several solutions are available to overcome this: use faster storage arrays dedicated to VDI, mostly based on flash SSDs, or cleverly cache the most requested storage blocks in the physical memory of the hypervisor.
Windows Server 2012 Hyper-V offers a feature named CSV cache. When enabled, Microsoft claims a performance increase of up to 20x for VDI boot storms.
vSphere currently does not have a comparable hypervisor-level caching feature. When VMware View is used, caching can be enabled.
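Enabling the CSV cache in Windows Server 2012 is a two-step configuration (the cache size and disk name are examples):

```powershell
# Reserve 512 MB of host RAM for the CSV block cache (cluster-wide):
(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512

# Enable the cache on a specific CSV:
Get-ClusterSharedVolume "Cluster Disk 1" |
    Set-ClusterParameter CsvEnableBlockCache 1
```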
More on these features here.
There is no clear winner here.