Handling peak I/O demand for Hyper-V and VMware View

Storage is critical in both server and desktop virtualization. If the storage array cannot deliver the requested IOPS, the performance of applications and virtual desktops degrades and users become unhappy. Making sure the storage array meets the demand of virtual desktops (VDI) in particular is complex and certainly expensive. Read more about storage design for VDI in the article by Ruben Spruijt titled Understanding how storage design has a big impact on your VDI (updated September 2011).

Demand for storage IOPS peaks during relatively short time frames, for example when virtual desktops are booted and when users log in. Sizing the storage for those peaks makes VDI in particular expensive.

One way to make sure the storage has enough capacity to handle peaks is to use SSD or flash drives and VDI-optimized storage solutions like GreenBytes and WhipTail. An alternative is placing a special PCI card holding SSD storage in the host, like the HP StorageWorks IO Accelerator modules for HP Blade and ProLiant servers. As this is not shared storage, the use case for such a module is somewhat limited.

A cheaper way to handle peak demand for storage is to use host memory (RAM).

Both Microsoft, in Windows Server 2012 Hyper-V (expected to be released in Q4 2012/early 2013), and VMware, in View 5.1 (available now), offer a solution for these peaks in demand (boot storms).
Microsoft will be offering a new feature named CSV Cache in Windows Server 2012; VMware is offering View Storage Accelerator. Both features are based on the same technique: a part of the internal memory of the host (RAM) is reserved to cache blocks of data which would otherwise have to be read from the storage (physical disks). While CSV Cache can be used for any type of virtual machine (server or desktop OS), VMware View Storage Accelerator is currently only available for VMware View virtual desktops.
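
In essence both features implement a read-through cache in host RAM. The sketch below shows the idea in PowerShell; it is purely illustrative, and Read-BlockFromArray is a made-up placeholder for the actual read from the storage array:

    $cache = @{}   # stands in for the reserved host RAM, keyed by block address

    function Read-Block([long]$lba) {
        if ($cache.ContainsKey($lba)) {
            return $cache[$lba]             # cache hit: served from RAM, no trip to the array
        }
        $data = Read-BlockFromArray $lba    # cache miss: read from the physical disks (placeholder)
        $cache[$lba] = $data                # keep the block for subsequent readers
        return $data
    }

During a boot storm many desktops request the same OS blocks, so after the first read most requests become cache hits and never reach the array.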

Hyper-V CSV Cache mostly benefits VMs that are read intensive; think of virtual desktops. The feature is enabled using PowerShell commands, as sketched below. Benchmarks show a 4-5x improvement in read performance. Up to 20% of the physical RAM of the host can be allocated to CSV Cache.
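
As a rough sketch, the configuration looks like this (the property and parameter names are based on Microsoft's pre-release guidance for Windows Server 2012, so verify them against your build):

    Import-Module FailoverClusters

    # Reserve 512 MB of host RAM for the CSV cache (cluster-wide setting)
    (Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512

    # Enable the cache on a specific Cluster Shared Volume
    Get-ClusterSharedVolume "Cluster Disk 1" | Set-ClusterParameter CsvEnableBlockCache 1

Staying within the 20% guideline simply means choosing a value no larger than a fifth of the host's physical RAM.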

VMware View Storage Accelerator can be set to use between 100 MB and 2048 MB of RAM for host cache; the default is 1024 MB. The host cache is adjusted using the VMware View Administrator console. Next, host caching needs to be enabled in the pool definition. See a video of the configuration here.

View Storage Accelerator actually uses a technique baked into the ESXi 5 kernel named Content Based Read Cache (CBRC). CBRC uses a reserved amount of host memory (up to 2 GB) to store the most accessed storage data blocks. This allows for faster retrieval when a guest VM requests one of those blocks. CBRC reduces storage fabric latency by moving the most accessed data closer to the guests.
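
These CBRC settings surface as advanced settings on the ESXi host, so they can be inspected and changed with PowerCLI. A minimal sketch, assuming the setting names CBRC.Enable and CBRC.DCacheMemReserved from William Lam's post referenced below (host name and cache size are placeholders):

    Connect-VIServer vcenter.example.com
    $esx = Get-VMHost "esx01.example.com"

    # Turn CBRC on and reserve 1024 MB of host RAM for the cache
    Get-AdvancedSetting -Entity $esx -Name "CBRC.Enable" |
        Set-AdvancedSetting -Value $true -Confirm:$false
    Get-AdvancedSetting -Entity $esx -Name "CBRC.DCacheMemReserved" |
        Set-AdvancedSetting -Value 1024 -Confirm:$false

Note that View Administrator does this for you when host caching is enabled for a pool; touching the settings by hand is only relevant outside View.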

As CBRC is available in the ESXi kernel, it can be used by other VDI solutions like Citrix XenDesktop as well. Probably for strategic reasons, VMware makes it available in View first. William Lam of virtuallyghetto.com found out how to enable CBRC without View, as described in his blog post titled New Hidden CBRC (Content-Based Read Cache) Feature in vSphere 5 & for VMware View 5?
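
Along the same lines, a per-disk digest can be enabled through the vSphere API. A rough PowerCLI sketch, assuming the digestEnabled flag on the virtual disk backing documented in the vSphere 5 API (the VM must be powered off, and the VM name is a placeholder):

    # Fetch the VM and its first virtual disk through the API view
    $vm = Get-VM "Win7-Desktop01" | Get-View
    $disk = $vm.Config.Hardware.Device |
        Where-Object { $_ -is [VMware.Vim.VirtualDisk] } |
        Select-Object -First 1

    # Flag the disk backing for digest (CBRC) usage and push the edit back
    $disk.Backing.DigestEnabled = $true
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $devChange = New-Object VMware.Vim.VirtualDeviceConfigSpec
    $devChange.Operation = "edit"
    $devChange.Device = $disk
    $spec.DeviceChange = @($devChange)
    $vm.ReconfigVM($spec)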

I guess it is only a matter of time before VMware officially supports CBRC in vSphere for all types of workloads, just like Hyper-V CSV Cache.

A series of very good blog posts on Storage Accelerator was written by Andre Leibovici:
Understanding CBRC (Content Based Read Cache)
View Storage Accelerator Performance Benchmark