NFS versus block-level storage for VMware infrastructures

One of the decisions that needs to be made when designing a VMware vSphere infrastructure, or expanding one, is which storage to use. Two types are possible: file storage (VMware supports NFS) or block-level storage (the VMFS filesystem accessed over iSCSI or FC).

Most storage vendors support block-level devices; some also offer NFS, of which NetApp (and its IBM OEM versions) is the most widely used.

This posting gives an overview of the pros and cons of both.

In summary, there is no clear winner. Which protocol fits best depends on the organization's requirements. One thing to keep in mind is that Microsoft does not support running Exchange data on a NAS (which NFS is). It will work, and a Microsoft customer can request that Microsoft officially support the configuration; for a large customer, Microsoft might.


Microsoft states in this article that presenting Exchange data over NFS or other NAS devices is not supported for Exchange 2010. The same is true for Exchange 2007.

The article states:

The storage used by the Exchange guest machine for storage of Exchange data (for example, mailbox databases or Hub transport queues) can be virtual storage of a fixed size (for example, fixed virtual hard disks (VHDs) in a Hyper-V environment), SCSI pass-through storage, or Internet SCSI (iSCSI) storage. Pass-through storage is storage that’s configured at the host level and dedicated to one guest machine. All storage used by an Exchange guest machine for storage of Exchange data must be block-level storage because Exchange 2010 doesn’t support the use of network attached storage (NAS) volumes. Also, NAS storage that’s presented to the guest as block-level storage via the hypervisor isn’t supported. The following virtual disk requirements apply for volumes used to store Exchange data:

  • Virtual disks that dynamically expand aren’t supported by Exchange.
  • Virtual disks that use differencing or delta mechanisms (such as Hyper-V’s differencing VHDs or snapshots) aren’t supported.

The main reason Microsoft does not support running Exchange data on NAS devices is that this configuration has not been tested by Microsoft. It will work perfectly, but you can get into trouble when contacting Microsoft for support.

The other reason Microsoft does not support NAS, and NFS in particular, is that files on NFS volumes are thin provisioned by default. This means disk space is only allocated when it is actually needed/written. VMware likewise recommends using fully thick provisioned virtual disks (eagerzeroedthick) for Tier 1 applications like Microsoft Exchange, or any other guest-level clustering application. VMDKs backed by NFS cannot be eagerzeroedthick.
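The difference between thin and eager-zeroed thick provisioning can be illustrated with ordinary sparse files on a local filesystem. This is only an analogy, not the VMware disk format itself: a thin file reserves its size in metadata but allocates blocks on first write, while a "thick" file has every block written up front, so later writes never pay a first-allocation penalty.

```python
import os
import tempfile

def make_thin(path, size):
    # Thin provisioning analogy: truncate() sets the file length without
    # writing data, so blocks are allocated lazily on first write
    # (like a thin VMDK on an NFS datastore).
    with open(path, "wb") as f:
        f.truncate(size)

def make_thick(path, size):
    # Eager-zeroed thick analogy: every byte is written (zeroed) up front,
    # so all blocks are allocated before the "guest" ever touches them.
    with open(path, "wb") as f:
        f.write(b"\0" * size)

def allocated_bytes(path):
    # st_blocks counts 512-byte units actually backed by storage.
    return os.stat(path).st_blocks * 512

size = 1 << 20  # 1 MiB
with tempfile.TemporaryDirectory() as d:
    thin = os.path.join(d, "thin.img")
    thick = os.path.join(d, "thick.img")
    make_thin(thin, size)
    make_thick(thick, size)
    # Both report the same logical size, but the thin file occupies
    # far fewer blocks on disk until it is written to.
    print("thin:", allocated_bytes(thin), "thick:", allocated_bytes(thick))
```

On most Linux filesystems the thin file will show (near) zero allocated bytes while the thick file shows the full megabyte, which is exactly the extra first-write work a non-eager-zeroed virtual disk defers to run time.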

The reason is that the extra latency of writing data to disk caused by a non-eagerzeroedthick disk (each block must first be zeroed on initial write) might confuse the cluster and have a negative impact on performance. This is strange, as on the other hand Exchange 2010 needs far fewer IOPS than previous versions of Exchange.
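On a VMFS (block-level) datastore, that first-write zeroing cost can be avoided by creating the disk as eager-zeroed up front with vmkfstools on the ESXi host. A sketch, where the datastore and VM names are hypothetical:

```shell
# Create a 100 GB eager-zeroed thick VMDK; all blocks are zeroed at
# creation time instead of on first guest write.
vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/datastore1/exch01/exch01-data.vmdk
```

On an NFS datastore this option is not available, since the NAS device, not the hypervisor, controls block allocation.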

More information can be found in this VMware communities thread.
