
In a previous blog post I briefly mentioned VMware vSphere Flash Read Cache (vFRC); here are some more comparisons with PernixData FVP.

Write-caching

Importantly, PernixData FVP can cache writes in write-back mode as well as reads in write-through mode, whereas vFRC, as its name suggests, accelerates reads only.

Equally importantly, write-caching includes fault tolerance: writes are mirrored to one or more other hosts in the flash cluster, as outlined in the previous blog post above.
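To make the distinction concrete, here is a minimal Python sketch of the two modes, including peer mirroring for fault tolerance. The names here (FlashCache, Datastore, destage) are invented for illustration; this is the general idea, not PernixData's implementation.

```python
# Conceptual sketch of write-through vs. write-back flash caching with
# peer mirroring. Purely illustrative; not PernixData FVP code.

class Datastore:
    """Stand-in for slow, shared backing storage."""
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data


class FlashCache:
    def __init__(self, datastore, peers=(), write_back=False):
        self.flash = {}            # local flash device
        self.datastore = datastore
        self.peers = peers         # other hosts in the flash cluster
        self.write_back = write_back
        self.dirty = set()         # blocks not yet destaged to the datastore

    def write(self, lba, data):
        self.flash[lba] = data
        if self.write_back:
            # Mirror to peer flash so an acknowledged write survives a
            # host failure; the datastore is updated asynchronously.
            for peer in self.peers:
                peer.flash[lba] = data
            self.dirty.add(lba)
        else:
            # Write-through: only acknowledge once the write is safely
            # on the backing datastore, so the cache accelerates reads only.
            self.datastore.write(lba, data)

    def destage(self):
        """Flush dirty write-back data to the datastore in the background."""
        for lba in sorted(self.dirty):
            self.datastore.write(lba, self.flash[lba])
        self.dirty.clear()

    def read(self, lba):
        # Serve from flash when possible; fall back to the datastore.
        if lba in self.flash:
            return self.flash[lba]
        return self.datastore.blocks.get(lba)
```

The key point is in write(): in write-back mode the slow datastore write is off the acknowledgement path, which is exactly why the mirroring to peer hosts matters.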

Setup

With vFRC, cache files need to be created per virtual disk, for each virtual machine. The cache therefore has to be sized on a per-VM basis, along with a block size (between 4 KB and 1024 KB); you would typically monitor a VM with vscsiStats before making a decision on this.
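To give a feel for that overhead, here is a rough pyVmomi sketch that applies a vFRC reservation and block size to a single virtual disk through the vSphere API's vFlashCacheConfigInfo property (vSphere 5.5 and later). The vCenter address, credentials, VM name, and sizes are all placeholders; in practice you would pick the block size from vscsiStats data first, then repeat this for every disk of every VM you want accelerated.

```python
# Rough pyVmomi sketch: set a vFRC reservation and block size on one
# virtual disk. Host, credentials, VM name and sizes are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local', pwd='changeme')
vm = si.RetrieveContent().searchIndex.FindByDnsName(
    datacenter=None, dnsName='app-vm-01', vmSearch=True)

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        # Per-disk vFRC settings; block size (4-1024 KB) would normally
        # be chosen after profiling the workload with vscsiStats.
        dev.vFlashCacheConfigInfo = vim.vm.device.VirtualDisk.VFlashCacheConfigInfo(
            reservationInMB=10240,   # 10 GB of flash for this one disk
            blockSizeInKB=8)
        spec = vim.vm.ConfigSpec(deviceChange=[
            vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                device=dev)])
        vm.ReconfigVM_Task(spec)
        break  # this sketch only touches the first disk

Disconnect(si)
```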

With PernixData FVP, we can simply enable write-back caching for a VM, or even an entire datastore, using PernixData's vSphere client plugin. Metrics such as IOPS and VM latency can then also be graphed in the vSphere client.

As we operate a multi-tenanted Cloud platform where customers build their own virtual machines using vCloud Director, it is easy to see how the administrative overhead of vFRC would quickly mount up. Instead, we can enable FVP at the datastore level, which then works automatically for new virtual machines.

vMotion

With vFRC, the destination host must be vFRC-enabled, and you need to choose whether to move or drop the cache. Moving the cache preserves its contents but lengthens the migration; dropping it speeds up the vMotion at the cost of starting cold on the new host.

With PernixData FVP, the hosts in the flash cluster are aware of each other. This means that a VM can continue to read from the cache stored on its previous host in the short term, while the cache is populated on the new host.

Technically, as long as the destination host is licensed and has the FVP extension installed, a VM can be vMotioned onto a host with no flash resource at all and continue to read from the cache that was built up on the previous host. However, it’s unlikely you would run this setup in production.
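The read path after a vMotion can be pictured with the following Python sketch. Again, this is only an illustration of the idea; the function and its parameters are hypothetical, not FVP's actual internals.

```python
# Hypothetical post-vMotion read path: try local flash, then the cache
# on the previous host, then the datastore, warming the local cache.
def read_block(lba, local_cache, previous_host_cache, datastore):
    if lba in local_cache:
        return local_cache[lba]          # fast path: local flash hit

    # Hosts in the flash cluster are aware of each other, so a miss can
    # still be served from the cache built up on the previous host.
    data = previous_host_cache.get(lba)
    if data is None:
        data = datastore[lba]            # last resort: backing storage

    local_cache[lba] = data              # populate the new host's cache
    return data
```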

One thing worth noting when sizing your network: by default, PernixData FVP uses the vMotion network for the inter-host traffic that provides write-cache redundancy. This can be changed if required.

Conclusion

While we can technically use VMware’s vFRC ‘for free’ as part of our VSPP licensing agreement, the differences above should demonstrate why we have chosen to partner with PernixData and use FVP on our Cloud platform.
