What you mean to say about VSAN

Having spent some time with VSAN and talking to customers, there is a lot of excitement. It goes without saying that some of the people selling other scale-out storage and traditional solutions might be a little less excited. I have a few quick thoughts on Henderson.

He points out that VSAN is not concerned with data locality. He goes on to say that this will limit performance, as data will have to be read over the network, and that it will prevent scalability as VMs sprawl across hosts and the increased east-west traffic back to the storage causes bottlenecks. In reality, a 16 node VSAN cluster will likely be served by a single stack of 10Gbps core or top-of-rack switches, and will potentially have fewer hops than a large centralized NetApp or Pure or other traditional big iron array that is trying to serve multiple clusters while contending with switch uplinks. Limiting this traffic to the cluster actually makes it easier to handle, as there are fewer points of contention and stress.

All traditional vendors require random reads to reach out over a storage network (and in NetApp Cluster-Mode or Isilon deployments that read may be served by any number of different nodes). Given that VSAN does not use NFS or iSCSI, but a simpler, more lightweight protocol, it arguably puts a lower load on the network. IBM's XIV system (their Tier 1.5 solution for anyone who does not need a DS8000) even uses a similar design internally. This is not a "fragile" or 1.0 design. It is one used extensively in the storage world today.
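For context on what that cluster traffic looks like in practice, here is a quick look from the ESXi shell. This is just a sketch, assuming VSAN traffic has already been tagged on a VMkernel port:

# Show which VMkernel interface (vmknic) this host uses for VSAN traffic
esxcli vsan network list
# List the VMkernel interfaces so you can confirm that vmknic sits on the cluster's 10Gbps uplinks
esxcli network ip interface list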

Next he dwells on the recommendation of 10Gbps for the storage network. This is no different than what a typical architect would design for a high throughput NetApp or Pure deployment. If you need lots of storage IO, you deploy lots of network; there is nothing particularly novel here. While he cites Duncan saying 10Gbps is recommended, he ignores Duncan's great article on how VSAN can be deployed on a 2 x 10Gbps connected host using vSphere Network IO Control to maintain performance and control port costs.
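As a quick sanity check when sizing for this, the uplink speeds are easy to confirm from the ESXi shell (a minimal sketch; the NIOC shares themselves are configured on the distributed switch through vCenter, and vmnic0 below is just an example name):

# List physical NICs and their link speeds (look for 10000 Mbps on the VSAN uplinks)
esxcli network nic list
# Detailed view of a single uplink
esxcli network nic get -n vmnic0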

He points out that a VSAN host with 16 cores could use up 2 of them (in reality I'm not seeing this in my lab). I would question what his thoughts are on Nutanix (I've heard as many as 8 vCPUs for the CVM?). There is always a tradeoff when offloading storage (or adding fancy features like inline dedupe). I will agree that when you are paying for expensive Oracle, SQL, and Datacenter licenses by the socket, anything that robs CPU can get expensive. That is why VSAN was designed to be lightweight on CPU, was placed in the kernel, and does not include other vCPU-sucking features like compression and dedupe. Considering CPU power seems to be the new benchmark for licensing, keeping this under control is key if VSAN is going to be used for business critical applications.

I think he missed the point of application-centric virtual machine storage. It is not about having a single container to put all the virtual machines in. It's about being able to dynamically assign policies to virtual machines. It's about applications being able to reach into VMware, using VASA and the native APIs, to define their own striping, mirroring, and caching policies on the fly. An early example of this is VMware View automatically defining and assigning unique policies for linked clones and replicas, optimized for their IO and protection needs. Honestly it wouldn't take much to layer on a future Storage DRS that adds striping or caching based on SLA enforcement (and realistically it's something you could hack together with PowerShell if you think about it).
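To give a feel for what those policies actually look like, here is the host-level view from the ESXi shell. This is only a sketch: the per-VM assignment described above happens through SPBM rather than these host defaults, and the failures-to-tolerate and stripe-width values are purely illustrative:

# Show the default VSAN policy for each object class on this host
esxcli vsan policy getdefault
# Illustrative example: default new virtual disks to tolerating one host failure
# and striping each object across two capacity devices
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"stripeWidth\" i2))"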

His final sendoff seems like an attempt at putting VSAN in an SMB discount box.

“The product itself is less mature, unproven in a wide cross-section of production data centers, and lacking core capabilities needed to deliver the reliability, scalability, and performance that customers require.”

I feel that this is a bit harsh for a product that can define quadruple mirroring of a VM or VMDK, can push close to a million read IOPS on a cluster (and a few hundred thousand write IOPS), can handle 16 node clusters, and can scale almost as well as a flash array for VDI.

Set Brocade FC ports to Loop mode

So you want to do a small VMware cluster, and you don't need Fibre Channel switches. By default most arrays and HBAs are in point-to-point mode (used for switches). You will want to set up loop mode on both your array (in my HUS this is under the FC port config) and your HBAs. If you have Brocade HBAs, they likely have some ancient 3.0 firmware that does not support loop mode. Here's how to upgrade your HBAs and how to set the port modes (make sure to set it for BOTH ports on the HBA).

http://sites/thenicholson.com/files.storagenetworks.com/writeups/brocade/hba_vmware/415_425_815_825_fcal.php

# Install the Brocade BCU/driver bundle that adds loop support
esxcli software vib install -d /tmp/bcu_esx50_3.2.3.0.zip

# Set both ports on the HBA to loop topology, bouncing each port so the change takes effect
cd /opt/brocade/bin
./bcu port --topology 1/0 loop
./bcu port --disable 1/0
./bcu port --enable 1/0
./bcu port --topology 1/1 loop
./bcu port --disable 1/1
./bcu port --enable 1/1
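Once the ports come back up, it's worth confirming the topology change actually took. A minimal check, assuming the bcu utility in this bundle exposes the usual port listing and query commands (the 1/0 port ID matches the examples above):

# List the HBA ports and confirm both now report loop topology (assumes bcu supports these queries)
./bcu port --list
./bcu port --query 1/0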

Sub-20K arrays like the HUS 110, which can support up to 4 direct-attached hosts, make for a great storage option for the discerning SMB or remote office. Down the road you can always add a switch, so it gives a nice flexible middle ground between using direct SAS and 10Gbps iSCSI. It is also useful if you have a business critical application and want dedicated target queues, really simple troubleshooting, and lower latency.