
Posts from the ‘VSAN’ Category

When is the right time to transition to vSAN?

 

When is the right time to swap to vSAN?

Some people say: When you refresh storage!

Others say it’s: When you refresh Servers!

They are both right. It's not an either/or; both are great times to look at it. Let's dig deeper.

Amazing ROI on switching to HCI can come from a full floor sweep tied to a refresh with faster servers and storage that is lower cost to acquire and maintain. There are even awesome options for people who want another level of wrapped support and deployment (VxRail, HCP-UC).

But what about cases where an existing server or storage investment makes a wholesale replacement seem out of reach? What about someone who bought storage or servers yesterday and only learned about vSAN (or newly needed features like encryption or local protection) today?

Let's split these situations up and discuss how to handle them.

What happens when my existing storage investment is largely meeting my needs? What should I do with the server refresh?

Nothing prevents you from buying ReadyNodes without drives and adding them later, as needed, without disruption. Remember that ESXi includes the vSAN software, so there will be nothing to “install” other than drives in the hosts. The HBA is the most common missing piece on a new server, and a proper high-queue-depth, vSAN-certified HBA is relatively cheap (~$300). That's a solid investment: not having to take a server offline later to raise the hood and install something is instant ROI on those components. Remember, with Dell/Lenovo/SuperMicro/Fujitsu, vSAN Config Assist will deploy the right driver/firmware for you at the push of a button.

Here are some other housecleaning items to do when you're deploying new hosts (on the newest vSphere!) to get vSAN-ready down the road.

  1. See if the storage is vVols compatible. If it is, start deploying it. SPBM is the best way to manage storage going forward, and vSAN and vVols share this management plane. As you move toward vSAN, having vRA, vCloud Director, OpenStack, and other tools that leverage SPBM configured to use it will let you leverage your existing storage investment more efficiently. It's also a great way to familiarize yourself with vSAN management, and being able to expose storage choice to end users in vRA is powerful. Remember, VAIO and VM Encryption also use SPBM, so it's time to start migrating your storage workflows over to it (a quick PowerCLI sketch follows this list).
  2. Double-check your upcoming support renewals to make sure you don't have a spike creeping up on you. Having a vSAN cluster deployed and tested, with hosts ready to expand rapidly, puts you in a better position to avoid getting cornered into one more year of expensive renewals. Also watch out for other cost creep: magic stretched-cluster virtualization devices or licensing, FCoE gear, fabric switches, structured cabling for Fibre Channel expansion, and special monitoring tools for fabrics all carry hidden capex and support costs.
  3. Look at expansion costs on that storage array. Arrays are often discounted deeply on the initial purchase, but expansion can sometimes cost 2-3x what the initial purchase did! Introducing vSAN for expansion guarantees a lower cost per GB as you grow (vSAN doesn't tax drives or RAM like some other solutions).
  4. Double-check those promised 50x dedupe ratios and insanely low latency figures. Data efficiency claims often include snapshots, thin provisioning, linked clones, and other basic features. Also check that you're actually getting the performance you need.
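If you want to see how much of your environment SPBM already covers before you start, the rough PowerCLI sketch below may help. It assumes an active Connect-VIServer session, and the policy name in the last step is just an example.

    # Rough PowerCLI sketch: inventory SPBM policies and see what is using them.
    # Assumes the VMware.PowerCLI modules and an active Connect-VIServer session.

    # List every storage policy vCenter knows about (vSAN, vVols, tag-based, etc.)
    Get-SpbmStoragePolicy | Select-Object Name, Description

    # Show which policy (if any) each VM is currently associated with
    Get-VM | Get-SpbmEntityConfiguration

    # Find datastores compatible with a given policy (policy name is an example)
    $policy = Get-SpbmStoragePolicy -Name "VVol No Requirements Policy"
    Get-SpbmCompatibleStorage -StoragePolicy $policy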

What happens when my servers were just refreshed, but I need to replace storage?

If your servers are relatively new (Xeon v3/v4/Intel Xeon Scalable/AMD EPYC), there is a good chance that adding the needed pieces to turn them into ReadyNodes is not far off. Check the ReadyNode bill of materials to see if your existing platform will work, see what it needs, and reach out to your server vendor for the HBA (and possibly NIC) upgrades needed to get them ready for vSAN. Your vSAN SEs and account teams can help!


How big should my vSAN or vSphere cluster be?

This is a topic that comes up quite a bit. A lot has been written previously about how big your vSphere clusters should be, and Duncan's musings on the topic are still very valid.

It generally starts with:

“I have 1PB in my storage frame today, can I build a 1PB vSAN cluster?”

The short response is yes, you can certainly build a PB-scale vSAN cluster, and you can build 64-node clusters (there are customers who have broken 2PB within a cluster, and customers with 64-node clusters), but you should stop and think about whether you should.

You want 16PB in a single rack, and 99.9999999% availability?

We have to stop and think about things beyond cost control when designing for availability. I always chuckle when people talk about arrays having seven 9's of availability. The question to ask yourself is: if the storage is up but the network is down, does anyone care? Once we include things “outside of storage” we often find that real-world uptime is more limited. The environmental systems (power, cooling) of a datacenter are rated at best 99.98% by the Uptime Institute. Traditionally we tried to make the floor tile our gear sat on as resilient as possible.
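To put rough, purely illustrative numbers on that: chain a seven-nines array (99.99999%) behind 99.98% facility power/cooling and 99.9% WAN connectivity, and the availability your users actually experience is about 0.9999999 × 0.9998 × 0.999 ≈ 99.88%, a bit over ten hours of downtime a year, regardless of how many nines the array brochure claims.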


James Hamilton of Amazon has pointed to WAN connectivity as another key bottleneck to uptime.

 “The way most customers work is that an application runs in a single data center, and you work as hard as you can to make the data center as reliable as you can, and in the end you realize that about three nines (99.9 percent uptime) is all you’re going to get,”

The Uptime Institute has done a fair amount of research in this space, and historically its definition of a Tier IV facility involved providing only up to 99.99% uptime (four nines).

 

Getting beyond four nines of uptime for remote users (who are at the mercy of half-finished internet standards like BGP) is possible but difficult.

Availability design must account for the infrastructure it rests on, and resiliency in storage and applications must account for the physical infrastructure underneath them.

 

Let's review traditional storage cost and operational concepts, and why we have reached a point where customers put over 1PB into a single storage pool.

  1. Capital Costs – Some features may be licensed per frame, and significant discounts may be given for large up-front purchases rather than buying capacity as it is needed. Sparing capacity and overhead as a percentage of a storage pool also become smaller if your growth rate is fixed.
  2. Opex – While many storage frames may have federation tools, there are still processes that are often done manually, particularly for change control reasons, because of the scale of a frame outage (I talked to a customer who had one array fail and take out 4,000 VMs, including their management virtual machines).
  3. Performance – Wide striping, or on hybrid systems aggregating cache, controllers, and ports, reduced the chance of a bottleneck being reached.
  4. Patching/Change Control – Talking to a lot of customers, they are often running the same firmware their storage array shipped with (“the next change control window for my array is 2022”). The risk, or the 15-second “gap” in IO as controllers are upgraded, is often viewed as huge, and it is made worse by the fact that the most risk-averse application on the cluster effectively dictates patching and change control windows. No one enjoys late-night, all-hands-on-deck patching windows for storage arrays.

  5. Parallel remediation in patch windows – Deploying more storage systems means more manual intervention. Traditional arrays often lack good tools for managing and monitoring parallel remediation, so more storage arrays often means more change control windows.
  6. Aligning the planets on the HCL – To upgrade a Fibre Channel array, you must align the ESXi version, the array firmware, the fabric version, the Fibre Channel HBA firmware, and the server BIOS. That is a lot of moving parts, all of which carry the risk of hitting a corner case.

 

Let's review how vSAN addresses these costs without driving you to put everything in one giant cluster.

  1. Capital Costs – vSAN licensing is per socket, and hosts can be deployed with empty drive bays. Drives for regular servers keep falling in price, making it cheaper to purchase what you need now and add drives to hosts as capacity growth demands. Overhead for spare rebuild capacity does shrink as you add hosts, but nothing forces you to fill each host with capacity up front, and no additional licensing costs are incurred by having partially full servers.
  2. Opex – vSAN's normal management plane (vCenter) is easily federated, and storage policies span clusters without any additional work. Lifecycle management, like controller updates from Config Assist, and health monitoring alerts roll up easily to a single pane of glass.
  3. Performance – All-flash has changed the game. You no longer need 1,000 spindles and wide striping to get fast or consistent performance. Pooling workloads on a three-tier storage architecture with storage arrays actually increases the chance that you will saturate throughput or buffers on Fibre Channel switching.
  4. Patching – vSAN patching can be done simply using the existing tools for updating ESXi (VMware Update Manager), and lifecycle updates for storage controllers can be pushed with a single click from the UI in vSAN 6.6. Customers already have ESXi patching windows and processes in place, and maintenance mode with vMotion is a trusted, battle-tested means of evacuating a host.
  5. VMware Update Manager (VUM) can remediate multiple clusters in parallel. This means you can patch as many (or as few) clusters as you like, and when used with DRS the process is fully automated, including placement of virtual machines (see the PowerCLI sketch after this list).
  6. Additional intelligence has been added to vSAN to include remediation of firmware. Given that vSAN does not use proprietary Fibre Channel fabrics, is integrated into ESXi, and has no need for proprietary fabric HBAs, this significantly reduces the number of planets to align when planning an upgrade window.
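As a rough illustration of that VUM workflow (item 5 above), here is a minimal PowerCLI sketch. The cluster name is a placeholder and “Critical Host Patches (Predefined)” is one of the stock VUM baselines; in practice you would scope and schedule this to your own change windows.

    # Minimal PowerCLI sketch of a VUM-driven cluster remediation.
    # Assumes an active Connect-VIServer session; cluster name is a placeholder.
    $cluster  = Get-Cluster -Name "vSAN-Cluster-01"
    $baseline = Get-Baseline -Name "Critical Host Patches (Predefined)"

    # Attach the baseline, scan for compliance, then remediate the cluster.
    # DRS and maintenance mode handle evacuating each host during remediation.
    Attach-Baseline -Baseline $baseline -Entity $cluster
    Scan-Inventory -Entity $cluster
    Remediate-Inventory -Entity $cluster -Baseline $baseline -Confirm:$false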

In summary: while vSAN can certainly scale to multi-PB cluster sizes, look at whether you actually need to scale a single cluster that far. In many cases at scale you would be better served by running multiple clusters.

vSAN Backup and SPBM policies.

I get asked a lot of questions about how backup works with vSAN. For the most part it's a simple request for a vendor support statement and VADP/CBT documentation. The benefit of native vSAN snapshots (better performance!) does come up, but I will point out there is more to backup and restore than just the basics. Let's look at how one vendor (Veeam) integrates SPBM into their backup workflow.

 

Storage-based policies tie into availability and restore planning. When setting up your backup or replication software, make sure it supports restoring a VM to its SPBM policy, as well as doing custom policy mapping. If only the default cluster policy is used for restores, you do not want to finish a large restore job and then have to re-align block placement all over again just to apply the right policy; this could result in a 2x or longer restore time. Check out this video for an example of what backup and restore SPBM integration looks like.
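If your backup product can't map policies on restore, you can at least re-apply the correct policy immediately afterward. Here's a minimal PowerCLI sketch; the VM and policy names are placeholders for whatever your environment uses.

    # Re-apply the correct SPBM policy to a restored VM and its disks.
    # Assumes an active Connect-VIServer session; names are placeholders.
    $vm     = Get-VM -Name "Restored-SQL01"
    $policy = Get-SpbmStoragePolicy -Name "Gold-FTT1"

    # Apply the policy to the VM home object and to every virtual disk.
    $vm | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy
    Get-HardDisk -VM $vm | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy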

While some questions are about customizing SPBM policies to increase the speed of backups (on hybrid, possibly increasing the stripe width), I occasionally get questions about how to make restores happen more quickly.

A common situation for restores is that a volume needs to be recovered and attached to a VM simply to recover a few files, or to allow temporary access to a retired virtual machine. In a perfect world you can use application- or file-level recovery tools from the backup vendor, but in some situations an attached volume is required. Unlike a normal restore, this copy of data being recovered and presented is often ephemeral. In other cases, the speed of recovering a service is more important than the protection of its running state (maybe a web application server that does not contain the database). In both of these cases I thought it worth looking at creating a custom SPBM policy that favors speed of recovery over actual protection.

 

In this example I'm using a Failures to Tolerate (FTT) of 0. The reason for this is twofold.

  1. Reduce the capacity used by the recovered virtual machine or volume.
  2. Reduce the time it takes to hydrate the copy.

In addition, I'm adding a stripe width of 4. This policy increases recovery speed by splitting the data across multiple disk groups; a PowerCLI sketch of the policy follows.
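Here is a minimal PowerCLI sketch of creating such a policy. The policy name is mine; the rules use the standard vSAN capabilities VSAN.hostFailuresToTolerate and VSAN.stripeWidth.

    # Sketch: a "fast restore" vSAN policy with FTT=0 and a stripe width of 4.
    # Assumes an active Connect-VIServer session; policy name is a placeholder.
    $ftt    = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 0
    $stripe = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.stripeWidth") -Value 4

    New-SpbmStoragePolicy -Name "Ephemeral-Restore-FTT0-SW4" `
        -Description "Fast hydration for temporary restores - no protection" `
        -AnyOfRuleSets (New-SpbmRuleSet $ftt, $stripe)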

Now, it should be noted that some backup software allows you to run a copy directly from the backup software itself (Veeam's PowerNFS server is an example). At larger scale this can tax the performance of the backup storage. This temporary recovery policy could be used for some VMs to speed the recovery of services when protection of the data can be waived in the short term.

Now, what if I decide I want to keep this data long term? In that case I could simply change the policy attached to the disk or VM to a safer FTT=1 or FTT=2 setting.

How to bulk create VMkernel Ports for vMotion and vSAN in vSAN 6.6

Quick post time!

A key part of the vSAN 6.6 improvements is the new Configuration Assist menu. Common configuration requirements are tested, and wizards can quickly be launched to handle various tasks (set up DRS and HA, create a vDS and migrate, etc.).

One of my least favorite repetitive tasks in the GUI is setting up VMkernel ports for vSAN and vMotion. Once you create your vDS and port groups, you can quickly create these in bulk for all hosts at once.

Once you put in the IP address for the first host in the cluster, it will auto-fill the remainder by adding one to the last octet. Note that this uses the order in which hosts were added to the cluster (so always add them sequentially), and that you can also bulk-set the MTU if needed. If you prefer to script it, a rough PowerCLI equivalent is shown below.
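The sketch below is a rough scripted equivalent. The cluster, vDS, and port group names, the addressing, and the starting octet are all placeholders; it simply walks the hosts in name order and increments the last octet, much like the wizard does.

    # Rough PowerCLI sketch: create vSAN and vMotion VMkernel ports on every host in a cluster.
    # Assumes an active Connect-VIServer session; names and IP ranges are placeholders.
    $vmhosts = Get-Cluster -Name "vSAN-Cluster-01" | Get-VMHost | Sort-Object Name
    $vds     = Get-VDSwitch -Name "vDS-Storage"
    $octet   = 11   # last octet for the first host

    foreach ($vmhost in $vmhosts) {
        New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vds -PortGroup "vSAN-PG" `
            -IP "192.168.10.$octet" -SubnetMask "255.255.255.0" -Mtu 9000 -VsanTrafficEnabled:$true
        New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vds -PortGroup "vMotion-PG" `
            -IP "192.168.20.$octet" -SubnetMask "255.255.255.0" -Mtu 9000 -VMotionEnabled:$true
        $octet++
    }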

If you have more questions about vSAN or vSAN networking, or want more demos of the vSAN content, head over to storagehub.vmware.com.

The GIF below walks through the entire process:

So Easy a caveman could do it!

How to make a vSAN storage only node? (and not buy a mainframe!)

I get asked on occasion, “can I buy a vSAN storage-only node?” It's generally followed by a conversation about how they were told that storage-only nodes are the only way to “control costs in an HCI future.” Generally they were told this by someone who doesn't support external storage, doesn't support easy expansion of existing hosts with more drives, and has management tools that are hostile to external storage and in some cases don't support entire protocols.

It puzzled me at first, as it's been a long time since someone tried to spin only being able to buy expansion storage from a single vendor, in large chunks, as a good thing. You would think it was 1976 and we were talking about storage for mainframes.


By default vSAN allows you to use all hosts in a cluster for both storage and compute and encourages you to scale both out as you grow.

First off, the underlying concern can usually be addressed with a few quick tricks.

  1. If you are concerned about growing storage asymmetrically, I encourage you to design some empty drive bays into your hosts so you can add additional disk groups in place (it's not uncommon to see customers double their storage just by purchasing drives, without paying more for VMware licensing!). I see customers put 80TB in a host, and with all-flash RAID 5 plus deduplication and compression you can get a LOT of data into a host. I've seen a customer buy an R730xd, use only 8 drive bays to start, and triple their storage capacity in place simply by buying some drives (cheaper, as it was a year later).
  2. If this request is because of highly asymmetric growth of cold data (“I have 50TB of data for hot VMs, and 600TB per host worth of cold data growth”), I'd encourage you to use vSAN for the hot data and look at vVols for the cold data. VMware is the only HCI platform that gives you a seamless management framework (SPBM) for managing both HCI storage and external storage. vSAN is great for 80% of total use cases (and more than enough for 100% of many customers), but for corner cases we have a great way to use both. I've personally run a host with vSAN, iSCSI, FC, and NFS, and it works and is supported just fine. Having vVols to ease the management overhead of those other profiles can make things a lot better. If you're growing bulk cold data on NL-SAS drives at large scale like this, JBODs on a modular array are going to be the low-cost option.

Now back to the question at hand: what if the above approaches don't work? I just need a little more storage (maybe another host or three's worth), my storage IO profile is growing with my data so it's not a hot/cold problem, and I'd rather keep it all on vSAN. You might also have a licensing concern, because you have workloads that will need to license any host they can use for compute (Oracle, Windows, etc.). In this case you have two options for a vSAN storage-only node.

First, let's define what a storage-only node is.

  1. A storage-only node is a node that does not provide compute to the cluster. It cannot run virtual machines without configuration changes.
  2. A storage-only node, while not providing compute, adds storage performance and capacity to the cluster.

The first thing is to determine what licensing you are using.

If you are using vSphere Enterprise Plus, here is how to make a storage node.

Let's assume we are using all-flash and purchase a 2RU host with 24 bays of 2.5” drives and fill it full of storage (~80TB of SSD can be put into a host today, but as bigger drives are certified this could easily be a lot more). To keep licensing costs down, we get a single-socket CPU with fewer cores (but keep the clock speed high), which should also help control power consumption.

You can leverage DRS VM/Host (“anti-affinity”) rules to keep virtual machines from running on that host. Make sure to use the “must” rules, and define that virtual machines will never run on the storage node; a rough PowerCLI sketch follows.
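Here's a rough PowerCLI sketch of those rules. Cluster, host, and group names are placeholders, and keep in mind the VM group needs to be kept up to date as new virtual machines are created.

    # Sketch: keep all VMs off a storage-only node with DRS VM/Host "must" rules.
    # Assumes an active Connect-VIServer session; names are placeholders.
    $cluster     = Get-Cluster -Name "vSAN-Cluster-01"
    $storageNode = Get-VMHost -Name "esx-storage-01.lab.local"

    $hostGroup = New-DrsClusterGroup -Name "StorageOnlyHosts" -Cluster $cluster -VMHost $storageNode
    $vmGroup   = New-DrsClusterGroup -Name "AllClusterVMs" -Cluster $cluster -VM ($cluster | Get-VM)

    # "MustNotRunOn" is the hard rule DRS and HA will not violate.
    New-DrsVMHostRule -Name "No-VMs-On-Storage-Node" -Cluster $cluster `
        -VMGroup $vmGroup -VMHostGroup $hostGroup -Type MustNotRunOn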

Deploy Log Insight. It can track vMotion and power-on events and give you a log showing that the host was never used for compute, for licensing/auditing purposes.

At this point we just need a single CPU license for vSphere and a single vSAN socket license, and we are ready to roll. If down the road we decide to allow other workloads (maybe something that is not licensed per socket), we can simply tune our DRS rules and allow that host to be used for those virtual machines (maybe carve out a management DRS pool and put vROps, Log Insight, and the vCSA on those storage hosts?).

Next up: if you are using a licensing tier that does NOT have access to DRS, you can still make a storage-only node.

Again, we buy our 2RU server with a single CPU and a token amount of RAM to keep licensing costs down, and stuff it full of 3.84TB drives.

Now, since we don't have DRS, we have to find other ways to prevent a VM from being powered on or vMotioned to that host.

Don’t extend the Virtual Machine port groups to that host!

Deploy a separate vDS for the storage hosts and do not set up virtual machine port groups on it. A virtual machine will not power on against a host where it cannot find its port group.

What if I’m worried someone might create a port group?

Just take away their permissions to create them, or to change them on virtual machines!

In this case you're looking at a single socket of vSphere and a single socket of vSAN. Looking at current drive prices, the software “premium” for this storage-only node would be less than 10% of the cost of the drives. As someone who used to sell storage arrays, I'd put the licensing cost as comparable to what I'd pay for an empty JBOD shelf. There's a slight premium for the server, but since you're adding controller capacity along with it, for workloads whose IO grows with capacity this isn't a bad thing; the alternative was overbuying controller capacity up front to handle that expansion.

The other thing to note is that your investment in vSAN and vSphere licensing is perpetual. In three years, when 16TB drives are low cost, nothing stops you from upgrading some disk groups and reusing your existing licensing. In this way your perpetual vSAN license gets cheaper every year.

If you want to control storage and licensing costs, VMware gives you a lot of great options. You can expand vSAN in place, you can add storage-only nodes at a low cost with perpetual licenses, and you can serve wildly diverse storage needs with vVols and the half dozen protocols we support. Buying into a platform that can only be expanded through a single vendor runs counter to the promise of the software-defined datacenter; it leads us back to the dark ages of mainframes.

Using SD cards for embedded ESXi and vSAN?

*Updated to include the corruption detection script and a better KB on endurance and size requirements for boot devices*

I get a lot of questions about embedded installations of VMware vSAN.

Cormac has written some great advice on this already.

This KB explains how to increase the crash dump partition size for customers with over 512GB of RAM.

vSAN trace file placement is discussed by Cormac here.

Given that vSAN does not support running VMFS on the same RAID controller used for pass-through, customers often look at embedded ESXi installs. Today a lot of deployments are done on embedded SD cards because they support a basic RAID 1 mirror.

The issue

While not directly a vSAN issue, this can impact vSAN customers; we have also identified it on non-vSAN hosts.

GSS has seen lower-quality SD cards exhibit significantly higher failure rates, as bad batches in the supply chain have caused cascading failures in clusters. VMware has researched the issue and found that read amplification is making the substandard parts fail faster. Note that the devices will not fail outright, but the problem can be detected by hashing the first 20MB repeatedly and getting different results; it is commonly discovered on a reboot. As a result, 6.0 U3 adds a method of redirecting VMware Tools to a RAM disk, as this was found to be the largest source of reads against the embedded install. The process for setting this is as follows.

Prevention

Log into each host over SSH and set the ToolsRamdisk option to “1” (a PowerCLI sketch for doing this across a whole cluster follows the steps):

1. esxcli system settings advanced set -o /UserVars/ToolsRamdisk -i 1
2. Reboot the ESXi host
3. Repeat for remaining hosts in the cluster.
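If you'd rather not SSH to every host, a PowerCLI sketch like the one below can set the same advanced option across a cluster (the cluster name is a placeholder, and the hosts still need a reboot afterwards):

    # Sketch: set UserVars.ToolsRamdisk to 1 on every host in a cluster.
    # Assumes an active Connect-VIServer session; cluster name is a placeholder.
    Get-Cluster -Name "vSAN-Cluster-01" | Get-VMHost | ForEach-Object {
        Get-AdvancedSetting -Entity $_ -Name "UserVars.ToolsRamdisk" |
            Set-AdvancedSetting -Value 1 -Confirm:$false
    }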

Thanks to GSS/Engineering for hunting this issue down and getting this workaround out. More information can be found in the KB here. As a proactive measure I would recommend this flag for all embedded SD card and USB device deployments, as well as any environment that wants faster VMware Tools performance.

Detection

Knowing is half the battle!

This host will likely not survive a reboot!

What if you do not know whether you are impacted by this issue? William Lam has written a great script that checks the MD5 hash of the first 20MB in three passes to detect whether you are impacted (thanks to Dan Barr for testing).

Going forward I expect to see more deployments with high-endurance SATADOM devices, and in future server designs embedded M.2 slots for boot devices becoming more common, with SD cards retired as the default option. While these devices may lack redundancy, I would expect a higher MTBF from one of them than from a pair of low-quality/low-cost SD cards. The lack of end-to-end nexus checking on embedded devices versus a full drive also contributes to this. Host profiles and configuration backups can mitigate a lot of the challenge of rebuilding one in the event of a failure.

Mitigation

Check out this KB for how to back up your ESXi configuration (somewhere other than the local boot device).

Evacuate the host, swap in the new device with a fresh install, and restore the configuration.
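As a quick sketch of the backup half of that, PowerCLI can pull the configuration bundle from each host to somewhere other than the boot device (the cluster name and destination path are placeholders):

    # Sketch: back up each host's configuration bundle off the boot device.
    # Assumes an active Connect-VIServer session; path and cluster name are placeholders.
    Get-Cluster -Name "vSAN-Cluster-01" | Get-VMHost | ForEach-Object {
        Get-VMHostFirmware -VMHost $_ -BackupConfiguration -DestinationPath "C:\esxi-config-backups"
    }
    # After the reinstall, the bundle can be restored with Set-VMHostFirmware -Restore
    # (host in maintenance mode); see the KB above for the full procedure.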

Looking for a new Boot Device?

Although a 1GB USB or SD device suffices for a minimal installation, you should use a 4GB or larger device; the extra space will be used for an expanded coredump partition on the USB/SD device. Ideally use a high-quality USB flash drive of 16GB or larger so the extra flash cells can prolong the life of the boot media, but high-quality drives of 4GB or larger are sufficient to hold the extended coredump partition. See Knowledge Base article http://kb.vmware.com/kb/2004784.

Looking for guidance on the endurance and size you need for an embedded boot device (as well as vSAN advice)? Check out KB 2145210, which breaks out what different use cases need.

 

VMware vSAN, Cisco UCS and Cisco ACI information

I’ve had a few questions regarding VMware vSAN with Cisco ACI.

While the guidance for ACI is mostly the same, there are a few vendor-specific considerations. In internal testing we found some recommended configuration advice and specific concerns around the multicast querier. For more information, see the new Storage Hub section of the networking guide.

If you're looking for general vSAN networking advice, be sure to read the networking guide.

If you're looking for Cisco's documentation on UCS servers and VMware vSAN, it can be found here.

If you're looking for guidance on configuring Cisco controllers and HBAs, Peter Keilty has some great blogs on the topic. As a reminder, while I would strongly prefer the Cisco HBA over the RAID controller, if you use the RAID controller you will need the cache module to get proper queue depths.

 

Looking for VMware Storage Content?

Looking for demos, videos, design and sizing guides, vVols, SRM, or vSAN content?

Go check out storagehub.vmware.com

Did you get a fake Ready Node?

We’ve all been there…

Maybe it was the streets of NYC, a corner stall in a mall in Bangkok, or even Harwin St. here in Houston: someone tried to sell you a cut-rate watch or sunglasses. Maybe the lettering was off, or the gold looked a bit flaky, so you passed on the possibly non-genuine goods. They might even have been made in the same factory, but it's clear the QC might have issues, and you would not expect the same outcome as getting the real thing. The same thing can happen with Ready Nodes.

Real Ready Nodes for VMware Virtual SAN have a couple of key traits.

They are tested. All of the components have been tested together and certified. Beware of anyone in software-defined storage who doesn't have some type of certification program, as this opens the door to lower-quality components or hardware/driver/firmware compatibility issues. VMware has validated satisfactory performance with the Ready Node configurations. A real Ready Node looks beyond “will these components physically connect” to whether they will actually deliver.

VSAN Ready Nodes offer choice. Ready Nodes are available from over a dozen different server OEMs, and the VMware VSAN Compatibility Guide offers over a thousand verified hardware components to supplement them for further customization. Real Ready Nodes are not limited to a single server or component vendor.

They are 100% supported by VMware. Real VMware Ready Nodes don't require virtual machines to mount, present, or consume storage, or non-VMware-supported VIBs to be installed.

So what do you do if you've ended up with a fake Ready Node? Unlike the fake watch I had to throw away, you can check the VSAN compatibility list and see whether, with minimal controller or storage device changes, you can convert your system in place over to VSAN. Remember, if you're running ESXi 5.5 Update 1 or newer, you already have the VSAN software installed; you just need to license and enable it (a one-line PowerCLI sketch follows).
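Once the hardware checks out, enabling it really is that small a step. Here's a one-line PowerCLI sketch; the cluster name is a placeholder, and you still assign the VSAN license key in vCenter.

    # Sketch: enable VSAN on an existing cluster once the hardware is on the HCL.
    # Assumes an active Connect-VIServer session; cluster name is a placeholder.
    Get-Cluster -Name "Prod-Cluster-01" | Set-Cluster -VsanEnabled:$true -Confirm:$false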

Virtual SAN performance Service: What is it? (And what about these other things)

One of the most exciting new features of Virtual SAN 6.2 is the performance service. This is an ESXi-native performance monitoring system with API as well as UI access.

One misconception I want to clear up: it does not require vCenter Operations Manager or the vCenter database. Instead, the Virtual SAN performance service uses the Virtual SAN object store to keep its data in a distributed and protected fashion. For smaller environments that do not want the overhead of VSOM this is a great solution, and it complements the existing tools.

Now, why would you want to deploy VSOM if this turnkey, low-overhead performance system is native? Quite a few reasons:

  • VSOM offers longer-term granular performance tracking. The native Virtual SAN performance service uses the same roll-up schedule as vCenter's normal performance graphs.
  • VSOM allows for forecasting and capacity planning as it analyzes trends.
  • VSOM allows overlaying performance from multiple areas and systems (including things like switching and application KPIs) for root cause and anomaly analysis and correlation.
  • VSOM offers powerful integration with Log Insight, allowing event correlation with performance graphs.
  • VSOM allows rolling up performance information across hundreds (or thousands) of sites into larger dashboards.
  • In heterogeneous environments using traditional storage, VSOM allows collecting fabric and array performance information.

vCenter Server vDisk Advanced metrics

So if I don't enable this service (or deploy VSOM), what do I get? You still get basic latency, IOPS, and throughput information from the normal vCenter performance graphs by looking at the vDisk layer. You miss out on back-end component views (things like internal SSD queues and latency) as well as datastore/cluster-wide metrics, but you can still troubleshoot basic issues with the built-in performance graphs.
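For example, here is a quick PowerCLI sketch of pulling those vDisk-level numbers from the standard vCenter counters. The VM name is a placeholder; the counter names are the stock virtualDisk metrics.

    # Sketch: pull basic virtual disk latency and IOPS from the normal vCenter stats.
    # Assumes an active Connect-VIServer session; VM name is a placeholder.
    $vm    = Get-VM -Name "App01"
    $stats = "virtualdisk.totalreadlatency.average",
             "virtualdisk.totalwritelatency.average",
             "virtualdisk.numberreadaveraged.average",
             "virtualdisk.numberwriteaveraged.average"

    Get-Stat -Entity $vm -Realtime -MaxSamples 12 -Stat $stats |
        Sort-Object Timestamp |
        Format-Table Timestamp, MetricId, Instance, Value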

What about VSAN Observer? As those of you who remember it know, this information was previously only available through the Ruby vSphere Console (RVC). VSAN Observer provides powerful visibility, but it had a number of limits:

  • It was designed originally for internal troubleshooting and lacks consistency with the vCenter UI.
  • It ran as its own separate web service and was not integrated into the existing vCenter graphs.
  • It was manually enabled from the RVC CLI.
  • It could not be accessed by API.
  • It was not recommended to run it continuously, or to deploy a separate virtual machine/container to run it from.

All of these limitations have been addressed with the Virtual SAN performance service.

I expect the performance service will largely replace VSAN Observer. VSAN Observer will still be useful for customers who have not upgraded to VSAN 6.2, or where capacity is not available for the performance database.

There is an extensive set of metrics that can be reviewed. It offers “top down” visibility of cluster-wide performance and virtual machine IOPS and latency.



Individual device metrics

The Virtual SAN performance service also offers “bottom up” visibility into device latency and queues on individual capacity and cache devices. For quick troubleshooting of issues, or verification of performance, it is a great and simple tool that can be turned on with a single checkbox.


Requirements:

vSphere 6.0u2

vCenter 6.0u2 (For UI)

Up to 255GB of capacity on the Virtual SAN datastore (You can choose the storage policy it uses).

To enable it, simply follow these instructions.
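If you prefer to script it, newer PowerCLI releases with the vSAN cmdlets can flip the same switch; treat the cmdlet and parameter names below as an assumption to verify against your PowerCLI version.

    # Sketch: enable the vSAN performance service from PowerCLI instead of the UI checkbox.
    # Assumes an active Connect-VIServer session and the vSAN cmdlets in newer PowerCLI releases.
    $cluster = Get-Cluster -Name "vSAN-Cluster-01"
    Get-VsanClusterConfiguration -Cluster $cluster |
        Set-VsanClusterConfiguration -PerformanceServiceEnabled $true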