vSAN Sizing and RVTools Tips
VMware has released a new vSAN sizing tool!
Guidance on how to use the tool is included in the Design and Sizing Guide on StorageHub.
A lot of partners like to use RVTools (a great way to make a simple capture of inventory, health, and configuration) to collect storage capacity information, as well as a snapshot of compute allocations.
- If you have a large number of powered-off VMs, have a serious discussion about whether they will ever all be started or needed. If not, consider excluding them from compute sizing.
- Use the Health tab to look for zombie VMs, and see whether these cold VMs can be deleted or migrated out.
- Look for open snapshots, and see whether they can be consolidated (which can save space).
- Be aware of the difference between the two storage metrics (allocated vs. consumed MB). If you intend to keep using thin provisioning, you do not need to size for the full allocated capacity. In the video, this is a significant capacity difference.
- If the existing solution has VMs tied to storage demands (storage management VMs, VSAs) that will be retired by the move to vSAN, be sure to exclude them.
- Have a serious discussion about whether the vCPU-to-physical-core ratio is “working” or whether the customer sees performance issues. I’ve seen people be both too conservative (1:1 in test/dev) and too aggressive (20:1 for databases!). You can see the existing ratios on the Host tab.
- Pay attention to CPU generations. A vintage Xeon 5500 will be crushed clock-for-clock by a new EPYC processor.
- Realize that you can change the CPU configuration (under the cluster advanced options). Some people may want to optimize their CPU model for licensing (commonly 16 cores for Windows, or possibly fewer cores at a higher clock for Oracle). You can change these assumptions in the sizing tool.
- Be sure to check out the Health tab and look through the host configs. Make sure NTP is set up on the hosts! Use this as an opportunity to see whether the existing environment is even healthy.
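Several of the tips above (excluding powered-off VMs from compute sizing, and sizing for consumed rather than allocated capacity) are easy to sanity-check once you export the RVTools vInfo tab to CSV. Here is a minimal Python sketch; the column names (`Powerstate`, `CPUs`, `Provisioned MB`, `In Use MB`) and the inline sample rows are assumptions based on a typical RVTools export, so verify them against the headers in your own file.

```python
import csv
from io import StringIO

# Hypothetical sample standing in for an exported RVTools vInfo tab.
# Column names here are assumptions; check your actual export's headers.
SAMPLE_VINFO = """\
VM,Powerstate,CPUs,Provisioned MB,In Use MB
app01,poweredOn,4,102400,40960
app02,poweredOn,2,51200,20480
old-test,poweredOff,8,204800,10240
"""

def summarize(rows):
    """Total vCPUs and storage for powered-on VMs only, plus a count of
    powered-off VMs worth discussing before they inflate the sizing."""
    powered_on = [r for r in rows if r["Powerstate"] == "poweredOn"]
    return {
        "vcpus_on": sum(int(r["CPUs"]) for r in powered_on),
        "provisioned_mb_on": sum(int(r["Provisioned MB"]) for r in powered_on),
        "in_use_mb_on": sum(int(r["In Use MB"]) for r in powered_on),
        "powered_off_count": sum(1 for r in rows if r["Powerstate"] == "poweredOff"),
    }

# For a real export, replace StringIO(SAMPLE_VINFO) with open("vInfo.csv").
rows = list(csv.DictReader(StringIO(SAMPLE_VINFO)))
stats = summarize(rows)
print(stats)
```

The gap between `provisioned_mb_on` and `in_use_mb_on` is the thin-provisioning headroom discussed above; sizing to the consumed number (plus growth) is usually the right conversation to have.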
Have any more tips and tricks? Check out the comments section below!