
Yes, you can change things on a vSAN ESA ReadyNode

First, I’m going to ask you to go check out the following KB and take 2-3 minutes to read it:

Pay extra attention to the table in the document it links to.

Also go read Pete’s new blog explaining read intensive drive support.

So what does this KB mean in practice?

You can start with the smallest ReadyNode (currently this is an AF-2, but I’m seeing some smaller configs in the pipeline) and add capacity, drives, or bigger NICs, making changes based on the KB.

Should I change it?

The biggest thing to watch for: adding TONS of capacity without increasing NIC sizes can result in longer-than-expected rebuilds. Putting 300TB into a host with 2 x 10Gbps NICs is probably not the greatest idea, while adding extra RAM or cores (or changing the CPU frequency by 5%) is unlikely to yield any unexpected behavior. In general, balanced designs are preferred (that’s why the ReadyNode profiles exist as templates), but we do understand that customers sometimes need flexibility, and that is why the KB above was created.
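To see why capacity and NIC size need to stay in balance, here is a back-of-envelope sketch of best-case data movement time. It assumes ideal line-rate throughput and ignores protocol overhead, contention, and resync throttling, so real rebuilds will take longer; treat these numbers as floors, not estimates.

```python
# Rough lower bound on time to move a node's worth of data across its
# vSAN NICs at full line rate. Real rebuilds are slower than this.

def transfer_hours(capacity_tb: float, nic_count: int, nic_gbps: float) -> float:
    """Hours to move capacity_tb over nic_count NICs at nic_gbps each."""
    bits = capacity_tb * 1e12 * 8          # decimal TB -> bits
    bps = nic_count * nic_gbps * 1e9       # aggregate line rate, bits/sec
    return bits / bps / 3600

# 300 TB behind 2 x 10Gbps vs 2 x 100Gbps:
print(round(transfer_hours(300, 2, 10), 1))   # ~33.3 hours, best case
print(round(transfer_hours(300, 2, 100), 1))  # ~3.3 hours, best case
```

Even in this idealized math, the 10Gbps host spends well over a day just moving bits, which is why dense nodes should get faster NICs.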

What can I change?

I’ve taken the original list, converted it to text, and added (in italics) some of my own commentary on what to change on ESA ReadyNodes and how. I will be updating this blog as new hardware comes onto the ReadyNode certification list.

CPU and Memory
  • Same or higher core count with similar or higher base clock speed is recommended.
  • Each vSAN ESA ReadyNode™ is certified against a prescriptive BOM.
  • Adding more memory than what is listed is supported by vSAN, provided vSphere supports it. Please maintain a balanced memory population configuration when possible.
  • If you want to scale storage performance with additional drives, consider more cores. While vSAN OSA was more sensitive to clock speed for scaling aggregate performance, vSAN ESA’s additional threading makes more cores particularly useful for scaling performance.
  • As of the time of this writing the minimum number of cores is 32. Please check the vSAN ESA VCG profile page for updates to see if smaller nodes have been certified.

Storage Devices (NVMe drives today)

  • Device needs to be same or higher performance/endurance class.
  • Storage device models can be changed to vSAN ESA certified disks. Please confirm storage device support on the server with the server vendor.
  • We recommend balancing drive types and sizes (homogeneous configurations) across nodes in a cluster.
  • We allow changing the number of drives, and using drives at different capacity points (the change should be contained within the same cluster), as long as the configuration meets the capacity requirement of the selected profile and does not exceed the max drives certified for the ReadyNode™. Please note that performance is dependent on the quantity of drives.
  • Mixed Use NVMe (typically 3DWPD) endurance drives are best for large-block, steady-state workloads. Lower-endurance drives that are certified for vSAN ESA may make more sense for read-heavy, shorter duty cycle, storage-dense, cost-conscious designs.
  • 1DWPD ~15TB “Read Intensive” drives are NOW on the vSAN ESA VCG; for storage-dense, non-sustained, large-block write workloads these offer great value.
  • Consider rebuild times, and consider also upgrading the number of NICs for vSAN or the NIC interfaces to 100Gbps when adding significant amounts of capacity to a node.
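The drive-change rules above (meet the profile’s capacity requirement, stay under the certified max drive count) can be expressed as a small sanity check. The profile numbers in this sketch are hypothetical placeholders for illustration, not values from the VCG; always check the actual ReadyNode profile.

```python
# Hypothetical check for a ReadyNode drive change: the new config must
# meet the profile's minimum capacity without exceeding its certified
# max drive count. Profile numbers below are illustrative only.

def drive_change_ok(drive_count: int, drive_tb: float,
                    profile_min_tb: float, profile_max_drives: int) -> bool:
    """True if the drive config satisfies the (assumed) profile limits."""
    total_tb = drive_count * drive_tb
    return total_tb >= profile_min_tb and drive_count <= profile_max_drives

# A made-up profile requiring >= 20 TB raw with at most 24 drives:
print(drive_change_ok(6, 3.84, 20, 24))   # True: ~23 TB from 6 drives
print(drive_change_ok(30, 1.92, 20, 24))  # False: exceeds max drive count
```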

Networking (NICs)
  • NICs certified in the IOVP can be leveraged for vSAN ESA ReadyNodes™.
  • NIC should be same or higher speed.
  • We allow adding additional NICs as needed.
  • If/when ReadyNode profiles with 10Gbps NICs are released, it is still advisable to consider 25Gbps NICs, as they can operate at 10Gbps and support future switching upgrades (SFP28 interfaces are backwards compatible with SFP+ cables/transceivers).

Boot Devices

  • Boot device needs to be in the same or a higher performance/endurance class.
  • Boot device needs to be in the same drive family.


Please just buy a TPM. It is critically important for vSAN encryption key protection, securing the ESXi configuration, host attestation, and more. A TPM costs about $50 up front, but hours of annoying maintenance to install after the fact. I suggest throwing an NVMe drive at any sales engineer who leaves one off a quote.