
Posts from the ‘VSAN’ Category

Did you get a fake ReadyNode?

We’ve all been there…

Maybe it's the streets of NYC, a corner stall in a mall in Bangkok, or even Harwin St. here in Houston. Someone tried to sell you a cut-rate watch or sunglasses. Maybe the lettering was off, or the gold looked a bit flaky, so you passed on that possibly non-genuine watch or sunglasses. It might even have been made in the same factory, but it is clear the QC might have issues, and you would not expect the same outcome as getting the real thing. The same thing can happen with ReadyNodes.

Real ReadyNodes for VMware vSAN have a few key traits.

They are tested. All of the components have been tested together and certified. Beware of anyone in software-defined storage who doesn't have some type of certification program, as this opens the door to lower-quality components or hardware/driver/firmware compatibility issues. VMware has validated satisfactory performance with the ReadyNode configurations. A real ReadyNode looks beyond "will these components physically connect" to whether they will actually deliver.

vSAN ReadyNodes offer choice. ReadyNodes are available from over a dozen different server OEMs, and the VMware vSAN Compatibility Guide offers over a thousand verified hardware components to supplement these ReadyNodes for further customization. ReadyNodes are not limited to a single server or component vendor.

They are 100% supported by VMware. Real VMware ReadyNodes don't require virtual machines to mount, present, or consume storage, or non-VMware-supported VIBs to be installed.

They are mature. They run a seventh-release, battle-tested, mature, hypervisor-integrated storage stack.

So what do you do if you've ended up with a fake ReadyNode? Unlike the fake watch I had to throw away, you can check the vSAN compatibility list and see whether, with minimal controller or storage device changes, you can convert your system in place over to vSAN. Remember, if you're running ESXi 5.5 Update 1 or newer, you already have the vSAN software installed. You just need to license and enable it!
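For a quick sanity check before converting in place, you can confirm from the ESXi shell that the vSAN bits are present and see whether the host is already participating in a cluster. A minimal sketch using the standard esxcli namespaces available on 5.5u1 and later (output will vary by build):

$ vmware -vl                   // confirm the host is on ESXi 5.5 Update 1 or newer
$ esxcli vsan cluster get      // shows whether this host has joined a vSAN cluster
$ esxcli vsan storage list     // lists any local disks currently claimed by vSAN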

Virtual SAN Performance Service: What is it? (And what about these other things?)

One of the most exciting new features of Virtual SAN 6.2 is the new performance service. This is an ESXi-native performance monitoring system with API as well as UI access.

One misconception I want to clear up is that it does not require the use of vCenter Operations Manager or the vCenter database. Instead, the Virtual SAN performance service uses the Virtual SAN object store to store its data in a distributed and protected fashion. For smaller environments that do not want the overhead of VSOM this is a great solution, and it will complement the existing tools.

Now why would you want to deploy VSOM if this turnkey, simple, low-overhead performance system is native? Quite a few reasons:

  • VSOM offers longer-term granular performance tracking. The native Virtual SAN performance service uses the same roll-up schedule as vCenter's normal performance graphs.
  • VSOM allows for forecasting and capacity planning as it analyzes trends.
  • VSOM allows overlaying performance from multiple areas and systems (including things like switching and application KPIs) to do root cause and anomaly analysis and correlation.
  • VSOM offers powerful integration with LogInsight, allowing event correlation with performance graphs.
  • VSOM allows for rolling up performance information across hundreds (or thousands) of sites into larger dashboards.
  • In heterogeneous environments using traditional storage, VSOM allows collecting fabric and array performance information.

vCenter Server vDisk Advanced metrics

So if I don’t enable this service (or deploy VSOM) what do I get? You still get basic Latency, IOPS, throughput information from the normal vCenter performance graphs by looking at the vDisk layer. You miss out on back end component views (things like internal SSD queues and latency) as well as datastore/cluster wide metrics, but you can still troubleshoot basic issues with the built in performance graphs.

What about VSAN Observer? For those of you who remember, this information was previously only available by using the Ruby vSphere Console (RVC). VSAN Observer provides powerful visibility, but it had a number of limits (I'll sketch the old launch workflow after this list):

  • It was designed originally for internal troubleshooting and lacks consistency with the vCenter UI.
  • It ran on its own web service separately and was not integrated into the existing vCenter graphs.
  • It was manually enabled from the RVC CLI.
  • It could not be accessed by API.
  • It was not recommended to run continuously, or to deploy a separate virtual machine/container to run it from.
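For contrast, this is roughly what that old workflow looked like. The vsan.observer command is real RVC; the vCenter address and the datacenter/cluster path are placeholders:

$ rvc administrator@vsphere.local@vcenter.example.com    // open the Ruby vSphere Console
> vsan.observer /localhost/MyDC/computers/MyCluster --run-webserver --force
                               // serves live graphs on https://<vcenter>:8010 until you Ctrl+C
                               // stats are held in memory only and are lost when it exits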

All of these limitations have been addressed with the Virtual SAN performance service.

I expect the performance service will largely replace VSAN Observer use. VSAN Observer will still be useful for customers who have not upgraded to VSAN 6.2, or where you do not have capacity available for the performance database.

There is an extensive set of metrics that can be reviewed. It offers "top down" visibility of cluster-wide performance, and virtual machine IOPS and latency.

Individual device metrics

The Virtual SAN performance service also offers "bottom up" visibility into device latency and queues on individual capacity and cache devices. For quick troubleshooting of issues, or verification of performance, it is a great and simple tool that can be turned on with a single checkbox.

Requirements:

vSphere 6.0u2

vCenter 6.0u2 (For UI)

Up to 255GB of capacity on the Virtual SAN datastore (You can choose the storage policy it uses).

To enable it, simply follow these instructions.
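If the Web Client is unavailable, the performance service can also be managed from RVC. A hedged sketch, assuming the vsan.perf command namespace that I believe shipped with the 6.2-era RVC (the cluster path is a placeholder):

> vsan.perf.stats_object_create /localhost/MyDC/computers/MyCluster   // enable; creates the stats database object
> vsan.perf.stats_object_info /localhost/MyDC/computers/MyCluster     // check its health and storage policy
> vsan.perf.stats_object_delete /localhost/MyDC/computers/MyCluster   // disable and reclaim the capacity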

 

How to handle isolation with scale-out storage

I would like to say that this post was inspired by Chad's guide to storage architectures. When talking to customers over the years, a recurring problem has surfaced. Storage in smaller enterprises historically tended toward going "all in" on one big array. The idea was that by consolidating the purchasing of all of the different application groups and teams, they could get the most "bang for buck." The upsides are obvious (fewer silos and consolidation of resources and platforms mean lower capex/opex costs). The performance downsides were annoying but could be mitigated (normally noisy-neighbor performance issues). That said, the real downsides to having one (or a few) big arrays are often found hidden on the operational side.

  1. Many customers trying to stretch their budget often ended up putting Test/Dev/QA and production on the same array (I've seen Fortune 100 companies do this with business-critical workloads). This leads to one team demanding 2-year-old firmware for stability while the teams needing agility try to get upgrades. The battle between stability and agility gets fought regularly in change control committee meetings, further wasting people's time.
  2. Audit, regime change, regulatory, or customer demands require an air gap be established for a new or existing workload. Array partitioning features are nice, but the demands often extend beyond this.
  3. In some cases, organizations that had previously shared resources would part ways (divestment, operational restructuring, budgetary firewalls).

“Not so stealthy database”

Some storage workloads just need more performance than everyone else, and often the cost of the upgrade is increased by the other workloads on the array that will gain no material benefit. Database administrators often point to a lack of dedicated resources when performance problems arise. Providing isolation for these workloads historically involved buying an exotic non-x86 processor and a "black box" appliance that required expensive specialty skills on top of significant capex cost. I like to call these boxes "cloaking devices," as they are often completely hidden from the normal infrastructure monitoring teams.

A benefit to using a scale-out (Type III) approach is that the storage can be scaled down (or even divided). VMware VSAN can evacuate data from a host and allow you to shift its resources to another cluster, and hybrid nodes can push up to 40K IOPS (all-flash over 100K), allowing even smaller clusters to hold their own on disk performance. It is worth noting that the reverse action is also possible: when a legacy application is retired, the cluster that served it can be upgraded and merged into other clusters. In this way the isolation is really just a resource silo (the least threatening of all IT silos). You can still use the same software stack and leverage the same skill set while keeping change control, auditors, and developers happy. Even the database administrators will be happy to learn that they can push millions of orders per minute with a simple 4-node cluster.
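That evacuation is a stock operation, not a migration project. A minimal sketch of pulling a host (and its data) out of a cluster from the ESXi shell, using the standard vSAN maintenance mode options (evacuateAllData fully migrates the host's components; ensureObjectAccessibility is the faster, riskier alternative):

$ esxcli system maintenanceMode set -e true -m evacuateAllData   // drain all vSAN data off this host
$ esxcli system maintenanceMode get                              // confirm the state before pulling the node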

In principle I still like to avoid silos. If they must exist, I would suggest trying to find a way that the hardware that makes them up is highly portable and reusable, and VSAN and vSphere can help with that quite a bit.

 

Upcoming Live/Web events…

Spiceworks, Dec 1st @ 1PM Central – "Is blade architecture dead?" A panel discussion on why HCI is replacing legacy blade designs, with a talk about use cases for VMware VSAN.

Micron, Dec 3rd @ 2PM Central – "Go all flash or go home." We will discuss what is new with all-flash VSAN and what fast new things Micron's performance lab is up to, with an amazing discussion/Q&A with Micron's team. Specifically, this should be a great discussion about why 10K and 15K RPM drives no longer make sense going forward.

Intel, Dec 16th @ 12PM Central – This is looking to be a great discussion of why Intel architecture (network, storage, compute) is powerful for getting the most out of VMware Virtual SAN.

Is VDI really not “serious” production?

This post is in response to a tweet by Chris Evans (who I have MUCH respect for, and who is one of the people I follow daily across all forms of internet media). The discussion on Twitter was unrelated (discussing the failings of XtremIO), and the point that triggered this post was when he stated VDI is "not serious production."

While I might have agreed 2-3 years ago, when VDI was often in POC or a plaything of remote road warriors or a CEO, VDI has come a long way in adoption. I'm working at a company this week with 500 users, and ALL users outside of a handful of IT staff work in VDI at all times. I'm helping them update their service desk operations, and a minor issue with VDI (profile server problems) is a critical full stoppage of the business. Even all three of their critical LOB apps going down would have less of an impact; at least people could still access email, Jabber, and some local files.

There are two perspectives I have from this.

1. Some people are actually dependent on VDI to access all those 99.99999% uptime SLA apps, so it's part of the dependency tree.

2. We need to quit using 99.9% SLA uptime systems and processes to keep VDI up. It needs real systems, change control, monitoring, and budget. Two years ago I viewed vCOps for View as an expensive necessity; now I view it as a must-have solution. I'm deploying tools like LogInsight to get better information and telemetry on what's going on, and training service desks on the fundamentals of VDI management (which used to be the task of a handful of sysadmins). While it may not replace the traditional PC, and in many ways is a middle ground toward some SaaS web/mobile app future, it's a lot more serious today than a lot of people realize.

I’ve often joked that VDI is the technology of last resort when no other reasonable offering made sense (Keep data in datacenter, solve apps that don’t work under RDS, organizations who can’t figure out patch/app distribution, highly mobile but poorly secured workforce). For better or for worse its become the best tool for a lot of shops, and its time to give it the respect it deserves.

At least the tools we use to make VDI serious today (VSAN/vCOps/LogInsight/Horizon View 6) are a lot more serious than the stuff I was using 4 years ago.

My apologies for calling out Chris (which wasn't really the point of this article), but I will thank him for giving me cause to reflect on the state of VDI "seriousness" today.

How does your organization view and depend on VDI today, and is there a gap in perception?

VMworld Day 1

I’m looking forward to this week and here are a few highlights of what I’ll be looking into.

On the tactical side:

1. Settling on a primary load balancing partner for VMware View (eyeing Kemp; anyone have any thoughts?). I've got a number of smaller deployments (a few hundred users) that need non-disruptive maintenance operations and patching on the infrastructure, and that are looking to take their smaller pilots or deployments forward.

2. Learn more about VDP-A designs and best practices. I've seen some issues in the lab with snapshots not getting removed from the appliance, and I need to understand the scaling and design considerations better.

3. Check out some of the HOL updates, and find out if a VVOL lab in the office is worth the investment.

On the more general strategic goals:

Check out cutting-edge vendors and technologies from VMware.

CloudVolumes – More than just application layering: server application delivery, profile abstraction that's fast and portable, and a serious uplift to Persona and ThinApp. I'm really interested in use cases where it serves as a delivery method for ThinApp.

DataGravity – In the era of software-defined storage, this is a company making a case that an array can still provide a lot of value. Very interesting technology, but the questions remain: Does it work? Does it scale? Will they add more file systems, and how soon will EMC/HP buy them to bolt this logic into their Tier 1 arrays? Martin Glassborow has made a lot of statements that new vendors don't do enough to differentiate, or that we've reached peak features (snaps, replication, cache/tier flash, data reduction, etc.), but it's interesting to see someone potentially breaking out of this mold of just doing the same thing a little better or cheaper.

VSAN – Who is Marvin? What happened to Virsto? I’ve got questions and I hope someone has answers!

LSI 2008 Dell H310 VSAN rebuild performance concerns

Just a quick note for anyone seeing VSAN performance issues with Dell H310 or LSI 2008 controllers. It's not a secret that the LSI 2008 and the Dell H310 with stock firmware have a very shallow queue depth of 25 (an LSI 2208 in comparison is 600, and a Dell H910 is 975). These are some of the weakest cards to be certified on the HCL, and for a small ROBO deployment, or a low VM count with low contention, they should be fine. Remember, part of the benefit of SDS is that you can scale down.

For a quick look at firmware queue depths check out Duncan’s article on this.
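You can also read the adapter queue depth straight off a host rather than trusting a spec sheet. A minimal sketch from the ESXi shell (the vmhba numbering will vary per host):

$ esxcli storage core adapter list   // map vmhba names to the physical controllers
$ esxtop                             // then press 'd' for the disk adapter view
                                     // the AQLEN column is the adapter queue depth (25 on a stock H310)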

Now one user tried to push things a little too far, running 5 hosts (only 3 with storage) with 70+ VMs using Dell H310s. Performance and the overall experience were fine until he lost one node and a rebuild kicked off. Running 70 VMs on 2 hosts, combined with the replication overhead, was too much and caused an interruption of service (but no data corruption). VMware support tied it back to the poor performance of the H310 and heavy load on a degraded 2-node system trying to rebuild.

In Synchronet’s Lab’s one my earlier VSAN lab builds ran into some odd write latency, that I had initially suspected was the result of a excessively cheap 10Gbps switch. While building out a validation of a solidly performing SMB bundle this week I sought out to get to the root cause. A quick test last Friday (running the vCenter Appliance install wizard) showed at ~250 IOPS a perfectly good Intel S3700 200GB SSD drive spiking over 30ms of write latency and continuing upward. This test was performed against a regular SSD and not a VSAN based datastore isolating the issue. Previous IOMeter workloads showed thousands of read IOPS but write IOPS choking pretty quickly with high latency and low cache rates. Subsequent lab equipment did not have this issue, but had used newer switches muddling the root cause.

We have upgraded our LSI cards to the current firmware, and as of this evening confirmed that the queue depth on the 2008 is increased to 600. Now, while not technically supported, the H310 is an LSI 2008 controller, and if this is for a lab or a POC, or you're feeling brave, you can follow this guide here. (Note this is not endorsed or supported by VMware/Dell.) I'll see how far I can push writes this afternoon, but I expect this should have made our similar issues go away. Alternatively, upgrading to something with a queue depth of at least 600 should help fix this (it's generally ~$150 per host for a mediocre HBA/RAID controller that supports decent queue depths).

This is a quick flash (and reminder) of why it's a good idea to work with a VSAN partner who validates their builds, understands the technology, and is ready to support you from architecture to implementation (OK, that's my quick advertisement). One thing I am offering right now: for anyone interested in a quick architecture call and a setup of the VSAN assessment tool (TM), I can hopefully help get you the raw information you need to make intelligent decisions on things like queue depth and cache size, and on understanding things like the data skew that is important to setting a foundation for a solid VSAN design.

As with any new technology, I encourage everyone to do their homework. There's a lot of FUD going around (and just a lack of training and knowledge from a lot of vendors), and issues like this are part of why I've been happy to work with VMware's software-defined storage team and the hardware OEMs for our customer builds: greater flexibility and response on helpful information (like JBOD mode on LSI 2208s in a previous post, adjustments to the HCL, or documentation and firmware to fix the queue depth issue). For those of you looking for a quick summary: VSAN is a great product and very powerful. Remember that a configuration that would be fine for 5 VMs in a branch isn't going to look the same as one for 100 VMs in a datacenter, or 1,000 in a VDI farm. Trying to build the cheapest VSAN configuration that has enough capacity should not be a goal, and cutting corners in the wrong areas can sneak up on you. I do expect some updated guidance from support and the HCL on this. I am hearing 256, but realistically, given how cheap 600-queue-depth controllers are and how annoying it is to swap out an HBA and re-cable things, I'd encourage everyone to start at that point if it makes sense.

Special thanks to…

VMware – for getting a quick root cause analysis, providing a steady voice of reason on support so nothing further drastic was done, and staying involved until the situation was resolved.
Jason Gill – for providing us a good read and not jumping to the conclusion that this was a software bug.
Brandon Wardlaw – for braving the controller upgrades and somehow not bricking any of my old lab gear.
LSI – for making an updated firmware for their old controllers and not hiding it.

Note: expect follow-up posts and edits. As I get more benchmarks and numbers on various queue depths I'll post them (or if anyone has any, send them to me!). I'm also going to be benchmarking some different switches (Cisco, Juniper, Brocade, Netgear) over the following weeks, and hopefully, if I have time, I'll publish some of our results on things like a 140Mpps 1Gbps Brocade vs. a terrifyingly cheap Netgear 10Gbps switch.

*UPDATE*

We did some testing in the lab without VSAN, just doing a basic vCenter Server install to SSDs (Intel 3500) on this datastore. With the default 25-queue-depth firmware we saw 30ms of write latency at 352 write IOPS. Using an LSI-based upgrade firmware with a 600 queue depth, we pushed 1,500 IOPS at sub-1ms latency (0.17ms). It's pretty clear that outside of light/ROBO usage the H310 controller is unsuitable for VSAN until Dell supports the LSI code upgrade. What is very concerning is that 4 out of the 8 Dell VSAN nodes are based on this underpowered HBA, including one with 15K drives that I'm guessing is meant to be a performance option. This raises a few questions.

1. Does Dell have a firmware upgrade from LSI to push out (we know it exists) to help resolve this?
2. Did Dell run any benchmarks, or have any storage architects look at these configs, before they submitted them as VSAN ready builds?
3. I'm hearing reports from the field that Dell reps are saying that VSAN isn't supported for production workloads. Is this underpowered config partly to blame for that?


VSAN build #2 Part 1 JBOD Setup and Blinkin Lights

(Update: the SM2208 controller in this system is being removed from the HCL for pass-through. Use RAID 0.)

It's time to discuss the second VSAN build. This time we've got something more production-ready, properly redundant on switching, and ready to deliver better performance. The platform used is the SuperServer F627R2-F72PT+.

The Spec’s for the 4 node’s

2 x 1TB Seagate Constellation SAS drives
1 x 400GB Intel SSD S3700
12 x 16GB DDR3 RAM (192GB)
2 x Intel Xeon E5-2660 v2 ten-core 2.2GHz processors
The back-end switches have been upgraded to the more respectable NetGear M7100 switches.

Now, the LSI 2208 controller in this build is not a pass-through SAS controller but an actual RAID controller. This does add some setup, but it has a significant queue depth advantage over the 2008 in my current lab (600 vs. 25). Queues are particularly important when de-staging bursts of writes from cache to my SAS drives (say, from a VDI recompose). Deep queues also help SSDs internally optimize commands for write coalescence.
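To put rough numbers on why that matters, here is a back-of-the-envelope using Little's Law, assuming a purely illustrative 5ms per-write service time under load:

// max IOPS ≈ outstanding I/Os (queue depth) / per-I/O latency
//   queue depth  25 at 5ms per write:   25 / 0.005s ≈   5,000 IOPS ceiling
//   queue depth 600 at 5ms per write:  600 / 0.005s ≈ 120,000 IOPS ceiling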

If you go into the GUI, at first you'll be greeted with only RAID 0 as an option for setting up the drives. After a quick email, Reza at SuperMicro directed me to how to get this done from the CLI.

Ctrl+Y will get you into the MegaRAID CLI, which is required to set JBOD mode so SMART info will be passed through to ESXi.

$ AdpGetProp enablejbod -aALL                        // show the current JBOD setting
$ AdpSetProp EnableJBOD 1 -aALL                      // enable JBOD mode on the adapter
$ PDList -aALL -page24                               // list all physical devices
$ PDMakeGood -PhysDrv[252:0,252:1,252:2] -Force -a0  // force drives 0-2 into a good state
$ PDMakeJBOD -PhysDrv[252:0,252:1,252:2] -a0         // set drives 0-2 into JBOD mode


They look angry don’t they?

Now if you havn’t upgraded the firware to at least MR5.5 (23.10.0.-0021) you’ll discover that you have red drive lights on your drives. You’ll want to grab your handy dos boot disk and get the firmware from SuperMicro’s FTP.

I’d like to thank Lucid Solution’s guide for ZFS as a great reference.

I’d like to give a shout out to the people who made this build possible.

Phil Lessley (@AKSeqSolTech) for introducing me to the joys of SuperMicro FatTwins some time ago.
Synchronet, for continuing to fund great lab hardware and for finding customers wanting to deploy revolutionary storage products.