
How to handle isolation with scale out storage

This post was inspired by Chad’s guide to storage architectures. Over years of talking to customers, a recurring problem surfaced. Storage in smaller enterprises historically tended toward going “all in” on one big array. The idea was that by consolidating the purchasing of all of the different application groups and teams, they could get the most bang for the buck. The upsides are obvious: fewer silos and consolidation of resources and platforms mean lower capex/opex costs. The performance downsides (normally noisy-neighbor issues) were annoying but could be mitigated. That said, the real downsides to having one (or a few) big arrays are often hidden on the operational side.

  1. Many customers trying to stretch their budget often ended up putting Test/Dev/QA and production on the same array (I’ve seen Fortune 100 companies do this with business-critical workloads). This leads to one team demanding two-year-old firmware for stability while the teams needing agility push for upgrades. The battle between stability and agility gets fought regularly in change control committee meetings, wasting still more people’s time.
  2. Audit, regime change, regulatory, or customer demands may require that an air gap be established for a new or existing workload. Array partitioning features are nice, but the demands often extend beyond what they can deliver.
  3. In some cases, organizations that had previously shared resources would part ways. (divestment, operational restructuring, budgetary firewalls).
Feed me RAM!

“Not so stealthy database”

Some storage workloads just need more performance than everyone else, and the cost of an upgrade is often inflated by the other workloads on the array that will gain no material benefit from it. Database administrators often point to a lack of dedicated resources when performance problems arise. Providing isolation for these workloads historically involved buying an exotic non-x86 processor and a “black box” appliance that required expensive specialty skills on top of significant capex cost. I like to call these boxes “cloaking devices,” as they are often completely hidden from the normal infrastructure monitoring teams.

A benefit of a scale-out (Type III) approach is that the storage can be scaled down (or even divided). VMware VSAN can evacuate data from a host and allow you to shift its resources to another cluster. Hybrid nodes can push up to 40K IOPS (and all-flash over 100K), allowing even smaller clusters to hold their own on disk performance. It is worth noting that the reverse action is also possible: when a legacy application is retired, the cluster that served it can be upgraded and merged into other clusters. In this way the isolation is really just a resource silo (the least threatening of all IT silos). You can still use the same software stack and leverage the same skill set while keeping change control, auditors, and developers happy. Even the database administrators will be happy to learn that they can push millions of orders per minute with a simple 4-node cluster.

In principle I still like to avoid silos. If they must exist, I would suggest finding a way to make the hardware that composes them highly portable and reusable, and VSAN and vSphere can help with that quite a bit.

 

Upcoming Live/Web events…

Spiceworks, Dec 1st @ 1PM Central – “Is blade architecture dead?” A panel discussion on why HCI is replacing legacy blade designs, with a look at use cases for VMware VSAN.

Micron, Dec 3rd @ 2PM Central – “Go all flash or go home.” We will discuss what is new with all-flash VSAN, what fast new things Micron’s performance lab is up to, and have an amazing discussion/Q&A with Micron’s team. Specifically, this should be a great discussion of why 10K and 15K RPM drives no longer make sense going forward.

Intel, Dec 16th @ 12PM Central – This is looking to be a great discussion of why Intel architecture (network, storage, compute) is powerful for getting the most out of VMware Virtual SAN.

VSAN is now up to 30% cheaper!

Ok, I’ll admit this is an incredibly misleading clickbait title. I wanted to demonstrate how the economics of cheaper flash make VMware Virtual SAN (and really any SDS product that is not licensed by capacity) cheaper over time. I also wanted to share a story of how older, slower flash became more expensive.

Let’s talk about a tale of two cities that had storage problems and faced radically different cost economics. One was a large city with lots of purchasing power and size; the other was a small bedroom community. Who do you think got the better deal on flash?

Just a small town data center….

A 100-user pilot VDI project was kicking off. They knew they wanted great storage performance, but they could not invest in a big storage array with a lot of flash up front. They did not want to pay more tomorrow for flash, and they wanted great management and integration. VSAN and Horizon View were quickly chosen. They used the per-concurrent-user licensing for VSAN so their costs would scale cleanly and predictably. Modern, fast enterprise flash was chosen that cost ~$2.50 per GB and had great performance. This summer they went to expand the wildly successful project and discovered that the new version of the drives they had purchased last year now cost $1.40 per GB, and that other new drives on the HCL from the same vendor were available for ~$1 per GB. Looking at other vendors, they found even lower-cost options available. They upgraded to the latest version of VSAN and found improved snapshot performance, write performance, and management. Procurement could be done cost-effectively at small scale, and small projects could be added without much risk. They could even adopt the newest generation (NVMe) without having to forklift controllers or pay anyone but the hardware vendor.

Meanwhile in the big city…..

The second city was quite a bit larger. After a year-long procurement process and dozens of meetings, they chose a traditional storage array/blade system from a Tier 1 vendor. They spent millions and bought years’ worth of capacity to leverage the deepest purchasing discounts they could. A year after deployment, they experienced performance issues and wanted to add flash. Upon discussing it with the vendor, the only option was older, slower, small SLC drives. They had bought their array at the end-of-sale window and were stuck with two-generations-old technology. It was also discovered the array would only support a very small number of them (the controllers and code were not designed to handle flash). The vendor politely explained that since this was not part of the original purchase, the 75% discount off list that had applied to the original purchase would not apply, and they would need to pay $30 per GB. Somehow older, slower flash had become 4x more expensive in the span of a year. They were told they should have “locked in savings” and bought the flash up front. In reality, though, they would have been locking in a high price for a commodity that they did not yet need. The final problem they faced was an order to move out of the data center into 2-3 smaller facilities and split up the hardware accordingly. That big storage array could not easily be cut into parts.
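The cost math behind the two stories is simple enough to sketch. The figures below are the approximate, illustrative numbers from the stories above (not vendor quotes), and the helper function is a hypothetical one for the comparison:

```python
# Illustrative $/GB math from the two-cities story. All figures are
# approximations taken from the narrative above, not real vendor pricing.

def effective_price(list_price, discount):
    """Effective $/GB after a fractional discount off list price."""
    return list_price * (1 - discount)

# Small town: commodity enterprise flash repriced by the open market each year.
small_town_year1 = 2.50
small_town_year2 = 1.40  # same drive family, one year later
drop = 1 - small_town_year2 / small_town_year1
print(f"Small town price drop: {drop:.0%}")

# Big city: $30/GB list for two-generations-old SLC. The 75% discount applied
# at the original purchase, but not to the later add-on.
original = effective_price(30.00, 0.75)  # what flash cost them at purchase
later = effective_price(30.00, 0.00)     # same list price, no discount
print(f"Big city price increase: {later / original:.0f}x")
```

The same commodity got ~44% cheaper for the buyer exposed to market pricing, and 4x more expensive for the buyer locked into a single vendor's discount structure.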

There are a few lessons to take away from these environments.

  1. Storage should become cheaper to purchase as time goes on. Discounts should be consistent, and pricing should not feel like a game show. Software licensing should not be directly tied to capacity or physical hardware, and should “live” through a refresh.
  2. Adding new generations of flash and compute should not require disruption and “throwing away” your existing investment.
  3. Storage products that scale down and up without compromise lead to fewer meetings, lower costs, and better outcomes. Large purchases often lead to the trap of spending a lot of time and money on avoiding failure rather than focusing on delivering excellence.

Veeam On (Part 2)

It has been a great week. VeeamOn has been a great conference, and it’s clear Rick and the team really wanted to put a different spin on the IT conference. While there are some impressive grand gestures, speakers, and sessions (I seriously feel like I’m in a spaceship right now), it’s the little details that stand out too.

My favorite little things so far.

  • General Session does not start at 8AM.
  • Breakfast runs till 9AM, not 7:45AM like certain other popular conferences. (Late-night networking plus no food going into a general session is not a good idea!)
  • Day 2 keynotes are by the partners, giving me a mini-taste of VMware’s, HP’s, Cisco’s, NetApp’s, and Microsoft’s “vision” for the new world of IT.
  • A really interesting mix of attendees. I’ll go from a conversation with someone running 10TB to someone else with 7PB, and some of the challenges we discuss will be the same.
  • LabWarz is easy to wander into and tests your skills in a far more fun and meaningful way than trying to squeeze in a certification test between sessions.
  • The vendor expo isn’t an endless maze of companies, but a focused set of companies (and specifically only the products within them) that are relevant to Veeam users.

Veeam On! (Part 1)


I’m in Vegas rounding out the conference tour (VMworld, SpiceWorld, VMworld, DellWorld) for what looks to be a strong finish. This is my first time at VeeamOn, and I’m looking forward to briefings across the full Veeam portfolio. I’m also looking forward to being shamed by the experts in LabWarz and getting my hands dirty with v9.

More importantly, I’m looking forward to some great conversations. The reason I value going to conferences goes beyond great sessions and discussions with vendors at the solutions expo. The conversations with end users (small, large, and giant) help you learn where the limits are (and how to push past them) in the tools you rely on. I’ve had short conversations over breakfast that saved me six months of expensive trial and error that others had been through. A good conference will attract both small and massive-scale customers and bring together great conversations that help everyone change their perspective and get things done.

All good things…

I started my IT career as a customer. It was great having complete ownership of the environment, but eventually I wanted more. I moved to the partner side, and the past five years have been amazing. I have worked with more environments than I can count, which exposed me to diverse technical and operational challenges. It gave me the opportunity to see firsthand, past the marketing, what worked and what did not. I would like to thank everyone (customers, co-workers, and all of the people I was able to work with directly) who helped me reach this point in my career. I also want to thank the people who freely share with the greater community. Their blogs, their words of caution, their advice, and their presentations at conferences all contributed to helping me succeed. I will miss the amazing team at Synchronet, but it was time for a change.

Starting today, I will be in a new role at VMware in Technical Marketing for VMware VSAN. I am excited for this change, and look forward to the challenges ahead. In this position I hope to learn and give back to the greater community that has helped me reach this point. I will still blog various musings here, but look for VSAN and storage content at Virtual Blocks.

I look forward to the road ahead!

 

Time to check the log…

You can see from the year 5 rings that there was great budget, and much storage was added!

Any time you open a ticket with VMware (or any vendor), the first thing they generally want you to do is pull the logs and send them over. They then use their great powers (of grep) to try to find the warning signs, or the results of a known issue (or a new one!). This whole process can take quite some time, and frustratingly, some issues roll out of the logs quickly, are buried in 10^14 lines of noise, or can only be found on an environment that is down and has not been rebooted. I recently had a conference call with a vendor where they told a customer we would have to wait for one (or more!) complete crashes of their storage array before they would be able to get the logs and possibly find a solution.

This is where LogInsight comes to the rescue. With real-time indexing, graphs that do not require you to learn Ruby to make, and machine learning to auto-group similar messages, you can find out why your data center has crashed in 15 minutes instead of 15 days.

Recently, while deploying a POC, I had a customer who complained of intermittent performance issues on a VDI cluster they couldn’t quite pin down. Internal teams were arguing (storage blamed network, network blamed AD, Windows/AD blamed the VMware admin). A quick search for “error*,crit*,warn*” across all infrastructure on the farm (firewall/switch/fabric/disk array/blades/<the infinite number of locations View hides logs>) returned thousands of unrelated errors for internal certificates not being signed and other non-interesting events. LogInsight’s auto-grouping allowed for quick filtering of the noise to uncover the smoking gun: a Fibre Channel connection inside a blade chassis was flapping (from a poorly seated HBA). It was not bad enough to trigger port warnings on the switches or an all-paths-down error, but it was enough to impact user experience randomly. This issue was a ghost that had been plaguing them for two weeks at that point. LogInsight found it in under 15 minutes of searching. It was great to have clear evidence, so we could end the internal arguing and hold the vendor accountable so they couldn’t deflect blame to VMware or another product.
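The core idea behind that auto-grouping is worth a sketch. LogInsight’s actual machine learning is far more sophisticated than this, but a rough approximation (with made-up sample log lines) is to normalize the variable parts of each message into placeholders and count how many raw lines collapse into each template:

```python
import re
from collections import Counter

# Rough sketch of "auto group similar messages": replace the variable fields
# (IP addresses, hex IDs, numbers) with placeholders so near-duplicate lines
# collapse into one template. The sample log lines below are invented.

def template(line: str) -> str:
    line = re.sub(r"\b\d{1,3}(\.\d{1,3}){3}\b", "<ip>", line)   # IPv4 addresses
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<hex>", line)          # hex IDs
    line = re.sub(r"\b\d+\b", "<num>", line)                     # bare numbers
    return line

logs = [
    "warn: fc port 3 flapped, hba 0x1f2e reset",
    "warn: fc port 7 flapped, hba 0x9abc reset",
    "error: cert for 10.0.0.12 not signed",
    "error: cert for 10.0.0.47 not signed",
    "error: cert for 10.0.0.51 not signed",
]

groups = Counter(template(l) for l in logs)
for tmpl, count in groups.most_common():
    print(f"{count}x {tmpl}")
```

Five raw lines collapse into two groups, so the repetitive certificate noise shows up as one entry and the rarer FC flap stands out instead of drowning.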

I’d encourage everyone to download a free trial and post back in the comments what obscure errors or ghosts in the machine you end up finding.

HDS G400/600 “It is required to install additional shared memory”

I have some DIMMs lying around here somewhere…


Quick post here! If you’re setting up a new Hitachi H800 (G400/600) and are trying to set up a Hitachi Dynamic Tiering pool, you may get the following error: “To use a pool with the Dynamic Tiering function enabled, it is required to install additional shared memory.”

You will need to log in to the maintenance utility (this is what runs on the array directly). Here is the procedure.

The first step is figuring out how much memory you need to reconfigure. This will be based on how much capacity is being dedicated to Dynamic Provisioning pools. As the documents reference Pb (little b, which is a bit odd), these numbers are smaller than they first appear.

  • No Extension, DP – 0.2Pb with 5GB of memory overhead
  • No Extension, HDT – 0.5Pb with 5GB of memory overhead
  • Extension 1 – 2Pb with 9GB of memory overhead
  • Extension 2 – 6.5Pb with 13GB of memory overhead

There are also extensions 3 and 4 (which use 17GB and 25GB respectively); however, I believe they are largely needed for larger ShadowImage, Volume Migration, Thin Image, and TrueCopy configurations.
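The sizing logic for an HDT pool reduces to "pick the first extension tier whose capacity limit covers your pool." Here is a small sketch using the HDT numbers from the table above; the function name is mine, and you should verify the limits against page 171 of the guide before trusting it on a real array:

```python
# Shared-memory extension tiers for HDT pools, per the table above:
# (tier name, max pool capacity in Pb, cache consumed for shared memory in GB)
TIERS = [
    ("No Extension", 0.5, 5),
    ("Extension 1",  2.0, 9),
    ("Extension 2",  6.5, 13),
]

def required_tier(pool_pb: float):
    """Return the first tier whose capacity limit covers the pool size."""
    for name, max_pb, overhead_gb in TIERS:
        if pool_pb <= max_pb:
            return name, overhead_gb
    raise ValueError("Pool needs Extension 3/4 (17GB/25GB) or is too large")

name, overhead = required_tier(1.2)  # e.g. a 1.2Pb HDT pool
print(f"{name}: {overhead}GB of cache used for shared memory")
```

For example, a 1.2Pb pool lands in Extension 1 and costs you 9GB of cache, which is exactly the cache you are "robbing" from the array during the install step below.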
In the Maintenance Utility window, click Hardware > Controller Chassis. In the Controller Chassis window, click the CTLs tab. Click the Install list, and then click Shared Memory. In the Install Shared Memory window, pick which extensions you need and select Install (and grab a cup of coffee, because this takes a while). This can be done non-disruptively, but it is best done during lower IO, as you’re robbing cache from the array for the thin provisioning lookup table.

You can find all this information on page 171 of the following guide.


 

LSI Firmware VSAN

I’ve been talking to LSI over the past couple of months in relation to VSAN and have a few updates on issues and thoughts.

 

1. LSI support does not support their driver if it was purchased through an OEM. They will not accept calls from VMware regarding the driver in that case either. If you want LSI to support the VMware driver stack, you must buy directly from them.

2. LSI-branded MegaRAID cards do not support JBOD (I understand that it is on the roadmap). Dell and others are offering alternative firmwares that allow it, but LSI has no comment or support statement on them.

3. The MegaRAID CLI can be used with RAID 0 to manage cards (I’ll release a guide if there is interest); performance is comparable, and on supported systems it is very stable. Don’t rule it out, and with all the back and forth on support for JBOD, it might strangely be the safer option, at least until I get full testing reports from the Perc730 next week.

4. The Dell Perc730 has JBOD support now. Despite being a MegaRAID, I’m hearing good things in the field so far (I’ll update if I hear otherwise).

5. LSI prefers dealing with hardware vendors and largely being a back-end chipset manufacturer. A stronger relationship with VMware is needed (especially with PCI Express networking on the horizon).

6. HP is switching to Adaptec for controllers. Hopefully this will bring their JBODs onto the VSAN HCL and allow for supplier diversity.

7. I’ve heard statements from Dell that VMware is intensifying the testing procedures for VSAN. It looks like this will catch H310/2208-type issues first.

8. Ignore the SM2208 on the HCL for pass-through. Neither VMware nor LSI will support it.

The use cases for a Synology

I often run into a wide mix of high- and low-end gear that people use to solve challenges. Previously I wrote on why you shouldn’t use a Synology or other cheap NAS device as your primary storage system for critical workloads, but I think it’s time to clarify where people SHOULD consider using a Synology in a datacenter environment.

A lot has been written about why you shouldn’t apply the same performance SLA to all workloads, but I’d argue the bigger discussion in maintaining SLAs without breaking the budget is treating uptime SLAs the same way. Not all workloads need HA, and not all workloads need 4-hour support agreements. There is a lot of redundant and ephemeral data in the data center, and having a device that can cheaply store that data is key to avoiding compromises for the business-critical workloads that do need those guarantees. I see a lot of companies evaluating start-ups, scaling performance and flash usage back, under-staffing IT ops, cutting out monitoring and management tools, and taking other cost-saving but SLA-crushing actions in order to free up budget for that next big high-uptime storage array. It’s time for small and medium enterprises to quit giving every workload the same storage availability. It is time to consider that “good enough” storage might be worth the added management and overhead. While some of this can be better handled by data reduction technologies, storage management policies, and software, sometimes you just need something cheap. While I do cringe when I see RAID 5 Drobos running production databases, there are use cases, and here are a few I’ve found for the Synology in our datacenter over the years.

But my testing database needs 99.999999% uptime!


1. Backups and data export/import – In a world where you often end up with five copies of your data (remote replicas for DR, an application team silo with its own backup and archive solution), using something that’s cheap for bulk image-level backups isn’t a bad idea. The USB and eSATA ports make them a GREAT place for transferring data by mail (export or import of a Veeam seed) or for ingesting data (we used ours on Thanksgiving to import a customer’s VM that was fleeing the abrupt shutdown of a hosting provider in town). While it’s true you can pass USB through to a VM, I’ve always found it overly complicated and generally slower than just importing straight to a datastore as the Synology can.

2. Swing migrations – For those of us using VMware VSAN, having a storage system that can cross clusters is handy in a pinch and keeps downtime and the need for Extended vMotion to a minimum. A quick-and-dirty shared NFS export means you can get a VM from vCenter A to vCenter B with little fuss.

 

3. Performance testing – A lot of times you have an application that runs poorly, and before you buy $40-100K worth of Tier 1 flash, you want to know if it will actually run faster or just chase its tail. A quick-and-dirty datastore on some low-price Intel flash (S3500 or S3700 drives are under $2.50 a GB) can give you a quick rocket boost to see if that application can soar (or if that penguin will just end up CPU-bound). One use case I’ve done is putting a VDI POC on the Synology to find out what the IOPS mix will look like with 20-100 users before you scale to production use for hundreds or thousands of users. Learning that you need to size heavy because of that terrible Access database application, before you under-invest in storage, is handy.

4. A separate failure domain for network and management services – For those of us who live in 100% VMware environments, having something that can provide quick NTP/DHCP/syslog/SMTP/SMS/SSH services is valuable. In the event of a datacenter apocalypse (i.e., an entire VMware cluster goes offline), this plucky little device will be delivering SMTP and SMS alerts, providing the services I would need to rebuild things, and giving me a place to review the last screams (syslog). While not a replacement for better places to run some of these services (I generally run DHCP off the ASAs and NTP off the edge routers), in a small shop or lab this can provide some basic redundancy if the normal network devices are themselves not redundant.

5. Staging – A lot of times we will have a project that needs to go live in a very short amount of time, and we often have access to the software before the storage or other hardware shows up. A non-active workload rarely needs a lot of CPU/memory and can leech off a no-reservation resource pool, so storage is often the bottleneck. Rather than put the project on hold, having some bulk storage on a cheap NAS lets you build out the servers and then migrate the VMs once the real hardware has arrived, collapsing project timelines by a few days or weeks so you’re not stuck waiting on procurement or the SAN vendor to do an install. For far less than the cost of a “rush install” of a Tier 1 array, I can get a Synology full of drives onsite and set up before that big piece of disk iron comes online.

6. Tier 3 workloads – Sometimes you have a workload that you could just recover, or that wouldn’t violate a business SLA if it was down for a week. Testing, log dumps, replicated archival data, and random warehouses that would take more effort to sort through than to hoard are another use case. The discussion of why you are moving it to the Synology also opens up a talking point with the owner about why they need the data in the first place (and allows for bargaining, such as “if you can get this 10TB of syslog down to 500GB, I’ll put it back on the array”). Realistically, technology like VSAN and array auto-tiering has driven down the argument for using these devices this way, but there is still value in having something that borders on being a datacenter recycling bin.