I have some DIMMs lying around here somewhere…
Quick post here! If you're setting up a new Hitachi H800 (G400/600) and are trying to set up a Hitachi Dynamic Tiering pool, you may get the following error: “To use a pool with the Dynamic Tiering function enabled, it is required to install additional shared memory.”
You will need to log in to the Maintenance Utility (this is what runs on the array directly). Here is the procedure.
The first step is figuring out how much memory you need to reconfigure. This is based on how much capacity is being dedicated to Dynamic Provisioning pools. The documents reference Pb (little b, which is a bit odd; divide by eight for bytes), so these numbers are smaller than they first appear.
- No Extension, DP – 0.2 Pb with 5 GB of memory overhead
- No Extension, HDT – 0.5 Pb with 5 GB of memory overhead
- Extension 1 – 2 Pb with 9 GB of memory overhead
- Extension 2 – 6.5 Pb with 13 GB of memory overhead
There are also Extensions 3 and 4 (which use 17 GB and 25 GB respectively); however, I believe they are largely only needed for larger Shadow Image, Volume Migration, Thin Image, and TrueCopy configurations.
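To make the sizing table concrete, here is a minimal sketch that picks the shared-memory extension for an HDT pool of a given capacity, using the thresholds above. The function name, the tier table, and the bits-to-bytes helper are my own illustration, not anything from Hitachi's tooling.

```python
# Hypothetical helper for the HDT sizing table above. The tier names and
# thresholds come from the list in this post; everything else is made up
# for illustration.

# (max pool capacity in Pb, memory overhead in GB, extension label)
HDT_TIERS = [
    (0.5, 5, "No Extension"),
    (2.0, 9, "Extension 1"),
    (6.5, 13, "Extension 2"),
]

def pb_to_tb(pb):
    """Petabits to decimal terabytes: divide by 8 bits/byte, times 1000 TB/PB."""
    return pb / 8 * 1000

def required_extension(pool_capacity_pb):
    """Return (label, overhead_gb) for the smallest tier covering the pool."""
    for max_pb, overhead_gb, label in HDT_TIERS:
        if pool_capacity_pb <= max_pb:
            return label, overhead_gb
    raise ValueError("Capacity exceeds Extension 2; look at Extensions 3/4")

print(required_extension(1.2))   # a 1.2 Pb HDT pool lands in Extension 1
print(pb_to_tb(0.5))             # 0.5 Pb is only 62.5 TB of actual capacity
```

The little-b conversion is why the limits are smaller than they first appear: 0.5 Pb works out to just 62.5 TB.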
In the Maintenance Utility window, click Hardware > Controller Chassis. In the Controller Chassis window, click the CTLs tab. Click the Install list, and then click Shared Memory. In the Install Shared Memory window, pick which extensions you need and select Install (and grab a cup of coffee, because this takes a while). This can be done non-disruptively, but it is best done at lower IO, as you're robbing cache from the array for the thin provisioning lookup table.
You can find all this information on page 171 of the following guide.
HDS has been on a long journey that has led us to “Year Z”. Almost four years ago the road map leaked to The Register that HDS would eventually merge their Modular line (then AMS, now HUS) with their Enterprise line (VSP, now VSP G1000). The promise of a single block operating system, with a unified file, object, branch NAS, and block management suite, was a long way off.
Previously, customers had to choose between platforms based on capacity, features, cost, uptime SLAs, and performance. Often a single feature requirement, like storage virtualization, would add a six-figure amount to a bill of materials. Dependencies on high-end ASICs made scaling down the cost of the VSP impossible. Today Hitachi has solved these problems and delivered a single platform that allows product selection to be done largely on capacity and performance needs, with features flowing seamlessly from the smallest to the largest platform on the line card. The G200, G400, G600, G800, and G1000 provide a lot of sizes and price points without the confusion that multiple operating systems and system architectures bring. As other vendors add more platforms and OS variants to address different markets, it's interesting to see HDS consolidate product families. I'm curious if NetApp has some dusty old “One Platform” marketing slides that HDS can borrow.
My hat's off to the engineers who managed to get full ASIC emulation running on Intel processors so that we can have VSP functionality without a six-figure price tag. While I love the ugly duckling that is SNM2, it is good to see Hitachi moving on to faster, fancier management tools.
Infrastructure Director and the new management tools look to match the “pretty” GUIs that modern storage managers have come to expect, and bring powerful automation and provisioning workflows that make provisioning and management a largely automated task.
I’ve loved the HUS for providing “simple” but reliable storage. I’ve often called it the pet rock of storage (configure it, present everything to VMware, and stay in VMware for your management all day, every day). VVOL support allows for snapshot offloading (faster Veeam backups!) and more granular feature management. Most importantly, it keeps VMware/storage team miscommunication from causing VMs to not get replicated, protected, etc.