It’s not always easy to make “apples-to-apples” comparisons of storage systems. Websites, press releases and data sheets are full of jargon, deceptive claims, and vague references to obscure features of dubious value. And when it comes to working with VMware, some storage vendors are happy to cloud the facts even further in order to claim the title of best-integrated storage for VMware. Unfortunately, VMware doesn’t make it any easier, with its myriad products, tools, features and potential points of integration. It can be very frustrating to make sense of it all.
I have spent a fair amount of time examining the VMware features that interact with underlying storage systems. I’m no expert and I don’t know these features inside and out, but I thought I would take a moment to categorize them so that VMware customers seeking complementary storage can quickly cut through the slick messaging to determine what is important and what is not. Below, I answer a number of common questions that should be asked when considering storage for VMware.
Many storage vendors claim tight integration with VMware. What does it mean to be integrated, and how important is it?
Here’s a dirty little secret: the most critical storage-related features in vSphere work seamlessly with anyone’s networked storage. When you consider features such as:
• Storage vMotion
• Storage I/O Control
• Thin Provisioning
• Storage Distributed Resource Scheduler
… there is no need to seek out the best storage integrator, because they all work the same!
On the other hand, there are a few integration points that allow a storage vendor to provide customization to VMware (thus allowing for claims of “tight integration”). Some provide more real benefit than others. Here are the primary integration points:
1) Multipathing. VMware provides a set of default multipathing drivers that work splendidly with most storage systems. Storage vendors seeking an opportunity to stand out can supply custom drivers to be installed in their place. As of this writing, few custom multipathing drivers exist, mostly because the defaults are sufficient for virtually all applications.
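For reference, an administrator can see which multipathing plug-in and path selection policy a host is actually using from the ESXi shell. This is a sketch assuming vSphere 5-era esxcli syntax; the output will vary by host and array:

```shell
# List the multipathing (MP-class) plug-ins loaded by the
# Pluggable Storage Architecture -- typically just VMware's NMP
esxcli storage core plugin list --plugin-class=MP

# For each device, show the Storage Array Type Plugin (SATP)
# and Path Selection Policy (PSP) the default stack selected
esxcli storage nmp device list
```

If every device shows the default NMP stack, the array is running on the same drivers as everyone else’s — no custom integration involved.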
2) VAAI (vStorage APIs for Array Integration). These APIs are designed to offload certain tasks from the host to the storage system, on the theory that the array can perform them far more efficiently than the host can. In large-scale deployments that are heavily loaded and make frequent use of features such as Storage vMotion (e.g., cloud deployments), this offload has real value. Most common deployments will see little or no benefit from it. Storage vendors are just now shipping VAAI-integrated solutions.
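Whether an array actually advertises the VAAI primitives to a host can be checked per device — again a sketch assuming vSphere 5-era esxcli syntax:

```shell
# Report hardware-acceleration (VAAI) primitive support --
# ATS, Clone, Zero, and Delete -- for each attached device
esxcli storage core device vaai status get
```

A device reporting “unsupported” for every primitive simply falls back to doing the work on the host, as it always has.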
3) VASA (vStorage APIs for Storage Awareness). These APIs enable supported storage systems to report configuration details to vCenter, allowing underlying storage characteristics (e.g., RAID level) to be associated with datastores. This is likely of benefit only to larger deployments where manageability is an issue. The API is brand new in vSphere 5, so only a few vendors will support it out of the chute.
4) SRM (Site Recovery Manager) integration. For some higher-end deployments, this is an integration point that can provide real benefit. Customers who want to embrace VMware’s strategy of disaster recovery through mirrored storage across multiple sites will certainly need a storage system with these customizations built in. VMware has hinted that a future release will offer host-based replication, which would marginalize the need for array-based replication in order to use SRM.
5) vCenter management plug-in. vCenter offers a pluggable architecture, allowing third-party vendors to develop management tools that integrate into the vCenter interface. In most real-world implementations, the plug-in is not so much for “management” as for “monitoring” (operators are allowed to view, but not change, the storage configuration).
In most cases, these integration points provide the most benefit to large-scale deployments, where resources are pushed to their limits and manageability of many devices is vitally important.
If VMware integration is not necessarily a factor, what should I look out for when selecting storage for a VMware environment?
The number one thing to consider when reviewing storage options in a VMware environment is VMware certification. The storage you purchase absolutely must be certified by VMware as fully compatible with vSphere. Otherwise, all you will get from VMware is a busy signal when you call for support.
In addition to peace of mind, the certification process identifies how the storage system connects to vSphere — specifically, which default multipathing module and load-balancing policy to use.
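If a vendor’s compatibility listing recommends a non-default load-balancing policy, it can be applied per device. This is a sketch with a hypothetical device identifier, assuming vSphere 5-era esxcli syntax:

```shell
# Switch one device to round-robin path selection
# (naa.600c0ff000000000 is a made-up identifier -- substitute
# the actual ID from "esxcli storage nmp device list")
esxcli storage nmp device set --device naa.600c0ff000000000 --psp VMW_PSP_RR
```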
You can tell whether a storage product is certified with vSphere by examining the Hardware Compatibility List on VMware’s website. If it’s not there, it’s not certified.
Aside from certification, the value that the storage system itself provides should be of paramount concern. Look for features that demonstrate quality, reliability, availability and performance.
Lastly, the integration points may matter for targeted deployments. For example, in a deployment that will leverage the benefits of SRM, storage array support for SRM is an absolute must.
Having VMware between my storage and my application seems inefficient. Does VMware slow the storage down?
Intuitively, one would think that adding more processing layers to a stack would naturally slow things down. I have spent a fair amount of time testing the performance of VMware with our storage systems and comparing those results to equivalent tests on stand-alone operating systems. The results indicate that performance is NOT compromised in VMware environments, with some important caveats. You will want to review my whitepaper on the subject to get all the details.
Do I need to do anything special with my storage configuration to make sure I’m getting the most from my solution?
Networked storage should always be configured with redundancy and performance in mind (multiple paths to storage, redundant components, and so on). Aside from that, there are a few VMware-specific best practices to consider when deploying storage in a vSphere environment. My whitepaper has the skinny on that.
Where is this whitepaper you keep talking about? I thought you’d never ask:
If you have more questions, look me up at VMworld next week in Las Vegas. I will be in booth #221.
Article Contributed by: Matt Alsip
Technical Marketing Manager, Dot Hill Systems