Thursday, August 30, 2012

VMworld 2012: VMware + EMC Storage – the best gets better

Wikibon just refreshed their study (two years in a row, so it's not quite "annual" yet) on VMware and storage – one of the things they do is reach out to the vendors and analyze their relative VMware integration. I'd encourage you to check it out yourself<http://wikibon.org/wiki/v/VSphere_5_Storage_Integration_chips_away_at_Management_Overhead> (come to your own conclusions), but here's the summary – EMC = #1! Also – amazingly, this was a count of the degree of integration – but when the respondents were asked "who is the best?", EMC was highlighted as the best by 3x more respondents than the next closest!

If you think this is a "one analyst" kind of thing – check out the latest Goldman Sachs strategic IT spending study (July), which also showed EMC being selected 2.5x more than the next closest competitor when the use case is VMware-centric – that's materially more than EMC's general market share.

Here are the results for the major unified storage players:

[image]<http://virtualgeek.typepad.com/.a/6a00e552e53bd28833017c318ddf2d970b-pi>

Here are the results for the major block-only players (or Unified players, counting block integration points).

[image]<http://virtualgeek.typepad.com/.a/6a00e552e53bd288330177446b8376970d-pi>

So… The best is getting even better.

Not only do we have a native vCenter Operations VNX connector now GA, with the VNX Storage Analytics Suite in early adopter stage (more on that here), but there are two BIG additions in vSphere 5.1 that VNX customers will benefit from: 1) VAAI NFS Fast Copy for VMware View and vCloud Director use cases; 2) native multipathing enhancements.

For more on these two things… Read on…

In vSphere 5.0, the first set of VAAI API calls for NFS arrived. vSphere 5.1 adds a new API – NFS Fast Clone – which can be used to accelerate VMware View and vCloud Director use cases. It requires that the NFS server can take file-level snapshots, including "snap of snap" (to accelerate the "base replica + snapshot" behavior used in these cases). This API is supported in EMC VNX (in the currently shipping code – VNX OE 32, aka "Inyo") and the upcoming EMC Isilon "Mavericks" code (which is the first release where Isilon NAS will be expressly targeted at VMware use cases). BTW – notice how coordinated we try to be – the feature shows up in the EMC array release that precedes the corresponding vSphere release. Hint – there are already VM Granular Storage – more here – giblets in there.
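To make the offload idea concrete, here's a toy sketch (my own illustration, not VMware or EMC code – all class and function names are hypothetical) of the decision the host makes with the Fast Clone primitive: if the NFS array supports file-level snapshots, the clone becomes a single array-side snapshot call; if not, the host falls back to reading and rewriting every block itself.

```python
# Hypothetical model of the VAAI NFS Fast Clone offload decision.
# Names (NfsArray, clone_vmdk) are illustrative only.

class NfsArray:
    def __init__(self, supports_fast_clone):
        self.supports_fast_clone = supports_fast_clone
        self.files = {}

    def snapshot(self, src, dst):
        # Array-side file-level snapshot: near-instant, space-efficient,
        # and it can snapshot a snapshot ("snap of snap").
        self.files[dst] = ("snapshot-of", src)

def clone_vmdk(array, src, dst):
    """Return how the clone was performed: 'offloaded' or 'host-copy'."""
    if array.supports_fast_clone:
        array.snapshot(src, dst)  # one array call, no data moved by the host
        return "offloaded"
    # Fallback: the host copies every block over the network itself.
    array.files[dst] = ("full-copy-of", src)
    return "host-copy"

vnx = NfsArray(supports_fast_clone=True)      # e.g. VNX OE 32 ("Inyo")
legacy = NfsArray(supports_fast_clone=False)
print(clone_vmdk(vnx, "base-replica.vmdk", "desktop-01.vmdk"))
print(clone_vmdk(legacy, "base-replica.vmdk", "desktop-01.vmdk"))
```

The win in View/vCloud deployments is that hundreds of linked desktops become hundreds of snapshot calls instead of hundreds of full copies.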

Check out the new VAAI NFS Fast Copy in the demo below:

You can download a high-rez version of the demo here in MP4 format<https://vspecialist.emc.com/human.aspx?Username=Bloglink&Password=vgeekb1og&arg01=%20832817271&arg05=0/[DownloadAs_Filename]&arg12=downloaddirect&transaction=signon&quiet=true> and WMV format<https://vspecialist.emc.com/human.aspx?Username=Bloglink&Password=vgeekb1og&arg01=%20832758918&arg05=0/[DownloadAs_Filename]&arg12=downloaddirect&transaction=signon&quiet=true>.

BTW – it's worth restating: NFS is a great choice for customers using vSphere – I generally recommend using it in conjunction with VMFS. In the Virtual Geek annual survey I asked what protocols people use – here are the results:

[image]<http://virtualgeek.typepad.com/.a/6a00e552e53bd28833017c318ddfd7970b-pi>

Wikibon also asked the same question – and this is what they found:

[image]<http://virtualgeek.typepad.com/.a/6a00e552e53bd288330177446b841c970d-pi>

The other improvement is one of those "little things that mean so much" – dramatically improved and simplified native multipathing for VMware and EMC VNX customers using block storage and VMFS.

This takes a bit of explanation. Note that these are "internals" in the sense that most customers don't muck around with this level of detail – but hey, it's good to know.

* EMC VNX is an array that uses an "active/passive LUN ownership" model (for now) – but is active/active in the sense that both storage processors can support a load at any time. This is a pretty common architectural model in arrays of its generation and target market. A feature called "Asymmetric Logical Unit Access", aka "ALUA", is also pretty common in arrays of this type. What it does is present the LUN via both storage processors, where one set of paths is "non-optimal" because it has to cross the internal interconnect between the "brains".
* EMC VNX has also long had the ability to trespass (move a LUN from one brain to the other) non-disruptively. How disruptive this is in practice depends on how fast it happens and how well the host multipathing works. LUN trespass and ALUA don't make VNX the same as true "active/active" models like EMC VMAX – but they can come close. In recent VNX OE releases, LUN trespasses are now super fast at scale.
* Like other implementations of this architectural model, there are internal things (like metadata) that get owned by one storage processor or the other – so every LUN has:
  * A "Default Owner" – the storage processor that holds all the metadata in "steady state". When the LUN is actually owned by its Default Owner, overall system load is lower and performance is generally better.
  * A "Current Owner" – while every LUN starts out being serviced by its "Default Owner" storage processor, the Current Owner can change after a trespass, which can be due to a host failure, HBA failure, network failure, storage processor failure – or a non-disruptive upgrade (where all LUNs move to one storage processor, the other gets upgraded, and then they trespass the other way so the first can be upgraded).
  * Front-end ports on the "Current Owner" – which, if ALUA is configured, show up as "optimized" paths in vSphere, while the other storage processor's ports show up as "non-optimized" paths. vSphere avoids sending I/O down non-optimized paths unless no optimized path is available.
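The path-selection consequence of those bullets can be sketched in a few lines. This is a toy model of my own (not VMware's NMP source – the `Path`/`select_paths` names are hypothetical): a Round Robin style policy rotates I/O only across the "optimized" paths (ports on the Current Owner SP), and falls back to non-optimized paths only when no optimized path is alive.

```python
# Toy model of ALUA-aware path selection, not actual VMware NMP code.

class Path:
    def __init__(self, name, optimized, alive=True):
        self.name, self.optimized, self.alive = name, optimized, alive

def select_paths(paths):
    """Return the set of paths a Round Robin policy should rotate across."""
    optimized = [p for p in paths if p.alive and p.optimized]
    if optimized:
        return optimized
    # Every optimized path is down: use the surviving non-optimized paths,
    # which reach the LUN across the inter-SP interconnect.
    return [p for p in paths if p.alive]

paths = [Path("vmhba1:C0:T0:L0", optimized=True),    # port on Current Owner
         Path("vmhba1:C0:T1:L0", optimized=False)]   # port on the other SP
print([p.name for p in select_paths(paths)])  # only the optimized path

paths[0].alive = False                        # the optimized path fails
print([p.name for p in select_paths(paths)])  # falls back to non-optimized
```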

So… with all that said, a lot comes down to what the host multipathing does. For example, after a trespass, does it eventually issue a restore command that will trespass the LUN back to its original "Default Owner"?

VMware and EMC found this was an area where we could make things better. In vSphere 5.0 and earlier, the NMP Fixed and MRU PSPs issue this auto-restore, but the Round Robin PSP does not. We also found that earlier versions of the VNX OE wouldn't respond properly to the vSphere-issued auto-restore in all cases (a corner case).
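The behavior difference boils down to something like the little state machine below – my own illustration under stated assumptions, not actual NMP or VNX OE source (the `Lun`/`on_paths_restored` names are hypothetical): after a trespass, a PSP that implements auto-restore asks the array to move the LUN back to its Default Owner once paths to that SP recover; one that doesn't leaves the LUN trespassed.

```python
# Illustrative auto-restore state machine, not real VMware/EMC code.

class Lun:
    def __init__(self, default_owner="SPA"):
        self.default_owner = default_owner
        self.current_owner = default_owner

    def trespass(self, to):
        self.current_owner = to

def on_paths_restored(lun, psp_auto_restores):
    """What the host PSP does when paths to the Default Owner come back."""
    if psp_auto_restores and lun.current_owner != lun.default_owner:
        lun.trespass(lun.default_owner)  # issue restore back to Default Owner

lun = Lun()
lun.trespass("SPB")   # e.g. SP-A reboots during a non-disruptive upgrade

on_paths_restored(lun, psp_auto_restores=False)  # Round Robin pre-5.1 behavior
print(lun.current_owner)   # stays trespassed on SPB

on_paths_restored(lun, psp_auto_restores=True)   # auto-restoring behavior
print(lun.current_owner)   # back on SPA, its Default Owner
```

Left trespassed, the LUN keeps paying the metadata and interconnect cost described above – which is exactly why the auto-restore matters.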

… So – in vSphere 5.1 this was improved, and we made some fixes that showed up in VNX OE 32. Here's a comparison (thanks for the work on this, Clint) of vSphere 4.x, 5.0, and 5.1 with VNX arrays running VNX OE 31 and 32:

You can download a high-rez version of the demo here in MP4 format<https://vspecialist.emc.com/human.aspx?Username=Bloglink&Password=vgeekb1og&arg01=%20832616805&arg05=0/[DownloadAs_Filename]&arg12=downloaddirect&transaction=signon&quiet=true> and WMV format<https://vspecialist.emc.com/human.aspx?Username=Bloglink&Password=vgeekb1og&arg01=%20832767820&arg05=0/[DownloadAs_Filename]&arg12=downloaddirect&transaction=signon&quiet=true>.

Net? With vSphere 5.1 and VNX OE 32, we've worked to make the native multipathing work better – eliminating the need for tools, scripting, or anything like that. Heck – the Round Robin PSP has become the default PSP selected when you use EMC VNX and VMAX arrays. Simple.

EMC VNX customers – what do you think?

________________________________

Original Page: http://feedproxy.google.com/~r/typepad/dsAV/~3/7lQNneIrZwo/vmworld-2012-vmware-emc-storagethe-best-gets-better.html
