Archive for March, 2012

VAAI UNMAP

Posted: March 30, 2012 in SAN, virtualisation, VMWARE

Chad muses here on being more open about providing information to partners and customers, and then inadvertently proves the point by succinctly identifying that UNMAP for VAAI returns in vSphere 5 Update 1, but is no longer automatic; it is now a manual vmkfstools command.
Probably the only way to easily reclaim space on thin provisioned volumes right now without the long delays or timeouts.
No doubt VMware’s engineering effort will continue and we’ll see it added back as an automatic process in a later update.
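
For anyone wanting to try it, my understanding is that the reclaim is now kicked off from the ESXi shell against the datastore itself, along these lines (the datastore name below is just a placeholder, and the number is the percentage of free space to reclaim in one pass):

    # Change into the root of the thin-provisioned datastore (name is a placeholder)
    cd /vmfs/volumes/Datastore01
    # Ask VMFS to reclaim up to 60% of the free space in one pass
    vmkfstools -y 60

From what I've read, the reclaim works by temporarily creating a balloon file in the datastore's free space, so it pays to keep the percentage modest and run it out of hours.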


I ran into an interesting symptom when presenting snapshotted VMFS volumes from my production vSphere 4 clusters to my isolated lab environment, which is a ground-up build of vSphere 5. Looking at the resource allocations for the virtual machines imported into the inventory, I found that they had memory limits applied matching their configured vRAM. Checking my production environment, I confirmed no limits had been applied there, and new virtual machines created in my lab were set to unlimited.

I'm not sure why this occurs; hopefully it doesn't happen during an upgrade!
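
If you want to spot any stray limits from the ESXi shell rather than clicking through each VM, the limit should show up in each machine's vmx file; a quick grep across the datastores ought to do it (sched.mem.max is the parameter as I understand it, and the path glob assumes the usual one-folder-per-VM layout):

    # List vmx files with a memory limit set; unlimited machines show -1 or no entry at all
    grep "sched.mem.max" /vmfs/volumes/*/*/*.vmx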

Anyone looking at SanSymphony should read this article if they are considering stretching mirrored SanSymphony volumes across two data centers.

We’ve implemented this very scenario at the port, with a slight difference around mitigating the impact should, god forbid, a full split-brain disaster occur. We’re currently on vSphere 4, so we can’t use the new vSphere 5 features that allow stretched ESX clusters either.

Oliver Krehan’s article deals with a vSphere 5 stretched ESX cluster scenario and is a must-read. He does suggest one VMFS volume per VM, which I’m always loath to use, as it totally negates the point of VMFS. I would add that doing everything possible at the design phase to avoid a split-brain scenario is critical. I’m putting together a post soon to cover stretched SanSymphony clusters.

SanSymphony implements a primary/secondary node structure, meaning it will direct IO to the primary path (ignoring ALUA, which is a whole different discussion) unless that node’s paths are dead.

Effectively, at the port we have created two ESX clusters, one in each data center, each mapped to all the VMFS volumes available at both data centers. Virtual machines are then run on the cluster in the same data center as the volume’s primary node. This means that in normal operation IO stays within the data center and only crosses the ISL links during a failure. In a full split-brain scenario, where no LAN or SAN connectivity is available between data centers, changes should only be made at the primary node and not at the secondary.

There are a couple of limitations. One is that a complete failure of a data center requires manual intervention (or a partially or fully automated script) to restart virtual machines on the other cluster. The other is that careful management of virtual machine placement is needed to ensure each machine sits on the correct cluster and VMFS volume.
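
As a rough example of keeping an eye on that alignment, simply listing each host’s registered machines along with the datastore they live on is enough to spot a VM running in one data center against a volume whose primary node sits in the other (the output layout below is from memory, so treat it as illustrative):

    # On an ESXi host in data center A, list registered VMs and the datastore each one lives on
    vim-cmd vmsvc/getallvms
    # Illustrative output - the [datastore] prefix in the File column is what you check:
    # Vmid  Name         File
    # 12    fileserver1  [DCB-Vol03] fileserver1/fileserver1.vmx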

vCloud Director across sites

Posted: March 24, 2012 in Uncategorized

A good summary of using vCloud Director across multiple sites, and thus what VMware supports. The simple answer is that if your sites have low-latency links (20ms or lower) then you can leverage a single vCloud installation.
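
A crude but quick way to sanity-check that figure is simply to watch the latency between a management host at each site, for example:

    # Check latency between sites stays comfortably under the 20ms figure (hostname is a placeholder)
    ping -c 20 vcenter-siteb.example.local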

EMC ProSphere

Posted: March 23, 2012 in EMC, SAN, virtualisation, VMWARE

If you’re managing any reasonably sized VMware farm and using EMC storage, then this is for you.

http://virtualgeek.typepad.com/virtual_geek/2012/03/emc-prosphere-15-play-learn-try.html

The biggest management headache we face as virtualisation admins is identifying storage performance issues in what can be a complicated environment. I’ve always lamented the lack of a tool that can discover and link all the components together into one view. By the looks of it, EMC have finally delivered. I haven’t tried it out yet, but once I have I’ll post my thoughts.

Hint, hint EMC: maybe develop this further to work with other vendors’ arrays?

An interesting issue popped up recently when we started to allocate LUNs from our existing EMC CX-320 arrays to our SanSymphony SAN virtualisation servers.

At its simplest, it affects any storage array that implements active/passive controllers when you try to connect it to SanSymphony as back-end storage.

I’ve got another post in the pipeline detailing how we’ve used SanSymphony in our environment, but I felt this was an interesting issue to highlight for those using, or planning to use, SanSymphony to virtualise mid-range storage arrays like the Clariions that use active/passive designs.

For those who haven’t had a lot of exposure to SanSymphony, be aware that it uses its own FC HBA driver to implement various features that aren’t available in a normal Windows server HBA driver, like acting as a target. Because SanSymphony uses its own driver out of the box, other MPIO utilities like PowerPath can’t be used on these storage servers. (Well, not totally true, but that comes later.)

It also means that SanSymphony will actively try to use all paths to a back-end target for IO. In most cases an active/passive array will report that a passive path is up, but when an IO request is sent down that path the array responds with “Not Ready”. SanSymphony then appears to retry the command down the same path over and over. The upshot is that when you first try to add a new back-end LUN to SanSymphony, you’re quite likely to find it can’t be discovered. Even worse, if you do somehow manage to add it to a disk pool, it could end up unavailable should the LUN be trespassed to the alternate storage processor.

According to DataCore tech bulletin 1302, the recommendation is to set aside HBA controllers to connect to these arrays and use the array’s supported software/drivers, instead of SanSymphony’s drivers, on the back-end ports. By doing this you won’t be able to use those ports for any other SanSymphony function (such as front-end or mirror ports). The only alternative is to zone in and register paths to just one of the back-end array’s storage processors, and set the host and LUNs to auto-trespass to that controller. I’d caution that this is really only a stop-gap measure while you migrate volumes off the array, not a permanent solution.

As always with SanSymphony, it’s best to plan your connection options carefully from the start; messing around with drivers on a storage server that’s in production is asking for trouble if it’s done wrong.