
Remounting VMFS volumes after transition or async-replicate to another FlashArray



When working on disaster recovery (DR) tasks, it is sometimes necessary to mount VMFS datastores manually: volumes that are not mounted automatically on the hosts at the DR site have to be mounted by hand. Ideally, VMware Site Recovery Manager (SRM) takes care of this, but since not everyone has such a DR automation tool, this post shows how to do it manually. In Germany, SRM is rarely found even in enterprise environments because of the prevailing active-active strategy; in America, with its large distances, SRM is almost a common standard.


To begin with, we should briefly look at some VMFS theory and discuss signatures. If a cloned VMFS volume is attached to a host, you can in principle either keep the existing VMFS signature or assign a new one (resignature).


Keep the existing signature


Basically, two VMFS datastores with the same signature/UUID (the UUID is the content of the signature) cannot be mounted on the same host, because VMware ESXi uses the UUID as a unique reference to the volume. However, you can unmount the original datastore at any time and mount the cloned volume as a datastore with the same UUID/signature. You can also mount a cloned/replicated volume at any time on another host that previously had no access to the original datastore (the typical DR scenario).
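
As a minimal sketch (using this post's example datastore label "REPL-Test"), unmounting the original datastore before attaching the clone with the same signature could look like this on the ESXi CLI:

# Unmount the original VMFS datastore by its label (alternatively by UUID via -u)
esxcli storage filesystem unmount -l REPL-Test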


Assign a new signature


Assigning a new signature changes the UUID. There are a few things to note here: resignaturing is irrevocable; once the signature change has been confirmed, there is no way back. Also, the virtual machines (VMs) on a resignatured datastore must be re-referenced to the new datastore in their configuration files, and the VMs must be re-registered in vCenter or on the standalone hosts.
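
For orientation: a resignatured copy does not keep its original label. ESXi assigns a new label of the form snap-<hexID>-<original label>; the hex ID shown below is purely illustrative:

# Label before resignaturing:  REPL-Test
# Label after resignaturing:   snap-1a2b3c4d-REPL-Test (hex ID illustrative)
# The datastore can be renamed afterwards.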


We now turn to manual mounting at the DR site on a dedicated standalone DR host; there is no vCenter attached at the DR site. I have visualized the individual steps in the following graphic:

The steps shown can be reproduced in text form as follows (a CLI sketch follows the list):

  1. Setting up a Protection Group (PG) "REPL-Test" on the source array

  2. Assigning the volume(s) to the created PG

  3. Defining the "Snapshot Schedule" and "Replication Schedule"

  4. Creating a snapshot clone/volume on the async target (Target FlashArray) at the DR site and mounting it on the DR-ESXi

  5. Registering the VM "PURE-HANA" from the mounted datastore
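
For those who prefer the CLI over the GUI, steps 1 to 3 can be sketched roughly with the Purity CLI. Treat the following as an assumption: exact subcommands and flag names can differ between Purity versions, so verify them via purepgroup --help or the on-board User Guide mentioned at the end of this post.

# Create the Protection Group and assign the volume (flags assumed)
purepgroup create --vollist REPL-Test REPL-Test
# Link the already configured replication target (array name "PURE-X50-2" is illustrative)
purepgroup setattr --targetlist PURE-X50-2 REPL-Test
# Set snapshot (5 min) and replication (10 min) frequency, values in seconds (flags assumed)
purepgroup schedule --snap-frequency 300 --replicate-frequency 600 REPL-Test
# Enable the snapshot and replication schedules
purepgroup enable --snap --replicate REPL-Test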

 

ATTENTION: incorrect handling when mounting cloned datastores can lead to significant disruption of the production environment!

 

Setup Protection Group (Source Array)


(The screenshots of the FlashArray systems are based on Purity 6.0. The GUI may look different in older Purity versions.)


First we create a new Protection Group "REPL-Test" and add the volume to be backed up/replicated.

After that we link the replication target (this must already be configured; the target configuration is not covered in this post. Targets are configured via Storage > Array > Array Connections).

Now the snapshot and replication schedules must be configured. This is done according to the DR specifications/strategy.


Local snapshots are created on the production/source array every 5 minutes and remain there at full granularity for 3 days, so the RPO on the local system is 5 minutes. After that, 1 snapshot per day is kept for another 7 days.

Replication is configured separately. A continuous replication interval of 10 minutes has been defined here, so the maximum RPO is 10 minutes. On the target FlashArray, snapshots are kept at full granularity for 7 days; after that, one snapshot per day is kept for another 14 days.
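
To put these retention settings into numbers: at a 5-minute interval, the source array keeps up to 3 days x 24 h x 12 snapshots/h = 864 snapshots at full granularity, plus 7 daily snapshots thereafter. At a 10-minute replication interval, the target keeps up to 7 days x 24 h x 6 snapshots/h = 1,008 snapshots at full granularity, plus one daily snapshot for each of the following 14 days.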


After activating the snapshot schedule, a baseline snapshot is created first. After that, a snapshot is created every 5 minutes, as configured.

The configuration of the replication on the storage layer is now complete.


The production volume with the VM "PURE-HANA-1" on it is now being replicated.

If we take a look at the target FlashArray and its volumes, we will not yet find the replicated volume "REPL-Test", since it currently exists only as a snapshot of the PG "REPL-Test" and is not yet a volume. Likewise, a look at our DR-ESXi shows neither a datastore nor any storage devices yet.


Make replication volume available


First we have to create a full volume based on the transferred PG snapshots. To do this, we go to Protection > Protection Groups > Target Protection Groups and select the PG "REPL-Test" (the PG name now also carries the prefix of the source array, "PURE-X50-1").

Within the PG "REPL-Test", the recovery point must now be selected from the available snapshots based on the timestamp (usually the most recent one). Select the desired snapshot and start the copy process. I create the volume with the same name as the source volume, "REPL-Test".
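
The same copy step can be sketched on the Purity CLI of the target array. The snapshot suffix ".42" below is purely illustrative and must be replaced with the name of the actual PG snapshot selected above; verify the syntax via purevol --help.

# Copy the per-volume part of the replicated PG snapshot into a full volume (snapshot name illustrative)
purevol copy PURE-X50-1:REPL-Test.42.REPL-Test REPL-Test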

Now we can also find the volume on the target FlashArray under Storage > Volumes.

On the storage side we now have to map the volume to our DR-ESXi "PURE-ESXi-1". A LUN ID is assigned automatically, and we can also see that the volume "REPL-Test" originates from the source/snapshot "PURE-X50-1:REPL-Test" (there is no dependency on the snapshot, though, so the snapshot can be deleted at any time).
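
On the CLI, this host mapping could be sketched as follows, assuming the host object "PURE-ESXi-1" already exists on the target array (flag assumed, verify via purevol --help):

# Connect the copied volume to the DR host; a LUN ID is assigned automatically
purevol connect --host PURE-ESXi-1 REPL-Test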


Mount clone volume/VMFS


Let's take a look at the GUI of the DR-ESXi.


After an HBA rescan, we notice that the cloned volume already appears among the storage devices. However, there is no way to mount volumes with existing VMFS signatures via the ESXi GUI; this is a safety feature within vSphere. If you have a vCenter at the DR site, you can perform these activities through the vCenter GUI.

So we need to work through the ESXi CLI here.


The following command shows us the name of the VMFS datastore and its UUID:

esxcfg-volume -l 

If the volume is not displayed, this may be because it has not yet been mapped to the ESXi host or a rescan of the storage adapter is still pending. You can trigger the rescan via the GUI or the CLI:

esxcfg-rescan <vmhba adapter name, e.g. vmhba1>
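
Alternatively, all storage adapters can be rescanned at once via esxcli:

esxcli storage core adapter rescan --all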

To assign a new signature, use the following command (be aware of the effects mentioned above):

esxcfg-volume -r <UUID (replace with the previously read UUID)>

To mount a datastore with the existing signature, use:

esxcfg-volume -M <UUID (replace with the previously read UUID)>

The -M option mounts a volume persistently. If you use -m instead, the volume is mounted only temporarily and is automatically unmounted when the ESXi host reboots.
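
Put together, the mount sequence could look like this; the UUID shown is purely illustrative and must be replaced with the value from your own output:

# 1. List detected VMFS copies and note the UUID of "REPL-Test"
esxcfg-volume -l
# 2. Persistently mount the copy with its existing signature (UUID illustrative)
esxcfg-volume -M 5e8f1a2b-3c4d5e6f-7a8b-001122334455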

Our volume copy from the target FlashArray has now been persistently mounted on the DR-ESXi. We can check this via the GUI or the CLI.

Looking at the ESXi CLI, we have two commands to verify the operation:

1.) Generates an overview of all volumes with UUID and device name:

esxcli storage vmfs extent list

or:

2.) Lists all volumes that the ESXi host can access. The output includes the file system type and volume information such as volume name, path, and UUID:

esxcli storage filesystem list
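
To check specifically for our example datastore, the output can be filtered:

esxcli storage filesystem list | grep REPL-Test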

Registering the Virtual Machine


Finally, we have to make our VM(s) on the datastore available again. To do this, open the datastore browser (right-click on the corresponding datastore), navigate to the respective VM folder, and register the virtual machine via a right-click on the VM's .vmx file.
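
On a standalone host, the registration can alternatively be done from the CLI; the path below is illustrative and must be adapted to the actual datastore and VM folder names:

# Register the VM from the mounted datastore (path illustrative)
vim-cmd solo/registervm /vmfs/volumes/REPL-Test/PURE-HANA/PURE-HANA.vmx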

The tasks shown may seem quite extensive at first, but this impression is deceptive and owed to the thoroughness of this article; during actual hands-on work, the activities are performed quickly. If you are thinking about automation at this point, there is no way around implementing SRM.


HOWEVER: Pure Storage ActiveDR will also help you here, because ActiveDR integrates a host PreConnect and makes the manual clone/mount tasks obsolete. More on this in the coming weeks.


More info - Links


All officially documented settings, in the GUI as well as in the CLI, can be looked up in the "on-board" user guides of the Pure Storage systems.


Click on "Help" in the Purity main menu.


The User Guide is structured like the main menu and its sections can be expanded. A search function is also integrated, which lets you search for keywords.

WEB: Pure Storage (Pure1) support portal - ticket system and support (requires registered FlashArrays)

PHONE: Pure Storage phone support: GER - (+49) (0)800 7239467; INTERNATIONAL - (+1) 650 7294088

WEB: Pure Storage community

WEB: Pure Storage OFFICIAL blog

This blog thrives on your questions, wishes, and suggestions... every comment is welcome, and I am very grateful for feedback.
