
FlashArray and FlashBlade: Snap to NFS / Snap to FlashBlade

[ NOTE: machine translation with the help of DeepL translator without additional proofreading and spell checking ]


With Purity for FlashArray 5.1, a new feature called "Snap to NFS" was released, one of the features based on Pure Storage's portable snapshot technology.


This encapsulates the metadata together with the data blocks, so a snapshot taken on a Pure Storage FlashArray can be transferred to a heterogeneous NFS storage target from any vendor (i.e. not only Pure Storage technology) and restored to a FlashArray at any time. As is well known, snapshots are also used for test/dev scenarios or cloning operations. "Snap to NFS" extends this functionality so that you do not "burden" your primary systems with snapshot capacity and can offload it to inexpensive (Tier-2/3) storage. The transferred snapshots are compressed (not deduplicated), but stored in a format that is not directly readable by users/applications. A FlashArray system is therefore required to restore snapshots. A restore is possible to any supported FlashArray system (including other models, but at least Purity 5.1.x).

As always, no additional software licenses/costs are required to use this functionality.


Snap to NFS works agentless, i.e. no Pure Storage software is required on the NFS target. Data is already compressed during the transfer, which saves network bandwidth and increases the efficiency of the target. After the initial transfer of the baseline, only the deltas of subsequent volume snapshots (incremental snapshots) are transferred; the delta calculation is done within Purity.


During a restore, Purity already knows which data blocks are present on the FlashArray and only needs to transfer changed/missing blocks. Likewise, deduplication-optimized restores are performed, meaning restored data from the offload target is deduplicated directly during the transfer and thus does not take up valuable space.


"Snap to NFS" is an app/integration which runs in the "heads" of the FlashArray controllers - known as PurityRUN. The overhead required for this is minimal and does not have a large impact (max. 10% of performance) on the primary storage traffic.

A reservation (with lower priority) of 4-8 cores and 8-16 GB RAM is created. If the load on the system can no longer guarantee proper operation of the front-end I/O traffic, the PurityRUN functionalities are throttled.


In practice


Snap to NFS can be managed through the FlashArray GUI or CLI, and also monitored through Pure1. Of course, supported tools can also control operations on the arrays via the REST API.
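As a small illustration of the REST API route, the following Python sketch authenticates against the FlashArray REST API and lists the existing protection groups. It is only a minimal sketch: the management address, API token and API version are placeholders, and the endpoint and field names should be verified against the on-board REST API guide of your array.

# Minimal sketch: talking to the FlashArray REST API with Python/requests.
# Address, API token and API version are placeholders (assumptions) -
# verify endpoints and fields against the on-board REST API guide.
import requests

ARRAY = "https://flasharray.example.local"   # hypothetical management address
API = ARRAY + "/api/1.17"                    # assumed REST API version

s = requests.Session()
s.verify = False  # lab only: self-signed certificate

# Open a REST session with an API token (created under Settings > Users)
s.post(API + "/auth/session", json={"api_token": "YOUR-API-TOKEN"}).raise_for_status()

# List protection groups - the construct that Snap to NFS is built on
for pg in s.get(API + "/pgroup").json():
    print(pg["name"], pg.get("targets"))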


Similar to the asynchronous Pure Storage replication, Snap to NFS uses so-called "Protection Groups".


System requirements


An NFS storage system is required; it can be from any manufacturer, but it must support NFS version 3 or 4 and NLM (Network Lock Manager). As of today, no Windows NFS servers are supported.


In principle, any NFS v3/v4 storage is supported, but there are recommendations: the target should provide sufficient performance.

A slow NFS server drags out backups and restores (a 10 GbE connection is therefore recommended). The NFS target storage should also be adequately protected against data loss, e.g. by RAID.


The provided NFS share needs read/write/execute permissions (r/w/x), in short full access.

If necessary, access to the NFS share can of course be restricted based on the IP address.


Basic setup


The initial configuration must be performed by Pure Storage Support. To request it, simply create a ticket (subject: "Pure Storage PurityRUN Snap to NFS enablement"). In advance, you should prepare a free IP address (with connectivity to the network of the NFS target). Support needs this IP address to set up the offload.


After Remote Assist (RA) has been enabled, the support staff can perform the configuration. The prepared IP address is placed as a virtual interface on top of a replication interface of both controllers (ct0-eth2, ct1-eth2). Important to know: this has no influence on the operation of features such as ActiveCluster!

Finally, both controllers have to be restarted one after the other (no downtime) and "Snap to NFS" is fully usable.


 

INFO: since Purity 5.2.0, PurityRUN* is active by default and contains prepared but deactivated apps, so no resources are wasted when they are not in use.


In the Settings > Software > App Catalog tab, two prepared apps are displayed by default. They can be installed there, but not configured.



PurityRUN* = a KVM virtualization platform for deploying integrations/apps on the Pure Storage system.

 

Configuring the NFS target / FlashBlade


In the course of this blog post we use a FlashBlade as the NFS storage target. It is addressed via the default data interface. You could also consider creating a dedicated virtual interface in a separate network.


If you don't have this IP address at hand, you can easily find it out via Settings > Network:

In the next step we create our own file system/share as the offload destination. I created the share with the descriptive name "offload-FROM-PURE-X50-2" and enabled the NFS service without access restrictions (if the export rules are left empty, r/w/x is granted automatically).



If there are no network restrictions between the systems, this completes the preparation on the FlashBlade side.
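For completeness, the same file system could also be created via the FlashBlade REST API instead of the GUI. The sketch below is assumption-heavy: the login flow (api-token header, x-auth-token response header), the API version, the endpoint path and the payload field names are my reading of the FlashBlade REST documentation and must be checked against your FlashBlade's own API guide; the GUI steps above remain the reference.

# Sketch (assumptions!): creating the offload file system on the FlashBlade via REST.
# Login flow, API version, endpoint and field names must be verified against
# the FlashBlade REST API guide.
import requests

FB = "https://flashblade.example.local"      # hypothetical management address

s = requests.Session()
s.verify = False
# FlashBlade login: exchange an API token for a session token (assumed flow)
login = s.post(FB + "/api/login", headers={"api-token": "YOUR-FB-API-TOKEN"})
s.headers["x-auth-token"] = login.headers["x-auth-token"]

# Create the file system with NFS enabled and open export rules (assumed fields)
s.post(FB + "/api/1.8/file-systems", json={
    "name": "offload-FROM-PURE-X50-2",
    "nfs": {"enabled": True, "rules": "*(rw,no_root_squash)"},
})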


FlashArray configuration


Integrating the offload target / FlashBlade


The prepared offload storage must now be connected to the FlashArray. This is done via Storage > Array > "Connect to NFS Offload Target":

An alias must be specified: as always, I am a fan of unique names, so I chose "offload-TO-FB-01".

In addition, the IP address of the NFS target and the mount point must be specified. The mount options remain empty here (possible customizations include NFS version, port numbers, R/W block sizes, and protocol (TCP/UDP)).


In the example with the FlashBlade used here, the mount point is specified as /<file system name> -> /offload-FROM-PURE-X50-2


The connection is established with "Connect", and the FlashArray then has a connection to the NFS target. The status is shown in the Purity overview. In case of problems, check the network reachability between the systems and the configured permissions.
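The same connection can also be scripted. The sketch below mirrors the GUI fields (alias, address, mount point, mount options) against what I assume to be the nfs_offload endpoint of the REST API; the FlashBlade IP is an example value, and the endpoint and field names are assumptions to be verified in the REST API guide.

# Sketch (endpoint and field names assumed): connecting the NFS offload target.
import requests

API = "https://flasharray.example.local/api/1.17"   # placeholders as before
s = requests.Session(); s.verify = False
s.post(API + "/auth/session", json={"api_token": "YOUR-API-TOKEN"})

# Alias, NFS server address and mount point as in the GUI example above;
# mount_options stays empty (defaults), analogous to leaving the field blank.
s.post(API + "/nfs_offload/offload-TO-FB-01", json={
    "address": "192.0.2.10",                # data IP of the FlashBlade (example value)
    "mount_point": "/offload-FROM-PURE-X50-2",
    "mount_options": "",
})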



Configuring the snapshot offload job


As mentioned earlier, Snap to NFS is based on Protection Groups. All volumes within a Protection Group (hereafter PGROUP) can be replicated to one or more defined targets. Within a PGROUP, volumes, snapshot schedules, replication targets/schedules/periods/windows can be defined.


Therefore, first we create a new PGROUP via Storage > Protection Groups > Create Protection Group (PGROUP must not be a member of a container) with a unique name: "PGROUP-offload-TO-FB-01".


Then we define the volumes to be replicated, the NFS target, snapshot plans and the replication plan.
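If you prefer scripting, the same PGROUP can be created via the REST API. In the sketch below the volume names are made up, and "vollist"/"targetlist" are the parameter names I assume the pgroup endpoint uses; verify them against the REST API guide.

# Sketch: creating the protection group with volumes and the NFS offload target.
# Volume names are examples; parameter names (vollist/targetlist) are assumptions.
import requests

API = "https://flasharray.example.local/api/1.17"   # placeholders as before
s = requests.Session(); s.verify = False
s.post(API + "/auth/session", json={"api_token": "YOUR-API-TOKEN"})

s.post(API + "/pgroup/PGROUP-offload-TO-FB-01", json={
    "vollist": ["local-VOL1", "local-VOL2"],    # hypothetical volume names
    "targetlist": ["offload-TO-FB-01"],         # the offload target alias from above
})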


Setting up the Protection Group



Customizing the Protection Group



The snapshot plan was created according to the following pattern:


A local snapshot is taken every 6 hours (4 snapshots per day), which remains on the source FlashArray for one day. In addition, one daily snapshot remains on the local system for another 7 days.


So only a few "short-usage" snapshots remain on the FlashArray.

The replication schedule, on the other hand, is intended for "long-usage":


Every 4 hours (the smallest possible value), Purity checks whether new snapshots have to be transferred. No excluded period (blackout window) is defined.

On the NFS target, the snapshots remain for 14 days PLUS a daily snapshot for another 7 days.

After the snapshot plan has been defined, the baseline snapshot is taken automatically; once it has completed, only the incremental snapshots mentioned above are created.
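Translated into REST terms, the snapshot and replication plan above could look roughly like the following sketch. The frequency/retention field names (snap_frequency, all_for, days, per_day, target_*) are assumptions based on how protection group schedules are exposed in Purity; all time values are given in seconds.

# Sketch: snapshot plan (every 6 h, keep 1 day + 1/day for 7 days) and
# replication plan (every 4 h, keep 14 days + 1/day for 7 days on the target).
# Field names are assumptions - check them against the REST API guide.
import requests

API = "https://flasharray.example.local/api/1.17"   # placeholders as before
s = requests.Session(); s.verify = False
s.post(API + "/auth/session", json={"api_token": "YOUR-API-TOKEN"})

s.put(API + "/pgroup/PGROUP-offload-TO-FB-01", json={
    "snap_enabled": True,
    "snap_frequency": 6 * 3600,        # local snapshot every 6 hours
    "all_for": 24 * 3600,              # keep all local snapshots for 1 day
    "per_day": 1, "days": 7,           # then 1 snapshot/day for another 7 days
    "replicate_enabled": True,
    "replicate_frequency": 4 * 3600,   # check/transfer every 4 hours (minimum)
    "target_all_for": 14 * 24 * 3600,  # keep offloaded snapshots for 14 days
    "target_per_day": 1, "target_days": 7,  # then 1 snapshot/day for 7 more days
})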


Restrictions


Unfortunately, volumes that are members of a volume group/pod/container currently cannot be members of such a protection group; conversely, these volumes cannot currently be "offloaded". I assume that this functionality will be integrated soon.


The replication interval cannot be set to less than 4 hours with Purity 5.2.3 (current at the time of this blog post).


Manual backups


Regardless of the planned snapshot and replication schedules, PGROUP snapshots can be created and replicated manually at any time. To do this, switch to the respective PGROUP (Storage > Protection Groups) and create a snapshot via "+". Optionally, a suffix can be specified, along with whether the snapshot should be subject to the regular PGROUP retention and whether it should be replicated.


I had created two manual snapshots here. One with suffix, the other without suffix.
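Manual PGROUP snapshots can also be triggered via the REST API. In the sketch below the suffix is only an example, and the parameter names (snap/source/suffix/replicate_now/apply_retention) are assumptions mirroring the GUI options; verify them in the REST API guide.

# Sketch: manual PGROUP snapshot with suffix, immediate replication to the
# NFS target and regular retention applied. Parameter names are assumptions.
import requests

API = "https://flasharray.example.local/api/1.17"   # placeholders as before
s = requests.Session(); s.verify = False
s.post(API + "/auth/session", json={"api_token": "YOUR-API-TOKEN"})

s.post(API + "/pgroup", json={
    "snap": True,
    "source": ["PGROUP-offload-TO-FB-01"],
    "suffix": "manual-before-maintenance",   # optional, hypothetical suffix
    "replicate_now": True,                   # push it to the offload target right away
    "apply_retention": True,                 # subject to the regular PGROUP retention
})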


Monitoring


Besides the FlashArray GUI, you can easily monitor the snapshots and offloads via Pure1. In the Snapshots tab you can list all snapshots via the timeline and get all relevant information (size, target, creation time ...).


It is also possible to work with filters across multiple systems and drill down granularly into all available categories.


Via the Protection > Protection Groups tab and the timeline of the respective PGROUP it is possible to view more information about a snapshot/PGROUP.


Statistics such as latencies, IOPS and bandwidth can be viewed either via Pure1, or as usual on the Flash systems themselves (hereafter: FlashBlade GUI).
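Outside of Pure1 and the GUI, the same information can also be pulled via the REST API, e.g. to feed your own monitoring. The sketch below lists the PGROUP snapshots; the query parameters (snap/transfer) and the JSON field names are assumptions and should be verified against the REST API guide and the actual response of your array.

# Sketch: listing PGROUP snapshots and (assumption) their transfer state
# via the REST API - useful for custom monitoring/alerting.
import requests

API = "https://flasharray.example.local/api/1.17"   # placeholders as before
s = requests.Session(); s.verify = False
s.post(API + "/auth/session", json={"api_token": "YOUR-API-TOKEN"})

snaps = s.get(API + "/pgroup", params={"snap": "true", "transfer": "true"}).json()
for snap in snaps:
    # Field names below are assumptions - inspect the JSON your array returns.
    print(snap.get("name"), snap.get("created"), snap.get("progress"))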




NFS Snap Recovery


Should it really come to restoring a storage snapshot from the NFS storage, the process is quite simple. For "short-term restores": if the snapshot still exists locally on the FlashArray, the NFS snapshot cannot be used (which, as a rule, would not make sense anyway). If you do need to restore from the NFS target, the local snapshot must be removed beforehand (with the "immediate" option).


The restore process looks like this (see the sketch after the list):


1. Restore the snapshot from the NFS storage. In the background, a local copy of the snapshot is created on the FlashArray.

2. Copy the local snapshot to a new volume, or overwrite an existing volume.

3. Mount the volume on the host and access the data.
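Step 2 in particular lends itself to scripting. The sketch below assumes the snapshot has already been pulled back from the NFS target (step 1, e.g. via the Download button described below) and copies the local snapshot copy into a new volume or over an existing one; the snapshot and volume names are made up, and the endpoint/field names (source/overwrite) are assumptions to verify against the REST API guide.

# Sketch for step 2: copy a (locally available) snapshot to a new volume,
# or overwrite an existing volume. Names and field names are assumptions.
import requests

API = "https://flasharray.example.local/api/1.17"   # placeholders as before
s = requests.Session(); s.verify = False
s.post(API + "/auth/session", json={"api_token": "YOUR-API-TOKEN"})

SNAP = "PGROUP-offload-TO-FB-01.daily.local-VOL1"    # hypothetical snapshot name

# Variant a: copy into a new volume
s.post(API + "/volume/copy-FROM-SNAP-local-VOL1", json={"source": SNAP})

# Variant b: overwrite the existing source volume (irrevocable)
s.post(API + "/volume/local-VOL1", json={"source": SNAP, "overwrite": True})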


"Let's restore"


We go to Storage > Array > Offload Targets and select the respective NFS target:


Inside the target we can see the snapshots it contains, and by clicking the Download button we can create a copy of the volume (optionally entering a suffix). With the automatic suffix, the volume is copied according to the following naming pattern: SourceArrayName:VolumeName.restore-of.SnapshotName.


We can then find the backup volume under: Storage > Volumes.

Now we can copy the volume or directly overwrite the source volume.

First we create a copy of the snapshot; a volume name and (optionally) a container must be specified. Existing volumes can be overwritten with the "Overwrite" option. As the volume name I chose "copy-FROM-SNAP-local-VOL1".

Then you could connect the volume directly to a host.

When restoring a snapshot, there is no way to adjust settings; the volume is simply overwritten irrevocably.

Afterwards, this volume can be directly connected to a host.

The snapshot copy is NOT removed automatically; this must be done manually. The copy is not removed from the NFS target: the deletion takes place only on the local FlashArray, and the deleted object is, as usual, retained for 24h in the recycle bin.
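This manual cleanup can likewise be scripted. The sketch below destroys the temporary restore volume and (optionally) eradicates it immediately instead of waiting out the 24h recycle bin; the volume name and the eradicate parameter are assumptions to be checked against the REST API guide.

# Sketch: removing the temporary restore volume manually. Destroy puts it into
# the 24h recycle bin; eradicate (optional) removes it immediately. Assumed names.
import requests

API = "https://flasharray.example.local/api/1.17"   # placeholders as before
s = requests.Session(); s.verify = False
s.post(API + "/auth/session", json={"api_token": "YOUR-API-TOKEN"})

VOL = "copy-FROM-SNAP-local-VOL1"

s.delete(API + "/volume/" + VOL)                                # destroy (recycle bin, 24h)
s.delete(API + "/volume/" + VOL, params={"eradicate": "true"})  # optional: eradicate now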

If you want to view the contents of the share, you can also mount the NFS share directly (on Windows this requires the NFS client feature to be installed).



More info - Links


All officially documented configuration options for both the GUI and the CLI can be found in the "on-board" user guides of the Pure Storage systems.


Click on "Help" in the Purity main menu.


The User Guide is structured like the main menu and its sections can be expanded. A search function is also integrated, so you can search for keywords here as well.

WEB: Pure Storage (Pure1) support portal - Ticket system and support *(requires registered FlashSystems)

PHONE: Pure Storage phone support: GER - (+49) (0)800 7239467; INTERNATIONAL - (+1) 650 7294088

WEB: Pure Storage community

WEB: Pure Storage OFFICIAL blog

The blog lives from your questions, wishes and suggestions...every comment is welcome. I am very grateful for feedback.
