Create an external replication

This page describes how to create an external replication.

Before you begin

Before setting up an external replication, we recommend that you review the external replication workflow. The external replication process starts by creating a destination volume and specifying the source system details. This action creates a destination volume resource and a replication child resource within NetApp Volumes for managing the replication.

Considerations

  • The following features aren't supported for destination volumes during the external replication process:

    • Auto-tiering

    • Volume replication

    • Flex service level

  • You must use manual backups when using the integrated backup service with NetApp Volumes-based destination volumes. Assigning a backup policy to a destination volume fails. For a manual backup example, see the first sketch after this list.

  • Select the correct storage pool and make sure that the destination volume is large enough to accommodate the logical size (not the physical size) used by your ONTAP source volume. For one way to check the logical size on the source, see the second sketch after this list.

  • Specify the correct share name and protocol types. The share name must match the source, and the protocol types can't be changed after volume creation, so choose them carefully. The protocol settings you choose also map to volume security styles, so make sure these settings are consistent with your source volume.

  • Before creating an external replication, make sure you have CLI access and the necessary permissions on the source ONTAP system. You need to run CLI commands on the source ONTAP system within one hour of the replication process.
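Because backup policies aren't supported, trigger manual backups of the destination volume instead. The following is a minimal sketch, assuming an existing backup vault (BACKUP_VAULT is a placeholder); verify the flags with gcloud netapp backup-vaults backups create --help:

gcloud netapp backup-vaults backups create BACKUP_NAME --location=LOCATION \
  --backup-vault=BACKUP_VAULT --source-volume=DESTINATION_VOLUME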
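To check the logical size used by your source volume, you can run a command like the following on the source ONTAP system. This is a sketch; the logical-used field assumes logical space reporting is available in your ONTAP version:

ONTAP> volume show -vserver SOURCE_SVM -volume SOURCE_VOLUME -fields size,used,logical-used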

Create an external replication

Use the following instructions to create an external replication using the Google Cloud CLI.

gcloud

To create an external replication:

gcloud netapp volumes create VOLUME_NAME --location=LOCATION \
  --capacity=CAPACITY --protocols=PROTOCOLS \
  --share-name=SHARE_NAME --storage-pool=STORAGE_POOL \
  --hybrid-replication-parameters=hybrid-replication-type=ONPREM_REPLICATION,peer-cluster-name=PEER_CLUSTER_NAME,peer-ip-addresses=PEER_IP_ADDRESSES,peer-svm-name=PEER_SVM_NAME,peer-volume-name=PEER_VOLUME_NAME,replication=REPLICATION,replication-schedule=REPLICATION_SCHEDULE,cluster-location=CLUSTER_LOCATION,description=DESCRIPTION,labels=LABELS

The hybrid-replication-parameters block starts a replication workflow.

Replace the following information:

  • VOLUME_NAME: the name of the volume. This name must be unique per location.

  • LOCATION: the location for the volume.

  • CAPACITY: the capacity of the volume. It defines the capacity that NAS clients see.

  • PROTOCOLS: the NAS protocols the volume is exported with.

  • SHARE_NAME: the NFS export path or SMB share name of the volume.

  • STORAGE_POOL: the storage pool to create the volume in.

  • HYBRID_REPLICATION_TYPE: for external replication, specify ONPREM_REPLICATION.

  • PEER_CLUSTER_NAME: the name of the ONTAP cluster hosting the source volumes.

  • PEER_IP_ADDRESSES: the intercluster LIF (IC-LIF) IP addresses of the ONTAP cluster. The source cluster must provide one IC-LIF per node; make sure to specify all of them, separated by # signs.

    The following example shows you how to add multiple IC-LIF IP addresses of the ONTAP cluster:

    peer-ip-addresses=10.0.0.25#10.0.0.26
  • PEER_SVM_NAME: the name of the storage virtual machine (SVM), also known as a vserver, that owns the source volume.

  • PEER_VOLUME_NAME: the name of the source volume.

  • REPLICATION: the name of the replication resource to be created.

  • LARGE_VOLUME_CONSTITUENT_COUNT: this parameter is required only when your source volume is a FlexGroup, and is passed within hybrid-replication-parameters as large-volume-constituent-count. For more information, see FlexGroups and Large Volumes before you proceed.

    To create a large volume, also specify --large-volume true and --multiple-endpoints true as create parameters.

  • REPLICATION_SCHEDULE: Optional: you can set the replication schedule to one of the following intervals:

    • EVERY_10_MINUTES

    • HOURLY

    • DAILY

    The default is HOURLY. Large volumes don't support the EVERY_10_MINUTES schedule.

  • CLUSTER_LOCATION: Optional: a description of the source cluster's location.

  • DESCRIPTION: Optional: the description text for the replication resource.

  • LABELS: Optional: labels for the replication resource.

Example invocation:

$ gcloud netapp volumes create ok-destination --location australia-southeast1 \
--capacity 100 --protocols=nfsv3 \
--share-name ok-destination --storage-pool okrause-pool \
--hybrid-replication-parameters=hybrid-replication-type=ONPREM_REPLICATION,peer-cluster-name=au2se1cvo2sqa,peer-ip-addresses=10.0.0.25#10.0.0.26,peer-svm-name=svm_au2se1cvo2sqa,peer-volume-name=okrause_source,replication=okrause-replication,replication-schedule=HOURLY

To meet your volume requirements, specify all applicable optional parameters. For example, an NFS volume might require an export policy.
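For example, the following sketch appends an export policy rule to the create command shown earlier. The client range and rule keys are assumptions for illustration; verify the exact --export-policy syntax with gcloud netapp volumes create --help:

  --export-policy=allowed-clients=10.0.0.0/24,access-type=READ_WRITE,has-root-access=true,nfsv3=true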

Look up all options:

gcloud netapp volumes create --help

After creating the destination volume and the replication resource, NetApp Volumes tries to peer with your source ONTAP system. This peering process serves as an authentication and authorization step, and protects your source cluster from malicious SnapMirror requests. Therefore, make sure you only peer with trusted systems.

Look up the next steps:

gcloud netapp volumes replications list --volume=DESTINATION_VOLUME --location=REGION

You can print the current authentication status at any time. However, after an action advances the process to the next step, the state change might take up to five minutes to appear.
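For example, the following sketch prints the state of a single replication resource, where REPLICATION is the name you set in hybrid-replication-parameters:

gcloud netapp volumes replications describe REPLICATION --volume=DESTINATION_VOLUME --location=REGION \
 --format="table(state,stateDetails)"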

A successful peering consists of the following steps:

  • The NetApp Volumes destination volume pings your source system using the specified peer-ip-addresses.

  • If cluster peering isn't already established, NetApp Volumes prints the cluster peering commands you must run on the source system.

  • If SVM peering isn't already established, NetApp Volumes prints the vserver peering commands you must run on the source system.

Previously completed steps are skipped, and the process automatically continues with the next step.

Network connectivity check

NetApp Volumes tries to send an ICMP (ping) request to the IC-LIFs you specified under peer-ip-addresses. If it fails, stateDetails displays Cluster peering failed, please try again, indicating a network issue. For more information, see Network connection to Google Cloud project. You can't proceed until you establish network connectivity between the source system and NetApp Volumes. For debugging purposes, try to ping the gateway IP of the /28 CIDR that hosts the NetApp Volumes IC-LIFs.

gcloud netapp volumes replications list --volume=DESTINATION_VOLUME --location=REGION \
 --format="table(hybridPeeringDetails.subnetIp)"

This command prints the CIDR. From the source ONTAP system, ping the first IP of that network using one of your source IC-LIFs.

Example:

ONTAP> ping -lif=YOUR_IC_LIF -vserver=VSERVER_HOSTING_SOURCE_VOLUME -destination=FIRST_IP_OF_SUBNET_IP

Cluster peering

If ICMP works, the process proceeds to cluster peering. If peering hasn't been established yet, the state displays PENDING_CLUSTER_PEERING.

Look up cluster-peering instructions:

gcloud netapp volumes replications list --volume=DESTINATION_VOLUME --location=REGION \
 --format="table(hybridPeeringDetails.command,hybridPeeringDetails.passphrase)"

This command outputs the cluster peer create command and the required passphrase. Copy the cluster peer create command, run it on your source cluster, and enter the passphrase twice when prompted.
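NetApp Volumes generates the exact command and passphrase for you. For orientation only, the printed command resembles the following ONTAP CLI call, with placeholders instead of the real IC-LIF addresses; always use the generated version:

ONTAP> cluster peer create -address-family ipv4 -peer-addrs NETAPP_VOLUMES_IC_LIF_1,NETAPP_VOLUMES_IC_LIF_2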

SVM peering

The cluster peer create command from the previous step is expected to also perform the SVM peering automatically. If this doesn't occur, the state changes to PENDING_SVM_PEERING after a few seconds.

Verify the SVM peering:

gcloud netapp volumes replications list --volume=DESTINATION_VOLUME --location=REGION

If the state is PENDING_SVM_PEERING, look up the vserver peering command and run it on your source cluster:

gcloud netapp volumes replications list --volume=DESTINATION_VOLUME --location=REGION \
 --format="table(hybridPeeringDetails.command)"
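The exact vserver peering command also comes from the gcloud output. As an illustration only (the SVM names are placeholders and your printed command may differ), it may resemble:

ONTAP> vserver peer accept -vserver SOURCE_SVM -peer-vserver DESTINATION_SVM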

After a few seconds, the state changes to Ready and mirrorState changes to Preparing, which indicates that the baseline transfer has started. After the baseline transfer finishes, mirrorState changes to Mirrored. Incremental transfers run on the defined replication schedule; while a transfer is running, mirrorState shows Transferring.
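To follow this progression, you can print the state fields with the same list command; a minimal sketch:

gcloud netapp volumes replications list --volume=DESTINATION_VOLUME --location=REGION \
 --format="table(name,state,mirrorState)"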

What's next

Manage external replications.