Last updated (UTC): 2025-08-17.

# Volume migration overview

This page provides an overview of the volume migration feature.

About volume migration
----------------------

The volume migration feature lets you migrate volumes from ONTAP-based sources
to Google Cloud NetApp Volumes using a SnapMirror-based migration. SnapMirror
works at the volume level and can replicate a source volume to a destination
volume on a different system.

SnapMirror offers many advantages over conventional data copy methods:

- It operates over any IP network and is resilient to network issues, supporting
 a wide range of network speeds and latencies.

- It copies only used data.

- After an initial baseline data transfer, all subsequent transfers are
 incremental and copy only changed data. The calculation of changes for
 incremental transfers is exceptionally fast and independent of the type of
 data stored in the volume.

- The transfers retain storage efficiency.
 If the source volume contains
 deduplicated or compressed data, these efficiencies are carried over, reducing
 the amount of data to transfer.

- All transfers are encrypted in transit.

- You can use the source volume without any noticeable performance impact.

- You can use the destination volume in a read-only state after the baseline
 transfer is completed.

- All data, including metadata such as complex access control lists (ACLs) and
 locked files, is transferred.

SnapMirror transfers volumes between ONTAP systems, even across different
geographical locations.

NetApp Volumes already uses SnapMirror for its
[volume replication](/netapp/volumes/docs/protect-data/about-volume-replication)
feature, which allows replication of NetApp Volumes between
different Google Cloud regions. A new subtype of volume replication, called
*hybrid replication*, now supports the migration of ONTAP volumes into
NetApp Volumes.

| **Note:** Hybrid replication is designed for migration, not for ongoing replication, and it doesn't support reversing the mirror direction.

Overview of migration process
-----------------------------

Hybrid replication ensures fast, consistent, and complete data migrations from
source to destination with minimal impact on your production workloads. This
process consists of the following phases:

1. [Authentication](#authentication)

2. [Baseline transfer](#baseline_transfer)

3. [Incremental transfers](#incremental_transfers)

4. [Cutover](#cutover)

5. [Cleanup](#cleanup)

### Authentication

During the authentication phase, the storage administrators of the source ONTAP
system must grant NetApp Volumes permission to fetch a volume
from the source system. This is achieved through administrative steps on the
source ONTAP system, called [*cluster peering*](https://docs.netapp.com/us-en/ontap/peering/peering-basics-concept.html) and *SVM peering*.
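On the source cluster, peering is set up with ONTAP CLI commands. As a rough, hedged sketch of what this phase involves (the actual commands, with the correct IP addresses, passphrase, and SVM and cluster names filled in, are generated for you during migration setup; all names and addresses below are placeholders):

```sh
# Peer the source cluster with the destination
# (placeholder intercluster LIF addresses).
cluster peer create -peer-addrs 10.0.0.11,10.0.0.12

# Peer the source SVM with the destination SVM for SnapMirror use
# (placeholder SVM and peer-cluster names).
vserver peer create -vserver svm_source -peer-vserver svm_destination \
    -peer-cluster gcnv_destination -applications snapmirror
```

Run only the commands generated by the migration workflow; this sketch is illustrative, not a copy-and-paste recipe.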
The volume migration process
generates the ONTAP commands that administrators must run on the source system.

### Baseline transfer

After you set up a migration, a snapshot creates a consistency point on the
source system. All data captured by this snapshot, including older snapshots,
is then transferred to NetApp Volumes during an initial phase
called a *baseline transfer*.

A baseline transfer can take minutes, hours, days, or weeks. The duration
depends on the following:

- The amount of data in the snapshot.

- The network speed between your ONTAP source system and NetApp Volumes.

- The throughput setting of your destination volume.

During the baseline transfer, your source volume continues to serve your
workload while data is added, changed, or deleted. These changes don't affect
the snapshot used as the baseline's consistency point. While the baseline is in
progress, the destination volume isn't available to clients. After the baseline
completes, the destination volume comes online and becomes available for client
access in read-only mode. Note that the destination volume has a different IP
address than the source volume.

Unlike volume replication, volume migration can't read source volume
parameters such as size, protocol choices, and export or snapshot policies.
Therefore, you must configure these settings correctly for the destination
volume.

You can now start mounting or mapping the destination volume to VMs to prepare
for the end of the migration.

### Incremental transfers

After the baseline transfer completes, the migration triggers hourly incremental
transfers.

Each incremental transfer performs the following actions:

1. Takes a new snapshot of your source volume.

2. Calculates the data changes between the current and the previous snapshot.

3.
   Starts transferring these changes to the destination.

If a significant volume of changes has occurred since the baseline snapshot, and
an incremental transfer is still running when the next hourly transfer is
scheduled, the scheduled transfer is skipped. The next incremental transfer then
captures a new source snapshot, deletes the oldest SnapMirror snapshot,
calculates the changes, and transfers them.

Clients that mount the destination volume see a read-only view with static
content. However, once an incremental transfer completes, the volume's
contents are instantly updated from the previous replication snapshot to the
latest one in a single, atomic operation.

Unless the amount of new data added to the source volume exceeds what can be
transferred within an hour, the size of the incremental transfer decreases with
each successful transfer. This process continues until it stabilizes at a rate
defined by the source volume's hourly change rate, which might take a few
iterations. Once this steady state is reached, you can schedule a cutover. The
objective is to reduce the differences between the source and destination
volumes, which minimizes the downtime required during the cutover.

### Cutover

During a cutover, you move your workloads from the source volume to the
destination volume with no data loss (RPO = 0) and minimal downtime (low RTO).
The cutover process consists of the following substeps:

1. [Stop modifications](#stop_modifications)

2. [Wait for current transfer](#wait_for_current_transfer)

3. [Perform a manual incremental transfer](#perform_a_manual_incremental_transfer)

4. [Stop replication](#stop_replication)

5. [Reconfigure and restart applications](#reconfigure_and_restart_applications)

#### Stop modifications

Because incremental transfers are asynchronous, your source volume might contain
changes that are not yet reflected on the destination volume.
To synchronize,
stop all modifications on the source volume by doing the following:

- Stop all applications that modify data.

- Optional: change the volume permissions to read-only to prevent any client
 from modifying the data.

#### Wait for current transfer

Make sure that any running incremental transfer has completed.

#### Perform a manual incremental transfer

Perform a manual incremental transfer to send the latest data to the destination
system. This should take only seconds to minutes, depending on the amount of
data changed since the last transfer, the network speed, and the throughput
limit of the destination volume.

After the manual incremental transfer completes, the latest data is available at
the destination.

#### Stop replication

Run the stop operation on the replication to make your destination volume
read-writable. This completes your data migration.

#### Reconfigure and restart applications

Reconfigure your applications to use the destination volume and then restart
them. Make sure that all data access to the source volume is stopped to
prevent any application from accidentally using the source volume.

### Cleanup

If the cutover is successful, you can perform the following cleanup steps:

1. **Delete the stopped replication**: when you delete the stopped replication,
 the replication resource is deleted, but the destination volume isn't. This
 also deletes the SnapMirror relationship used in the backend with your
 source system.

2. **Remove cluster peering**: if this was the last SnapMirror relationship
 between NetApp Volumes and your source cluster, you can remove
 the cluster peering from the source ONTAP system. Additionally, you can
 remove any networking configured only for migration purposes between the
 source and destination.

What's next
-----------

[Plan the migration](/netapp/volumes/docs/migrate/ontap/plan-the-migration).