[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-08-17。"],[],[],null,["# Plan the migration\n\nThis page provides details about how to plan your migration.\n\nEstimate migration duration for a single volume\n-----------------------------------------------\n\nThe volume migration duration is affected by several factors:\n\n- **Speed of your source volume**: SnapMirror traffic has a lower priority than\n NFS and SMB traffic. High workloads on your source volume can reduce the\n performance of the outgoing SnapMirror traffic.\n\n- **NetApp Volumes throughput**: the service level and volume\n size define its throughput. NFS or SMB traffic on the volume might also reduce\n SnapMirror performance.\n\n- **Network connection throughput**: SnapMirror tries for maximum speed and\n might consume bandwidth shared with other users on the network connection\n between the source ONTAP system and NetApp Volumes.\n\n- **Amount of data used in the source volume**: the larger data volumes require\n more time to transfer.\n\n- **Source volume data change rate**: a higher rate of data changes during the\n migration process increases the time needed for incremental transfers to\n synchronize.\n\nA rough estimate of the migration duration can be calculated with a\nrule-of-thumb.\n\n**Example**\n\nConsider the following scenario for migration duration calculation:\n\n- **Source volume**: 15 TiB capacity with 12 TiB of data used.\n\n - Data to transfer: 12 TiB.\n\n - ONTAP storage efficiency can reduce the transfer size, but you can ignore\n that for this exercise.\n\n - Assume performance capabilities aren't a limiting factor.\n\n- **Change rate**: 10% per day.\n\n - Daily data change rate: 1.2 TiB.\n\n - 10% is an assumption for this example; typical change rates are usually\n much lower.\n\n- **Network connection**: On-premises infrastructure is connected to Google\n using a 10 Gbps interconnect.\n\n - Effective TCP bandwidth: approximately 1000 MiBps, which can be exclusively used.\n- **Destination volume**: 12 TiB volume with a Premium service level.\n\n - Throughput cap: 12 × 64 MiBps = 768 MiBps.\n\n**Calculation**\n\nIn this example, the limiting factor is the destination volume's throughput cap\nof 768 MiBps. The source volume's performance is considered *unlimited*, and\nnetwork bandwidth is 1000 MiBps.\n| **Note:** This calculation represents a best-case scenario. Actual times might vary due to network conditions and other factors.\n\n**Baseline transfer**\n\n- Data to transfer: 12 TiB\n\n- Throughput cap: 768 MiBps\n\n- Time calculation: (12 TiB x 1024\\^2 MiB/TiB) / 768 MiBps = 16384 seconds\n\n- Total time for baseline transfer: 4.6 hours\n\n**First incremental transfer**\n\n- Time elapsed since baseline transfer: 5 hours\n\n- Data change: (12 TiB x 1024 GiB/TiB) \\* 10% \\* (5h/24h) = 256 GiB\n\n- Time calculation:(256 GiB x 1024 MiB/GiB) / 768 MiBps = 341 seconds\n\n- Total time for first incremental transfer: \\~6 minutes\n\n**Second incremental transfer**\n\n- Time elapsed since first incremental transfer: 1 hour\n\n- Data change: (12 TiB x 1024 GiB/TiB) \\* 10% \\* (1h/24h) = 51.2 GiB\n\n- Time calculation: (51.2 GiB x 1024 MiB/GiB) / 768 MiBps = 68 seconds\n\n- Total time for second incremental transfer: \\~70 seconds\n\n**Subsequent incremental transfers**\n\nAfter the first incremental transfer, all subsequent transfers typically take\nless than one hour. Following the second incremental transfer, all subsequent\ntransfers will take approximately the same amount of time.\n\n**Cutover process**\n\nStart the cutover process shortly after an incremental transfer completes to\nminimize the accumulation of changed data.\n\nTotal migration time: approximately 4.7 hours.\n\nRun multiple migrations or external replications in parallel\n------------------------------------------------------------\n\nVolume migrations and external replications are managed in the API as two\nvariations of a *hybrid replication* and consume the same Google project quota.\n\nThe number of configured hybrid replications is limited by a region-specific\nproject quota, which is set to `1` by default. You can request a higher quota\nusing [Google Cloud console](https://console.cloud.google.com/iam-admin/quotas) for the\nNetApp Volumes API. The following are the relevant quotas:\n\n- `netapp.googleapis.com/standard_hybrid_replicated_volumes_per_region`\n\n- `netapp.googleapis.com/hybrid_replicated_volumes_per_region`\n\nIf you need to migrate more volumes than your current quota allows, you must\nperform these operations sequentially. It is recommended to group volumes that\nbelong to the same workload into batches for simultaneous migration, which also\nhelps in cutting them over together.\n\nFor external replication, your project quota must be sufficient to accommodate\nall configured external replications, in addition to any potential volume\nmigrations.\n| **Note:** Running multiple migrations or replications in parallel might over-utilize shared resources, such as a Google interconnect used for migration or replication. This can result in increased migration durations or make replications skip their scheduled replication intervals.\n\nWhat's next\n-----------\n\n[Prerequisites for ONTAP and NetApp Volumes](/netapp/volumes/docs/migrate/ontap/prerequisites-ontap-netapp-volumes)."]]