Manage consistency groups

This document explains how to manage consistency groups. Consistency groups are resource policies that align replication across multiple disks in the same zone or region.

For more information about consistency groups, see About asynchronous replication.

Restrictions

  • Consistency groups aren't supported for disks on sole-tenant nodes.
  • A consistency group can contain a maximum of 128 disks.
  • All disks in a consistency group must be in the same project as the consistency group resource policy.
  • All zonal disks in a consistency group must be in the same zone. All regional disks in a consistency group must be in the same pair of zones.
  • A consistency group can contain either primary disks or secondary disks, but not both.
  • You can't add primary disks to, or remove primary disks from, a consistency group while the disks are replicating. If you want to add or remove a primary disk, stop replication first. You can add or remove secondary disks at any time.
  • You can attach a maximum of 16 disks that are in different consistency groups, or that aren't in any consistency group, to a VM. Disks in the same consistency group count as one disk toward the 16-disk maximum.

Before you begin

  • If you haven't already, set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:

    Select the tab for how you plan to use the samples on this page:

    Console

    When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

    gcloud

    1. After installing the Google Cloud CLI, initialize it by running the following command:

      gcloud init

      If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    2. Set a default region and zone.

    Terraform

    To use the Terraform samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

    1. Install the Google Cloud CLI.
    2. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    3. To initialize the gcloud CLI, run the following command:

      gcloud init
    4. If you're using a local shell, then create local authentication credentials for your user account:

      gcloud auth application-default login

      You don't need to do this if you're using Cloud Shell.

      If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.

    For more information, see Set up authentication for a local development environment.

    REST

    To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

      After installing the Google Cloud CLI, initialize it by running the following command:

      gcloud init

      If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    For more information, see Authenticate for using REST in the Google Cloud authentication documentation.

Create a consistency group

If you need to align replication across multiple disks, create the consistency group in the region where the primary disks are located. If you need to align disk clones, create the consistency group in the region where the secondary disks are located.

Create a consistency group using the Google Cloud console, the Google Cloud CLI, REST, or Terraform.

Console

To create a consistency group, follow these steps:

  1. In the Google Cloud console, go to the Asynchronous replication page.

    Go to Asynchronous replication

  2. Click the Consistency groups tab.

  3. Click Create consistency group.

  4. In the Name field, enter a name for the consistency group.

  5. In the Region field, select the region where your disks are located. If you want to add primary disks to the consistency group, select the primary region. If you want to add secondary disks to the consistency group, select the secondary region.

  6. Click Create.

gcloud

Create a consistency group using the gcloud compute resource-policies create disk-consistency-group command:

gcloud compute resource-policies create disk-consistency-group CONSISTENCY_GROUP_NAME \
    --region=REGION

Replace the following:

  • CONSISTENCY_GROUP_NAME: the name of the consistency group.
  • REGION: the region of the consistency group. If you want to add primary disks to the consistency group, use the primary region. If you want to add secondary disks to the consistency group, use the secondary region.
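
For example, to create a consistency group named my-consistency-group in us-central1 (hypothetical values), you could run:

gcloud compute resource-policies create disk-consistency-group my-consistency-group \
    --region=us-central1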

Go

import (
	"context"
	"fmt"
	"io"

	compute "cloud.google.com/go/compute/apiv1"
	computepb "cloud.google.com/go/compute/apiv1/computepb"
	"google.golang.org/protobuf/proto"
)

// createConsistencyGroup creates a new consistency group for a project in a given region.
func createConsistencyGroup(w io.Writer, projectID, region, groupName string) error {
	// projectID := "your_project_id"
	// region := "europe-west4"
	// groupName := "your_group_name"

	ctx := context.Background()
	disksClient, err := compute.NewResourcePoliciesRESTClient(ctx)
	if err != nil {
		return fmt.Errorf("NewResourcePoliciesRESTClient: %w", err)
	}
	defer disksClient.Close()

	req := &computepb.InsertResourcePolicyRequest{
		Project: projectID,
		ResourcePolicyResource: &computepb.ResourcePolicy{
			Name:                       proto.String(groupName),
			DiskConsistencyGroupPolicy: &computepb.ResourcePolicyDiskConsistencyGroupPolicy{},
		},
		Region: region,
	}

	op, err := disksClient.Insert(ctx, req)
	if err != nil {
		return fmt.Errorf("unable to create group: %w", err)
	}

	if err = op.Wait(ctx); err != nil {
		return fmt.Errorf("unable to wait for the operation: %w", err)
	}

	fmt.Fprintf(w, "Group created\n")

	return nil
}

Java

import com.google.cloud.compute.v1.InsertResourcePolicyRequest;
import com.google.cloud.compute.v1.Operation;
import com.google.cloud.compute.v1.Operation.Status;
import com.google.cloud.compute.v1.ResourcePoliciesClient;
import com.google.cloud.compute.v1.ResourcePolicy;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CreateConsistencyGroup {

  public static void main(String[] args)
          throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    // Project ID or project number of the Cloud project you want to use.
    String project = "YOUR_PROJECT_ID";
    // Name of the region in which you want to create the consistency group.
    String region = "us-central1";
    // Name of the consistency group you want to create.
    String consistencyGroupName = "YOUR_CONSISTENCY_GROUP_NAME";

    createConsistencyGroup(project, region, consistencyGroupName);
  }

  // Creates a new consistency group resource policy in the specified project and region.
  public static Status createConsistencyGroup(
      String project, String region, String consistencyGroupName)
          throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (ResourcePoliciesClient  regionResourcePoliciesClient = ResourcePoliciesClient.create()) {
      ResourcePolicy resourcePolicy =
          ResourcePolicy.newBuilder()
              .setName(consistencyGroupName)
              .setRegion(region)
              .setDiskConsistencyGroupPolicy(
                  ResourcePolicy.newBuilder().getDiskConsistencyGroupPolicy())
              .build();

      InsertResourcePolicyRequest request = InsertResourcePolicyRequest.newBuilder()
              .setProject(project)
              .setRegion(region)
              .setResourcePolicyResource(resourcePolicy)
              .build();

      Operation response =
          regionResourcePoliciesClient.insertAsync(request).get(1, TimeUnit.MINUTES);

      if (response.hasError()) {
        throw new Error("Error creating consistency group! " + response.getError());
      }
      return response.getStatus();
    }
  }
}

Node.js

// Import the Compute library
const computeLib = require('@google-cloud/compute');
const compute = computeLib.protos.google.cloud.compute.v1;

// Instantiate a resourcePoliciesClient
const resourcePoliciesClient = new computeLib.ResourcePoliciesClient();
// Instantiate a regionOperationsClient
const regionOperationsClient = new computeLib.RegionOperationsClient();

/**
 * TODO(developer): Update/uncomment these variables before running the sample.
 */
// The project that contains the consistency group.
const projectId = await resourcePoliciesClient.getProjectId();

// The region for the consistency group.
// If you want to add primary disks to consistency group, use the same region as the primary disks.
// If you want to add secondary disks to the consistency group, use the same region as the secondary disks.
// region = 'europe-central2';

// The name for consistency group.
// consistencyGroupName = 'consistency-group-name';

async function callCreateConsistencyGroup() {
  // Create a resourcePolicyResource
  const resourcePolicyResource = new compute.ResourcePolicy({
    diskConsistencyGroupPolicy:
      new compute.ResourcePolicyDiskConsistencyGroupPolicy({}),
    name: consistencyGroupName,
  });

  const [response] = await resourcePoliciesClient.insert({
    project: projectId,
    region,
    resourcePolicyResource,
  });

  let operation = response.latestResponse;

  // Wait for the create group operation to complete.
  while (operation.status !== 'DONE') {
    [operation] = await regionOperationsClient.wait({
      operation: operation.name,
      project: projectId,
      region,
    });
  }

  console.log(`Consistency group: ${consistencyGroupName} created.`);
}

await callCreateConsistencyGroup();

Python

from __future__ import annotations

import sys
from typing import Any

from google.api_core.extended_operation import ExtendedOperation
from google.cloud import compute_v1


def wait_for_extended_operation(
    operation: ExtendedOperation, verbose_name: str = "operation", timeout: int = 300
) -> Any:
    """
    Waits for the extended (long-running) operation to complete.

    If the operation is successful, it will return its result.
    If the operation ends with an error, an exception will be raised.
    If there were any warnings during the execution of the operation
    they will be printed to sys.stderr.

    Args:
        operation: a long-running operation you want to wait on.
        verbose_name: (optional) a more verbose name of the operation,
            used only during error and warning reporting.
        timeout: how long (in seconds) to wait for operation to finish.
            If None, wait indefinitely.

    Returns:
        Whatever the operation.result() returns.

    Raises:
        This method will raise the exception received from `operation.exception()`
        or RuntimeError if there is no exception set, but there is an `error_code`
        set for the `operation`.

        In case of an operation taking longer than `timeout` seconds to complete,
        a `concurrent.futures.TimeoutError` will be raised.
    """
    result = operation.result(timeout=timeout)

    if operation.error_code:
        print(
            f"Error during {verbose_name}: [Code: {operation.error_code}]: {operation.error_message}",
            file=sys.stderr,
            flush=True,
        )
        print(f"Operation ID: {operation.name}", file=sys.stderr, flush=True)
        raise operation.exception() or RuntimeError(operation.error_message)

    if operation.warnings:
        print(f"Warnings during {verbose_name}:\n", file=sys.stderr, flush=True)
        for warning in operation.warnings:
            print(f" - {warning.code}: {warning.message}", file=sys.stderr, flush=True)

    return result


def create_consistency_group(
    project_id: str, region: str, group_name: str, group_description: str
) -> compute_v1.ResourcePolicy:
    """
    Creates a consistency group in Google Cloud Compute Engine.
    Args:
        project_id (str): The ID of the Google Cloud project.
        region (str): The region where the consistency group will be created.
        group_name (str): The name of the consistency group.
        group_description (str): The description of the consistency group.
    Returns:
        compute_v1.ResourcePolicy: The consistency group object
    """

    # Initialize the ResourcePoliciesClient
    client = compute_v1.ResourcePoliciesClient()

    # Create the ResourcePolicy object with the provided name, description, and policy
    resource_policy_resource = compute_v1.ResourcePolicy(
        name=group_name,
        description=group_description,
        disk_consistency_group_policy=compute_v1.ResourcePolicyDiskConsistencyGroupPolicy(),
    )

    operation = client.insert(
        project=project_id,
        region=region,
        resource_policy_resource=resource_policy_resource,
    )
    wait_for_extended_operation(operation, "Consistency group creation")

    return client.get(project=project_id, region=region, resource_policy=group_name)

REST

Create a consistency group using the resourcePolicies.insert method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT/regions/REGION/resourcePolicies
{
 "name": "CONSISTENCY_GROUP_NAME",
 "diskConsistencyGroupPolicy": {
  }
}

Replace the following:

  • PROJECT: the project that contains the consistency group.
  • REGION: the region of the consistency group. If you want to add primary disks to the consistency group, use the same region as the primary disks. If you want to add secondary disks to the consistency group, use the same region as the secondary disks.
  • CONSISTENCY_GROUP_NAME: the name of the consistency group.
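
For example, with a hypothetical project my-project, region us-central1, and group name my-consistency-group, you could send the request with curl:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{"name": "my-consistency-group", "diskConsistencyGroupPolicy": {}}' \
    "https://compute.googleapis.com/compute/v1/projects/my-project/regions/us-central1/resourcePolicies"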

Terraform

To create a consistency group, use the compute_resource_policy resource.

resource "google_compute_resource_policy" "default" {
  name   = "test-consistency-group"
  region = "us-central1"
  disk_consistency_group_policy {
    enabled = true
  }
}

To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.

View the disks in a consistency group

View the disks in a consistency group using the Google Cloud console, the Google Cloud CLI, or REST.

Console

To view the disks in a consistency group, follow these steps:

  1. In the Google Cloud console, go to the Asynchronous replication page.

    Go to Asynchronous replication

  2. Click the Consistency groups tab.

  3. Click the name of the consistency group whose disks you want to view. The Manage consistency group page opens.

  4. View the Consistency group members section to see all the disks included in the consistency group.

gcloud

View the disks included in a consistency group using the gcloud compute disks list command:

gcloud compute disks list \
    --LOCATION_FLAG=LOCATION \
    --filter=resourcePolicies=CONSISTENCY_GROUP_NAME

Replace the following:

  • LOCATION_FLAG: the location flag for the disks in the consistency group. If the disks in the consistency group are regional disks, use --region. If the disks are zonal disks, use --zone.
  • LOCATION: the region or zone of the disks in the consistency group. For regional disks, use the region. For zonal disks, use the zone.
  • CONSISTENCY_GROUP_NAME: the name of the consistency group.
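
For example, substituting the hypothetical values us-central1-a and my-consistency-group into the command above lists the zonal disks in that group:

gcloud compute disks list \
    --zone=us-central1-a \
    --filter=resourcePolicies=my-consistency-group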

Go

import (
	"context"
	"fmt"
	"io"
	"strings"

	compute "cloud.google.com/go/compute/apiv1"
	computepb "cloud.google.com/go/compute/apiv1/computepb"
	"google.golang.org/api/iterator"
)

// listRegionalConsistencyGroup lists the disks in a consistency group for a project in a given region.
func listRegionalConsistencyGroup(w io.Writer, projectID, region, groupName string) error {
	// projectID := "your_project_id"
	// region := "europe-west4"
	// groupName := "your_group_name"

	if groupName == "" {
		return fmt.Errorf("group name cannot be empty")
	}

	ctx := context.Background()
	// To check for zonal disks in consistency group use compute.NewDisksRESTClient
	disksClient, err := compute.NewRegionDisksRESTClient(ctx)
	if err != nil {
		return fmt.Errorf("NewRegionDisksRESTClient: %w", err)
	}
	defer disksClient.Close()

	// If using zonal disk client, use computepb.ListDisksRequest
	req := &computepb.ListRegionDisksRequest{
		Project: projectID,
		Region:  region,
	}

	it := disksClient.List(ctx, req)
	for {
		disk, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return err
		}

		for _, diskPolicy := range disk.GetResourcePolicies() {
			if strings.Contains(diskPolicy, groupName) {
				fmt.Fprintf(w, "- %s\n", disk.GetName())
			}
		}
	}

	return nil
}

Java

List the zonal disks in a consistency group

import com.google.cloud.compute.v1.Disk;
import com.google.cloud.compute.v1.DisksClient;
import com.google.cloud.compute.v1.ListDisksRequest;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;

public class ListZonalDisksInConsistencyGroup {
  public static void main(String[] args)
          throws IOException, InterruptedException, ExecutionException {
    // TODO(developer): Replace these variables before running the sample.
    // Project ID or project number of the Cloud project you want to use.
    String project = "YOUR_PROJECT_ID";
    // Name of the consistency group.
    String consistencyGroupName = "CONSISTENCY_GROUP_ID";
    // Zone of the disk.
    String disksLocation = "us-central1-a";
    // Region of the consistency group.
    String consistencyGroupLocation = "us-central1";

    listZonalDisksInConsistencyGroup(
            project, consistencyGroupName, consistencyGroupLocation, disksLocation);
  }

  // Lists disks in a consistency group.
  public static List<Disk> listZonalDisksInConsistencyGroup(String project,
      String consistencyGroupName, String consistencyGroupLocation, String disksLocation)
          throws IOException {
    String filter = String
            .format("https://www.googleapis.com/compute/v1/projects/%s/regions/%s/resourcePolicies/%s",
                    project, consistencyGroupLocation, consistencyGroupName);
    List<Disk> disksList = new ArrayList<>();
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (DisksClient disksClient = DisksClient.create()) {
      ListDisksRequest request =
              ListDisksRequest.newBuilder()
                      .setProject(project)
                      .setZone(disksLocation)
                      .build();
      DisksClient.ListPagedResponse response = disksClient.list(request);

      for (Disk disk : response.iterateAll()) {
        if (disk.getResourcePoliciesList().contains(filter)) {
          disksList.add(disk);
        }
      }
    }
    System.out.println(disksList.size());
    return disksList;
  }
}

List the regional disks in a consistency group

import com.google.cloud.compute.v1.Disk;
import com.google.cloud.compute.v1.ListRegionDisksRequest;
import com.google.cloud.compute.v1.RegionDisksClient;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;

public class ListRegionalDisksInConsistencyGroup {
  public static void main(String[] args)
          throws IOException, InterruptedException, ExecutionException {
    // TODO(developer): Replace these variables before running the sample.
    // Project ID or project number of the Cloud project you want to use.
    String project = "YOUR_PROJECT_ID";
    // Name of the consistency group.
    String consistencyGroupName = "CONSISTENCY_GROUP_ID";
    // Region of the disk.
    String disksLocation = "us-central1";
    // Region of the consistency group.
    String consistencyGroupLocation = "us-central1";

    listRegionalDisksInConsistencyGroup(
            project, consistencyGroupName, consistencyGroupLocation, disksLocation);
  }

  // Lists disks in a consistency group.
  public static List<Disk> listRegionalDisksInConsistencyGroup(String project,
      String consistencyGroupName, String consistencyGroupLocation, String disksLocation)
          throws IOException {
    String filter = String
            .format("https://www.googleapis.com/compute/v1/projects/%s/regions/%s/resourcePolicies/%s",
                    project, consistencyGroupLocation, consistencyGroupName);
    List<Disk> disksList = new ArrayList<>();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (RegionDisksClient disksClient = RegionDisksClient.create()) {
      ListRegionDisksRequest request =
              ListRegionDisksRequest.newBuilder()
                      .setProject(project)
                      .setRegion(disksLocation)
                      .build();

      RegionDisksClient.ListPagedResponse response = disksClient.list(request);
      for (Disk disk : response.iterateAll()) {
        if (disk.getResourcePoliciesList().contains(filter)) {
          disksList.add(disk);
        }
      }
    }
    System.out.println(disksList.size());
    return disksList;
  }
}

Node.js

// Import the Compute library
const computeLib = require('@google-cloud/compute');

// If you want to get regional disks, you should use: RegionDisksClient.
// Instantiate a disksClient
const disksClient = new computeLib.DisksClient();

/**
 * TODO(developer): Update/uncomment these variables before running the sample.
 */
// The project that contains the disks.
const projectId = await disksClient.getProjectId();

// If you use RegionDisksClient- define region, if DisksClient- define zone.
// The zone or region of the disks.
// disksLocation = 'europe-central2-a';

// The name of the consistency group.
// consistencyGroupName = 'consistency-group-name';

// The region of the consistency group.
// consistencyGroupLocation = 'europe-central2';

async function callConsistencyGroupDisksList() {
  const filter = `https://www.googleapis.com/compute/v1/projects/${projectId}/regions/${consistencyGroupLocation}/resourcePolicies/${consistencyGroupName}`;

  const [response] = await disksClient.list({
    project: projectId,
    // If you use RegionDisksClient, pass region as an argument instead of zone.
    zone: disksLocation,
  });

  // Filtering must be done manually for now, since list filtering inside disksClient.list is not supported yet.
  const filteredDisks = response.filter(disk =>
    disk.resourcePolicies.includes(filter)
  );

  console.log(JSON.stringify(filteredDisks));
}

await callConsistencyGroupDisksList();

Python

from google.cloud import compute_v1


def list_disks_consistency_group(
    project_id: str,
    disk_location: str,
    consistency_group_name: str,
    consistency_group_region: str,
) -> list:
    """
    Lists disks that are part of a specified consistency group.
    Args:
        project_id (str): The ID of the Google Cloud project.
        disk_location (str): The region or zone of the disk.
        consistency_group_name (str): The name of the consistency group.
        consistency_group_region (str): The region of the consistency group.
    Returns:
        list: A list of disks that are part of the specified consistency group.
    """
    consistency_group_link = (
        f"https://www.googleapis.com/compute/v1/projects/{project_id}/regions/"
        f"{consistency_group_region}/resourcePolicies/{consistency_group_name}"
    )
    # If the final character of the disk_location is a digit, it is a regional disk
    if disk_location[-1].isdigit():
        region_client = compute_v1.RegionDisksClient()
        disks = region_client.list(project=project_id, region=disk_location)
    # For zonal disks we use DisksClient
    else:
        client = compute_v1.DisksClient()
        disks = client.list(project=project_id, zone=disk_location)
    return [disk for disk in disks if consistency_group_link in disk.resource_policies]

REST

View the disks in a consistency group by using a query filter with one of the following methods:

  • View the zonal disks in a consistency group using the disks.list method:

    GET https://compute.googleapis.com/compute/v1/projects/PROJECT/zones/ZONE/disks?filter=resourcePolicies%3DCONSISTENCY_GROUP_NAME

  • View the regional disks in a consistency group using the regionDisks.list method:

    GET https://compute.googleapis.com/compute/v1/projects/PROJECT/regions/REGION/disks?filter=resourcePolicies%3DCONSISTENCY_GROUP_NAME

Replace the following:

  • PROJECT: the project that contains the consistency group
  • ZONE: the zone of the disks in the consistency group
  • REGION: the region of the disks in the consistency group
  • CONSISTENCY_GROUP_NAME: the name of the consistency group
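
For example, to view the zonal disks in us-central1-a that belong to a group named my-consistency-group in a hypothetical project my-project:

GET https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/disks?filter=resourcePolicies%3Dmy-consistency-group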

Add disks to a consistency group

If you want to add primary disks to a consistency group, add the disks to the consistency group before you start replication. You can add secondary disks to a consistency group at any time. All zonal disks in a consistency group must be in the same zone, and all regional disks in a consistency group must be in the same pair of zones.

Add disks to a consistency group using the Google Cloud console, the Google Cloud CLI, REST, or Terraform.

Console

To add disks to a consistency group, follow these steps:

  1. In the Google Cloud console, go to the Asynchronous replication page.

    Go to Asynchronous replication

  2. Click the Consistency groups tab.

  3. Click the name of the consistency group that you want to add disks to. The Manage consistency group page opens.

  4. Click Assign disks. The Assign disks page opens.

  5. Select the disks that you want to add to the consistency group.

  6. Click Assign disks. When prompted, click Add.

gcloud

Add disks to a consistency group using the gcloud compute disks add-resource-policies command:

gcloud compute disks add-resource-policies DISK_NAME \
    --LOCATION_FLAG=LOCATION \
    --resource-policies=CONSISTENCY_GROUP

Replace the following:

  • DISK_NAME: the name of the disk to add to the consistency group.
  • LOCATION_FLAG: the location flag for the disk. For a regional disk, use --region. For a zonal disk, use --zone.
  • LOCATION: the region or zone of the disk. For a regional disk, use the region. For a zonal disk, use the zone.
  • CONSISTENCY_GROUP: the URL of the consistency group. For example: projects/PROJECT/regions/REGION/resourcePolicies/CONSISTENCY_GROUP_NAME.
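
For example, to add a zonal disk named disk-1 in us-central1-a to a group named my-consistency-group in a hypothetical project my-project, you could run:

gcloud compute disks add-resource-policies disk-1 \
    --zone=us-central1-a \
    --resource-policies=projects/my-project/regions/us-central1/resourcePolicies/my-consistency-group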

Go

import (
	"context"
	"fmt"
	"io"

	compute "cloud.google.com/go/compute/apiv1"
	computepb "cloud.google.com/go/compute/apiv1/computepb"
)

// addDiskConsistencyGroup adds a disk to a consistency group for a project in a given region.
func addDiskConsistencyGroup(w io.Writer, projectID, region, groupName, diskName string) error {
	// projectID := "your_project_id"
	// region := "europe-west4"
	// diskName := "your_disk_name"
	// groupName := "your_group_name"

	ctx := context.Background()
	disksClient, err := compute.NewRegionDisksRESTClient(ctx)
	if err != nil {
		return fmt.Errorf("NewResourcePoliciesRESTClient: %w", err)
	}
	defer disksClient.Close()

	consistencyGroupUrl := fmt.Sprintf("projects/%s/regions/%s/resourcePolicies/%s", projectID, region, groupName)

	req := &computepb.AddResourcePoliciesRegionDiskRequest{
		Project: projectID,
		Disk:    diskName,
		RegionDisksAddResourcePoliciesRequestResource: &computepb.RegionDisksAddResourcePoliciesRequest{
			ResourcePolicies: []string{consistencyGroupUrl},
		},
		Region: region,
	}

	op, err := disksClient.AddResourcePolicies(ctx, req)
	if err != nil {
		return fmt.Errorf("unable to add disk: %w", err)
	}

	if err = op.Wait(ctx); err != nil {
		return fmt.Errorf("unable to wait for the operation: %w", err)
	}

	fmt.Fprintf(w, "Disk added\n")

	return nil
}

Java

import com.google.cloud.compute.v1.AddResourcePoliciesDiskRequest;
import com.google.cloud.compute.v1.AddResourcePoliciesRegionDiskRequest;
import com.google.cloud.compute.v1.DisksAddResourcePoliciesRequest;
import com.google.cloud.compute.v1.DisksClient;
import com.google.cloud.compute.v1.Operation;
import com.google.cloud.compute.v1.Operation.Status;
import com.google.cloud.compute.v1.RegionDisksAddResourcePoliciesRequest;
import com.google.cloud.compute.v1.RegionDisksClient;
import java.io.IOException;
import java.util.Arrays;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AddDiskToConsistencyGroup {
  public static void main(String[] args)
          throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    // Project ID or project number of the Cloud project that contains the disk.
    String project = "YOUR_PROJECT_ID";
    // Zone or region of the disk.
    String location = "us-central1";
    // Name of the disk.
    String diskName = "DISK_NAME";
    // Name of the consistency group.
    String consistencyGroupName = "CONSISTENCY_GROUP";
    // Region of the consistency group.
    String consistencyGroupLocation = "us-central1";

    addDiskToConsistencyGroup(
        project, location, diskName, consistencyGroupName, consistencyGroupLocation);
  }

  // Adds a disk to a consistency group.
  public static Status addDiskToConsistencyGroup(
      String project, String location, String diskName,
      String consistencyGroupName, String consistencyGroupLocation)
          throws IOException, ExecutionException, InterruptedException, TimeoutException {
    String consistencyGroupUrl = String.format(
        "https://www.googleapis.com/compute/v1/projects/%s/regions/%s/resourcePolicies/%s",
        project, consistencyGroupLocation, consistencyGroupName);
    Operation response;
    if (Character.isDigit(location.charAt(location.length() - 1))) {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      try (RegionDisksClient disksClient = RegionDisksClient.create()) {
        AddResourcePoliciesRegionDiskRequest request =
                AddResourcePoliciesRegionDiskRequest.newBuilder()
                    .setDisk(diskName)
                    .setRegion(location)
                    .setProject(project)
                    .setRegionDisksAddResourcePoliciesRequestResource(
                            RegionDisksAddResourcePoliciesRequest.newBuilder()
                                .addAllResourcePolicies(Arrays.asList(consistencyGroupUrl))
                                .build())
                    .build();
        response = disksClient.addResourcePoliciesAsync(request).get(1, TimeUnit.MINUTES);
      }
    } else {
      try (DisksClient disksClient = DisksClient.create()) {
        AddResourcePoliciesDiskRequest request =
                AddResourcePoliciesDiskRequest.newBuilder()
                    .setDisk(diskName)
                    .setZone(location)
                    .setProject(project)
                    .setDisksAddResourcePoliciesRequestResource(
                            DisksAddResourcePoliciesRequest.newBuilder()
                                .addAllResourcePolicies(Arrays.asList(consistencyGroupUrl))
                                .build())
                    .build();
        response = disksClient.addResourcePoliciesAsync(request).get(1, TimeUnit.MINUTES);
      }
    }
    if (response.hasError()) {
      throw new Error("Error adding disk to consistency group! " + response.getError());
    }
    return response.getStatus();
  }
}

Node.js

// Import the Compute library
const computeLib = require('@google-cloud/compute');
const compute = computeLib.protos.google.cloud.compute.v1;

// If you want to add regional disk,
// you should use: RegionDisksClient and RegionOperationsClient.
// Instantiate a disksClient
const disksClient = new computeLib.DisksClient();
// Instantiate a zone
const zoneOperationsClient = new computeLib.ZoneOperationsClient();

/**
 * TODO(developer): Update/uncomment these variables before running the sample.
 */
// The project that contains the disk.
const projectId = await disksClient.getProjectId();

// The name of the disk.
// diskName = 'disk-name';

// If you use RegionDisksClient- define region, if DisksClient- define zone.
// The zone or region of the disk.
// diskLocation = 'europe-central2-a';

// The name of the consistency group.
// consistencyGroupName = 'consistency-group-name';

// The region of the consistency group.
// consistencyGroupLocation = 'europe-central2';

async function callAddDiskToConsistencyGroup() {
  const [response] = await disksClient.addResourcePolicies({
    disk: diskName,
    project: projectId,
    // If you use RegionDisksClient, pass region as an argument instead of zone.
    zone: diskLocation,
    disksAddResourcePoliciesRequestResource:
      new compute.DisksAddResourcePoliciesRequest({
        resourcePolicies: [
          `https://www.googleapis.com/compute/v1/projects/${projectId}/regions/${consistencyGroupLocation}/resourcePolicies/${consistencyGroupName}`,
        ],
      }),
  });

  let operation = response.latestResponse;

  // Wait for the add disk operation to complete.
  while (operation.status !== 'DONE') {
    [operation] = await zoneOperationsClient.wait({
      operation: operation.name,
      project: projectId,
      // If you use RegionDisksClient, pass region as an argument instead of zone.
      zone: operation.zone.split('/').pop(),
    });
  }

  console.log(
    `Disk: ${diskName} added to consistency group: ${consistencyGroupName}.`
  );
}

await callAddDiskToConsistencyGroup();

Python

from google.cloud import compute_v1


def add_disk_consistency_group(
    project_id: str,
    disk_name: str,
    disk_location: str,
    consistency_group_name: str,
    consistency_group_region: str,
) -> None:
    """Adds a disk to a specified consistency group.
    Args:
        project_id (str): The ID of the Google Cloud project.
        disk_name (str): The name of the disk to be added.
        disk_location (str): The region or zone of the disk
        consistency_group_name (str): The name of the consistency group.
        consistency_group_region (str): The region of the consistency group.
    Returns:
        None
    """
    consistency_group_link = (
        f"regions/{consistency_group_region}/resourcePolicies/{consistency_group_name}"
    )

    # Checking if the disk is zonal or regional
    # If the final character of the disk_location is a digit, it is a regional disk
    if disk_location[-1].isdigit():
        policy = compute_v1.RegionDisksAddResourcePoliciesRequest(
            resource_policies=[consistency_group_link]
        )
        disk_client = compute_v1.RegionDisksClient()
        disk_client.add_resource_policies(
            project=project_id,
            region=disk_location,
            disk=disk_name,
            region_disks_add_resource_policies_request_resource=policy,
        )
    # For zonal disks we use DisksClient
    else:
        print("Using DisksClient")
        policy = compute_v1.DisksAddResourcePoliciesRequest(
            resource_policies=[consistency_group_link]
        )
        disk_client = compute_v1.DisksClient()
        disk_client.add_resource_policies(
            project=project_id,
            zone=disk_location,
            disk=disk_name,
            disks_add_resource_policies_request_resource=policy,
        )

    print(f"Disk {disk_name} added to consistency group {consistency_group_name}")

REST

Add a disk to a consistency group using one of the following methods:
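
  • To add a zonal disk, use the disks.addResourcePolicies method. The request shown here is a sketch that mirrors the removeResourcePolicies requests later on this page:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT/zones/LOCATION/disks/DISK_NAME/addResourcePolicies
    
    {
    "resourcePolicies": "CONSISTENCY_GROUP"
    }
    
  • To add a regional disk, use the regionDisks.addResourcePolicies method:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT/regions/LOCATION/disks/DISK_NAME/addResourcePolicies
    
    {
    "resourcePolicies": "CONSISTENCY_GROUP"
    }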

Replace the following:

  • PROJECT: the project that contains the disk.
  • LOCATION: the zone or region of the disk. For a zonal disk, use the zone. For a regional disk, use the region.
  • DISK_NAME: the name of the disk to add to the consistency group.
  • CONSISTENCY_GROUP: the URL of the consistency group. For example: projects/PROJECT/regions/REGION/resourcePolicies/CONSISTENCY_GROUP_NAME.

Terraform

To add a disk to a consistency group, use the compute_disk_resource_policy_attachment resource.

  • For regional disks, specify a region instead of a zone.

    resource "google_compute_disk_resource_policy_attachment" "default" {
      name = google_compute_resource_policy.default.name
      disk = google_compute_disk.default.name
      zone = "us-central1-a"
    }

    To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.

Remove disks from a consistency group

Before you can remove a disk from a consistency group, you must stop replication for the disk.

Remove disks from a consistency group using the Google Cloud console, the Google Cloud CLI, or REST.

Console

To remove disks from a consistency group, follow these steps:

  1. In the Google Cloud console, go to the Asynchronous replication page.

    Go to Asynchronous replication

  2. Click the Consistency groups tab.

  3. Click the name of the consistency group that you want to remove disks from. The Manage consistency group page opens.

  4. Select the disks that you want to remove from the consistency group.

  5. Click Remove disks. When prompted, click Remove.

gcloud

Remove disks from a consistency group using the gcloud compute disks remove-resource-policies command:

gcloud compute disks remove-resource-policies DISK_NAME \
    --LOCATION_FLAG=LOCATION \
    --resource-policies=CONSISTENCY_GROUP

Replace the following:

  • DISK_NAME: the name of the disk to remove from the consistency group.
  • LOCATION_FLAG: the location flag for the disk. For a regional disk, use --region. For a zonal disk, use --zone.
  • LOCATION: the region or zone of the disk. For a regional disk, use the region. For a zonal disk, use the zone.
  • CONSISTENCY_GROUP: the URL of the consistency group. For example: projects/PROJECT/regions/REGION/resourcePolicies/CONSISTENCY_GROUP_NAME.
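
For example, to remove a zonal disk named disk-1 in us-central1-a from a group named my-consistency-group in a hypothetical project my-project, you could run:

gcloud compute disks remove-resource-policies disk-1 \
    --zone=us-central1-a \
    --resource-policies=projects/my-project/regions/us-central1/resourcePolicies/my-consistency-group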

Go

import (
	"context"
	"fmt"
	"io"

	compute "cloud.google.com/go/compute/apiv1"
	computepb "cloud.google.com/go/compute/apiv1/computepb"
)

// removeDiskConsistencyGroup removes a disk from consistency group for a project in a given region.
func removeDiskConsistencyGroup(w io.Writer, projectID, region, groupName, diskName string) error {
	// projectID := "your_project_id"
	// region := "europe-west4"
	// diskName := "your_disk_name"
	// groupName := "your_group_name"

	ctx := context.Background()
	disksClient, err := compute.NewRegionDisksRESTClient(ctx)
	if err != nil {
		return fmt.Errorf("NewResourcePoliciesRESTClient: %w", err)
	}
	defer disksClient.Close()

	consistencyGroupUrl := fmt.Sprintf("projects/%s/regions/%s/resourcePolicies/%s", projectID, region, groupName)

	req := &computepb.RemoveResourcePoliciesRegionDiskRequest{
		Project: projectID,
		Disk:    diskName,
		RegionDisksRemoveResourcePoliciesRequestResource: &computepb.RegionDisksRemoveResourcePoliciesRequest{
			ResourcePolicies: []string{consistencyGroupUrl},
		},
		Region: region,
	}

	op, err := disksClient.RemoveResourcePolicies(ctx, req)
	if err != nil {
		return fmt.Errorf("unable to remove disk: %w", err)
	}

	if err = op.Wait(ctx); err != nil {
		return fmt.Errorf("unable to wait for the operation: %w", err)
	}

	fmt.Fprintf(w, "Disk removed\n")

	return nil
}

Java

import com.google.cloud.compute.v1.DisksClient;
import com.google.cloud.compute.v1.DisksRemoveResourcePoliciesRequest;
import com.google.cloud.compute.v1.Operation;
import com.google.cloud.compute.v1.Operation.Status;
import com.google.cloud.compute.v1.RegionDisksClient;
import com.google.cloud.compute.v1.RegionDisksRemoveResourcePoliciesRequest;
import com.google.cloud.compute.v1.RemoveResourcePoliciesDiskRequest;
import com.google.cloud.compute.v1.RemoveResourcePoliciesRegionDiskRequest;
import java.io.IOException;
import java.util.Arrays;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class RemoveDiskFromConsistencyGroup {

  public static void main(String[] args)
          throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    // Project ID or project number of the Cloud project that contains the disk.
    String project = "YOUR_PROJECT_ID";
    // Zone or region of the disk.
    String location = "us-central1";
    // Name of the disk.
    String diskName = "DISK_NAME";
    // Name of the consistency group.
    String consistencyGroupName = "CONSISTENCY_GROUP";
    // Region of the consistency group.
    String consistencyGroupLocation = "us-central1";

    removeDiskFromConsistencyGroup(
        project, location, diskName, consistencyGroupName, consistencyGroupLocation);
  }

  // Removes a disk from a consistency group.
  public static Status removeDiskFromConsistencyGroup(
      String project, String location, String diskName,
      String consistencyGroupName, String consistencyGroupLocation)
          throws IOException, ExecutionException, InterruptedException, TimeoutException {
    String consistencyGroupUrl = String.format(
        "https://www.googleapis.com/compute/v1/projects/%s/regions/%s/resourcePolicies/%s",
        project, consistencyGroupLocation, consistencyGroupName);
    Operation response;
    if (Character.isDigit(location.charAt(location.length() - 1))) {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      try (RegionDisksClient disksClient = RegionDisksClient.create()) {
        RemoveResourcePoliciesRegionDiskRequest request =
            RemoveResourcePoliciesRegionDiskRequest.newBuilder()
                .setDisk(diskName)
                .setRegion(location)
                .setProject(project)
                .setRegionDisksRemoveResourcePoliciesRequestResource(
                    RegionDisksRemoveResourcePoliciesRequest.newBuilder()
                        .addAllResourcePolicies(Arrays.asList(consistencyGroupUrl))
                        .build())
                .build();

        response = disksClient.removeResourcePoliciesAsync(request).get(1, TimeUnit.MINUTES);
      }
    } else {
      try (DisksClient disksClient = DisksClient.create()) {
        RemoveResourcePoliciesDiskRequest request =
            RemoveResourcePoliciesDiskRequest.newBuilder()
                .setDisk(diskName)
                .setZone(location)
                .setProject(project)
                .setDisksRemoveResourcePoliciesRequestResource(
                    DisksRemoveResourcePoliciesRequest.newBuilder()
                        .addAllResourcePolicies(Arrays.asList(consistencyGroupUrl))
                        .build())
                .build();
        response = disksClient.removeResourcePoliciesAsync(request).get(1, TimeUnit.MINUTES);
      }
    }
    if (response.hasError()) {
      throw new Error("Error removing disk from consistency group! " + response.getError());
    }
    return response.getStatus();
  }
}

Node.js

// Import the Compute library
const computeLib = require('@google-cloud/compute');
const compute = computeLib.protos.google.cloud.compute.v1;

// If you want to remove regional disk,
// you should use: RegionDisksClient and RegionOperationsClient.
// Instantiate a disksClient
const disksClient = new computeLib.DisksClient();
// Instantiate a zone
const zoneOperationsClient = new computeLib.ZoneOperationsClient();

/**
 * TODO(developer): Update/uncomment these variables before running the sample.
 */
// The project that contains the disk.
const projectId = await disksClient.getProjectId();

// The name of the disk.
// diskName = 'disk-name';

// If you use RegionDisksClient- define region, if DisksClient- define zone.
// The zone or region of the disk.
// diskLocation = 'europe-central2-a';

// The name of the consistency group.
// consistencyGroupName = 'consistency-group-name';

// The region of the consistency group.
// consistencyGroupLocation = 'europe-central2';

async function callDeleteDiskFromConsistencyGroup() {
  const [response] = await disksClient.removeResourcePolicies({
    disk: diskName,
    project: projectId,
    // If you use RegionDisksClient, pass region as an argument instead of zone.
    zone: diskLocation,
    disksRemoveResourcePoliciesRequestResource:
      new compute.DisksRemoveResourcePoliciesRequest({
        resourcePolicies: [
          `https://www.googleapis.com/compute/v1/projects/${projectId}/regions/${consistencyGroupLocation}/resourcePolicies/${consistencyGroupName}`,
        ],
      }),
  });

  let operation = response.latestResponse;

  // Wait for the delete disk operation to complete.
  while (operation.status !== 'DONE') {
    [operation] = await zoneOperationsClient.wait({
      operation: operation.name,
      project: projectId,
      // If you use RegionDisksClient, pass region as an argument instead of zone.
      zone: operation.zone.split('/').pop(),
    });
  }

  console.log(
    `Disk: ${diskName} deleted from consistency group: ${consistencyGroupName}.`
  );
}

await callDeleteDiskFromConsistencyGroup();

Python

from google.cloud import compute_v1


def remove_disk_consistency_group(
    project_id: str,
    disk_name: str,
    disk_location: str,
    consistency_group_name: str,
    consistency_group_region: str,
) -> None:
    """Removes a disk from a specified consistency group.
    Args:
        project_id (str): The ID of the Google Cloud project.
        disk_name (str): The name of the disk to be deleted.
        disk_location (str): The region or zone of the disk
        consistency_group_name (str): The name of the consistency group.
        consistency_group_region (str): The region of the consistency group.
    Returns:
        None
    """
    consistency_group_link = (
        f"regions/{consistency_group_region}/resourcePolicies/{consistency_group_name}"
    )
    # Checking if the disk is zonal or regional
    # If the final character of the disk_location is a digit, it is a regional disk
    if disk_location[-1].isdigit():
        policy = compute_v1.RegionDisksRemoveResourcePoliciesRequest(
            resource_policies=[consistency_group_link]
        )
        disk_client = compute_v1.RegionDisksClient()
        disk_client.remove_resource_policies(
            project=project_id,
            region=disk_location,
            disk=disk_name,
            region_disks_remove_resource_policies_request_resource=policy,
        )
    # For zonal disks we use DisksClient
    else:
        policy = compute_v1.DisksRemoveResourcePoliciesRequest(
            resource_policies=[consistency_group_link]
        )
        disk_client = compute_v1.DisksClient()
        disk_client.remove_resource_policies(
            project=project_id,
            zone=disk_location,
            disk=disk_name,
            disks_remove_resource_policies_request_resource=policy,
        )

    print(f"Disk {disk_name} removed from consistency group {consistency_group_name}")

REST

Remove a disk from a consistency group using the disks.removeResourcePolicies method for zonal disks, or the regionDisks.removeResourcePolicies method for regional disks.

  • Remove a zonal disk from a consistency group:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT/zones/LOCATION/disks/DISK_NAME/removeResourcePolicies
    
    {
    "resourcePolicies": "CONSISTENCY_GROUP"
    }
    
  • Remove a regional disk from a consistency group:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT/regions/LOCATION/disks/DISK_NAME/removeResourcePolicies
    
    {
    "resourcePolicies": "CONSISTENCY_GROUP"
    }
    

Replace the following:

  • PROJECT: the project that contains the disk.
  • LOCATION: the zone or region of the disk. For a zonal disk, use the zone. For a regional disk, use the region.
  • DISK_NAME: the name of the disk to remove from the consistency group.
  • CONSISTENCY_GROUP: the URL of the consistency group. For example: projects/PROJECT/regions/REGION/resourcePolicies/CONSISTENCY_GROUP_NAME.
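
For example, to remove a zonal disk named disk-1 in us-central1-a from a group named my-consistency-group in a hypothetical project my-project, the request would look like this:

POST https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/disks/disk-1/removeResourcePolicies

{
"resourcePolicies": "projects/my-project/regions/us-central1/resourcePolicies/my-consistency-group"
}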

Delete a consistency group

Delete a consistency group using the Google Cloud console, the Google Cloud CLI, or REST.

Console

To delete a consistency group, follow these steps:

  1. In the Google Cloud console, go to the Asynchronous replication page.

    Go to Asynchronous replication

  2. Click the Consistency groups tab.

  3. Select the consistency group that you want to delete.

  4. Click Delete. The Delete consistency group window opens.

  5. Click Delete.

gcloud

Delete a consistency group using the gcloud compute resource-policies delete command:

gcloud compute resource-policies delete CONSISTENCY_GROUP \
    --region=REGION

Replace the following:

  • CONSISTENCY_GROUP: the name of the consistency group
  • REGION: the region of the consistency group
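
For example, to delete a group named my-consistency-group in us-central1 (hypothetical values), you could run:

gcloud compute resource-policies delete my-consistency-group \
    --region=us-central1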

Go

import (
	"context"
	"fmt"
	"io"

	compute "cloud.google.com/go/compute/apiv1"
	computepb "cloud.google.com/go/compute/apiv1/computepb"
)

// deleteConsistencyGroup deletes consistency group for a project in a given region.
func deleteConsistencyGroup(w io.Writer, projectID, region, groupName string) error {
	// projectID := "your_project_id"
	// region := "europe-west4"
	// groupName := "your_group_name"

	ctx := context.Background()
	disksClient, err := compute.NewResourcePoliciesRESTClient(ctx)
	if err != nil {
		return fmt.Errorf("NewResourcePoliciesRESTClient: %w", err)
	}
	defer disksClient.Close()

	req := &computepb.DeleteResourcePolicyRequest{
		Project:        projectID,
		ResourcePolicy: groupName,
		Region:         region,
	}

	op, err := disksClient.Delete(ctx, req)
	if err != nil {
		return fmt.Errorf("unable to delete group: %w", err)
	}

	if err = op.Wait(ctx); err != nil {
		return fmt.Errorf("unable to wait for the operation: %w", err)
	}

	fmt.Fprintf(w, "Group deleted\n")

	return nil
}

Java

import com.google.cloud.compute.v1.Operation;
import com.google.cloud.compute.v1.Operation.Status;
import com.google.cloud.compute.v1.ResourcePoliciesClient;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class DeleteConsistencyGroup {

  public static void main(String[] args)
          throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    // Project ID or project number of the Cloud project you want to use.
    String project = "YOUR_PROJECT_ID";
    // Region in which your consistency group is located.
    String region = "us-central1";
    // Name of the consistency group you want to delete.
    String consistencyGroupName = "YOUR_CONSISTENCY_GROUP_NAME";

    deleteConsistencyGroup(project, region, consistencyGroupName);
  }

  // Deletes a consistency group resource policy in the specified project and region.
  public static Status deleteConsistencyGroup(
      String project, String region, String consistencyGroupName)
          throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (ResourcePoliciesClient resourcePoliciesClient = ResourcePoliciesClient.create()) {
      Operation response = resourcePoliciesClient
          .deleteAsync(project, region, consistencyGroupName).get(1, TimeUnit.MINUTES);

      if (response.hasError()) {
        throw new Error("Error deleting disk! " + response.getError());
      }
      return response.getStatus();
    }
  }
}

Node.js


// Import the Compute library
const computeLib = require('@google-cloud/compute');

// Instantiate a resourcePoliciesClient
const resourcePoliciesClient = new computeLib.ResourcePoliciesClient();
// Instantiate a regionOperationsClient
const regionOperationsClient = new computeLib.RegionOperationsClient();

/**
 * TODO(developer): Update/uncomment these variables before running the sample.
 */
// The project that contains the consistency group.
const projectId = await resourcePoliciesClient.getProjectId();

// The region of the consistency group.
// region = 'europe-central2';

// The name of the consistency group.
// consistencyGroupName = 'consistency-group-name';

async function callDeleteConsistencyGroup() {
  // Delete a resourcePolicyResource
  const [response] = await resourcePoliciesClient.delete({
    project: projectId,
    region,
    resourcePolicy: consistencyGroupName,
  });

  let operation = response.latestResponse;

  // Wait for the delete group operation to complete.
  while (operation.status !== 'DONE') {
    [operation] = await regionOperationsClient.wait({
      operation: operation.name,
      project: projectId,
      region,
    });
  }

  console.log(`Consistency group: ${consistencyGroupName} deleted.`);
}

await callDeleteConsistencyGroup();

Python

from __future__ import annotations

import sys
from typing import Any

from google.api_core.extended_operation import ExtendedOperation
from google.cloud import compute_v1


def wait_for_extended_operation(
    operation: ExtendedOperation, verbose_name: str = "operation", timeout: int = 300
) -> Any:
    """
    Waits for the extended (long-running) operation to complete.

    If the operation is successful, it will return its result.
    If the operation ends with an error, an exception will be raised.
    If there were any warnings during the execution of the operation
    they will be printed to sys.stderr.

    Args:
        operation: a long-running operation you want to wait on.
        verbose_name: (optional) a more verbose name of the operation,
            used only during error and warning reporting.
        timeout: how long (in seconds) to wait for operation to finish.
            If None, wait indefinitely.

    Returns:
        Whatever the operation.result() returns.

    Raises:
        This method will raise the exception received from `operation.exception()`
        or RuntimeError if there is no exception set, but there is an `error_code`
        set for the `operation`.

        In case of an operation taking longer than `timeout` seconds to complete,
        a `concurrent.futures.TimeoutError` will be raised.
    """
    result = operation.result(timeout=timeout)

    if operation.error_code:
        print(
            f"Error during {verbose_name}: [Code: {operation.error_code}]: {operation.error_message}",
            file=sys.stderr,
            flush=True,
        )
        print(f"Operation ID: {operation.name}", file=sys.stderr, flush=True)
        raise operation.exception() or RuntimeError(operation.error_message)

    if operation.warnings:
        print(f"Warnings during {verbose_name}:\n", file=sys.stderr, flush=True)
        for warning in operation.warnings:
            print(f" - {warning.code}: {warning.message}", file=sys.stderr, flush=True)

    return result


def delete_consistency_group(project_id: str, region: str, group_name: str) -> None:
    """
    Deletes a consistency group in Google Cloud Compute Engine.
    Args:
        project_id (str): The ID of the Google Cloud project.
        region (str): The region where the consistency group is located.
        group_name (str): The name of the consistency group to delete.
    Returns:
        None
    """

    # Initialize the ResourcePoliciesClient
    client = compute_v1.ResourcePoliciesClient()

    # Delete the (consistency group) from the specified project and region
    operation = client.delete(
        project=project_id,
        region=region,
        resource_policy=group_name,
    )
    wait_for_extended_operation(operation, "Consistency group deletion")

REST

Delete a consistency group using the resourcePolicies.delete method:

DELETE https://compute.googleapis.com/compute/v1/projects/PROJECT/regions/REGION/resourcePolicies/CONSISTENCY_GROUP_NAME

Replace the following:

  • PROJECT: the project that contains the consistency group
  • REGION: the region of the consistency group
  • CONSISTENCY_GROUP_NAME: the name of the consistency group
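
For example, to delete a group named my-consistency-group in us-central1 of a hypothetical project my-project:

DELETE https://compute.googleapis.com/compute/v1/projects/my-project/regions/us-central1/resourcePolicies/my-consistency-group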

What's next