Create Hyperdisk Storage Pools

Hyperdisk Storage Pools are a block storage resource that lets you manage Hyperdisk block storage in aggregate. Hyperdisk Storage Pools are available in Hyperdisk Throughput Storage Pool and Hyperdisk Balanced Storage Pool variants.

When you create a storage pool, you must specify the following properties:

  • Zone
  • Storage pool type
  • Capacity provisioning type
  • Provisioned pool capacity
  • Performance provisioning type
  • Provisioned pool IOPS and throughput

You can use the standard capacity, advanced capacity, standard performance, or advanced performance provisioning types with Hyperdisk Storage Pools:

  • Standard capacity: The capacity provisioned for each disk created in the storage pool is deducted from the pool's total provisioned capacity.
  • Advanced capacity: The storage pool benefits from thin provisioning and data reduction. Only the amount of data actually written to disks is deducted from the pool's total provisioned capacity.
  • Standard performance: The performance provisioned for each disk created in the storage pool is deducted from the pool's total provisioned performance.
  • Advanced performance: The performance allocated to each disk benefits from thin provisioning. Only the performance that a disk actually uses is deducted from the pool's total provisioned performance.
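The difference between the capacity types comes down to what gets deducted from the pool: provisioned disk sizes (standard) versus bytes actually written (advanced). A rough accounting sketch, with made-up illustrative numbers:

```shell
# Illustrative accounting sketch, not a gcloud command; the numbers are made up.
pool_capacity_gib=102400        # a 100 TiB pool
disk_size_gib=10240             # each disk is provisioned at 10 TiB
disk_count=10
written_gib_per_disk=2048       # only 2 TiB actually written to each disk

# Standard capacity: every disk's full provisioned size is deducted.
standard_used=$((disk_size_gib * disk_count))
echo "standard capacity used: ${standard_used} GiB"   # 102400 GiB: the pool is full

# Advanced capacity: only data actually written is deducted.
advanced_used=$((written_gib_per_disk * disk_count))
echo "advanced capacity used: ${advanced_used} GiB"   # 20480 GiB
```

Under standard capacity the ten thinly used disks exhaust the pool; under advanced capacity most of the pool's provisioned capacity remains available for additional disks.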

Before you begin

  • If you haven't already, set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:

    Select the tab for how you plan to use the samples on this page:

    Console

    When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

    gcloud

    1. After installing the Google Cloud CLI, initialize it by running the following command:

      gcloud init

      If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    2. Set a default region and zone.

    Go

    To use the Go samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

    1. Install the Google Cloud CLI.
    2. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    3. To initialize the gcloud CLI, run the following command:

      gcloud init
    4. If you're using a local shell, then create local authentication credentials for your user account:

      gcloud auth application-default login

      You don't need to do this if you're using Cloud Shell.

      If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.

    For more information, see Set up authentication for a local development environment.

    Java

    To use the Java samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

    1. Install the Google Cloud CLI.
    2. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    3. To initialize the gcloud CLI, run the following command:

      gcloud init
    4. If you're using a local shell, then create local authentication credentials for your user account:

      gcloud auth application-default login

      You don't need to do this if you're using Cloud Shell.

      If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.

    For more information, see Set up authentication for a local development environment.

    Node.js

    To use the Node.js samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

    1. Install the Google Cloud CLI.
    2. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    3. To initialize the gcloud CLI, run the following command:

      gcloud init
    4. If you're using a local shell, then create local authentication credentials for your user account:

      gcloud auth application-default login

      You don't need to do this if you're using Cloud Shell.

      If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.

    For more information, see Set up authentication for a local development environment.

    REST

    To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

      After installing the Google Cloud CLI, initialize it by running the following command:

      gcloud init

      If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    For more information, see Authenticate for using REST in the Google Cloud authentication documentation.

Required roles and permissions

To get the permissions that you need to create a storage pool, ask your administrator to grant you the following IAM roles on the project:

  • Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1)
  • To connect to VM instances that can run as a service account: Service Account User (v1) (roles/iam.serviceAccountUser)

For more information about granting roles, see Manage access to projects, folders, and organizations.

These predefined roles contain the permissions required to create a storage pool. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

The following permissions are required to create a storage pool:

  • compute.storagePools.create on the project
  • compute.storagePools.setLabels on the project

You might also be able to get these permissions with custom roles or other predefined roles.
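An administrator can grant one of these roles from the CLI with `gcloud projects add-iam-policy-binding`; the project ID and member address below are illustrative placeholders:

```shell
# Grant the Compute Instance Admin (v1) role on the project.
# Replace my-project and the member with your own values.
gcloud projects add-iam-policy-binding my-project \
    --member="user:alex@example.com" \
    --role="roles/compute.instanceAdmin.v1"
```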

Restrictions

Note the following restrictions when creating Hyperdisk Storage Pools:

Resource restrictions

  • You can create a Hyperdisk Storage Pool with up to 5 PiB of provisioned capacity.
  • You can create at most 5 storage pools per hour.
  • You can create at most 10 storage pools per day.
  • You can create at most 10 storage pools per project.
  • You can't change a pool's provisioning type; you can't change a standard capacity storage pool to an advanced capacity storage pool, and you can't change an advanced performance storage pool to a standard performance storage pool.
  • Storage pools are zonal resources.
  • You can create at most 1,000 disks in a storage pool.
  • Hyperdisk Storage Pools can be used only with Compute Engine. Cloud SQL instances can't use Hyperdisk Storage Pools.
  • You can change a storage pool's provisioned capacity or performance at most twice in a 24-hour period.

Restrictions for disks in storage pools

  • You can create new disks in a storage pool only in the same project and zone as the pool.
  • You can't move disks into or out of a storage pool. To move a disk into or out of a storage pool, you must recreate the disk from a snapshot. For more information, see Change the disk type.
  • To create boot disks in a storage pool, you must use a Hyperdisk Balanced Storage Pool.
  • Storage pools don't support regional disks.
  • You can't use disk clones, instant snapshots, or asynchronous replication with disks in a storage pool.
  • Hyperdisk Balanced disks in a storage pool can't be attached to multiple compute instances.

Capacity ranges and provisioned performance limits

When you create a storage pool, the provisioned capacity, IOPS, and throughput are subject to the limits described in Limits for storage pools.

Create a Hyperdisk Storage Pool

To create a new Hyperdisk Storage Pool, use the Google Cloud console, the Google Cloud CLI, or REST.

Console

  1. In the Google Cloud console, go to the Create a storage pool page.
    Go to Create a storage pool
  2. In the Name field, enter a unique name for the storage pool.
  3. Optional: In the Description field, enter a description for the storage pool.
  4. Select the Region and Zone in which you want to create the storage pool.
  5. Choose a value for Storage pool type.
  6. Choose a provisioning type in the Capacity type field, and then specify the capacity to provision for the storage pool in the Storage pool capacity field. You can specify a size from 10 TiB to 1 PiB.

    To create a storage pool with a large capacity, you might need to request a quota increase.

  7. Choose a provisioning type in the Performance type field.

  8. For Hyperdisk Balanced Storage Pools, enter the IOPS to provision for the storage pool in the Provisioned IOPS field.

  9. For Hyperdisk Throughput or Hyperdisk Balanced Storage Pools, enter the throughput to provision for the storage pool in the Provisioned throughput field.

  10. Click Submit to create the storage pool.
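After the pool is created, you can confirm its properties from the CLI; the pool name, zone, and output fields below are illustrative:

```shell
# Inspect the new storage pool; replace the name and zone with your own.
gcloud compute storage-pools describe my-storage-pool \
    --zone=us-central1-a \
    --format="yaml(name, storagePoolType, poolProvisionedCapacityGb)"
```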

gcloud

To create a Hyperdisk Storage Pool, use the gcloud compute storage-pools create command:

gcloud compute storage-pools create NAME  \
    --zone=ZONE   \
    --storage-pool-type=STORAGE_POOL_TYPE   \
    --capacity-provisioning-type=CAPACITY_TYPE \
    --provisioned-capacity=POOL_CAPACITY   \
    --performance-provisioning-type=PERFORMANCE_TYPE \
    --provisioned-iops=IOPS   \
    --provisioned-throughput=THROUGHPUT   \
    --description=DESCRIPTION

Replace the following:

  • NAME: a unique name for the storage pool.
  • ZONE: the zone in which to create the storage pool, for example us-central1-a.
  • STORAGE_POOL_TYPE: the type of disks that the storage pool can contain. The allowed values are hyperdisk-throughput and hyperdisk-balanced.
  • CAPACITY_TYPE: Optional: the capacity provisioning type of the storage pool. The allowed values are advanced and standard. If not specified, the value advanced is used.
  • POOL_CAPACITY: the total capacity to provision for the new storage pool, specified in GiB by default.
  • PERFORMANCE_TYPE: Optional: the performance provisioning type of the storage pool. The allowed values are advanced and standard. If not specified, the value advanced is used.
  • IOPS: the IOPS to provision for the storage pool. You can use this flag only with Hyperdisk Balanced Storage Pools.
  • THROUGHPUT: the throughput in MB/s to provision for the storage pool.
  • DESCRIPTION: Optional: a text string that describes the storage pool.
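With the placeholders filled in, a concrete invocation might look like the following; the pool name, zone, and capacity and performance figures are illustrative values, not recommendations:

```shell
# Create a Hyperdisk Balanced storage pool with 10 TiB of capacity,
# using advanced provisioning for both capacity and performance.
gcloud compute storage-pools create my-storage-pool \
    --zone=us-central1-a \
    --storage-pool-type=hyperdisk-balanced \
    --capacity-provisioning-type=advanced \
    --provisioned-capacity=10240 \
    --performance-provisioning-type=advanced \
    --provisioned-iops=10000 \
    --provisioned-throughput=1024
```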

REST

Construct a POST request to the storagePools.insert method to create a Hyperdisk Storage Pool:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/storagePools

{
    "name": "NAME",
    "description": "DESCRIPTION",
    "poolProvisionedCapacityGb": "POOL_CAPACITY",
    "storagePoolType": "projects/PROJECT_ID/zones/ZONE/storagePoolTypes/STORAGE_POOL_TYPE",
    "poolProvisionedIops": "IOPS",
    "poolProvisionedThroughput": "THROUGHPUT",
    "capacityProvisioningType": "CAPACITY_TYPE",
    "performanceProvisioningType": "PERFORMANCE_TYPE"
}

Replace the following:

  • PROJECT_ID: the project ID.
  • ZONE: the zone in which to create the storage pool, for example us-central1-a.
  • NAME: a unique name for the storage pool.
  • DESCRIPTION: Optional: a text string that describes the storage pool.
  • POOL_CAPACITY: the total capacity to provision for the new storage pool, specified in GiB by default.
  • STORAGE_POOL_TYPE: the type of disks that the storage pool can contain. The allowed values are hyperdisk-throughput and hyperdisk-balanced.
  • IOPS: Optional: the IOPS to provision for the storage pool. You can use this field only with Hyperdisk Balanced Storage Pools.
  • THROUGHPUT: Optional: the throughput in MB/s to provision for the storage pool.
  • CAPACITY_TYPE: Optional: the capacity provisioning type of the storage pool. The allowed values are advanced and standard. If not specified, the value advanced is used.
  • PERFORMANCE_TYPE: Optional: the performance provisioning type of the storage pool. The allowed values are advanced and standard. If not specified, the value advanced is used.
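One way to send the request is with curl, using an access token from the gcloud CLI; the project, zone, pool name, and numeric values below are illustrative:

```shell
# Send the insert request; replace my-project, the zone, and the values
# with your own before running.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/storagePools" \
  -d '{
    "name": "my-storage-pool",
    "poolProvisionedCapacityGb": "10240",
    "storagePoolType": "projects/my-project/zones/us-central1-a/storagePoolTypes/hyperdisk-balanced",
    "poolProvisionedIops": "10000",
    "poolProvisionedThroughput": "1024",
    "capacityProvisioningType": "advanced",
    "performanceProvisioningType": "advanced"
  }'
```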

Go


import (
	"context"
	"fmt"
	"io"

	compute "cloud.google.com/go/compute/apiv1"
	computepb "cloud.google.com/go/compute/apiv1/computepb"
	"google.golang.org/protobuf/proto"
)

// createHyperdiskStoragePool creates a new Hyperdisk storage pool in the specified project and zone.
func createHyperdiskStoragePool(w io.Writer, projectId, zone, storagePoolName, storagePoolType string) error {
	// projectID := "your_project_id"
	// zone := "europe-west4-b"
	// storagePoolName := "your_storage_pool_name"
	// storagePoolType := "projects/your_project_id/zones/europe-west4-b/storagePoolTypes/hyperdisk-balanced"

	ctx := context.Background()
	client, err := compute.NewStoragePoolsRESTClient(ctx)
	if err != nil {
		return fmt.Errorf("NewStoragePoolsRESTClient: %v", err)
	}
	defer client.Close()

	// Create the storage pool resource
	resource := &computepb.StoragePool{
		Name:                        proto.String(storagePoolName),
		Zone:                        proto.String(zone),
		StoragePoolType:             proto.String(storagePoolType),
		CapacityProvisioningType:    proto.String("advanced"),
		PerformanceProvisioningType: proto.String("advanced"),
		PoolProvisionedCapacityGb:   proto.Int64(10240),
		PoolProvisionedIops:         proto.Int64(10000),
		PoolProvisionedThroughput:   proto.Int64(1024),
	}

	// Create the insert storage pool request
	req := &computepb.InsertStoragePoolRequest{
		Project:             projectId,
		Zone:                zone,
		StoragePoolResource: resource,
	}

	// Send the insert storage pool request
	op, err := client.Insert(ctx, req)
	if err != nil {
		return fmt.Errorf("Insert storage pool request failed: %v", err)
	}

	// Wait for the insert storage pool operation to complete
	if err = op.Wait(ctx); err != nil {
		return fmt.Errorf("unable to wait for the operation: %w", err)
	}

	// Retrieve and return the created storage pool
	storagePool, err := client.Get(ctx, &computepb.GetStoragePoolRequest{
		Project:     projectId,
		Zone:        zone,
		StoragePool: storagePoolName,
	})
	if err != nil {
		return fmt.Errorf("Get storage pool request failed: %v", err)
	}

	fmt.Fprintf(w, "Hyperdisk Storage Pool created: %v\n", storagePool.GetName())
	return nil
}

Java

import com.google.cloud.compute.v1.InsertStoragePoolRequest;
import com.google.cloud.compute.v1.Operation;
import com.google.cloud.compute.v1.StoragePool;
import com.google.cloud.compute.v1.StoragePoolsClient;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CreateHyperdiskStoragePool {
  public static void main(String[] args)
          throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    // Project ID or project number of the Google Cloud project you want to use.
    String projectId = "YOUR_PROJECT_ID";
    // Name of the zone in which you want to create the storagePool.
    String zone = "us-central1-a";
    // Name of the storagePool you want to create.
    String storagePoolName = "YOUR_STORAGE_POOL_NAME";
    // The type of disk you want to create.
    // Storage types can be "hyperdisk-throughput" or "hyperdisk-balanced"
    String storagePoolType = String.format(
        "projects/%s/zones/%s/storagePoolTypes/hyperdisk-balanced", projectId, zone);
    // Optional: the capacity provisioning type of the storage pool.
    // The allowed values are advanced and standard. If not specified, the value advanced is used.
    String capacityProvisioningType = "advanced";
    // The total capacity to provision for the new storage pool, specified in GiB by default.
    long provisionedCapacity = 128;
    // the IOPS to provision for the storage pool.
    // You can use this flag only with Hyperdisk Balanced Storage Pools.
    long provisionedIops = 3000;
    // the throughput in MBps to provision for the storage pool.
    long provisionedThroughput = 140;
    // The allowed values are the lowercase strings "advanced" and "standard".
    // If not specified, "advanced" is used.
    String performanceProvisioningType = "advanced";

    createHyperdiskStoragePool(projectId, zone, storagePoolName, storagePoolType,
            capacityProvisioningType, provisionedCapacity, provisionedIops,
        provisionedThroughput, performanceProvisioningType);
  }

  // Creates a hyperdisk storagePool in a project
  public static StoragePool createHyperdiskStoragePool(String projectId, String zone,
        String storagePoolName, String storagePoolType, String capacityProvisioningType,
        long capacity, long iops, long throughput, String performanceProvisioningType)
          throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (StoragePoolsClient client = StoragePoolsClient.create()) {
      // Create a storagePool.
      StoragePool resource = StoragePool.newBuilder()
              .setZone(zone)
              .setName(storagePoolName)
              .setStoragePoolType(storagePoolType)
              .setCapacityProvisioningType(capacityProvisioningType)
              .setPoolProvisionedCapacityGb(capacity)
              .setPoolProvisionedIops(iops)
              .setPoolProvisionedThroughput(throughput)
              .setPerformanceProvisioningType(performanceProvisioningType)
              .build();

      InsertStoragePoolRequest request = InsertStoragePoolRequest.newBuilder()
              .setProject(projectId)
              .setZone(zone)
              .setStoragePoolResource(resource)
              .build();

      // Wait for the insert disk operation to complete.
      Operation operation = client.insertAsync(request).get(1, TimeUnit.MINUTES);

      if (operation.hasError()) {
        System.out.println("StoragePool creation failed!");
        throw new Error(operation.getError().toString());
      }

      // Wait for server update
      TimeUnit.SECONDS.sleep(10);

      StoragePool storagePool = client.get(projectId, zone, storagePoolName);

      System.out.printf("Storage pool '%s' has been created successfully", storagePool.getName());

      return storagePool;
    }
  }
}

Node.js

// Import the Compute library
const computeLib = require('@google-cloud/compute');
const compute = computeLib.protos.google.cloud.compute.v1;

// Instantiate a storagePoolClient
const storagePoolClient = new computeLib.StoragePoolsClient();
// Instantiate a zoneOperationsClient
const zoneOperationsClient = new computeLib.ZoneOperationsClient();

/**
 * TODO(developer): Update/uncomment these variables before running the sample.
 */
// Project ID or project number of the Google Cloud project you want to use.
const projectId = await storagePoolClient.getProjectId();
// Name of the zone in which you want to create the storagePool.
const zone = 'us-central1-a';
// Name of the storagePool you want to create.
const storagePoolName = 'storage-pool-name';
// The type of disk you want to create. This value uses the following format:
// "projects/{projectId}/zones/{zone}/storagePoolTypes/(hyperdisk-throughput|hyperdisk-balanced)"
const storagePoolType = `projects/${projectId}/zones/${zone}/storagePoolTypes/hyperdisk-balanced`;
// Optional: The capacity provisioning type of the storage pool.
// The allowed values are advanced and standard. If not specified, the value advanced is used.
const capacityProvisioningType = 'advanced';
// The total capacity to provision for the new storage pool, specified in GiB by default.
const provisionedCapacity = 10240;
// The IOPS to provision for the storage pool.
// You can use this flag only with Hyperdisk Balanced Storage Pools.
const provisionedIops = 10000;
// The throughput in MBps to provision for the storage pool.
const provisionedThroughput = 1024;
// Optional: The performance provisioning type of the storage pool.
// The allowed values are advanced and standard. If not specified, the value advanced is used.
const performanceProvisioningType = 'advanced';

async function callCreateComputeHyperdiskPool() {
  // Create a storagePool.
  const storagePool = new compute.StoragePool({
    name: storagePoolName,
    poolProvisionedCapacityGb: provisionedCapacity,
    poolProvisionedIops: provisionedIops,
    poolProvisionedThroughput: provisionedThroughput,
    storagePoolType,
    performanceProvisioningType,
    capacityProvisioningType,
    zone,
  });

  const [response] = await storagePoolClient.insert({
    project: projectId,
    storagePoolResource: storagePool,
    zone,
  });

  let operation = response.latestResponse;

  // Wait for the create storage pool operation to complete.
  while (operation.status !== 'DONE') {
    [operation] = await zoneOperationsClient.wait({
      operation: operation.name,
      project: projectId,
      zone: operation.zone.split('/').pop(),
    });
  }

  console.log(`Storage pool: ${storagePoolName} created.`);
}

await callCreateComputeHyperdiskPool();

What's next