You can create a group of virtual machines (VMs) that have attached graphical processing units (GPUs) by using the bulk creation process. With the bulk creation process, you get upfront validation where the request fails fast if it is not feasible. Also, if you use the region flag, the bulk creation API automatically chooses the zone that has the capacity to fulfill the request.
To learn more about bulk creation, see About bulk creation of VMs. To learn more about creating VMs with attached GPUs, see Overview of creating an instance with attached GPUs.
Before you begin
- To review limitations and additional prerequisite steps for creating instances with attached GPUs, such as selecting an OS image and checking GPU quota, see Overview of creating an instance with attached GPUs.
- To review limitations for bulk creation, see About bulk creation of VMs.
- If you haven't already, set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:
Select the tab for how you plan to use the samples on this page:
gcloud
- After installing the Google Cloud CLI, initialize it by running the following command:
gcloud init
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
- Set a default region and zone.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
After installing the Google Cloud CLI, initialize it by running the following command:
gcloud init
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Required roles
To get the permissions that you need to create VMs, ask your administrator to grant you the Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1) IAM role on the project.
For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to create VMs. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to create VMs:
- compute.instances.create on the project
- To use a custom image to create the VM: compute.images.useReadOnly on the image
- To use a snapshot to create the VM: compute.snapshots.useReadOnly on the snapshot
- To use an instance template to create the VM: compute.instanceTemplates.useReadOnly on the instance template
- To assign a legacy network to the VM: compute.networks.use on the project
- To specify a static IP address for the VM: compute.addresses.use on the project
- To assign an external IP address to the VM when using a legacy network: compute.networks.useExternalIp on the project
- To specify a subnet for your VM: compute.subnetworks.use on the project or on the chosen subnet
- To assign an external IP address to the VM when using a VPC network: compute.subnetworks.useExternalIp on the project or on the chosen subnet
- To set VM instance metadata for the VM: compute.instances.setMetadata on the project
- To set tags for the VM: compute.instances.setTags on the VM
- To set labels for the VM: compute.instances.setLabels on the VM
- To set a service account for the VM to use: compute.instances.setServiceAccount on the VM
- To create a new disk for the VM: compute.disks.create on the project
- To attach an existing disk in read-only or read-write mode: compute.disks.use on the disk
- To attach an existing disk in read-only mode: compute.disks.useReadOnly on the disk
You might also be able to get these permissions with custom roles or other predefined roles.
Overview
When you create VMs with attached GPUs by using the bulk creation method, you can choose to create the VMs in a region (such as us-central1) or in a specific zone (such as us-central1-a). If you specify a region, Compute Engine places the VMs in any zone within the region that supports GPUs.
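The region-versus-zone choice shows up in which bulkInsert endpoint you call. The following Python sketch (the project ID, region, and zone values are placeholders for illustration) builds the two URL forms; the regional form lets Compute Engine pick a zone with capacity, while the zonal form pins the placement:

```python
# Sketch of the two bulkInsert endpoint forms. The project, region,
# and zone names below are illustrative placeholders.
BASE = "https://compute.googleapis.com/compute/v1"

def bulk_insert_url(project, region=None, zone=None):
    """Return the regional or zonal bulkInsert URL.

    Passing a region lets Compute Engine choose any zone in that
    region that has capacity; passing a zone pins the placement.
    """
    if (region is None) == (zone is None):
        raise ValueError("specify exactly one of region or zone")
    if region:
        return f"{BASE}/projects/{project}/regions/{region}/instances/bulkInsert"
    return f"{BASE}/projects/{project}/zones/{zone}/instances/bulkInsert"

print(bulk_insert_url("my-project", region="us-central1"))
print(bulk_insert_url("my-project", zone="us-central1-a"))
```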
Machine types
The accelerator-optimized machine family contains multiple machine types.
Each accelerator-optimized machine type has a specific model of NVIDIA GPUs attached.
- For A4 accelerator-optimized machine types, NVIDIA B200 GPUs are attached.
- For A3 accelerator-optimized machine types,
NVIDIA H100 80GB or NVIDIA H200 141GB GPUs are attached. These are available
in the following options:
- A3 Ultra: these machine types have H200 141GB GPUs attached
- A3 Mega: these machine types have H100 80GB GPUs attached
- A3 High: these machine types have H100 80GB GPUs attached
- A3 Edge: these machine types have H100 80GB GPUs attached
- For A2 accelerator-optimized machine types,
NVIDIA A100 GPUs are attached. These are available in the following options:
- A2 Ultra: these machine types have A100 80GB GPUs attached
- A2 Standard: these machine types have A100 40GB GPUs attached
- For G2 accelerator-optimized machine types, NVIDIA L4 GPUs are attached.
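The series-to-GPU mapping above can be summarized as a small lookup table. This is an informal sketch for illustration only; the keys are the option names used in this doc, not API machine-type identifiers:

```python
# GPU model attached to each accelerator-optimized option, as listed above.
GPU_BY_SERIES = {
    "A4": "NVIDIA B200",
    "A3 Ultra": "NVIDIA H200 141GB",
    "A3 Mega": "NVIDIA H100 80GB",
    "A3 High": "NVIDIA H100 80GB",
    "A3 Edge": "NVIDIA H100 80GB",
    "A2 Ultra": "NVIDIA A100 80GB",
    "A2 Standard": "NVIDIA A100 40GB",
    "G2": "NVIDIA L4",
}

print(GPU_BY_SERIES["A3 Mega"])  # NVIDIA H100 80GB
```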
Create groups of A3, A2, and G2 VMs
This section explains how you can create instances in bulk for the A3 High, A3 Mega, A3 Edge, A2, and G2 machine series by using the Google Cloud CLI or REST.
gcloud
To create a group of VMs, use the gcloud compute instances bulk create
command. For more
information about the parameters and how to use this command, see
Create VMs in bulk.
The following optional flags are shown in the example command:
- The --provisioning-model=SPOT flag configures your VMs as Spot VMs. If your workload is fault-tolerant and can withstand possible VM preemption, consider using Spot VMs to reduce the cost of your VMs and the attached GPUs. For more information, see GPUs on Spot VMs. For Spot VMs, the automatic restart and host maintenance options flags are disabled.
- The --accelerator flag specifies a virtual workstation. NVIDIA RTX Virtual Workstations (vWS) are supported only for G2 VMs.
Example
This example creates two VMs that have attached GPUs by using the following specifications:
- VM names: my-test-vm-1, my-test-vm-2
- Each VM has two GPUs attached, specified by using the appropriate accelerator-optimized machine type
gcloud compute instances bulk create \
    --name-pattern="my-test-vm-#" \
    --region=REGION \
    --count=2 \
    --machine-type=MACHINE_TYPE \
    --boot-disk-size=200 \
    --image=IMAGE \
    --image-project=IMAGE_PROJECT \
    --on-host-maintenance=TERMINATE \
    [--provisioning-model=SPOT] \
    [--accelerator=type=nvidia-l4-vws,count=VWS_ACCELERATOR_COUNT]
Replace the following:
- REGION: the region for the VMs. This region must support your selected GPU model.
- MACHINE_TYPE: the machine type that you selected. Choose from one of the following:
  - An A3 machine type.
  - An A2 machine type.
  - A G2 machine type. G2 machine types also support custom memory. Memory must be a multiple of 1024 MB and within the supported memory range. For example, to create a VM with 4 vCPUs and 19 GB of memory, specify --machine-type=g2-custom-4-19456.
- IMAGE: an operating system image that supports GPUs. If you want to use the latest image in an image family, replace the --image flag with the --image-family flag and set its value to an image family that supports GPUs, for example, --image-family=rocky-linux-8-optimized-gcp. You can also specify a custom image or Deep Learning VM Images.
- IMAGE_PROJECT: the Compute Engine image project that the OS image belongs to. If using a custom image or Deep Learning VM Images, specify the project that those images belong to.
- VWS_ACCELERATOR_COUNT: the number of virtual GPUs that you need.
If successful, the output is similar to the following:
NAME          ZONE
my-test-vm-1  us-central1-b
my-test-vm-2  us-central1-b
Bulk create request finished with status message: [VM instances created: 2, failed: 0.]
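The VM names in the output follow from the --name-pattern flag: the # placeholder is replaced with a sequence number. The following Python sketch approximates that expansion; the exact numbering behavior of the API (for example, when names already exist) may differ:

```python
def expand_name_pattern(pattern, count, start=1):
    """Approximate how a bulk-create name pattern such as
    "my-test-vm-#" expands into per-VM names. A run of N '#'
    characters is treated as an N-digit, zero-padded counter."""
    hashes = pattern.count("#")
    if hashes == 0:
        raise ValueError("pattern must contain at least one '#'")
    return [pattern.replace("#" * hashes, str(i).zfill(hashes))
            for i in range(start, start + count)]

print(expand_name_pattern("my-test-vm-#", 2))  # ['my-test-vm-1', 'my-test-vm-2']
```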
REST
Use the instances.bulkInsert method with the required parameters to create multiple VMs in a region. For more information about the parameters and how to use this method, see Create VMs in bulk.
Example
This example creates two VMs that have attached GPUs by using the following specifications:
- VM names: my-test-vm-1, my-test-vm-2
- Each VM has two GPUs attached, specified by using the appropriate accelerator-optimized machine type
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instances/bulkInsert

{
  "namePattern": "my-test-vm-#",
  "count": "2",
  "instanceProperties": {
    "machineType": "MACHINE_TYPE",
    "disks": [
      {
        "type": "PERSISTENT",
        "initializeParams": {
          "diskSizeGb": "200",
          "sourceImage": "SOURCE_IMAGE_URI"
        },
        "boot": true
      }
    ],
    "name": "default",
    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/default"
      }
    ],
    "scheduling": {
      "onHostMaintenance": "TERMINATE",
      "automaticRestart": true
    }
  }
}
Replace the following:
- PROJECT_ID: your project ID
- REGION: the region for the VMs. This region must support your selected GPU model.
- MACHINE_TYPE: the machine type that you selected. Choose from one of the following:
  - An A3 machine type.
  - An A2 machine type.
  - A G2 machine type. G2 machine types also support custom memory. Memory must be a multiple of 1024 MB and within the supported memory range. For example, to create a VM with 4 vCPUs and 19 GB of memory, specify --machine-type=g2-custom-4-19456.
- SOURCE_IMAGE_URI: the URI for the specific image or image family that you want to use. For example:
  - Specific image: "sourceImage": "projects/rocky-linux-cloud/global/images/rocky-linux-8-optimized-gcp-v20220719"
  - Image family: "sourceImage": "projects/rocky-linux-cloud/global/images/family/rocky-linux-8-optimized-gcp"

  When you specify an image family, Compute Engine creates a VM from the most recent, non-deprecated OS image in that family. For more information about when to use image families, see Image family best practices.
Additional settings:
- If your workload is fault-tolerant and can withstand possible VM preemption, consider using Spot VMs to reduce the cost of your VMs and the attached GPUs. For more information, see GPUs on Spot VMs. To use a Spot VM, add the "provisioningModel": "SPOT" option to your request. For Spot VMs, the automatic restart and host maintenance options flags are disabled.

  "scheduling": {
    "provisioningModel": "SPOT"
  }
- For G2 VMs, NVIDIA RTX Virtual Workstations (vWS) are supported. To specify a virtual workstation, add the guestAccelerators option to your request. Replace VWS_ACCELERATOR_COUNT with the number of virtual GPUs that you need.

  "guestAccelerators": [
    {
      "acceleratorCount": VWS_ACCELERATOR_COUNT,
      "acceleratorType": "projects/PROJECT_ID/zones/ZONE/acceleratorTypes/nvidia-l4-vws"
    }
  ]
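Putting the pieces together, the request body above can be assembled programmatically. The following is a rough Python sketch, not an official client library: the field names mirror the REST example in this section, the spot and vws_count parameters correspond to the optional settings just described, and actually sending the request (with a real project and credentials) is left out:

```python
def build_bulk_insert_body(machine_type, source_image_uri, project_id,
                           count=2, name_pattern="my-test-vm-#",
                           spot=False, vws_count=None, zone=None):
    """Assemble an instances.bulkInsert request body shaped like the
    REST example above. spot toggles the Spot VM provisioning model;
    vws_count adds the G2-only NVIDIA RTX Virtual Workstation (vWS)
    accelerators (zone is required in that case)."""
    scheduling = {"onHostMaintenance": "TERMINATE", "automaticRestart": True}
    if spot:
        # For Spot VMs, automatic restart and host maintenance options
        # are disabled, so only the provisioning model is set.
        scheduling = {"provisioningModel": "SPOT"}
    props = {
        "machineType": machine_type,
        "disks": [{
            "type": "PERSISTENT",
            "initializeParams": {"diskSizeGb": "200",
                                 "sourceImage": source_image_uri},
            "boot": True,
        }],
        "name": "default",
        "networkInterfaces": [
            {"network": f"projects/{project_id}/global/networks/default"}
        ],
        "scheduling": scheduling,
    }
    if vws_count:
        props["guestAccelerators"] = [{
            "acceleratorCount": vws_count,
            "acceleratorType": f"projects/{project_id}/zones/{zone}"
                               f"/acceleratorTypes/nvidia-l4-vws",
        }]
    return {"namePattern": name_pattern, "count": str(count),
            "instanceProperties": props}
```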
Create groups of N1 general-purpose VMs
You can create a group of VMs with attached GPUs by using either the Google Cloud CLI or REST.
This section describes how to create multiple VMs using the following GPU types:
NVIDIA GPUs:
- NVIDIA T4: nvidia-tesla-t4
- NVIDIA P4: nvidia-tesla-p4
- NVIDIA P100: nvidia-tesla-p100
- NVIDIA V100: nvidia-tesla-v100

NVIDIA RTX Virtual Workstation (vWS) (formerly known as NVIDIA GRID):
- NVIDIA T4 Virtual Workstation: nvidia-tesla-t4-vws
- NVIDIA P4 Virtual Workstation: nvidia-tesla-p4-vws
- NVIDIA P100 Virtual Workstation: nvidia-tesla-p100-vws

For these virtual workstations, an NVIDIA RTX Virtual Workstation (vWS) license is automatically added to your VM.
gcloud
To create a group of VMs, use the gcloud compute instances bulk create
command.
For more information about the parameters and how to use this command, see
Create VMs in bulk.
Example
The following example creates two VMs with attached GPUs using the following specifications:
- VM names: my-test-vm-1, my-test-vm-2
- VMs created in any zone in us-central1 that supports GPUs
- Each VM has two T4 GPUs attached, specified by using the accelerator type and accelerator count flags
- Each VM has GPU drivers installed
- Each VM uses the Deep Learning VM image pytorch-latest-gpu-v20211028-debian-10
gcloud compute instances bulk create \
    --name-pattern="my-test-vm-#" \
    --count=2 \
    --region=us-central1 \
    --machine-type=n1-standard-2 \
    --accelerator type=nvidia-tesla-t4,count=2 \
    --boot-disk-size=200 \
    --metadata="install-nvidia-driver=True" \
    --scopes="https://www.googleapis.com/auth/cloud-platform" \
    --image=pytorch-latest-gpu-v20211028-debian-10 \
    --image-project=deeplearning-platform-release \
    --on-host-maintenance=TERMINATE --restart-on-failure
If successful, the output is similar to the following:
NAME          ZONE
my-test-vm-1  us-central1-b
my-test-vm-2  us-central1-b
Bulk create request finished with status message: [VM instances created: 2, failed: 0.]
REST
Use the instances.bulkInsert method with the required parameters to create multiple VMs in a region. For more information about the parameters and how to use this method, see Create VMs in bulk.
Example
The following example creates two VMs with attached GPUs using the following specifications:
- VM names: my-test-vm-1, my-test-vm-2
- VMs created in any zone in us-central1 that supports GPUs
- Each VM has two T4 GPUs attached, specified by using the acceleratorType and acceleratorCount fields
- Each VM has GPU drivers installed
- Each VM uses the Deep Learning VM image pytorch-latest-gpu-v20211028-debian-10
Replace PROJECT_ID with your project ID.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-central1/instances/bulkInsert

{
  "namePattern": "my-test-vm-#",
  "count": "2",
  "instanceProperties": {
    "machineType": "n1-standard-2",
    "disks": [
      {
        "type": "PERSISTENT",
        "initializeParams": {
          "diskSizeGb": "200",
          "sourceImage": "projects/deeplearning-platform-release/global/images/pytorch-latest-gpu-v20211028-debian-10"
        },
        "boot": true
      }
    ],
    "name": "default",
    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/default"
      }
    ],
    "guestAccelerators": [
      {
        "acceleratorCount": 2,
        "acceleratorType": "nvidia-tesla-t4"
      }
    ],
    "scheduling": {
      "onHostMaintenance": "TERMINATE",
      "automaticRestart": true
    },
    "metadata": {
      "items": [
        {
          "key": "install-nvidia-driver",
          "value": "True"
        }
      ]
    }
  }
}
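As a sanity check on how the gcloud flags in the earlier example map onto this REST body, the following Python table summarizes the correspondence. The mapping is informal, read off the two examples in this section rather than an official reference:

```python
# Informal flag-to-field correspondence between the gcloud example and
# the REST request body above, for the N1 + T4 bulk creation example.
FLAG_TO_REST_FIELD = {
    "--name-pattern": "namePattern",
    "--count": "count",
    "--machine-type": "instanceProperties.machineType",
    "--accelerator": "instanceProperties.guestAccelerators",
    "--metadata": "instanceProperties.metadata.items",
    "--on-host-maintenance": "instanceProperties.scheduling.onHostMaintenance",
    "--restart-on-failure": "instanceProperties.scheduling.automaticRestart",
    "--image / --image-project": "instanceProperties.disks[].initializeParams.sourceImage",
}

for flag, field in FLAG_TO_REST_FIELD.items():
    print(f"{flag:28s} -> {field}")
```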
What's next?
Learn how to monitor GPU performance.
Learn how to use higher network bandwidth.
Learn how to handle GPU host maintenance events.