You can monitor GPU utilization, performance, and health by configuring GKE to send NVIDIA Data Center GPU Manager (DCGM) metrics to Cloud Monitoring.
When you enable DCGM metrics, GKE installs the DCGM-Exporter tool, installs Google-managed GPU drivers, and deploys a ClusterPodMonitoring resource to send metrics to Google Cloud Managed Service for Prometheus.
You can also configure self-managed DCGM if you want to customize the set of DCGM metrics or if you have a cluster that does not meet the requirements for managed DCGM metrics.
What is DCGM
NVIDIA Data Center GPU Manager (DCGM) is a set of tools from NVIDIA that let you manage and monitor NVIDIA GPUs. DCGM provides a comprehensive view of GPU utilization, performance, and health.
- GPU utilization metrics are an indication of how busy the monitored GPU is and if it is effectively utilized for processing tasks. This includes metrics for core processing, memory, I/O, and power utilization.
- GPU performance metrics refer to how effectively and efficiently a GPU can perform a computational task. This includes metrics for clock speed and temperature.
- GPU I/O metrics, such as NVLink and PCIe metrics, measure data transfer bandwidth.
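As a concrete illustration, the following are a few DCGM-Exporter metric names that correspond to these categories. This is a sample, not an exhaustive or guaranteed list; the exact set that GKE exports can vary by GPU model and driver, so check your cluster's metrics for what is actually available.

```
DCGM_FI_DEV_GPU_UTIL          # core (SM) utilization, in percent
DCGM_FI_DEV_FB_USED           # framebuffer (GPU memory) used, in MiB
DCGM_FI_DEV_POWER_USAGE       # power draw, in watts
DCGM_FI_DEV_SM_CLOCK          # SM clock frequency, in MHz
DCGM_FI_DEV_GPU_TEMP          # GPU temperature, in degrees Celsius
DCGM_FI_PROF_NVLINK_TX_BYTES  # NVLink transmitted bytes (I/O)
DCGM_FI_PROF_PCIE_TX_BYTES    # PCIe transmitted bytes (I/O)
```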
Before you begin
Before you start, make sure you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running `gcloud components update`.
Requirements for NVIDIA Data Center GPU Manager (DCGM) metrics
To collect NVIDIA Data Center GPU Manager (DCGM) metrics, your GKE cluster must meet the following requirements:
- GKE version 1.30.1-gke.1204000 or later
- System metrics collection must be enabled
- Google Cloud Managed Service for Prometheus managed collection must be enabled
- The node pools must be running GKE-managed GPU drivers. This means that you must create your node pools using `default` or `latest` for `--gpu-driver-version`.
- Profiling metrics are collected only for NVIDIA H100 80GB GPUs.
Configure collection of DCGM metrics
You can enable GKE to collect DCGM metrics for an existing cluster using the Google Cloud console, the gcloud CLI, or Terraform.
Console
You must use either Default or Latest for GPU Driver Installation.
- Go to the Google Kubernetes Engine page in the Google Cloud console.
- Click the name of your cluster.
- Next to Cloud Monitoring, click edit.
- Select SYSTEM and DCGM.
- Click Save.
gcloud
- Create a GPU node pool. You must use either `default` or `latest` for `--gpu-driver-version`.
- Update your cluster:

```
gcloud container clusters update CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --enable-managed-prometheus \
    --monitoring=SYSTEM,DCGM
```

Replace the following:
- CLUSTER_NAME: the name of the existing cluster.
- COMPUTE_LOCATION: the Compute Engine location of the cluster.
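For the node pool step, a creation command might look like the following sketch. The machine type and accelerator type (`g2-standard-8` with `nvidia-l4`) are illustrative choices, not requirements, and in current gcloud releases the driver version is passed as the `gpu-driver-version` key of the `--accelerator` flag:

```shell
# Create an example GPU node pool with GKE-managed drivers.
# Machine type and accelerator type are illustrative placeholders.
gcloud container node-pools create gpu-pool \
    --cluster=CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --machine-type=g2-standard-8 \
    --accelerator=type=nvidia-l4,count=1,gpu-driver-version=latest
```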
Terraform
To configure the collection of DCGM metrics by using Terraform, see the `monitoring_config` block in the Terraform registry for `google_container_cluster`.
For general information about using Google Cloud with Terraform, see
Terraform with Google Cloud.
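As a sketch, a `monitoring_config` block that enables SYSTEM and DCGM metrics together with managed Prometheus collection might look like the following. The resource name and location are placeholders; confirm attribute names and accepted values against the registry documentation:

```hcl
resource "google_container_cluster" "example" {
  name     = "example-cluster"  # placeholder
  location = "us-central1"      # placeholder

  monitoring_config {
    # SYSTEM_COMPONENTS corresponds to SYSTEM in the gcloud flag.
    enable_components = ["SYSTEM_COMPONENTS", "DCGM"]

    managed_prometheus {
      enabled = true
    }
  }
}
```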
Use DCGM metrics
You can view DCGM metrics by using the dashboards in the Google Cloud console, or directly on the cluster overview and cluster details pages. For more information, see View observability metrics.
You can view metrics using the Grafana DCGM metrics dashboard. For more information, see Query using Grafana. If you encounter any errors, see API compatibility.
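For example, in Grafana or Metrics Explorer you could chart average GPU utilization with a PromQL query along the following lines. The metric name follows DCGM-Exporter conventions, and label names (such as `node`) may differ in your setup:

```
# Average GPU core utilization per node, smoothed over 5 minutes
avg by (node) (avg_over_time(DCGM_FI_DEV_GPU_UTIL[5m]))
```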
Pricing
DCGM metrics use Google Cloud Managed Service for Prometheus to load metrics into Cloud Monitoring. Cloud Monitoring charges for the ingestion of these metrics are based on the number of samples ingested. However, these metrics are free of charge for registered clusters that belong to a project that has GKE Enterprise edition enabled.
For more information, see Cloud Monitoring pricing.
Quota
DCGM metrics consume the Time series ingestion requests per minute quota of the Cloud Monitoring API. Before enabling this metrics package, check your recent peak usage of that quota. If you have many clusters in the same project, or are already approaching the quota limit, you can request a quota-limit increase before enabling the package.
What's next
- Learn how to View observability metrics.