Manage GPU container workloads

You can enable and manage graphics processing unit (GPU) resources on your containers. For example, you might want to run artificial intelligence (AI) and machine learning (ML) notebooks in a GPU environment. To run GPU container workloads, you must have a Kubernetes cluster that supports GPU devices. GPU support is enabled by default for Kubernetes clusters that have GPU machines provisioned for them.

Before you begin

To deploy containers that use GPUs, you must have the following:

  • A Kubernetes cluster with a GPU machine class. See the supported GPU cards section for the options that you can configure for your cluster machines.

  • The User Cluster Node Viewer role (user-cluster-node-viewer) to check GPUs, and the Namespace Admin role (namespace-admin) to deploy GPU workloads in your project namespace. To verify these permissions, see the optional check after this list.

  • The org admin cluster kubeconfig path. Sign in and generate the kubeconfig file if you don't have one.

  • The Kubernetes cluster name. Ask your Platform Administrator for this information if you don't have it.

  • The Kubernetes cluster kubeconfig path. Sign in and generate the kubeconfig file if you don't have one.
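
If you want to verify these permissions before you start, the following commands are a minimal sketch. They assume that the user-cluster-node-viewer and namespace-admin roles grant the corresponding Kubernetes verbs on your cluster, which might differ in your environment:

    # Confirm that you can view nodes in the Kubernetes cluster (User Cluster Node Viewer).
    kubectl auth can-i get nodes \
        --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG

    # Confirm that you can create workloads in your project namespace (Namespace Admin).
    kubectl auth can-i create deployments \
        -n NAMESPACE \
        --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG

Each command prints yes or no, depending on whether your credentials allow the action.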

Configure a container to use GPU resources

To use GPUs in a container, complete the following steps:

  1. Verify that your Kubernetes cluster has node pools that support GPUs:

    kubectl describe nodepoolclaims -n KUBERNETES_CLUSTER_NAME \
        --kubeconfig ORG_ADMIN_CLUSTER_KUBECONFIG
    

    The relevant output is similar to the following snippet:

    Spec:
      Machine Class Name:  a2-ultragpu-1g-gdc
      Node Count:          2
    

    For a full list of supported GPU machine types and Multi-Instance GPU (MIG) profiles, see Cluster node machine types.

  2. Add the .containers.resources.requests and .containers.resources.limits fields to your container spec. The resource name depends on your machine class. To find your GPU resource names, see Check GPU resource allocation.

    For example, the following container spec requests three partitions of a GPU from an a2-ultragpu-1g-gdc node:

     ...
     containers:
     - name: my-container
       image: "my-image"
       resources:
         requests:
           nvidia.com/mig-1g.10gb-NVIDIA_A100_80GB_PCIE: 3
         limits:
           nvidia.com/mig-1g.10gb-NVIDIA_A100_80GB_PCIE: 3
     ...
    
  3. Containers also require additional permissions to access GPUs. For each container that requests GPUs, add the following permissions to your container spec (a complete example that combines this setting with the resource requests from step 2 appears after this procedure):

    ...
    securityContext:
      seLinuxOptions:
        type: unconfined_t
    ...
    
  4. Apply your container manifest file:

    kubectl apply -f CONTAINER_MANIFEST_FILE \
        -n NAMESPACE \
        --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG
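
For reference, the following is a minimal sketch of a complete Pod manifest that combines the resource requests from step 2 with the security context from step 3. The Pod name gpu-pod is a placeholder, my-image stands in for your container image, and the resource name shown applies to an a2-ultragpu-1g-gdc node; substitute the values that match your project:

    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-pod        # Placeholder name.
      namespace: NAMESPACE
    spec:
      containers:
      - name: my-container
        image: "my-image"  # Replace with your container image.
        resources:
          requests:
            nvidia.com/mig-1g.10gb-NVIDIA_A100_80GB_PCIE: 3
          limits:
            nvidia.com/mig-1g.10gb-NVIDIA_A100_80GB_PCIE: 3
        securityContext:
          seLinuxOptions:
            type: unconfined_t

After you apply the manifest, you can confirm that the Pod is scheduled and running:

    kubectl get pods -n NAMESPACE \
        --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG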
    

Check GPU resource allocation

  • To check your GPU resource allocation, use the following command:

    kubectl describe nodes NODE_NAME \
        --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG
    

    Replace NODE_NAME with the node managing the GPUs you want to inspect.

    The relevant output is similar to the following snippet:

    Capacity:
      nvidia.com/mig-1g.10gb-NVIDIA_A100_80GB_PCIE: 7
    Allocatable:
      nvidia.com/mig-1g.10gb-NVIDIA_A100_80GB_PCIE: 7
    

Note the resource names for your GPUs; you must specify them when configuring a container to use GPU resources.
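
To see the GPU resource names that every node advertises in one view, the following one-liner is a sketch that prints each node name next to its allocatable resources; look for entries that start with nvidia.com/:

    kubectl get nodes \
        -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable}{"\n"}{end}' \
        --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG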