This page shows you how to configure your Google Kubernetes Engine (GKE) Autopilot deployments to request nodes that are backed by Arm architecture.
About Arm architecture in Autopilot
Autopilot clusters offer compute classes for workloads that have specific hardware requirements. Some of these compute classes support multiple CPU architectures, such as amd64 and arm64.
Use cases for Arm nodes
Nodes with Arm architecture offer more cost-efficient performance than similar x86 nodes. You should select Arm for your Autopilot workloads in situations such as the following:
- Your environment relies on Arm architecture for building and testing.
- You're developing applications for Android devices that run on Arm CPUs.
- You use multi-arch images and want to optimize costs while running your workloads.
Before you begin
Before you start, make sure that you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
- Review the requirements and limitations for Arm nodes.
- Ensure that you have quota for the C4A or Tau T2A Compute Engine machine types.
- Ensure that you have a Pod with a container image that's built for Arm architecture.
How to request Arm nodes in Autopilot
To tell Autopilot to run your Pods on Arm nodes, specify one of the following labels in a nodeSelector or node affinity rule:
- kubernetes.io/arch: arm64. GKE places Pods on C4A machine types by default for clusters running version 1.31.3-gke.1056000 and later. If the cluster is running an earlier version, GKE places Pods on T2A machine types.
- cloud.google.com/machine-family: ARM_MACHINE_SERIES. Replace ARM_MACHINE_SERIES with an Arm machine series such as C4A or T2A. GKE places Pods on the specified series.
By default, using either label lets GKE place other Pods on the same node if that node has available capacity. To request a dedicated node for each Pod, add the cloud.google.com/compute-class: Performance label to your manifest. For details, see Optimize Autopilot Pod performance by choosing a machine series.
Alternatively, you can use the Scale-Out compute class label together with the arm64 label to request T2A. You can also request Arm architecture for Spot Pods.
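As a sketch of the Scale-Out option described above, the two labels can be combined in a Pod spec like this (a minimal fragment to merge into your own manifest; verify the exact values against the compute class documentation for your GKE version):

```yaml
nodeSelector:
  cloud.google.com/compute-class: Scale-Out
  kubernetes.io/arch: arm64
```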
When you deploy your workload, Autopilot does the following:
1. Automatically provisions Arm nodes to run your Pods.
2. Automatically taints the new nodes to prevent non-Arm Pods from being scheduled on those nodes.
3. Automatically adds a toleration to your Arm Pods to allow scheduling on the new nodes.
Example requests for Arm architecture
The following example specifications show you how to use a node selector or a node affinity rule to request Arm architecture in Autopilot.
nodeSelector
The following example manifest shows you how to request Arm nodes in a nodeSelector:
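This Deployment requests the Performance compute class together with the arm64 architecture through a nodeSelector:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-arm
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-arm
  template:
    metadata:
      labels:
        app: nginx-arm
    spec:
      nodeSelector:
        cloud.google.com/compute-class: Performance
        kubernetes.io/arch: arm64
      containers:
      - name: nginx-arm
        image: nginx
        resources:
          requests:
            cpu: 2000m
            memory: 2Gi
```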
nodeAffinity
You can use node affinity to request Arm nodes. You can also specify the type of node affinity to use:
- requiredDuringSchedulingIgnoredDuringExecution: Must use the specified compute class and architecture.
- preferredDuringSchedulingIgnoredDuringExecution: Uses the specified compute class and architecture on a best-effort basis. For example, if an existing x86 node is allocatable, GKE places your Pod on that x86 node instead of provisioning a new Arm node. Unless you use a multi-arch image manifest, your Pod will crash. We strongly recommend that you explicitly request the specific architecture that you want.
The following example manifest requires the Performance class and Arm nodes:
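This Deployment uses requiredDuringSchedulingIgnoredDuringExecution so that GKE must schedule the Pods on Performance-class arm64 nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-arm
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-arm
  template:
    metadata:
      labels:
        app: nginx-arm
    spec:
      terminationGracePeriodSeconds: 25
      containers:
      - name: nginx-arm
        image: nginx
        resources:
          requests:
            cpu: 2000m
            memory: 2Gi
            ephemeral-storage: 1Gi
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cloud.google.com/compute-class
                operator: In
                values:
                - Performance
              - key: kubernetes.io/arch
                operator: In
                values:
                - arm64
```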
Recommendations
- Build and use multi-arch images as part of your pipeline. Multi-arch images ensure that your Pods run even if they're placed on x86 nodes.
- Explicitly request architecture and compute classes in your workload manifests. If you don't, Autopilot uses the default architecture of the selected compute class, which might not be Arm.
Availability
You can deploy Autopilot workloads on Arm architecture in Google Cloud locations that support Arm architecture. For details, see Available regions and zones.
Troubleshooting
For common errors and troubleshooting information, refer to Troubleshooting Arm workloads.
What's next
- Learn more about Autopilot cluster architecture.
- Learn about the lifecycle of Pods.
- Learn about the available Autopilot compute classes.
- Read about the default, minimum, and maximum resource requests for each platform.
Last updated 2025-09-01 UTC.