# Configure Terraform

To use Terraform in your Google Distributed Cloud (GDC) air-gapped environment, you must download it and configure it to handle Kubernetes resources.
## Before you begin

- Download Terraform to your workstation following the documentation provided by HashiCorp: <https://developer.hashicorp.com/terraform/install>.

- [Verify](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/create-storage-buckets#verify_bucket_and_related_resource_creation) that you have an existing GDC storage bucket. If you don't have a storage bucket, [create one](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/create-storage-buckets#create_a_bucket).

- Make sure your system can recognize the Certificate Authority (CA) certificate used by object storage.
## Manage the state file

The state file in Terraform records the current state of the deployment and maps it to the Terraform configuration. Because GDC object storage is implemented using S3, you can use the Terraform S3 API to sync with a shared state file. To do this, you must configure Terraform to sync with the remote state:
1. Add the following configuration to a Terraform file that is stored locally, such as the `main.tf` file:
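The `s3` backend block takes this shape; the uppercase values are placeholders described below:

```hcl
terraform {
  backend "s3" {
    bucket                      = "BUCKET_FQN"
    key                         = "TF_STATE_PATH"
    endpoint                    = "BUCKET_ENDPOINT"
    skip_credentials_validation = true
    force_path_style            = true
    access_key                  = "ACCESS_KEY"
    secret_key                  = "SECRET_KEY"
    ...
  }
}
```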
Replace the following:

- `BUCKET_FQN`: the fully qualified name from the `Bucket` custom resource.
- `TF_STATE_PATH`: the location of the Terraform state file to store in the storage bucket.
- `BUCKET_ENDPOINT`: the endpoint from the `Bucket` custom resource.
- `ACCESS_KEY`: the access key acquired from the secret containing your access credentials. Follow [Obtain bucket access credentials](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/grant-obtain-storage-access#getting_bucket_access_credentials) to acquire the access key.
- `SECRET_KEY`: the secret key acquired from the secret containing your access credentials. Follow [Obtain bucket access credentials](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/grant-obtain-storage-access#getting_bucket_access_credentials) to acquire the secret key.
You must set `skip_credentials_validation` and `force_path_style` to `true` because GDC does not support credential validation and uses path-style endpoints.

**Note:** We recommend using environment variables to store the credentials. If credentials are hardcoded, they are stored within the `.terraform` directory and in plan files.
2. Initialize the new state file edits in the storage bucket you specified in the previous step:

       terraform init
Terraform might ask for an AWS region as a required input, but the value is not used because you're using GDC object storage. Enter any AWS region to satisfy the requirement.
## Set permissions

Besides the permissions required to perform a specific task using Terraform, such as creating a GDC project, you must also have permissions to view custom resource definitions at that scope. Apply the required permissions to use Terraform:

1. Create the `crd-viewer` cluster role resource:
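The cluster role grants read access to custom resource definitions and is created by applying a manifest through `kubectl`:

```shell
kubectl apply --kubeconfig KUBECONFIG -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: crd-viewer
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "list", "watch"]
EOF
```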
Replace `KUBECONFIG` with the kubeconfig file of the API server or cluster that hosts the resources you're managing with Terraform. For example, most resources run on the Management API server. For container workloads, set your Kubernetes cluster kubeconfig file. Be sure to set the global API server if you're managing a global resource.
2. Bind the cluster role defined in the previous step to the user:
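The binding is applied the same way; replace `USER_EMAIL` with the user account you're granting the role to:

```shell
kubectl apply --kubeconfig KUBECONFIG -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crd-viewer-binding
subjects:
- kind: User
  name: USER_EMAIL
roleRef:
  kind: ClusterRole
  name: crd-viewer
  apiGroup: rbac.authorization.k8s.io
EOF
```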
Repeat these steps for each API server or cluster you want to set Terraform permissions for.

## Install and configure Terraform provider

You must install the [Kubernetes Provider](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs) to provision and manage Kubernetes resources.

1. In a Terraform file within your module, such as the `main.tf` file, insert the following `required_providers` block:

       terraform {
         required_providers {
           kubernetes = {
             source  = "hashicorp/kubernetes"
             version = "~>2.6.1"
           }
         }
       }

2. Initialize your Terraform working directory to install the provider:

       terraform init

Last updated 2025-09-04 UTC.