To use Terraform in your Google Distributed Cloud (GDC) air-gapped environment, you must download it and configure it to handle Kubernetes resources.
Before you begin
Download Terraform to your workstation following the documentation provided by HashiCorp: https://developer.hashicorp.com/terraform/install.
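To confirm that the Terraform binary is installed and available on your PATH, you can check its version; for example:

terraform version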
Verify you have an existing GDC storage bucket. If you don't have a storage bucket, create one.
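As a quick check, assuming the Bucket custom resource is served as buckets in your project namespace, you could list your buckets with kubectl; the PROJECT_NAMESPACE placeholder here is illustrative:

kubectl get buckets -n PROJECT_NAMESPACE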
Make sure your system can recognize the Certificate Authority (CA) certificate used by object storage.
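How you install the CA certificate depends on your operating system. As a minimal sketch for a Debian-based workstation, assuming the certificate is saved locally as object-storage-ca.crt (an illustrative file name), you could add it to the system trust store:

sudo cp object-storage-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates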
Manage the state file
Terraform uses the state file to record the current state of the deployment and map it to the Terraform configuration. Because GDC object storage exposes an S3-compatible API, you can use Terraform's s3 backend to sync with a shared state file. To do this, you must configure Terraform to sync with the remote state:
Add the following backend configuration to a Terraform configuration file in your module, such as the main.tf file. This tells Terraform to store the terraform.tfstate state file in the bucket instead of locally:

terraform {
  backend "s3" {
    bucket                      = "BUCKET_FQN"
    key                         = "TF_STATE_PATH"
    endpoint                    = "BUCKET_ENDPOINT"
    skip_credentials_validation = true
    force_path_style            = true
    access_key                  = "ACCESS_KEY"
    secret_key                  = "SECRET_KEY"
    ...
  }
}
Replace the following:
BUCKET_FQN: the fully qualified name from the Bucket custom resource.
TF_STATE_PATH: the location of the Terraform state file to store in the storage bucket.
BUCKET_ENDPOINT: the endpoint from the Bucket custom resource.
ACCESS_KEY: the access key acquired from the secret containing your access credentials. Follow Obtain bucket access credentials to acquire the access key.
SECRET_KEY: the secret key acquired from the secret containing your access credentials. Follow Obtain bucket access credentials to acquire the secret key.
You must set skip_credentials_validation and force_path_style to true because GDC does not support credential validation and uses the path-style endpoint.

Initialize the new state file in the storage bucket you specified in the previous step:
terraform init
Terraform might ask for an AWS region as a required input, but the value is not used since you're using GDC object storage. Input any AWS region to satisfy the requirement.
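To avoid the interactive prompt, you can also supply a placeholder region up front, for example through the standard AWS_REGION environment variable that the s3 backend reads; the region value itself is arbitrary:

AWS_REGION=us-east-1 terraform init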
Set permissions
Besides the permissions required to perform a specific task using Terraform, such as creating a GDC project, you must also have permissions to view custom resource definitions at the cluster scope. Apply the required permissions to use Terraform:
Create the crd-viewer cluster role resource:

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: crd-viewer
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "list", "watch"]
EOF
Bind the cluster role defined in the previous step to the user:
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crd-viewer-binding
subjects:
- kind: User
  name: USER_EMAIL
roleRef:
  kind: ClusterRole
  name: crd-viewer
  apiGroup: rbac.authorization.k8s.io
EOF
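Replace USER_EMAIL with the email address of the user account that runs Terraform. To verify that the binding took effect, you can check that user's access to custom resource definitions with impersonation:

kubectl auth can-i list customresourcedefinitions --as USER_EMAIL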
Install and configure Terraform provider
You must install the Kubernetes Provider to provision and manage Kubernetes resources.
In a Terraform file within your module, such as the main.tf file, insert the following required_providers block:

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.6.1"
    }
  }
}
Initialize your Terraform working directory to install the provider:
terraform init
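After the provider is installed, Terraform also needs to know how to reach your cluster. As a minimal sketch, assuming you authenticate with a local kubeconfig file (the path shown is an assumption), you can add a provider block alongside the required_providers block:

provider "kubernetes" {
  # Kubeconfig for the cluster that Terraform manages; adjust to your environment.
  config_path = "~/.kube/config"
}

You can then declare Kubernetes resources in the same module and roll them out with terraform plan and terraform apply.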