# Install Kf outside Google Cloud

Last updated 2025-08-29 UTC.

This document describes how to install Kf and its dependencies on an on-premises cluster created as part of Google Distributed Cloud, either [on VMware](/anthos/clusters/docs/on-prem/latest/overview) or [on bare metal](/anthos/clusters/docs/bare-metal/latest/concepts/about-bare-metal).

If you are already familiar with the process of installing Kf on a GKE cluster in Google Cloud, the main differences for the on-premises procedure are:

- You do not have to install the Config Connector for an on-premises install.
- The on-premises procedure uses Docker credentials instead of Workload Identity.

Before you begin
----------------

### Google Distributed Cloud requirements

- A user cluster that meets [Cloud Service Mesh requirements](/service-mesh/docs/unified-install/prerequisites).
- Configured for logging and monitoring:
  - [VMware](/anthos/clusters/docs/on-prem/1.10/how-to/logging-and-monitoring).
  - [Bare metal](/anthos/clusters/docs/bare-metal/latest/how-to/log-monitoring).
- Registered to a fleet:

  [Go to GKE clusters](https://console.cloud.google.com/kubernetes/list/overview)

### Kf requirements

Review and understand the access permissions of components in Kf in the [Kf dependencies and architecture page](/migrate/kf/docs/2.9/concepts/kf-dependencies).

- [Cloud Service Mesh](/service-mesh/docs/unified-install/install).
- [Tekton](https://github.com/tektoncd/pipeline) for use by Kf.
  This is not a user-facing service.
- A dedicated Google Service Account.

Prepare a new on-premises cluster and related services
------------------------------------------------------

### Set up environment variables

**Note:** Export these environment variables in your terminal or Cloud Shell to make subsequent commands work. If you disconnect from your shell environment and reconnect later, you need to set up the variables again.

### Linux and Mac

```
export PROJECT_ID=YOUR_PROJECT_ID
export CLUSTER_PROJECT_ID=YOUR_PROJECT_ID
export CLUSTER_NAME=kf-cluster
export COMPUTE_ZONE=us-central1-a
export COMPUTE_REGION=us-central1
export CLUSTER_LOCATION=${COMPUTE_ZONE} # Replace ZONE with REGION to switch
export NODE_COUNT=4
export MACHINE_TYPE=e2-standard-4
export NETWORK=default
export DOCKER_SERVER=YOUR_DOCKER_SERVER_URL
export SA_NAME=${CLUSTER_NAME}-sa
export SA_EMAIL=${SA_NAME}@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com
```

### Windows PowerShell

```
Set-Variable -Name PROJECT_ID -Value YOUR_PROJECT_ID
Set-Variable -Name CLUSTER_PROJECT_ID -Value YOUR_PROJECT_ID
Set-Variable -Name CLUSTER_NAME -Value kf-cluster
Set-Variable -Name COMPUTE_ZONE -Value us-central1-a
Set-Variable -Name COMPUTE_REGION -Value us-central1
Set-Variable -Name CLUSTER_LOCATION -Value $COMPUTE_ZONE # Replace ZONE with REGION to switch
Set-Variable -Name NODE_COUNT -Value 4
Set-Variable -Name MACHINE_TYPE -Value e2-standard-4
Set-Variable -Name NETWORK -Value default
Set-Variable -Name DOCKER_SERVER -Value YOUR_DOCKER_SERVER_URL
Set-Variable -Name SA_NAME -Value ${CLUSTER_NAME}-sa
Set-Variable -Name SA_EMAIL -Value ${SA_NAME}@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com
```
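The last two variables in each block are derived from the ones above them. As a quick local sanity check of that derivation (`my-project` is a placeholder value, not a real project ID):

```shell
# Placeholder values stand in for your real project and cluster names.
CLUSTER_PROJECT_ID=my-project
CLUSTER_NAME=kf-cluster

# Derived the same way as in the export blocks above.
SA_NAME=${CLUSTER_NAME}-sa
SA_EMAIL=${SA_NAME}@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com

echo "${SA_EMAIL}"
# → kf-cluster-sa@my-project.iam.gserviceaccount.com
```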
### Set up service account

Create the Google Cloud service account (GSA) and service account key used for the builds to read/write from Container Registry. This step is different if you are using a different container registry because it could have a different way of obtaining the credentials to access the registry.

1. Create the service account used by Kf:

    ```
    gcloud beta iam service-accounts create ${SA_NAME} \
        --project=${CLUSTER_PROJECT_ID} \
        --description="gcr.io admin for ${CLUSTER_NAME}" \
        --display-name="${CLUSTER_NAME}"
    ```

2. Assign the service account the `storage.admin` role required to read/write from Container Registry:

    ```
    gcloud projects add-iam-policy-binding ${CLUSTER_PROJECT_ID} \
        --member="serviceAccount:${SA_NAME}@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
        --role="roles/storage.admin"
    ```

3. Create the service account key:

    ```
    temp_dir=$(mktemp -d)
    key_path=${temp_dir}/key.json
    gcloud iam service-accounts keys create --iam-account ${SA_EMAIL} ${key_path}
    key_json=$(cat ${key_path})
    rm -rf ${temp_dir}
    ```

Install software dependencies on cluster
----------------------------------------

1. Install Cloud Service Mesh.

    1. Follow the [Cloud Service Mesh install guide](/service-mesh/v1.11/docs/unified-install/install).
    2. After installing Cloud Service Mesh, you must create an ingress gateway using the [gateway install guide](/service-mesh/v1.11/docs/unified-install/install#install_gateways).
    3. If on Google Distributed Cloud, set the `loadBalancerIP` to an IP allocated to the cluster as described in [Configure external IP addresses for Google Distributed Cloud](/service-mesh/v1.11/docs/unified-install/external-ip-load-balance).

2. Install Tekton:

    ```
    kubectl apply -f "https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.32.1/release.yaml"
    ```
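The temporary-key pattern from step 3 of the service account setup above (write the key to a temp directory, capture it into a shell variable, then delete the file) can be exercised locally without `gcloud` by substituting a placeholder file for the real key; the JSON content below is illustrative only:

```shell
# Stand-in for the file that `gcloud iam service-accounts keys create`
# would write; a real key contains more fields.
temp_dir=$(mktemp -d)
key_path=${temp_dir}/key.json
printf '{"type": "service_account"}' > "${key_path}"

# Capture the key into a shell variable, then delete the file so the
# credential does not linger on disk.
key_json=$(cat "${key_path}")
rm -rf "${temp_dir}"

echo "${key_json}"
```

The point of the pattern is that only the `key_json` variable survives for later steps (the Docker secret creation), not a key file on disk.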
Install Kf
----------

1. Install the Kf CLI:

    ### Linux

    This command installs the Kf CLI for all users on the system. Follow the instructions in the Cloud Shell tab to install it just for yourself.

    ```
    gcloud storage cp gs://kf-releases/v2.9.0/kf-linux /tmp/kf
    chmod a+x /tmp/kf
    sudo mv /tmp/kf /usr/local/bin/kf
    ```

    ### Mac

    This command installs `kf` for all users on the system.

    ```
    gcloud storage cp gs://kf-releases/v2.9.0/kf-darwin /tmp/kf
    chmod a+x /tmp/kf
    sudo mv /tmp/kf /usr/local/bin/kf
    ```

    ### Cloud Shell

    This command installs `kf` on your Cloud Shell instance if you use `bash`; the instructions may need to be modified for other shells.

    ```
    mkdir -p ~/bin
    gcloud storage cp gs://kf-releases/v2.9.0/kf-linux ~/bin/kf
    chmod a+x ~/bin/kf
    echo "export PATH=$HOME/bin:$PATH" >> ~/.bashrc
    source ~/.bashrc
    ```

    ### Windows

    This command downloads `kf` to the current directory. Add it to the path if you want to call it from anywhere other than the current directory.

    ```
    gcloud storage cp gs://kf-releases/v2.9.0/kf-windows.exe kf.exe
    ```

2. Install the operator:

    ```
    kubectl apply -f "https://storage.googleapis.com/kf-releases/v2.9.0/operator.yaml"
    ```

3. Configure the operator for Kf:

    ```
    kubectl apply -f "https://storage.googleapis.com/kf-releases/v2.9.0/kfsystem.yaml"
    ```

Create a Kubernetes secret for Docker credentials
-------------------------------------------------

Create a Kubernetes secret in the Kf namespace for the Docker credentials you created above in [Service account setup](#service-account). Then patch the `subresource-apiserver` deployment with the Kubernetes secret for source uploads.

1. Enable and update the Kf operator to use Container Registry as the container registry.

    **Note:** The command below uses `gcr.io/${CLUSTER_PROJECT_ID}` as the container registry. Alternatively, you can use your own custom container registry; modify the command as necessary.
    ```
    export CONTAINER_REGISTRY=gcr.io/${CLUSTER_PROJECT_ID}
    kubectl patch kfsystem kfsystem \
        --type='json' \
        -p="[{'op': 'replace', 'path': '/spec/kf', 'value': {'enabled': true, 'config': {'spaceContainerRegistry':'${CONTAINER_REGISTRY}'}}}]"
    ```

2. Verify the `kf` namespace has been created by the Kf operator. This might take a few minutes to complete.

    ```
    kubectl get namespace kf
    ```

3. Create a Kubernetes secret for use with Docker registries:

    ```
    export secret_name=kf-gcr-key-${RANDOM}
    kubectl -n kf create secret docker-registry ${secret_name} \
        --docker-username=_json_key --docker-server ${DOCKER_SERVER} \
        --docker-password="${key_json}"
    ```

4. Update the Kf operator to specify the secret containing Docker credentials:

    ```
    kubectl patch kfsystem kfsystem \
        --type='json' \
        -p="[{'op': 'replace', 'path': '/spec/kf', 'value': {'config': {'secrets':{'build':{'imagePushSecrets':'${secret_name}'}}}}}]"
    ```

Validate installation
---------------------

```
kf doctor --retries=20
```

**Note:** This command inspects the cluster to ensure Kf and its dependencies are installed and working. It runs up to 20 times to give the Kubernetes cluster time to reconcile, and should result in a series of `PASS` results. If the status for any component is `FAIL` after the retries, see the [Kf Troubleshooting](/migrate/kf/docs/2.9/support/troubleshooting) page.
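For reference, the `docker-registry` secret created earlier with `kubectl create secret docker-registry` is stored as a `.dockerconfigjson` payload in which the `auth` field is the base64 encoding of `username:password`. A rough local sketch of that shape, with placeholder values (the real payload is produced by `kubectl` itself and includes additional fields):

```shell
# Placeholder credentials; in the real flow these come from DOCKER_SERVER
# and the key_json variable captured during service account setup.
DOCKER_SERVER=gcr.io
key_json='{"type": "service_account"}'

# Docker clients expect auth = base64("username:password").
auth=$(printf '%s:%s' '_json_key' "${key_json}" | base64 | tr -d '\n')
dockerconfigjson=$(printf '{"auths":{"%s":{"username":"_json_key","auth":"%s"}}}' \
    "${DOCKER_SERVER}" "${auth}")

echo "${dockerconfigjson}"
```

This is why `--docker-username` is the literal string `_json_key` when pushing to Container Registry with a service account key: the entire key JSON acts as the password.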