Last updated (UTC): 2025-08-08.

# Attach your CNCF conformant cluster

| **Note:** If you're attaching an Amazon EKS or Azure AKS cluster, see [Attach your EKS cluster](/kubernetes-engine/multi-cloud/docs/attached/eks/how-to/attach-cluster) or [Attach your AKS cluster](/kubernetes-engine/multi-cloud/docs/attached/aks/how-to/attach-cluster). For instructions on using the *previous generation* of GKE attached clusters, see [Attach third-party clusters](/kubernetes-engine/multi-cloud/docs/attached/previous-generation/how-to/attach-kubernetes-clusters). Be aware that the previous generation product is in maintenance mode: it won't receive new features, fixes, or support, so its continued use is discouraged.

With GKE attached clusters, you can bring your existing Kubernetes
clusters (whether they're hosted on AWS, Azure, or elsewhere)
into the Google Kubernetes Engine (GKE) Enterprise edition dashboard for
centralized management. This includes the ability to attach any CNCF
conformant Kubernetes cluster.

Supported Kubernetes clusters
-----------------------------

You can add any
[conformant Kubernetes cluster](https://www.cncf.io/certification/software-conformance/)
with x86 nodes to your fleet, and then view it in the Google Cloud console
alongside your GKE clusters.

While Google doesn't verify every Kubernetes distribution for complete feature
compatibility, any discovered incompatibilities are documented here.
For
further details and troubleshooting assistance, refer to
[Google Kubernetes Engine (GKE) Enterprise edition version and upgrade support](/anthos/docs/version-and-upgrade-support#anthos).

Prerequisites
-------------

Ensure that your cluster meets the [cluster requirements](/kubernetes-engine/multi-cloud/docs/attached/generic/reference/cluster-prerequisites).

When attaching your cluster, you must specify the following:

- A supported Google Cloud [administrative region](/kubernetes-engine/multi-cloud/docs/attached/generic/reference/supported-regions)
- A platform version

The administrative region is the Google Cloud region from which you administer
your attached cluster. You can choose any supported region, but as a best
practice, choose the region geographically closest to your cluster. No user
data is stored in the administrative region.

The platform version is the version of GKE attached clusters to be installed on
your cluster. You can list all supported versions by running the following
command:

    gcloud container attached get-server-config \
        --location=<var translate="no">GOOGLE_CLOUD_REGION</var>

Replace <var translate="no">GOOGLE_CLOUD_REGION</var> with the name of the
Google Cloud location from which you administer your cluster.

Platform version numbering
--------------------------

These documents refer to the GKE attached clusters version as the platform
version, to distinguish it from the Kubernetes version. GKE attached clusters
uses the same version numbering convention as GKE, for example `1.21.5-gke.1`.
When attaching or updating your cluster, you must choose a platform version
whose minor version is the same as, or one level below, the Kubernetes version
of your cluster.
For
example, you can attach a cluster running Kubernetes v1.22.\* with
GKE attached clusters platform version 1.21.\* or 1.22.\*.

This lets you upgrade your cluster to the next minor version before upgrading
GKE attached clusters.

Attach your cluster
-------------------

| **Note:** The default number of clusters that you can attach per project is 50. To increase this quota, contact [Google Cloud support](/support).

To attach your CNCF conformant cluster to Google Cloud
[Fleet management](/anthos/fleet-management/docs),
run the following commands:

1. Ensure that your kubeconfig file has an entry for the cluster you'd like
   to attach. Specific instructions vary by distribution.

2. Run this command to extract your cluster's kubeconfig context and
   store it in the `KUBECONFIG_CONTEXT` environment variable:

        KUBECONFIG_CONTEXT=$(kubectl config current-context)

3. The command to register your cluster varies slightly depending on whether
   your cluster has a public or private OIDC issuer. Choose the tab that
   applies to your cluster:

   ### Private OIDC issuer

   Use the
   [`gcloud container attached clusters register` command](/sdk/gcloud/reference/container/attached/clusters/register)
   to register the cluster:

        gcloud container attached clusters register <var translate="no">CLUSTER_NAME</var> \
            --location=<var translate="no">GOOGLE_CLOUD_REGION</var> \
            --fleet-project=<var translate="no">PROJECT_NUMBER</var> \
            --platform-version=<var translate="no">PLATFORM_VERSION</var> \
            --distribution=generic \
            --context=<var translate="no">KUBECONFIG_CONTEXT</var> \
            --has-private-issuer \
            --kubeconfig=<var translate="no">KUBECONFIG_PATH</var>

   Replace the following:

   - <var translate="no">CLUSTER_NAME</var>: the name of your cluster.
The <var translate="no">CLUSTER_NAME</var> must be compliant with the
     [RFC 1123 Label Names standard](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names).
   - <var translate="no">GOOGLE_CLOUD_REGION</var>: the Google Cloud region from which you administer your cluster
   - <var translate="no">PROJECT_NUMBER</var>: the fleet host project to register the cluster with
   - <var translate="no">PLATFORM_VERSION</var>: the platform version to use for the cluster
   - <var translate="no">KUBECONFIG_CONTEXT</var>: the context in the kubeconfig for accessing the cluster
   - <var translate="no">KUBECONFIG_PATH</var>: the path to your kubeconfig

   ### Public OIDC issuer

   1. Retrieve your cluster's OIDC issuer URL and save it for later use.
      Specific instructions vary by distribution.

   2. Run this command to extract your cluster's kubeconfig context and
      store it in the `KUBECONFIG_CONTEXT` environment variable:

           KUBECONFIG_CONTEXT=$(kubectl config current-context)

   3. Use the
      [`gcloud container attached clusters register` command](/sdk/gcloud/reference/container/attached/clusters/register)
      to register the cluster:

           gcloud container attached clusters register <var translate="no">CLUSTER_NAME</var> \
               --location=<var translate="no">GOOGLE_CLOUD_REGION</var> \
               --fleet-project=<var translate="no">PROJECT_NUMBER</var> \
               --platform-version=<var translate="no">PLATFORM_VERSION</var> \
               --distribution=generic \
               --issuer-url=<var translate="no">ISSUER_URL</var> \
               --context=<var translate="no">KUBECONFIG_CONTEXT</var> \
               --kubeconfig=<var translate="no">KUBECONFIG_PATH</var>

      Replace the following:

      - <var translate="no">CLUSTER_NAME</var>: the name of your cluster. The <var translate="no">CLUSTER_NAME</var> must be compliant with the [RFC 1123 Label Names standard](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names).
      - <var translate="no">GOOGLE_CLOUD_REGION</var>: the Google Cloud region from which you administer your cluster
      - <var translate="no">PROJECT_NUMBER</var>: the fleet host project to register the cluster with
      - <var translate="no">PLATFORM_VERSION</var>: the GKE attached clusters platform version to use for the cluster
      - <var translate="no">ISSUER_URL</var>: the issuer URL retrieved earlier
      - <var translate="no">KUBECONFIG_CONTEXT</var>: the context in the kubeconfig for accessing your cluster, as extracted earlier
      - <var translate="no">KUBECONFIG_PATH</var>: the path to your kubeconfig

Authorize Cloud Logging / Cloud Monitoring
------------------------------------------

| **Note:** Starting with GKE Enterprise version 1.28, manually binding a policy to authorize the `gke-system/gke-telemetry-agent` service account for log and metric collection is no longer necessary: the required permissions are granted to this service account automatically. You can therefore skip this section.

Before GKE attached clusters can create and upload system logs and metrics to
Google Cloud, it must be authorized to do so.

To authorize the Kubernetes workload identity `gke-system/gke-telemetry-agent`
to write logs to Google Cloud Logging and metrics to Google Cloud Monitoring,
run this command:

    gcloud projects add-iam-policy-binding <var translate="no">GOOGLE_PROJECT_ID</var> \
        --member="serviceAccount:<var translate="no">GOOGLE_PROJECT_ID</var>.svc.id.goog[gke-system/gke-telemetry-agent]" \
        --role=roles/gkemulticloud.telemetryWriter

Replace <var translate="no">GOOGLE_PROJECT_ID</var> with the cluster's Google Cloud project ID.

This IAM binding grants all clusters in the Google Cloud project access to
upload logs and metrics. You only need to run it once, after creating the
first cluster in the project.

Adding this IAM binding will fail until at least one cluster has been created
in your Google Cloud project, because the workload identity pool it refers to
(<var translate="no">GOOGLE_PROJECT_ID</var>`.svc.id.goog`) is not provisioned
until cluster creation.
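
As a convenience, the attach flow for a public-issuer cluster can be collected into a single script. This is a minimal sketch, not part of the product documentation: every value below (cluster name, region, project number, platform version, issuer URL, context) is a hypothetical placeholder that you must replace with your own cluster's details, and the script only composes and prints the register command so you can review the flags before running it.

```shell
#!/usr/bin/env bash
# Sketch of the public-OIDC-issuer attach flow. All values are
# hypothetical placeholders -- substitute your own before running.
set -u

CLUSTER_NAME="my-attached-cluster"        # must be an RFC 1123 label
GOOGLE_CLOUD_REGION="us-west1"            # supported administrative region
PROJECT_NUMBER="123456789012"             # fleet host project
PLATFORM_VERSION="1.28.0-gke.1"           # pick one from get-server-config
ISSUER_URL="https://issuer.example.com"   # your cluster's OIDC issuer URL
KUBECONFIG_PATH="$HOME/.kube/config"
KUBECONFIG_CONTEXT="my-context"           # normally $(kubectl config current-context)

# Compose the register command instead of executing it, so the flags
# can be reviewed first; remove the echo (or pipe to bash) to run it.
CMD="gcloud container attached clusters register $CLUSTER_NAME \
  --location=$GOOGLE_CLOUD_REGION \
  --fleet-project=$PROJECT_NUMBER \
  --platform-version=$PLATFORM_VERSION \
  --distribution=generic \
  --issuer-url=$ISSUER_URL \
  --context=$KUBECONFIG_CONTEXT \
  --kubeconfig=$KUBECONFIG_PATH"

echo "$CMD"
```

Printing the command before running it is purely a review step; once the placeholders hold real values, executing the printed command performs the registration described above.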