[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-01。"],[],[],null,["# Setting up Knative serving\n\nLearn how to setup and configure your installation of Knative serving.\n\nBefore you begin\n----------------\n\nYou must have Knative serving installed on your GKE cluster. See the\n[installation guide](/kubernetes-engine/enterprise/knative-serving/docs/install) for details about\nGKE cluster prerequisites and how to install Knative serving.\n\nSetting up authentication with Workload Identity Federation for GKE\n-------------------------------------------------------------------\n\nYou can use Workload Identity Federation for GKE to authenticate your Knative serving services\nto Google Cloud APIs and services. You must set up Workload Identity Federation for GKE\nbefore you deploy services to your cluster, otherwise each service that exists on\nyour cluster prior to enabling Workload Identity Federation for GKE needs to be migrated.\n[Learn more about using Workload Identity Federation for GKE](/kubernetes-engine/enterprise/knative-serving/docs/securing/workload-identity).\n\n### Enabling metrics with Workload Identity Federation for GKE\n\nTo enable metrics, like reporting request count or request latency to\nGoogle Cloud Observability, you need to manually set write permissions for\nCloud Monitoring. For details, see\n[Enabling metrics with Workload Identity Federation for GKE](/kubernetes-engine/enterprise/knative-serving/docs/securing/workload-identity#enabling_all_metrics_with_workload_identity).\n\nConfiguring HTTPS and custom domains\n------------------------------------\n\nTo enable HTTPS and set a custom domain, see the following pages:\n\n- [Using managed TLS certificates and HTTPS](/kubernetes-engine/enterprise/knative-serving/docs/managed-tls)\n- [Mapping custom domains](/kubernetes-engine/enterprise/knative-serving/docs/mapping-custom-domains)\n\nSetting up Cloud Service Mesh\n-----------------------------\n\nTo configure Cloud Service Mesh options for Knative serving, see the\n[In-cluster control plane options](/service-mesh/v1.18/docs/unified-install/options/all-install-options),\nincluding how to\n[set up a private, internal network](/service-mesh/v1.18/docs/unified-install/options/enable-optional-features#enable_an_internal_load_balancer).\n| **Note:** The [Google-managed control plane](/service-mesh/docs/supported-features-mcp) is currently not fully supported by Knative serving.\n\n### Setting up a private, internal network\n\nDeploying services on an internal network is useful for enterprises\nthat provide internal apps to their staff, and for services that are used by\nclients that run outside the Knative serving cluster. This configuration\nallows other resources in your network to communicate with the service using a\nprivate, internal\n([RFC 1918](https://tools.ietf.org/html/rfc1918))\nIP address that can't be accessed by the public.\n\nTo create your internal network, you configure Cloud Service Mesh to use\n[Internal TCP/UDP Load Balancing](/kubernetes-engine/docs/how-to/internal-load-balancing)\ninstead of a public, external network load balancer. 
Configuring HTTPS and custom domains
------------------------------------

To enable HTTPS and set a custom domain, see the following pages:

- [Using managed TLS certificates and HTTPS](/kubernetes-engine/enterprise/knative-serving/docs/managed-tls)
- [Mapping custom domains](/kubernetes-engine/enterprise/knative-serving/docs/mapping-custom-domains)

Setting up Cloud Service Mesh
-----------------------------

To configure Cloud Service Mesh options for Knative serving, see the
[In-cluster control plane options](/service-mesh/v1.18/docs/unified-install/options/all-install-options),
including how to
[set up a private, internal network](/service-mesh/v1.18/docs/unified-install/options/enable-optional-features#enable_an_internal_load_balancer).

| **Note:** The [Google-managed control plane](/service-mesh/docs/supported-features-mcp) is currently not fully supported by Knative serving.

### Setting up a private, internal network

Deploying services on an internal network is useful for enterprises
that provide internal apps to their staff, and for services that are used by
clients that run outside the Knative serving cluster. This configuration
allows other resources in your network to communicate with the service using a
private, internal
([RFC 1918](https://tools.ietf.org/html/rfc1918))
IP address that can't be accessed by the public.

To create your internal network, you configure Cloud Service Mesh to use
[Internal TCP/UDP Load Balancing](/kubernetes-engine/docs/how-to/internal-load-balancing)
instead of a public, external network load balancer. You can then deploy your
Knative serving services on an internal IP address within your
[VPC network](/vpc/docs/vpc).

**Before you begin**

- You must have `admin` permissions on your cluster.
- If you configured a custom domain, you must
  [disable the managed TLS feature](/kubernetes-engine/enterprise/knative-serving/docs/managed-tls#disable-for-domain)
  because managed TLS on Knative serving is currently unsupported by the internal load balancer.
- Only Google Cloud CLI versions 310.0 or later are supported. For details about setting up your
  command-line tools, see:
  - [GKE clusters on Google Cloud](/kubernetes-engine/enterprise/knative-serving/docs/install/on-gcp/command-line-tools)
  - [GKE clusters outside Google Cloud](/kubernetes-engine/enterprise/knative-serving/docs/install/outside-gcp/command-line-tools)

To set up the internal load balancer:

1. Enable the internal load balancer feature in Cloud Service Mesh.

   The internal load balancer is an optional feature that you can configure
   during the installation of Cloud Service Mesh, or by updating your existing
   installation.

   Follow the steps in
   [Enabling optional features on the in-cluster control plane](/service-mesh/v1.18/docs/unified-install/options/enable-optional-features)
   and make sure to include the `--option internal-load-balancer` script option.

   When you specify the `--option internal-load-balancer` option, the script
   automatically fetches the
   [Enable an internal load balancer](/service-mesh/v1.18/docs/unified-install/options/enable-optional-features#enable_an_internal_load_balancer)
   custom resource from GitHub. If you need to modify the custom resource,
   follow the instructions for using the `--custom_overlay` option instead
   (a sketch of such an overlay appears after these steps).

2. Run the following command to watch updates to your GKE cluster:

   ```bash
   kubectl -n INGRESS_NAMESPACE get svc istio-ingressgateway --watch
   ```

   Replace `INGRESS_NAMESPACE` with the namespace of your Cloud Service Mesh
   ingress service. Specify `istio-system` if you installed Cloud Service Mesh
   using its default configuration.

   1. Note the annotation `cloud.google.com/load-balancer-type: Internal`.
   2. Watch for the `IP` value of the ingress load balancer to change to a
      [private IP address](/vpc/docs/ip-addresses).
   3. Press `Ctrl+C` to stop the updates once you see a private IP address in
      the `IP` field.

3. For [private clusters](/kubernetes-engine/docs/concepts/private-cluster-concept)
   on Google Cloud, you must open ports. For details, see
   [opening ports on your private cluster](/service-mesh/v1.18/docs/private-cluster-open-port)
   in the Cloud Service Mesh documentation.
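If you do modify the custom resource, the overlay that enables the internal load balancer is conceptually a small `IstioOperator` patch that adds the `cloud.google.com/load-balancer-type: Internal` annotation to the ingress gateway service. The following is a hedged sketch only: the filename `ilb-overlay.yaml` and the exact installation-script invocation are illustrative assumptions, and the published custom resource on GitHub may differ by Cloud Service Mesh version.

```bash
# Sketch only: write an illustrative overlay and pass it to the installation
# script. Prefer the version-specific resource from GitHub when in doubt.
cat > ilb-overlay.yaml <<'EOF'
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          serviceAnnotations:
            # Provision an internal (RFC 1918) load balancer for the ingress
            # gateway service instead of an external one.
            cloud.google.com/load-balancer-type: "Internal"
EOF

# Pass the overlay in place of --option internal-load-balancer, together with
# your usual installation flags (project, cluster name, cluster location).
./asmcli install --custom_overlay ilb-overlay.yaml
```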
To verify internal connectivity after your changes:

1. Deploy a service called `sample` to Knative serving in the
   `default` namespace:

   ```bash
   gcloud run deploy sample \
     --image gcr.io/knative-samples/helloworld \
     --namespace default \
     --platform gke
   ```

2. Create a Compute Engine virtual machine (VM) in the same zone as the
   GKE cluster:

   ```bash
   VM=cloudrun-gke-ilb-tutorial-vm

   gcloud compute instances create $VM
   ```

3. Store the private IP address of the Istio ingress gateway in an environment
   variable called `EXTERNAL_IP` and a file called `external-ip.txt`:

   ```bash
   export EXTERNAL_IP=$(kubectl -n INGRESS_NAMESPACE get svc istio-ingressgateway \
     -o jsonpath='{.status.loadBalancer.ingress[0].ip}' | tee external-ip.txt)
   ```

   Replace `INGRESS_NAMESPACE` with the namespace of your Cloud Service Mesh
   ingress service. Specify `istio-system` if you installed Cloud Service Mesh
   using its default configuration.

4. Copy the file containing the IP address to the VM:

   ```bash
   gcloud compute scp external-ip.txt $VM:~
   ```

5. Connect to the VM using SSH:

   ```bash
   gcloud compute ssh $VM
   ```

6. While in the SSH session, test the sample service:

   ```bash
   curl -s -w'\n' -H Host:sample.default.nip.io $(cat external-ip.txt)
   ```

   The output is as follows:

   ```
   Hello World!
   ```

7. Leave the SSH session:

   ```bash
   exit
   ```

Setting up a multi-tenant environment
-------------------------------------

In multi-tenant use cases, you need to manage and deploy Knative serving
services to a Google Kubernetes Engine cluster that is outside your current
project. For more information about GKE multi-tenancy, see
[Cluster multi-tenancy](/kubernetes-engine/docs/concepts/multitenancy-overview).

To learn how to configure multi-tenancy for Knative serving, see
[Cross-project multi-tenancy](/kubernetes-engine/enterprise/knative-serving/docs/multi-tenancy).

What's next
-----------

- [Building containers](/kubernetes-engine/enterprise/knative-serving/docs/building/containers)
- [Deploying containers](/kubernetes-engine/enterprise/knative-serving/docs/deploying)
- [Troubleshooting](/kubernetes-engine/enterprise/knative-serving/docs/troubleshooting)