This page shows you how to investigate and resolve GKE logging-related issues. If you need additional assistance, reach out to Cloud Customer Care.

Missing cluster logs in Cloud Logging
Verify logging is enabled in the project
List enabled services:
gcloud services list --enabled --filter="NAME=logging.googleapis.com"
The following output indicates that logging is enabled for the project:
NAME                    TITLE
logging.googleapis.com  Cloud Logging API
Optional: Check the logs in the Logs Viewer to determine who disabled the API and when:
protoPayload.methodName="google.api.serviceusage.v1.ServiceUsage.DisableService" protoPayload.response.services="logging.googleapis.com"
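The same audit-log query can also be run from the command line with `gcloud logging read`; a sketch, assuming Admin Activity audit logs are still retained in the project (the output fields shown are one reasonable choice, not the only option):

```shell
# Show who disabled the Logging API and when. Requires permission to
# read logs in the project (for example, roles/logging.viewer).
gcloud logging read \
  'protoPayload.methodName="google.api.serviceusage.v1.ServiceUsage.DisableService" AND protoPayload.response.services="logging.googleapis.com"' \
  --project=PROJECT_ID \
  --limit=5 \
  --format="table(timestamp, protoPayload.authenticationInfo.principalEmail)"
```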
If logging is disabled, enable logging:
gcloud services enable logging.googleapis.com
Verify logging is enabled on the cluster
List the clusters:
gcloud container clusters list \
    --project=PROJECT_ID \
    '--format=value(name,loggingConfig.componentConfig.enableComponents)' \
    --sort-by=name | column -t
Replace the following:
- PROJECT_ID: your Google Cloud project ID.
The output is similar to the following:
cluster-1  SYSTEM_COMPONENTS
cluster-2  SYSTEM_COMPONENTS;WORKLOADS
cluster-3
If the value for your cluster is empty, logging is disabled. For example, cluster-3 in this output has logging disabled.

Enable cluster logging if it is set to NONE:

gcloud container clusters update CLUSTER_NAME \
    --logging=SYSTEM,WORKLOAD \
    --location=COMPUTE_LOCATION
Replace the following:
- CLUSTER_NAME: the name of your cluster.
- COMPUTE_LOCATION: the Compute Engine location for your cluster.
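Because the `--format=value(...)` output is tab-separated, clusters with logging disabled can also be picked out programmatically. A minimal sketch, run here against simulated sample output rather than a live project:

```shell
# Simulated output of the `gcloud container clusters list` command above;
# the second (tab-separated) field is empty when logging is disabled.
sample_output="$(printf 'cluster-1\tSYSTEM_COMPONENTS\ncluster-2\tSYSTEM_COMPONENTS;WORKLOADS\ncluster-3\t')"

# Print any cluster whose logging component list is empty.
echo "$sample_output" | awk -F'\t' '$2 == "" { print $1 " has logging disabled" }'
```

In a live project, pipe the real `gcloud` output (without `| column -t`) into the same `awk` filter.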
Verify nodes in the node pools have Cloud Logging access scope
One of the following scopes is required for nodes to write logs to Cloud Logging:
https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/logging.admin
Check the scopes configured on each node pool in the cluster:
gcloud container node-pools list --cluster=CLUSTER_NAME \
    --format="table(name,config.oauthScopes)" \
    --location=COMPUTE_LOCATION
Replace the following:
- CLUSTER_NAME: the name of your cluster.
- COMPUTE_LOCATION: the Compute Engine location for your cluster.
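To interpret the listed scopes, check whether any one of the three qualifying scopes appears in a pool's scope list. A minimal sketch, using a hard-coded sample value in place of your pool's real output:

```shell
# Sample scope list for a pool that lacks a logging scope; substitute
# the oauthScopes value printed for your own node pool.
scopes="https://www.googleapis.com/auth/monitoring,https://www.googleapis.com/auth/devstorage.read_only"

# Any one of logging.write, cloud-platform, or logging.admin is sufficient.
if echo "$scopes" | grep -qE 'auth/(logging\.write|cloud-platform|logging\.admin)'; then
  echo "logging scope present"
else
  echo "logging scope missing"
fi
```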
If a node pool is missing the required scope, create a new node pool with the correct logging scope:

gcloud container node-pools create NODE_POOL_NAME \
    --cluster=CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --scopes="gke-default"

Replace the following:
- NODE_POOL_NAME: the name of the new node pool.
- CLUSTER_NAME: the name of your cluster.
- COMPUTE_LOCATION: the Compute Engine location for your cluster.

Then migrate your workloads from the old node pool to the newly created node pool and monitor the progress.
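The migration itself is typically done by cordoning and then draining the nodes of the old pool so that Pods reschedule onto the new pool. A sketch, assuming the old pool is named OLD_NODE_POOL_NAME:

```shell
# Cordon every node in the old pool so no new Pods are scheduled there.
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=OLD_NODE_POOL_NAME -o name); do
  kubectl cordon "$node"
done

# Drain the nodes to evict running Pods onto the new pool.
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=OLD_NODE_POOL_NAME -o name); do
  kubectl drain --ignore-daemonsets --delete-emptydir-data "$node"
done
```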
Identify and fix permissions issues with writing logs
GKE uses IAM service accounts that are attached to your nodes to
run system tasks like logging and monitoring. At a minimum, these node service accounts
must have the Kubernetes Engine Default Node Service Account
(roles/container.defaultNodeServiceAccount) role on your project. By default,
GKE uses the
Compute Engine default service account,
which is automatically created in your project, as the node service account.
If your organization enforces the
iam.automaticIamGrantsForDefaultServiceAccounts
organization policy constraint, the default Compute Engine service account in your project might
not automatically get the required permissions for GKE.
To identify the issue, check for 401 errors in the system logging workload in your cluster:

[[ $(kubectl logs -l k8s-app=fluentbit-gke -n kube-system -c fluentbit-gke | grep -cw "Received 401") -gt 0 ]] && echo "true" || echo "false"

If the output is true, then the system workload is experiencing 401 errors, which indicates a lack of permissions. If the output is false, skip the rest of these steps and try a different troubleshooting procedure.
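The check above counts whole-word matches of "Received 401" in the fluentbit-gke container logs. Its logic can be exercised locally with sample log lines in place of the `kubectl logs` output:

```shell
# Two simulated fluentbit-gke log lines, one containing a 401 error.
sample_logs='[2024/01/01 00:00:00] [warn] Received 401 from the metadata server
[2024/01/01 00:00:01] [info] flush chunk succeeded'

# Same pattern as the in-cluster check: count matches, report true/false.
[[ $(echo "$sample_logs" | grep -cw "Received 401") -gt 0 ]] && echo "true" || echo "false"
# → true
```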
Find the name of the service account that your nodes use:

Console
- Go to the Kubernetes clusters page:
- In the cluster list, click the name of the cluster that you want to inspect.
- Depending on the cluster mode of operation, do one of the following:
- For Autopilot mode clusters, in the Security section, find the Service account field.
- For Standard mode clusters, do the following:
- Click the Nodes tab.
- In the Node pools table, click a node pool name. The Node pool details page opens.
- In the Security section, find the Service account field.
If the value in the Service account field is default, your nodes use the Compute Engine default service account. If the value in this field is not default, your nodes use a custom service account. To grant the required role to a custom service account, see Use least privilege IAM service accounts.

gcloud
For Autopilot mode clusters, run the following command:

gcloud container clusters describe CLUSTER_NAME \
    --location=LOCATION \
    --flatten=autoscaling.autoprovisioningNodePoolDefaults.serviceAccount

For Standard mode clusters, run the following command:

gcloud container clusters describe CLUSTER_NAME \
    --location=LOCATION \
    --format="table(nodePools.name,nodePools.config.serviceAccount)"

If the output is default, your nodes use the Compute Engine default service account. If the output is not default, your nodes use a custom service account. To grant the required role to a custom service account, see Use least privilege IAM service accounts.
To grant the roles/container.defaultNodeServiceAccount role to the Compute Engine default service account, complete the following steps:

Console
- Go to the Welcome page:
- In the Project number field, click Copy to clipboard.
- Go to the IAM page:
- Click Grant access.
- In the New principals field, specify the following value:
  PROJECT_NUMBER-compute@developer.gserviceaccount.com
  Replace PROJECT_NUMBER with the project number that you copied.
- In the Select a role menu, select the Kubernetes Engine Default Node Service Account role.
- Click Save.
gcloud
- Find your Google Cloud project number:
  gcloud projects describe PROJECT_ID \
      --format="value(projectNumber)"
  Replace PROJECT_ID with your project ID. The output is similar to the following:
  12345678901
- Grant the roles/container.defaultNodeServiceAccount role to the Compute Engine default service account:
  gcloud projects add-iam-policy-binding PROJECT_ID \
      --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
      --role="roles/container.defaultNodeServiceAccount"
  Replace PROJECT_NUMBER with the project number from the previous step.
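To confirm that the binding was applied, you can list the members that hold the role on the project; a sketch using the policy-filtering flags of `gcloud projects get-iam-policy`:

```shell
# List members bound to the role; the Compute Engine default service
# account should appear in the output after the grant.
gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/container.defaultNodeServiceAccount" \
    --format="value(bindings.members)"
```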
Verify that Cloud Logging write API quotas have not been reached
Confirm that you have not reached API write quotas for Cloud Logging.
Go to the Quotas page in the Google Cloud console.
Filter the table by "Cloud Logging API".
Confirm that you have not reached any of the quotas.
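A quick end-to-end check of the write path is to send a test entry and read it back; a sketch using a throwaway log name (test-log here is an arbitrary choice, not a required name):

```shell
# Write a single test entry to an arbitrary log name.
gcloud logging write test-log "GKE logging write test" --project=PROJECT_ID

# Read it back; if writes are being rejected (for example, because a
# quota is exhausted), this returns nothing.
gcloud logging read 'logName:"test-log"' \
    --project=PROJECT_ID \
    --limit=1 \
    --freshness=10m
```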
Debugging GKE logging issues with gcpdiag
If you are missing logs or getting incomplete logs from your GKE cluster, use the gcpdiag tool for troubleshooting.

gcpdiag is an open source tool. It is not an officially supported Google Cloud product. You can use the gcpdiag tool to help you identify and fix Google Cloud project issues. For more information, see the gcpdiag project on GitHub.

The gke/logs runbook performs the following checks:
- Project-Level Logging: Ensures that the Google Cloud project housing the GKE cluster has the Cloud Logging API enabled.
- Cluster-Level Logging: Verifies that logging is explicitly enabled within the configuration of the GKE cluster.
- Node Pool Permissions: Confirms that the nodes within the cluster's node pools have the 'Cloud Logging Write' scope enabled, allowing them to send log data.
- Service Account Permissions: Validates that the service account used by the node pools possesses the necessary IAM permissions to interact with Cloud Logging. Specifically, the 'roles/logging.logWriter' role is typically required.
- Cloud Logging API Write Quotas: Verifies that Cloud Logging API Write quotas have not been exceeded within the specified timeframe.
Google Cloud console
- Complete and then copy the following command.
- Open the Google Cloud console and activate Cloud Shell. Open Cloud console
- Paste the copied command.
- Run the gcpdiag command, which downloads the gcpdiag Docker image, and then performs diagnostic checks. If applicable, follow the output instructions to fix failed checks.
gcpdiag runbook gke/logs \
--parameter project_id=PROJECT_ID \
--parameter name=GKE_NAME \
--parameter location=LOCATION
Docker
You can run gcpdiag using a wrapper that starts gcpdiag in a Docker container. Docker or Podman must be installed.
- Copy and run the following command on your local workstation.
curl https://gcpdiag.dev/gcpdiag.sh >gcpdiag && chmod +x gcpdiag
- Execute the gcpdiag command:
  ./gcpdiag runbook gke/logs \
      --parameter project_id=PROJECT_ID \
      --parameter name=GKE_NAME \
      --parameter location=LOCATION
View available parameters for this runbook.
Replace the following:
- PROJECT_ID: The ID of the project containing the resource.
- GKE_NAME: The name of the GKE cluster.
- LOCATION: The zone or region of the GKE cluster.
Useful flags:
- --universe-domain: If applicable, the Trusted Partner Sovereign Cloud domain hosting the resource.
- --parameter or -p: Runbook parameters.
For a list and description of all gcpdiag tool flags, see the gcpdiag usage instructions.