# Identify where latency occurs

| **Note:** [OpenCensus is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/). We recommend using OpenTelemetry to capture and visualize Spanner observability metrics. For more information, see [Capture custom client-side metrics using OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry).

This page describes how to troubleshoot Spanner components to find the
source of the latency. To learn more about possible latency points in a
Spanner request, see
[Latency points in a Spanner request](/spanner/docs/latency-points).
1. In the client application that's affecting your service, confirm there's a
   latency increase in client round-trip latency. Check the following
   dimensions from your client-side metrics:

   - Client Application Name
   - Client locality (for example, Compute Engine VM zones) and Host (that
     is, VM names)
   - Spanner API method
   - Spanner API status

   Group by these dimensions to see if the issue is limited to a specific
   client, status, or method. For dual-region or multi-regional workloads,
   check whether the issue is limited to a specific client or Spanner
   region.
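   For example, here is a minimal sketch of how you might tag a client-side
   round-trip latency histogram with these dimensions using the
   OpenTelemetry Python SDK. The attribute names and the `timed_call` helper
   are illustrative assumptions, not part of the Spanner client libraries:

   ```python
   import time

   from opentelemetry import metrics
   from opentelemetry.sdk.metrics import MeterProvider

   # In production, configure the MeterProvider with an exporter or reader;
   # this bare provider is only enough to illustrate the attribute tagging.
   metrics.set_meter_provider(MeterProvider())
   meter = metrics.get_meter("spanner-client-metrics")

   round_trip_ms = meter.create_histogram(
       name="client_round_trip_latencies",
       unit="ms",
       description="Client-observed round-trip latency per Spanner API call",
   )

   def timed_call(do_call, api_method, app_name, zone, host):
       """Run one Spanner API call and record its latency with the dimensions."""
       start = time.monotonic()
       status = "OK"
       try:
           return do_call()
       except Exception as exc:
           status = type(exc).__name__  # placeholder for a real status mapping
           raise
       finally:
           round_trip_ms.record(
               (time.monotonic() - start) * 1000.0,
               attributes={
                   "client_application": app_name,
                   "client_zone": zone,    # for example, a Compute Engine VM zone
                   "client_host": host,    # for example, the VM name
                   "spanner_api_method": api_method,
                   "spanner_api_status": status,
               },
           )
   ```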
2. Check your client application health, especially the computing
   infrastructure on the client side (for example, VM, CPU, or memory
   utilization, connections, and file descriptors).
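   As a quick, hedged illustration, the following snapshot of the client
   host uses the third-party `psutil` package (an assumption, not something
   the Spanner libraries provide); the fields map to the checks above:

   ```python
   import psutil  # third-party: pip install psutil

   def client_health_snapshot():
       """Return a coarse snapshot of client-side resource usage."""
       proc = psutil.Process()
       return {
           "cpu_percent": psutil.cpu_percent(interval=1),
           "memory_percent": psutil.virtual_memory().percent,
           "open_connections": len(proc.connections(kind="inet")),
           "open_file_descriptors": proc.num_fds(),  # POSIX only
       }

   for key, value in client_health_snapshot().items():
       print(f"{key}: {value}")
   ```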
3. Check latency in Spanner components:

   a. Check client round-trip latency [with OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry#capture-client-round-trip-latency)
   or [with OpenCensus](/spanner/docs/capture-visualize-latency-opencensus#capture_and_visualize_client_round-trip_latency).

   b. Check Google Front End (GFE) latency [with OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry#capture-gfe-latency)
   or [with OpenCensus](/spanner/docs/capture-visualize-latency-opencensus#capture_and_visualize_gfe_latency).

   c. Check Spanner API request latency [with OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry#capture-spanner-api-request-latency)
   or [with OpenCensus](/spanner/docs/capture-visualize-latency-opencensus#capture_and_visualize_api_request_latency).

   If client round-trip latency is high but GFE latency and Spanner API
   request latency are both low, the application code might have an issue,
   or there might be a networking issue between the client and the regional
   GFE. If your application has a performance issue that makes some code
   paths slow, the client round-trip latency for each API request can
   increase. There might also be an issue in the client computing
   infrastructure that wasn't detected in the previous step.
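   If your client library doesn't surface GFE latency directly, one way to
   approximate it is to parse the `server-timing` response header that
   Spanner returns, which carries a value like `gfet4t7; dur=123`
   (milliseconds). This is a minimal sketch; how you obtain the raw headers
   (for example, via a gRPC interceptor) depends on your client, and the
   `headers` dict below is an assumed input:

   ```python
   import re

   _GFE_TIMING = re.compile(r"gfet4t7;\s*dur=(\d+(?:\.\d+)?)")

   def gfe_latency_ms(headers):
       """Return GFE latency in milliseconds, or None if the header is absent."""
       match = _GFE_TIMING.search(headers.get("server-timing", ""))
       return float(match.group(1)) if match else None

   print(gfe_latency_ms({"server-timing": "gfet4t7; dur=42"}))  # 42.0
   ```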
4. Check the following dimensions for
   [Spanner metrics](/spanner/docs/latency-metrics):

   - Spanner Database Name
   - Spanner API method
   - Spanner API status

   Group by these dimensions to see if the issue is limited to a specific
   database, status, or method. For dual-region or multi-regional workloads,
   check whether the issue is limited to a specific region.
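   For example, here is a sketch of grouping the Spanner API request latency
   metric by these dimensions with the Cloud Monitoring Python client
   (`google-cloud-monitoring`). `PROJECT_ID` is a placeholder, the one-hour
   window and five-minute alignment are arbitrary choices, and you should
   verify the label names against the documented Spanner metric labels:

   ```python
   import time

   from google.cloud import monitoring_v3

   client = monitoring_v3.MetricServiceClient()
   now = int(time.time())
   interval = monitoring_v3.TimeInterval(
       {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
   )
   aggregation = monitoring_v3.Aggregation(
       {
           "alignment_period": {"seconds": 300},
           "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_PERCENTILE_99,
           "cross_series_reducer": monitoring_v3.Aggregation.Reducer.REDUCE_MEAN,
           # Assumed label names; check the metric's documented labels.
           "group_by_fields": [
               "metric.label.database",
               "metric.label.method",
               "metric.label.status",
           ],
       }
   )
   series = client.list_time_series(
       request={
           "name": "projects/PROJECT_ID",
           "filter": 'metric.type = "spanner.googleapis.com/api/request_latencies"',
           "interval": interval,
           "aggregation": aggregation,
           "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
       }
   )
   for ts in series:
       print(dict(ts.metric.labels), ts.points[0].value.double_value)
   ```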
   If GFE latency is high but Spanner API request latency is low, it
   might have one of the following causes:

   - Accessing a database from another region. This can lead to high GFE
     latency and low Spanner API request latency. For example, traffic from
     a client in the `us-east1` region to an instance in the `us-central1`
     region might show high GFE latency but lower Spanner API request
     latency.

   - There's an issue at the GFE layer. Check the
     [Google Cloud Status Dashboard](https://status.cloud.google.com/) to
     see if there are any ongoing networking issues in your region. If there
     aren't, open a support case and include this information so that
     support engineers can help troubleshoot the GFE.
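   To confirm the cross-region case, you can list the regions that host the
   instance's replicas and compare them with your client's location (for
   example, the VM zone from step 1). Here is a sketch using the instance
   admin client from `google-cloud-spanner`; the resource names are
   placeholders:

   ```python
   from google.cloud import spanner_admin_instance_v1

   admin = spanner_admin_instance_v1.InstanceAdminClient()
   instance = admin.get_instance(
       name="projects/PROJECT_ID/instances/INSTANCE_ID"  # placeholder
   )
   config = admin.get_instance_config(name=instance.config)
   for replica in config.replicas:
       # Compare these locations with the client's region.
       print(replica.location, replica.type_.name)
   ```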
5. [Check the CPU utilization of the instance](/spanner/docs/cpu-utilization).
   If the CPU utilization of the instance is above the recommended level,
   manually add more nodes or set up autoscaling. For more information, see
   [Autoscaling overview](/spanner/docs/autoscaling-overview).

6. Observe and troubleshoot potential hotspots or unbalanced access patterns
   using [Key Visualizer](/spanner/docs/key-visualizer), and try to roll
   back any application code changes that strongly correlate with the issue
   timeframe.

   | **Note:** We recommend that you follow [Schema design best practices](/spanner/docs/schema-design) to ensure your access is balanced across Spanner computing resources.
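   As one common example of what Key Visualizer surfaces: writes keyed by a
   monotonically increasing value (a timestamp or sequence) concentrate on a
   single key range, while a random UUIDv4 key spreads the load. This sketch
   uses the Spanner Python client; the `Events` table and its columns are
   hypothetical:

   ```python
   import uuid

   from google.cloud import spanner

   client = spanner.Client(project="PROJECT_ID")  # placeholder project
   database = client.instance("INSTANCE_ID").database("DATABASE_ID")

   with database.batch() as batch:
       batch.insert(
           table="Events",  # hypothetical table
           columns=("EventId", "Payload"),
           # A random UUID key avoids the hotspot that a timestamp- or
           # sequence-based key would create.
           values=[(str(uuid.uuid4()), "example payload")],
       )
   ```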
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-28 UTC."],[],[],null,["# Identify where latency occurs\n\n| **Note:** [OpenCensus is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/). We recommend using OpenTelemetry to capture and visualize Spanner observability metrics. For more information, see [Capture custom client-side metrics using OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry).\n\nThis page describes how to troubleshoot Spanner components to find the\nsource of the latency. To learn more about possible latency points in a\nSpanner request, see\n[Latency points in a Spanner request](/spanner/docs/latency-points).\n\n1. In your client application that affects your service, confirm there's a\n latency increase from client round-trip latency. Check the following dimensions\n from your client-side metrics.\n\n - Client Application Name\n - Client locality (for example, Compute Engine VM zones) and Host (that is, VM names)\n - Spanner API method\n - Spanner API status\n\n Group by these dimensions to see if the issue is limited to a specific\n client, status, or method. For dual-region or multi-regional workloads, see\n if the issue is limited to a specific client or Spanner region.\n2. Check your client application health, especially the computing\n infrastructure on the client side (for example, VM, CPU, or memory\n utilization, connections, file descriptors, and so on).\n\n3. Check latency in Spanner components:\n\n a. Check client round-trip latency [with OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry#capture-client-round-trip-latency)\n or [with OpenCensus](/spanner/docs/capture-visualize-latency-opencensus#capture_and_visualize_client_round-trip_latency).\n\n b. Check Google Front End (GFE) latency [with OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry#capture-gfe-latency)\n or [with OpenCensus](/spanner/docs/capture-visualize-latency-opencensus#capture_and_visualize_gfe_latency).\n\n c. Check Spanner API request latency [with OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry#capture-spanner-api-request-latency)\n or [with OpenCensus](/spanner/docs/capture-visualize-latency-opencensus#capture_and_visualize_api_request_latency).\n\n If you have high client round-trip latency, but low GFE latency, and a low\n Spanner API request latency, the application code might\n have an issue. It could also indicate a networking issue between the client\n and regional GFE. If your application has a performance issue that causes\n some code paths to be slow, then the client round-trip latency for each API\n request might increase. There might also be an issue in the client computing\n infrastructure that was not detected in the previous step.\n4. Check the following dimensions for\n [Spanner metrics](/spanner/docs/latency-metrics):\n\n - Spanner Database Name\n - Spanner API method\n - Spanner API status\n\n Group by these dimensions to see if the issue is limited to a specific\n database, status, or method. 
For dual-region or multi-regional workloads,\n check to see if the issue is limited to a specific region.\n\n If you have a high GFE latency, but a low Spanner API request\n latency, it might have one of the following causes:\n - Accessing a database from another region. This action can lead to high GFE\n latency and low Spanner API request latency. For example,\n traffic from a client in the `us-east1` region that has an instance in the\n `us-central1` region might have a high GFE latency but a lower\n Spanner API request latency.\n\n - There's an issue at the GFE layer. Check the [Google Cloud Status Dashboard](https://status.cloud.google.com/)\n to see if there are any ongoing networking issues in your region. If there\n aren't any issues, then open a support case and include this information so\n that support engineers can help with troubleshooting the GFE.\n\n5. [Check the CPU utilization of the instance](/spanner/docs/cpu-utilization).\n If the CPU utilization of the instance is above the recommended level, you\n should manually add more nodes, or set up auto scaling. For more information,\n see [Autoscaling overview](/spanner/docs/autoscaling-overview).\n\n6. Observe and troubleshoot potential hotspots or unbalanced access patterns\n using [Key Visualizer](/spanner/docs/key-visualizer)\n and try to roll back any application code changes that strongly correlate\n with the issue timeframe.\n\n | **Note:** We recommend you follow [Schema design best practices](/spanner/docs/schema-design) to ensure your access is balanced across Spanner computing resources.\n7. Check any traffic pattern changes.\n\n8. Check [Query insights](/spanner/docs/using-query-insights) and\n [Transaction insights](/spanner/docs/use-lock-and-transaction-insights) to\n see if there might be any query or transaction performance bottlenecks.\n\n9. Use procedures in [Oldest active queries](/spanner/docs/introspection/oldest-active-queries)\n to see any expense queries that might cause a performance bottleneck and\n cancel the queries as needed.\n\n10. Use procedures in the troubleshooting sections in the following topics to\n troubleshoot the issue further using Spanner introspection\n tools:\n\n - [Query statistics](/spanner/docs/introspection/query-statistics)\n - [Read statistics](/spanner/docs/introspection/read-statistics)\n - [Transaction statistics](/spanner/docs/introspection/transaction-statistics)\n - [Lock statistics](/spanner/docs/introspection/lock-statistics)\n\nWhat's next\n-----------\n\n- Now that you've identified the component that contains the latency, explore the problem further using OpenCensus. For more information, see [Capture custom client-side metrics using OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry) or [with OpenCensus](/spanner/docs/capture-visualize-latency-opencensus).\n- Learn how to use [metrics](/spanner/docs/latency-metrics) to diagnose latency.\n- Learn how to [troubleshoot Spanner deadline exceeded errors](/spanner/docs/deadline-exceeded)."]]