This page describes how to troubleshoot Spanner components to find the
source of latency. To learn more about possible latency points in a
Spanner request, see
Latency points in a Spanner request.
In the client application that affects your service, confirm that there is an
increase in client round-trip latency. Check the following dimensions
from your client-side metrics:
Client Application Name
Client locality (for example, Compute Engine VM zones) and Host (that is, VM names)
Spanner API method
Spanner API status
Group by these dimensions to see whether the issue is limited to a specific client, status, or method. For dual-region or multi-regional workloads, check
whether the issue is limited to a specific client or Spanner region.
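As an illustration of capturing these dimensions, the following is a minimal sketch that records client round-trip latency as an OpenTelemetry histogram around a single Spanner call. The meter provider and exporter setup are omitted, and the metric name, instance ID, database ID, and attribute values are placeholders, not an official instrumentation.

```python
# Minimal sketch: record client round-trip latency as an OpenTelemetry
# histogram, tagged with the dimensions listed above. Meter provider and
# exporter configuration are omitted; names and attribute values are
# placeholders.
import time

from google.cloud import spanner
from opentelemetry import metrics

meter = metrics.get_meter("spanner-client-troubleshooting")
roundtrip_ms = meter.create_histogram(
    name="spanner_client_roundtrip_latency",  # illustrative metric name
    unit="ms",
    description="Client round-trip latency per Spanner API call",
)

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

start = time.monotonic()
status = "OK"
try:
    with database.snapshot() as snapshot:
        list(snapshot.execute_sql("SELECT 1"))
except Exception:
    status = "ERROR"
    raise
finally:
    # Attributes mirror the dimensions to group by: client application,
    # host, Spanner API method, and status.
    roundtrip_ms.record(
        (time.monotonic() - start) * 1000,
        attributes={
            "client_app": "my-service",        # Client Application Name
            "host": "my-vm-1",                 # VM name
            "method": "ExecuteStreamingSql",   # Spanner API method
            "status": status,                  # Spanner API status
        },
    )
```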
Check the health of your client application, especially the computing
infrastructure on the client side (for example, VM, CPU, or memory utilization, connections, file descriptors, and so on).
If you have high client round-trip latency but low Google Front End (GFE) latency and low Spanner API request latency, the application code might have an issue. It could also indicate a networking issue between the client
and the regional GFE. If your application has a performance issue that causes
some code paths to be slow, then the client round-trip latency for each API
request might increase. There might also be an issue in the client computing
infrastructure that was not detected in the previous step.
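To make the comparison above concrete, the following is a small, illustrative helper that encodes this reasoning: given the three measurements for the same time window, it suggests which component to investigate next. The baseline threshold is an arbitrary placeholder; compare against your own historical baseline instead.

```python
# Illustrative decision helper based on the reasoning above. The baseline
# value is a placeholder; use your own per-workload baseline.
def suggest_focus(client_roundtrip_ms: float, gfe_ms: float,
                  api_request_ms: float, baseline_ms: float = 50.0) -> str:
    def high(value_ms: float) -> bool:
        return value_ms > baseline_ms

    if high(client_roundtrip_ms) and not high(gfe_ms) and not high(api_request_ms):
        # High round trip, low GFE, low API request: look at application code,
        # client infrastructure, or the client-to-GFE network path.
        return "client application code, client infrastructure, or client-to-GFE network"
    if high(gfe_ms) and not high(api_request_ms):
        # Covered in the next step: cross-region access or a GFE-layer issue.
        return "cross-region access or the GFE layer"
    if high(api_request_ms):
        # Spanner-side causes are covered in later steps: CPU utilization,
        # hotspots, expensive queries.
        return "Spanner (CPU utilization, hotspots, expensive queries)"
    return "no single component stands out; re-check the measurement window"


print(suggest_focus(client_roundtrip_ms=220.0, gfe_ms=12.0, api_request_ms=9.0))
```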
Check the following dimensions for Spanner metrics:
Spanner Database Name
Spanner API method
Spanner API status
Group by these dimensions to see whether the issue is limited to a specific database, status, or method. For dual-region or multi-regional workloads,
check whether the issue is limited to a specific region.
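One way to group the server-side request latency metric by these dimensions is with the Cloud Monitoring API. The sketch below reads the `spanner.googleapis.com/api/request_latencies` distribution at the 99th percentile over the last hour, grouped by API method; the project ID is a placeholder, and you should confirm in Metrics Explorer which additional labels (for example, database or status) this metric exposes before adding them to the grouping.

```python
# Sketch: p99 of Spanner API request latency over the last hour, grouped by
# API method, using the Cloud Monitoring API. Project ID is a placeholder.
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
now = int(time.time())

interval = monitoring_v3.TimeInterval(
    start_time={"seconds": now - 3600},
    end_time={"seconds": now},
)
aggregation = monitoring_v3.Aggregation(
    alignment_period={"seconds": 300},
    per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_PERCENTILE_99,
    cross_series_reducer=monitoring_v3.Aggregation.Reducer.REDUCE_MEAN,
    # Add more labels here (for example, database or status) after confirming
    # they exist on this metric in Metrics Explorer.
    group_by_fields=["metric.label.method"],
)

series_list = client.list_time_series(
    request={
        "name": "projects/my-project",
        "filter": 'metric.type = "spanner.googleapis.com/api/request_latencies"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        "aggregation": aggregation,
    }
)
for series in series_list:
    method = series.metric.labels.get("method", "unknown")
    latest_p99 = series.points[0].value.double_value  # metric's native unit
    print(method, latest_p99)
```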
If you have high GFE latency but low Spanner API request latency, it might have one of the following causes:
Accessing a database from another region. This can lead to high GFE
latency and low Spanner API request latency. For example,
traffic from a client in the us-east1 region accessing an instance in the
us-central1 region might show high GFE latency but lower
Spanner API request latency.
There is an issue at the GFE layer. Check the Google Cloud Status Dashboard
to see whether there are any ongoing networking issues in your region. If there are no issues, open a support case and include this information so that support engineers can help troubleshoot the GFE.
Check the CPU utilization of the instance.
If the instance's CPU utilization is above the recommended level, manually add more nodes, or set up autoscaling. For more information,
see Autoscaling overview.
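If you decide to add capacity manually, the following is a minimal sketch using the Python client's instance admin support. The instance ID and the single-node increment are placeholders, and in practice the managed autoscaler (see Autoscaling overview) is usually preferable to hand-tuning node counts.

```python
# Sketch: add one node to an instance whose CPU utilization is sustained
# above the recommended maximum. Instance ID and increment are placeholders.
from google.cloud import spanner

client = spanner.Client()
instance = client.instance("my-instance")
instance.reload()  # load the current configuration, including node_count
print("current node count:", instance.node_count)

instance.node_count += 1       # scale out by one node (placeholder increment)
operation = instance.update()  # returns a long-running operation
operation.result(timeout=300)  # block until the resize completes
print("new node count:", instance.node_count)
```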
Observe and troubleshoot potential hotspots or unbalanced access patterns
using Key Visualizer,
and try to roll back any application code changes that strongly correlate
with the timeframe of the issue.
Use the procedures in Oldest active queries
to find expensive queries that might be causing a performance bottleneck, and
cancel those queries as needed.
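As a sketch of that procedure, the query below reads the built-in SPANNER_SYS.OLDEST_ACTIVE_QUERIES table with the Python client. The instance and database IDs are placeholders, and column names should be checked against the Oldest active queries documentation.

```python
# Sketch: list the oldest active queries and their sessions. Instance and
# database IDs are placeholders.
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        "SELECT START_TIME, SESSION_ID, TEXT "
        "FROM SPANNER_SYS.OLDEST_ACTIVE_QUERIES "
        "ORDER BY START_TIME ASC"
    )
    for start_time, session_id, text in rows:
        print(start_time, session_id, text[:80])

# To cancel a runaway query, delete its session (for example, with the
# Spanner API's DeleteSession RPC); the query stops when its session
# is deleted.
```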
Use the procedures in the troubleshooting sections of the following topics to
troubleshoot the issue further with the Spanner introspection tools (a starter
sketch follows this list):
Query statistics
Read statistics
Transaction statistics
Lock statistics
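For example, a first pass with the Query statistics tables might look like the following sketch, which lists the recent query shapes with the highest average latency. The instance and database IDs are placeholders, and rows span multiple 10-minute intervals, so filter on INTERVAL_END if you need only the latest window.

```python
# Sketch: top recent query shapes by average latency, from the built-in
# SPANNER_SYS query statistics tables. Instance and database IDs are
# placeholders.
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        "SELECT TEXT, EXECUTION_COUNT, AVG_LATENCY_SECONDS, AVG_CPU_SECONDS "
        "FROM SPANNER_SYS.QUERY_STATS_TOP_10MINUTE "
        "ORDER BY AVG_LATENCY_SECONDS DESC "
        "LIMIT 10"
    )
    for text, execution_count, avg_latency, avg_cpu in rows:
        print(f"{avg_latency:.3f}s avg over {execution_count} runs: {text[:80]}")
```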
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-08-11 UTC."],[],[],null,["# Identify where latency occurs\n\n| **Note:** [OpenCensus is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/). We recommend using OpenTelemetry to capture and visualize Spanner observability metrics. For more information, see [Capture custom client-side metrics using OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry).\n\nThis page describes how to troubleshoot Spanner components to find the\nsource of the latency. To learn more about possible latency points in a\nSpanner request, see\n[Latency points in a Spanner request](/spanner/docs/latency-points).\n\n1. In your client application that affects your service, confirm there's a\n latency increase from client round-trip latency. Check the following dimensions\n from your client-side metrics.\n\n - Client Application Name\n - Client locality (for example, Compute Engine VM zones) and Host (that is, VM names)\n - Spanner API method\n - Spanner API status\n\n Group by these dimensions to see if the issue is limited to a specific\n client, status, or method. For dual-region or multi-regional workloads, see\n if the issue is limited to a specific client or Spanner region.\n2. Check your client application health, especially the computing\n infrastructure on the client side (for example, VM, CPU, or memory\n utilization, connections, file descriptors, and so on).\n\n3. Check latency in Spanner components:\n\n a. Check client round-trip latency [with OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry#capture-client-round-trip-latency)\n or [with OpenCensus](/spanner/docs/capture-visualize-latency-opencensus#capture_and_visualize_client_round-trip_latency).\n\n b. Check Google Front End (GFE) latency [with OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry#capture-gfe-latency)\n or [with OpenCensus](/spanner/docs/capture-visualize-latency-opencensus#capture_and_visualize_gfe_latency).\n\n c. Check Spanner API request latency [with OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry#capture-spanner-api-request-latency)\n or [with OpenCensus](/spanner/docs/capture-visualize-latency-opencensus#capture_and_visualize_api_request_latency).\n\n If you have high client round-trip latency, but low GFE latency, and a low\n Spanner API request latency, the application code might\n have an issue. It could also indicate a networking issue between the client\n and regional GFE. If your application has a performance issue that causes\n some code paths to be slow, then the client round-trip latency for each API\n request might increase. There might also be an issue in the client computing\n infrastructure that was not detected in the previous step.\n4. Check the following dimensions for\n [Spanner metrics](/spanner/docs/latency-metrics):\n\n - Spanner Database Name\n - Spanner API method\n - Spanner API status\n\n Group by these dimensions to see if the issue is limited to a specific\n database, status, or method. 
For dual-region or multi-regional workloads,\n check to see if the issue is limited to a specific region.\n\n If you have a high GFE latency, but a low Spanner API request\n latency, it might have one of the following causes:\n - Accessing a database from another region. This action can lead to high GFE\n latency and low Spanner API request latency. For example,\n traffic from a client in the `us-east1` region that has an instance in the\n `us-central1` region might have a high GFE latency but a lower\n Spanner API request latency.\n\n - There's an issue at the GFE layer. Check the [Google Cloud Status Dashboard](https://status.cloud.google.com/)\n to see if there are any ongoing networking issues in your region. If there\n aren't any issues, then open a support case and include this information so\n that support engineers can help with troubleshooting the GFE.\n\n5. [Check the CPU utilization of the instance](/spanner/docs/cpu-utilization).\n If the CPU utilization of the instance is above the recommended level, you\n should manually add more nodes, or set up auto scaling. For more information,\n see [Autoscaling overview](/spanner/docs/autoscaling-overview).\n\n6. Observe and troubleshoot potential hotspots or unbalanced access patterns\n using [Key Visualizer](/spanner/docs/key-visualizer)\n and try to roll back any application code changes that strongly correlate\n with the issue timeframe.\n\n | **Note:** We recommend you follow [Schema design best practices](/spanner/docs/schema-design) to ensure your access is balanced across Spanner computing resources.\n7. Check any traffic pattern changes.\n\n8. Check [Query insights](/spanner/docs/using-query-insights) and\n [Transaction insights](/spanner/docs/use-lock-and-transaction-insights) to\n see if there might be any query or transaction performance bottlenecks.\n\n9. Use procedures in [Oldest active queries](/spanner/docs/introspection/oldest-active-queries)\n to see any expense queries that might cause a performance bottleneck and\n cancel the queries as needed.\n\n10. Use procedures in the troubleshooting sections in the following topics to\n troubleshoot the issue further using Spanner introspection\n tools:\n\n - [Query statistics](/spanner/docs/introspection/query-statistics)\n - [Read statistics](/spanner/docs/introspection/read-statistics)\n - [Transaction statistics](/spanner/docs/introspection/transaction-statistics)\n - [Lock statistics](/spanner/docs/introspection/lock-statistics)\n\nWhat's next\n-----------\n\n- Now that you've identified the component that contains the latency, explore the problem further using OpenCensus. For more information, see [Capture custom client-side metrics using OpenTelemetry](/spanner/docs/capture-custom-metrics-opentelemetry) or [with OpenCensus](/spanner/docs/capture-visualize-latency-opencensus).\n- Learn how to use [metrics](/spanner/docs/latency-metrics) to diagnose latency.\n- Learn how to [troubleshoot Spanner deadline exceeded errors](/spanner/docs/deadline-exceeded)."]]