Last updated (UTC): 2025-08-28.

# Retention and latency of metric data

This page describes how long Cloud Monitoring retains your metric data and the latency between when data is collected and when it becomes visible to you.

[Quotas and limits](/monitoring/quotas) provides additional information about limits on metric data.

Retention of metric data
------------------------

Cloud Monitoring acquires metric data and holds it in the time series of metric types for a period of time. This period varies with the metric type; see [Data retention](/monitoring/quotas#data_retention_policy) for details.

At the end of that period, Cloud Monitoring deletes the expired data points.

When all the points in a time series have expired, Cloud Monitoring deletes the time series. Deleted time series don't appear in Cloud Monitoring charts or in results from the Monitoring API.

Latency of metric data
----------------------

*Latency* refers to the delay between when Cloud Monitoring samples a metric and when the metric data point becomes visible as time series data. The latency depends on whether the metric comes from a Google Cloud service or is a user-defined metric:

- **Google Cloud metrics**: The [Google Cloud metrics list](/monitoring/api/metrics_gcp)
  includes the metric types from Google Cloud services. Many of these
  descriptions include a statement like the following: "Sampled every 60 seconds.
  After sampling, data is not visible for up to 240 seconds."

  The values in the statement vary for specific metrics.
  The example statement means that Cloud Monitoring collects one
  measurement each minute (the sampling interval), but because some
  of these metrics receive additional processing before they are
  exposed, it can take additional time (latency) before you can retrieve the
  data for this metric. In this example, the latency can be
  up to 4 minutes, so the timestamp that records the collection time might
  be up to 4 minutes old for this metric.
  This latency doesn't apply to user-defined metrics.

<!-- -->

- **User-defined metrics**: If you write data to user-defined metrics, including custom metrics, OpenTelemetry-collected metrics, application metrics collected by the Ops Agent, and Prometheus metrics, then data from these metrics is typically visible and queryable within 3 to 7 seconds, excluding network latency.

In some situations, you might need to adjust how you use a metric with latency.
For example:

- When using client libraries to retrieve metric data, you might need to
  use an offset in the query interval to account for latency.

- When using a metric to drive resource management, such as autoscaling,
  the latency of the metric can affect the responsiveness of the autoscaler.
  For example, some [Pub/Sub metrics](/monitoring/api/metrics_gcp_p_z#gcp-pubsub) have latencies
  that range from 2 to 4 minutes.

- When using alerting policies, be aware that latency can affect incident
  creation time for metric-based alerting policies. For example, if a
  monitored metric has a latency of up to 180 seconds, then
  Cloud Monitoring won't create an incident until up to 180 seconds after
  the metric violates the threshold of the alerting policy condition.
  Cloud Monitoring automatically accounts for the latency, if any, of the
  underlying metric when evaluating alerting policies.
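To make the first point concrete, the following is a minimal sketch of offsetting a query interval to account for metric latency. The 240-second latency value is taken from the example statement above; the metric type, project ID, and the `monitoring_v3` usage in the comments are illustrative assumptions, not a definitive implementation.

```python
import time

# Assumed latency, taken from the metric's documented statement
# "data is not visible for up to 240 seconds" (adjust per metric).
METRIC_LATENCY_SECONDS = 240


def latency_adjusted_interval(lookback_seconds, latency_seconds, now=None):
    """Return (start, end) epoch seconds for a query window that ends
    `latency_seconds` in the past, so the window covers only data points
    that have had time to become visible."""
    if now is None:
        now = time.time()
    end = int(now - latency_seconds)
    start = end - lookback_seconds
    return start, end


# Example: query the most recent 10 minutes of visible data.
start, end = latency_adjusted_interval(600, METRIC_LATENCY_SECONDS,
                                       now=1_700_000_000)
# end   == 1_699_999_760  (240 s behind "now")
# start == 1_699_999_160  (600 s before end)

# With the google-cloud-monitoring client library, the pair maps onto a
# TimeInterval (sketch; requires `pip install google-cloud-monitoring`
# and a real project ID):
#
#   from google.cloud import monitoring_v3
#
#   client = monitoring_v3.MetricServiceClient()
#   interval = monitoring_v3.TimeInterval(
#       {"end_time": {"seconds": end}, "start_time": {"seconds": start}}
#   )
#   results = client.list_time_series(
#       request={
#           "name": "projects/YOUR_PROJECT_ID",
#           "filter": 'metric.type = "compute.googleapis.com/instance/cpu/utilization"',
#           "interval": interval,
#           "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
#       }
#   )
```

Ending the window in the past trades freshness for completeness: without the offset, the most recent minutes of the query can look like missing data even though the points simply haven't become visible yet.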