# Introduction to BigQuery monitoring

Last updated (UTC): 2025-08-25.

Monitoring and logging are crucial for running reliable applications in the
cloud. BigQuery workloads are no exception, especially if your workload
has high volumes or is mission critical.
This document provides a high-level
overview of the monitoring data that is available for BigQuery.

Monitoring and logging sources can vary based on the frequency of sampling or
aggregation. For example, information schema data might be available at a
higher level of granularity than Cloud Monitoring metrics data.

As a result, graphs of metrics with lower granularity might appear to diverge
from comparable information schema statistics. Aggregation tends to smooth
out the discrepancies. When you design a monitoring solution, assess the
request response time, precision, and accuracy of the metrics against your
requirements.

Metrics
-------

Metrics are numerical values that are collected at regular
intervals and made available for analysis. You can use metrics to:

- Create charts and dashboards.
- Trigger alerts for conditions or situations that need human intervention.
- Analyze historical performance.

In the case of BigQuery, the available metrics include the number
of jobs that are running, how many bytes were scanned during a query, and the
distribution of query times. The metrics for a query only
become available after the query succeeds and can take up to seven minutes to be
reported. Metrics for queries that fail are not reported. For a complete list of
available metrics, including their sample rates, visibility, and limitations, see
[`bigquery`](/monitoring/api/metrics_gcp_a_b#gcp-bigquery) under
[Google Cloud metrics](/monitoring/api/metrics_gcp).

Use [Cloud Monitoring](/monitoring/docs) to view BigQuery
metrics and create charts and alerts. Each metric has a resource type, either
`bigquery_dataset`, `bigquery_project`, or `global`, and a set of labels.
You
can group or filter each metric by using the labels.

For example, to chart the number of interactive queries in flight, use the
following PromQL statement, which filters by `priority` equal to `interactive`:

    {"bigquery.googleapis.com/query/count", monitored_resource="global", priority="interactive"}

The next example gets the number of load jobs in flight, averaged over a
10-minute window:

    avg_over_time({"bigquery.googleapis.com/job/num_in_flight",
      monitored_resource="bigquery_project",
      job_type="load"
    }[10m])

For more information, see
[Creating charts and alerts for BigQuery](/bigquery/docs/monitoring-dashboard).

Logs
----

Logs are text records that are generated in response to particular events or
actions. BigQuery creates log entries for actions such as
creating or deleting a table, purchasing slots, or running a load job. For more
information about logging in Google Cloud, see
[Cloud Logging](/logging/docs).

A log is an append-only collection of log entries. For example, you could write
your own log entries to a log named
`projects/`<var translate="no">PROJECT_ID</var>`/logs/my-test-log`. Many
Google Cloud services, including BigQuery, create a type of
log called [audit logs](/logging/docs/audit). These logs record:

- Administrative activity, such as creating or modifying resources.
- Data access, such as reading user-provided data from a resource.
- System events that are generated by Google systems, rather than by user actions.

Audit logs are written in a structured JSON format. The base data type for
Google Cloud log entries is the
[`LogEntry`](/logging/docs/reference/v2/rest/v2/LogEntry) structure. This
structure contains the name of the log, the resource that generated the log
entry, the timestamp (UTC), and other basic information.

The details of the logged event are contained in a subfield called the *payload*
field.
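These top-level `LogEntry` fields can also be inspected programmatically once an entry has been exported as JSON (for example, through a log sink or `gcloud logging read --format=json`). The following is a minimal sketch using only the Python standard library; the pared-down sample entry and the `summarize` helper are illustrative, not part of any Google API:

```python
import json

def summarize(entry: dict) -> str:
    """Illustrative helper: format the basic LogEntry fields
    (timestamp, severity, resource type, and log name)."""
    resource = entry.get("resource", {})
    return "{} {} [{}] {}".format(
        entry.get("timestamp", "-"),
        entry.get("severity", "DEFAULT"),
        resource.get("type", "unknown"),
        entry.get("logName", "-"),
    )

# A pared-down log entry, shaped like the LogEntry structure.
sample = json.loads("""
{
  "logName": "projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access",
  "resource": {"type": "bigquery_project", "labels": {"project_id": "my-project"}},
  "timestamp": "2025-08-25T12:00:00Z",
  "severity": "ERROR"
}
""")

print(summarize(sample))
# 2025-08-25T12:00:00Z ERROR [bigquery_project] projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access
```

The same pattern extends to the payload fields described next, which carry the event details.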
For audit logs, the payload field is named `protoPayload`. The value of
this field is an
[`AuditLog`](/logging/docs/reference/audit/auditlog/rest/Shared.Types/AuditLog)
structure, indicated by the value of the `protoPayload.@type` field, which is
set to `type.googleapis.com/google.cloud.audit.AuditLog`.

For operations on datasets, tables, and jobs, BigQuery writes
audit logs in two different formats, although both share the `AuditLog` base
type.

In the older format:

- The `resource.type` field is `bigquery_resource`.
- Details about the operation are written to the `protoPayload.serviceData` field. The value of this field is an [`AuditData`](/bigquery/docs/reference/auditlogs/rest/Shared.Types/AuditData) structure.

In the newer format:

- The `resource.type` field is either `bigquery_project` or `bigquery_dataset`. The `bigquery_project` resource has log entries about jobs, while the `bigquery_dataset` resource has log entries about storage.
- Details about the operation are written to the `protoPayload.metadata` field. The value of this field is a [`BigQueryAuditMetadata`](/bigquery/docs/reference/auditlogs/rest/Shared.Types/BigQueryAuditMetadata) structure.

We recommend consuming logs in the newer format. For more information, see
[Audit logs migration guide](/bigquery/docs/reference/auditlogs/migration).

Here is an abbreviated example of a log entry that shows a failed operation:

    {
      "protoPayload": {
        "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
        "status": {
          "code": 5,
          "message": "Not found: Dataset my-project:my-dataset was not found in location US"
        },
        "authenticationInfo": { ... },
        "requestMetadata": { ... },
        "serviceName": "bigquery.googleapis.com",
        "methodName": "google.cloud.bigquery.v2.JobService.InsertJob",
        "metadata": { }
      },
      "resource": {
        "type": "bigquery_project",
        "labels": { ..
        }
      },
      "severity": "ERROR",
      "logName": "projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access",
      ...
    }

For operations on BigQuery Reservations, the `protoPayload` is an
[`AuditLog`](/logging/docs/reference/audit/auditlog/rest/Shared.Types/AuditLog)
structure, and the `protoPayload.request` and `protoPayload.response` fields
contain more information. You can find the field definitions in
[BigQuery Reservation API](/bigquery/docs/reference/reservations/rpc). For more
information, see
[Monitoring BigQuery Reservations](/bigquery/docs/reservations-monitoring).

BigQuery `INFORMATION_SCHEMA` views
-----------------------------------

[`INFORMATION_SCHEMA`](/bigquery/docs/information-schema-intro) views are
another source of insights in BigQuery, which you can use in
conjunction with metrics and logs.

These views contain metadata about jobs, datasets, tables, and other
BigQuery entities. For example, you can get real-time metadata
about which BigQuery jobs ran over a specified time period, and
then group or filter the results by project, user, tables referenced, and other
dimensions.

You can use this information to perform more detailed analysis of your
BigQuery workloads and answer questions like:

- What is the average slot utilization for all queries over the past 7 days for a given project?
- Which users submitted a batch load job for a given project?
- What streaming errors occurred in the past 30 minutes, grouped by error code?

In particular, look at [jobs metadata](/bigquery/docs/information-schema-jobs),
[streaming metadata](/bigquery/docs/information-schema-streaming), and
[reservations metadata](/bigquery/docs/information-schema-reservations) to
get insights into the performance of your BigQuery workloads.

You can find [example `INFORMATION_SCHEMA` queries on
GitHub](https://github.com/GoogleCloudPlatform/bigquery-utils/tree/master/dashboards/system_tables)
that show an organization's slot
and reservation utilization, job execution,
and job errors. For example, the following query lists queries in the `us`
region that are either pending or running, ordered by how long ago they were
created:

```
SELECT
  creation_time,
  project_id,
  user_email,
  job_id,
  job_type,
  priority,
  state,
  TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), start_time, SECOND) AS running_time_sec
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE
  creation_time BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY) AND CURRENT_TIMESTAMP()
  AND state != "DONE"
ORDER BY
  running_time_sec DESC
```

For more information, see [Troubleshoot BigQuery performance with
these dashboards](https://cloud.google.com/blog/products/data-analytics/troubleshoot-bigquery-performance-with-these-dashboards).

If you have slot reservations, then in addition to writing your own query, you
can use BigQuery Admin Resource Charts to view charts that
display slot usage, job concurrency, and job run time. For more information, see
[Monitor health, resource utilization, and jobs](/bigquery/docs/admin-resource-charts).

What's next
-----------

- Learn how to [monitor resource utilization and jobs](/bigquery/docs/admin-resource-charts).
- Learn how to [create charts and alerts for BigQuery](/bigquery/docs/monitoring-dashboard).