Cloud Logging overview

This document provides an overview of Cloud Logging, which is a real-time log-management system with storage, search, analysis, and monitoring support. Cloud Logging automatically collects logs from Google Cloud resources. You can also collect logs from your applications, on-premises resources, and resources from other cloud providers. In addition, you can configure alerting policies so that Cloud Monitoring notifies you if certain kinds of events are reported in your logs. For regulatory or security reasons, you can determine where your log data is stored.

Collect logs from your applications and third-party software

You can collect logs from applications that you write by instrumenting your application by using a client library. However, it's not always necessary to instrument your application. For example, for some configurations you can use the Ops Agent to send logs that were written to stdout or stderr to your Google Cloud project.
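
For example, instead of calling a client library directly, an application can emit structured JSON lines to stdout and let an agent forward them. The sketch below assumes an agent configured to parse JSON and map the `severity` and `message` fields; the field names follow a common structured-logging convention, but verify them against your agent's configuration:

```python
import json
import sys
from datetime import datetime, timezone

def log_structured(message, severity="INFO", **fields):
    """Write one JSON log line to stdout.

    An agent configured for JSON parsing can map these fields
    onto a Cloud Logging entry.
    """
    entry = {
        "severity": severity,
        "message": message,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **fields,
    }
    sys.stdout.write(json.dumps(entry) + "\n")
    return entry  # returned so callers and tests can inspect it

log_structured("Checkout completed", severity="INFO", order_id="A-1001")
log_structured("Payment backend unreachable", severity="ERROR")
```

Writing one JSON object per line keeps each log entry self-delimiting, which is what line-oriented agent parsers expect.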

You can also collect log data from your third-party applications, like nginx, by installing the Ops Agent and then configuring it to write logs from that application to your Google Cloud project.

See Which should you use: Logging agent or client library? for information that can help you decide which approach best suits your requirements.

Troubleshoot and analyze logs

You can view and analyze your log data by using the Logs Explorer or the Log Analytics pages in the Google Cloud console. You can query and view logs with both interfaces; however, they use different query languages and they have different capabilities.

When you want to troubleshoot and analyze the performance of your services and applications, we recommend that you use the Logs Explorer. This interface is designed to let you view individual log entries and find related log entries. For example, when a log entry is part of an error group, that entry is annotated with a menu of options that you can use to access more information about the error.
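
Logs Explorer queries use the Logging query language, which filters on LogEntry fields. As a sketch, the helper below assembles such a filter string in Python; the `gce_instance` value and the small clause set are illustrative assumptions, not a complete grammar:

```python
def build_filter(resource_type=None, min_severity=None, text=None):
    """Assemble a Logging query language filter string from parts."""
    clauses = []
    if resource_type:
        clauses.append(f'resource.type="{resource_type}"')
    if min_severity:
        clauses.append(f"severity>={min_severity}")
    if text:
        clauses.append(f'textPayload:"{text}"')  # ":" means "contains"
    return " AND ".join(clauses)

# Find error-level entries from Compute Engine VMs:
build_filter(resource_type="gce_instance", min_severity="ERROR")
# → 'resource.type="gce_instance" AND severity>=ERROR'
```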

When you're interested in performing aggregate operations on your logs, for example, to compute the average latency for HTTP requests issued to a specific URL over time, use the Log Analytics interface. With this interface, you use SQL to query your log data, and therefore you can use the capabilities of SQL to help you understand your log data.
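
As a sketch of such an aggregate query, the Python string below holds Log Analytics-style SQL that averages request latency per URL. The view name (an `_AllLogs` view in a `_Default` bucket) and the `http_request` column layout are assumptions to check against the schema of your own upgraded log bucket:

```python
# A sketch of a Log Analytics SQL query that computes average HTTP
# request latency per URL over the last day. The view name and the
# column names are placeholders; substitute the ones from your own
# log bucket's schema.
LATENCY_BY_URL = """
SELECT
  http_request.request_url AS url,
  COUNT(*) AS request_count,
  AVG(http_request.latency.seconds) AS avg_latency_seconds
FROM `my-project.global._Default._AllLogs`
WHERE http_request IS NOT NULL
  AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
GROUP BY url
ORDER BY avg_latency_seconds DESC
"""
```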

If you prefer to query your log data programmatically, you can use the Cloud Logging API or the Google Cloud CLI to export log data from your Google Cloud project.

For more information, see Query and view logs overview.

Monitor your logs

You can configure Cloud Logging to notify you when certain kinds of events occur in your logs. These notifications might be sent when a particular pattern appears in a log entry, or when a trend is detected in your log data. If you're interested in viewing the error rates of your Google Cloud services, then you can view the preconfigured Cloud Logging dashboard.

For example, if you want to be notified when a particular message, like a critical security-related event, occurs, then you can create a log-based alerting policy. A log-based alerting policy monitors your logs for a specific pattern. If that pattern is found, then Monitoring sends a notification and creates an incident. Log-based alerting policies are useful for important but rare events, like the following:

  • You want to be notified when an event appears in an audit log; for example, a user accesses the security key of a service account.
  • Your application writes deployment messages to logs, and you want to be notified when a deployment change is logged.
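
Conceptually, a log-based alerting policy evaluates each incoming entry against a filter and notifies on a match. The toy model below imitates that per-entry check in plain Python; the entry shape and the method name are hypothetical stand-ins, not the Cloud Monitoring API:

```python
def matches_alert(entry, resource_type, method_substring):
    """Return True if a (simplified) audit-log entry matches an alert filter.

    Mimics a filter along the lines of:
      resource.type="service_account" AND protoPayload.methodName:"Key"
    """
    proto = entry.get("protoPayload", {})
    return (
        entry.get("resource", {}).get("type") == resource_type
        and method_substring in proto.get("methodName", "")
    )

# Hypothetical audit-log entry for a service account key access:
entry = {
    "resource": {"type": "service_account"},
    "protoPayload": {"methodName": "google.iam.admin.v1.GetServiceAccountKey"},
}
matches_alert(entry, "service_account", "ServiceAccountKey")  # → True
```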

Alternatively, you might want to monitor trends or the occurrence of events over time. For these situations, you can create a log-based metric. A log-based metric can count the number of log entries that match some criterion, or it can extract and organize information, like response times, into histograms. You can also configure alerting policies that notify you when performance changes occur, for example, when the response time increases to an unacceptable level. Log-based metrics are suitable when you want to do any of the following:

  • Count the occurrences of a message, like a warning or error, in your logs and receive a notification when the number of occurrences crosses a threshold.
  • Observe trends in your data, like latency values in your logs, and receive a notification if the values change in an unacceptable way.
  • Create charts to display the numeric data extracted from your logs.
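
The two metric behaviors described above, counting matching entries and organizing extracted values into histograms, can be sketched as follows. This is a toy model of what the service computes as entries arrive, not an implementation of log-based metrics:

```python
from collections import Counter

def count_by_severity(entries):
    """Counter-style metric: occurrences per severity."""
    return Counter(e.get("severity", "DEFAULT") for e in entries)

def latency_histogram(entries, bucket_bounds):
    """Distribution-style metric: bucket an extracted latency value.

    bucket_bounds are inclusive upper bounds; values above the last
    bound land in a final overflow bucket.
    """
    buckets = [0] * (len(bucket_bounds) + 1)
    for e in entries:
        value = e.get("latency_ms")
        if value is None:
            continue  # entry has no extractable value
        for i, bound in enumerate(bucket_bounds):
            if value <= bound:
                buckets[i] += 1
                break
        else:
            buckets[-1] += 1
    return buckets

entries = [
    {"severity": "ERROR", "latency_ms": 120},
    {"severity": "INFO", "latency_ms": 40},
    {"severity": "ERROR", "latency_ms": 800},
]
count_by_severity(entries)["ERROR"]    # → 2
latency_histogram(entries, [50, 200])  # → [1, 1, 1]
```

An alerting policy on such a metric would then compare the counter value, or a histogram percentile, against a threshold.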

For more information, see Monitor your logs.

Log storage

You don't have to configure the location where logs are stored. By default, your Google Cloud project automatically stores all logs it receives in a Cloud Logging log bucket. For example, if your Google Cloud project contains a Compute Engine instance, then all logs that Compute Engine generates are automatically stored for you. However, if you need to, you can configure several aspects of your log storage, such as which logs are stored, which are discarded, and where the logs are stored.

You can route, or forward, log entries to the following destinations, which can be in the same Google Cloud project or in a different Google Cloud project:

  • Cloud Logging bucket: Provides storage in Cloud Logging. A log bucket can store log entries that are received by multiple Google Cloud projects. You can combine your Cloud Logging data with other data by upgrading a log bucket to use Log Analytics, and then creating a linked BigQuery dataset. For information about viewing log entries stored in log buckets, see Query and view logs overview and View logs routed to Cloud Logging buckets.
  • BigQuery dataset: Provides storage of log entries in BigQuery datasets. You can use big data analysis capabilities on the stored log entries. To combine your Cloud Logging data with other data sources, we recommend that you upgrade your log buckets to use Log Analytics and then create a linked BigQuery dataset. For information about viewing log entries routed to BigQuery, see View logs routed to BigQuery.
  • Cloud Storage bucket: Provides storage of log entries in Cloud Storage. Log entries are stored as JSON files. For information about viewing log entries routed to Cloud Storage, see View logs routed to Cloud Storage.
  • Pub/Sub topic: Provides support for third-party integrations. Log entries are formatted into JSON and then routed to a Pub/Sub topic. For information about viewing log entries routed to Pub/Sub, see View logs routed to Pub/Sub.
  • Splunk: Provides support for Splunk. You must route your log entries to a Pub/Sub topic and then subscribe to that topic by using Splunk.
  • Google Cloud project: Route log entries to a different Google Cloud project. When you route log entries to a different Google Cloud project, the destination project's Log Router receives the log entries and processes them. The sinks in the destination project determine how the received log entries are routed. Error Reporting can analyze log entries when the destination project routes those log entries to a log bucket owned by the destination project.
  • Other resources: Route your log entries to a supported destination that is in a different project. For information about the paths to use, see Destination path formats.
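
Conceptually, each of these destinations is reached through a sink: a named pairing of a filter and a destination, which the Log Router evaluates for every entry. The sketch below models that with plain predicate functions; real sinks use Logging query language filters and also support exclusion filters:

```python
class Sink:
    def __init__(self, name, destination, predicate):
        self.name = name
        self.destination = destination
        self.predicate = predicate  # stand-in for a Logging query language filter

def route(entry, sinks):
    """Return the destinations that receive this entry.

    An entry can match any number of sinks, so it can be stored
    in several destinations at once.
    """
    return [s.destination for s in sinks if s.predicate(entry)]

# Hypothetical sink set: keep everything in a log bucket, and also
# send error entries to a BigQuery dataset.
sinks = [
    Sink("default", "logging-bucket:_Default", lambda e: True),
    Sink("errors-to-bq", "bigquery:error_logs",
         lambda e: e.get("severity") == "ERROR"),
]
route({"severity": "ERROR"}, sinks)
# → ['logging-bucket:_Default', 'bigquery:error_logs']
```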

For more information, including data regionality support, see Routing and storage overview.

Categories of logs

Log categories are meant to help describe the logging information available to you; the categories aren't mutually exclusive:

  • Platform logs are logs written by your Google Cloud services. These logs can help you debug and troubleshoot issues, and help you better understand the Google Cloud services you're using. For example, VPC Flow Logs record a sample of network flows sent from and received by VM instances.

  • Component logs are similar to platform logs, but they are generated by Google-provided software components that run on your systems. For example, GKE provides software components that users can run on their own VM or in their own data center. Logs are generated from the user's GKE instances and sent to the user's Google Cloud project. GKE uses the logs or their metadata to provide user support.

  • Security logs help you answer "who did what, where, and when":

    • Cloud Audit Logs provide information about administrative activities and accesses within your Google Cloud resources. Enabling audit logs helps your security, auditing, and compliance entities monitor Google Cloud data and systems for possible vulnerabilities or external data misuse. For a list of Google Cloud supported services, see Google services with audit logs.

    • Access Transparency provides you with logs of actions taken by Google staff when accessing your Google Cloud content. Access Transparency logs can help you track compliance with your legal and regulatory requirements for your organization. For a list of Google Cloud supported services, see Google services with Access Transparency logs.

  • User-written logs are logs written by custom applications and services. Typically, these logs are written to Cloud Logging by using one of the following methods:

    • The Ops Agent
    • The Cloud Logging API
    • The Cloud Logging client libraries

  • Multicloud logs and Hybrid-cloud logs refer to logs from other cloud providers like Microsoft Azure and logs from on-premises infrastructure.

Data model for logs

The data model that Cloud Logging uses to organize your log data determines the dimensions over which you can query that data. For example, because a log is a named collection of individual entries, you can query your data by the name of the log. Similarly, because each log is composed of log entries, which are formatted as LogEntry objects, you can write queries that retrieve only those log entries where the value of a LogEntry field matches some criterion. For example, you can display only those log entries whose severity field has the value ERROR.

Each log entry records status or describes a specific event, such as the creation of a VM instance, and minimally consists of the following:

  • A timestamp that indicates either when the event took place or when Cloud Logging received it.
  • Information about the source of the log entry. This source is called the monitored resource. Examples of monitored resources include individual Compute Engine VM instances and Google Kubernetes Engine containers. For a complete listing of monitored resource types, see Monitored resources and services.
  • A payload, also known as a message, either provided as unstructured textual data or as structured textual data in JSON format.
  • The name of the log to which it belongs. The name of a log includes the full path of the resource to which the log entries belong, followed by an identifier. The following are examples of log names:

    • projects/my-project/logs/stderr
    • projects/my-project/logs/stdout
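
Putting the pieces above together, a minimal log entry can be modeled as a small structure with a timestamp, a monitored resource, a payload, and a log name. This sketch mirrors the documented fields but is not the full LogEntry schema:

```python
from datetime import datetime, timezone

def make_log_entry(log_name, resource_type, payload, severity="DEFAULT"):
    """Build a minimal LogEntry-shaped dict with the required pieces:
    timestamp, monitored resource, payload, and log name."""
    return {
        "logName": log_name,
        "resource": {"type": resource_type},
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "severity": severity,
        # A string payload is unstructured text; a dict models a
        # structured JSON payload.
        ("textPayload" if isinstance(payload, str) else "jsonPayload"): payload,
    }

entry = make_log_entry(
    "projects/my-project/logs/stderr",  # resource path + log identifier
    "gce_instance",
    "disk is full",
    severity="ERROR",
)

def filter_by_severity(entries, severity):
    """Query-style selection over a LogEntry field."""
    return [e for e in entries if e.get("severity") == severity]
```

Queries over the data model work exactly like `filter_by_severity`: they select entries by the values of LogEntry fields.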

Access control

Identity and Access Management (IAM) roles control a principal's ability to access logs. You can grant predefined roles to principals, or you can create custom roles. For more information about required permissions, see Access control.


Data retention

Log entries are stored in log buckets for a specified length of time and are then deleted. For more information, see Routing and storage overview: retention.

Pricing and cost controls

For information about pricing, see Cloud Logging pricing.

For strategies to reduce Logging costs, see Logging cost controls.