Eventarc overview

Eventarc lets you build event-driven architectures without having to implement, customize, or maintain the underlying infrastructure. Eventarc offers a standardized solution to manage the flow of state changes, called events, between decoupled microservices. When triggered, Eventarc routes these events to various destinations (see Event destinations in this document) while managing delivery, security, authorization, observability, and error handling for you.

You can manage Eventarc from the Google Cloud console, from the command line using the gcloud CLI, or by using the Eventarc API.

Eventarc is compliant with a number of certifications and standards.

Figure 1. Eventarc routes events from event providers to event destinations.

1 Events from Google providers are either sent directly from the source (Cloud Storage, for example) or through Cloud Audit Logs entries, and use Pub/Sub as the transport layer. Events from Pub/Sub sources can use an existing Pub/Sub topic, or Eventarc can automatically create a topic and manage it for you.

2 Events for Google Kubernetes Engine (GKE) destinations—including Knative serving services running in a GKE cluster—use Eventarc's event forwarder to pull new events from Pub/Sub and forward them to the destination. This component acts as a mediator between the Pub/Sub transport layer and the target service. It works with existing services and also supports signaling services (including those not exposed outside of the fully managed cluster) while simplifying setup and maintenance. Note that the event forwarder's lifecycle is managed by Eventarc; if you accidentally delete the event forwarder, Eventarc restores this component.

3 Events for a workflow execution are transformed and passed to the workflow as runtime arguments. Workflows can combine and orchestrate Google Cloud and HTTP-based API services in an order that you define.

4 All event-driven functions in Cloud Functions (2nd gen) use Eventarc triggers to deliver events. You can configure Eventarc triggers when you deploy a Cloud Function using the Cloud Functions interface.

Key use cases

Eventarc supports many use cases for destination applications. Some examples are:

Configure and monitor
  • System configuration: Install a configuration management tool on a new VM when it is started.
  • Automated remediation: Detect if a service is not responding properly and automatically restart it.
  • Alerts and notifications: Monitor the balance of a cryptocurrency wallet address and trigger notifications.
  • Directory registrations: Activate an employee badge when a new employee joins a company.
  • Data synchronization: Trigger an accounting workflow when a prospect is converted in a CRM system.
  • Resource labeling: Label and identify the creator of a VM when it is created.
  • Sentiment analysis: Use the Cloud Natural Language API to train and deploy an ML model that attaches a satisfaction score to a customer service ticket when it is completed.
  • Image retouching and analysis: Remove the background and automatically categorize an image when a retailer adds it to an object store.


Events

An event is a data record expressing an occurrence and its context. An event is a discrete unit of communication, independent of other events. For example, an event might be a change to data in a database, a file added to a storage system, or a scheduled job.

See Event types supported by Eventarc.

Event providers

Events are routed from an event provider (the source) to interested event consumers. The routing is performed based on information contained in the event, but an event does not identify a specific routing destination. Eventarc supports events from the following providers:

  • More than 130 Google Cloud providers. These providers send events (for example, an update to an object in a Cloud Storage bucket or a message published to a Pub/Sub topic) directly from the source, or through Cloud Audit Logs entries.
  • Third-party providers. These providers send events directly from the source (for example, third-party SaaS providers such as Datadog or the Check Point CloudGuard platform).

Event destinations

Events are routed to a specific destination (the target) known as the event receiver (or consumer) through Pub/Sub push subscriptions.

Cloud Functions (2nd gen)

All event-driven functions in Cloud Functions (2nd gen) use Eventarc triggers to deliver events. An Eventarc trigger enables a function to be triggered by any event type supported by Eventarc. You can configure Eventarc triggers when you deploy a Cloud Function using the Cloud Functions interface.

Cloud Run

Learn how to build an event receiver service that can be deployed to Cloud Run.

To determine how best to route events to a Cloud Run service, see Event routing options.
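As a sketch of what such a receiver involves (using only the Python standard library for illustration; a real service would typically use a web framework), the handler below reads the CloudEvents `ce-*` request headers, reads the body, and acknowledges the event with a 2xx status:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventReceiver(BaseHTTPRequestHandler):
    """Minimal sketch of an Eventarc event receiver for Cloud Run.

    Eventarc delivers events over HTTP: CloudEvents attributes arrive
    as ce-* request headers and the payload as the request body.
    Returning a 2xx status acknowledges the event; any other status
    causes the underlying Pub/Sub subscription to retry delivery.
    """

    def do_POST(self):
        event_id = self.headers.get("ce-id")
        event_type = self.headers.get("ce-type")
        length = int(self.headers.get("content-length", 0))
        payload = self.rfile.read(length)
        # Process the event here. Keep the handler idempotent, since
        # duplicate deliveries are possible.
        _ = (event_id, event_type, payload)
        self.send_response(204)  # Any 2xx status acknowledges the event.
        self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging for this sketch.
        pass
```

On Cloud Run, the service would listen on the port from the `PORT` environment variable, for example `HTTPServer(("", int(os.environ.get("PORT", 8080))), EventReceiver).serve_forever()`.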


Google Kubernetes Engine

Eventarc supports creating triggers that target Google Kubernetes Engine (GKE) services. This includes the public endpoints of private and public services running in a GKE cluster.

  • For Eventarc to target and manage services in any given cluster, you must grant the Eventarc service account the necessary permissions.

  • You need to enable Workload Identity on the GKE cluster that the destination service is running on. Workload Identity is required to properly set up the event forwarder and is the recommended way to access Google Cloud services from applications running within GKE due to its improved security properties and manageability. For more information, see Enable Workload Identity.

Internal HTTP endpoints in a VPC network

You can configure event routing to an internal HTTP endpoint in a Virtual Private Cloud (VPC) network. To configure the trigger, you must also provide a network attachment ID. For more information, see Route events to an internal HTTP endpoint in a VPC network.


Workflows

An execution of your workflow is triggered by any of the following:

  • A message published to a Pub/Sub topic
  • The creation of an audit log entry that matches the trigger's filter criteria
  • An unmediated event, such as an update to an object in a Cloud Storage bucket

Workflows requires an IAM service account email that your Eventarc trigger uses to invoke workflow executions. We recommend using a service account with the least privileges necessary to access the required resources. For more information, see Create and manage service accounts.

Event format and libraries

Eventarc delivers events, regardless of provider, to the target destination in a CloudEvents format using an HTTP request in binary content mode. CloudEvents is a specification, hosted by the Cloud Native Computing Foundation and organized by the foundation's Serverless Working Group, for describing event metadata in a common way.

Depending on the event provider, you can specify the encoding of the event payload data as either application/json or application/protobuf. Protocol Buffers (or Protobuf) is a language-neutral and platform-neutral extensible mechanism for serializing structured data. Note the following:

  • For custom sources or third-party providers, or for direct events from Pub/Sub, this formatting option is not supported.
  • An event payload formatted in JSON is larger than one formatted in Protobuf, and this might impact reliability depending on your event destination and its limits on event size. For more information, see Known issues.

Target destinations such as Cloud Functions, Cloud Run, and GKE consume events in the HTTP format. For Workflows destinations, the Workflows service converts the event to a JSON object, and passes the event into the workflow execution as a runtime argument.

Using a standard way to describe event metadata ensures consistency, accessibility, and portability. Event consumers can read these events directly, or you can use Google CloudEvents SDKs and libraries in various languages (including C#, Go, Java, Node.js, and Python) to read and parse the events:

Figure 2. Eventarc delivers events in a CloudEvents format to event destinations. You can read these events directly, or use Google CloudEvents SDKs or libraries to read and parse the events.

The structure of the HTTP body for all events is available in the Google CloudEvents GitHub repository.
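For illustration, an event in binary content mode can be parsed even without an SDK: the CloudEvents context attributes arrive as HTTP headers prefixed with `ce-`, and the event payload is the request body. The dependency-free sketch below shows the idea (the sample header values are hypothetical):

```python
import json

def parse_binary_cloudevent(headers, body):
    """Parse a CloudEvent delivered in HTTP binary content mode.

    Context attributes arrive as ce-* headers; the payload is the
    request body. This is a sketch; in practice you can use the
    Google CloudEvents SDK for your language instead.
    """
    attributes = {
        name[3:].lower(): value
        for name, value in headers.items()
        if name.lower().startswith("ce-")
    }
    data = json.loads(body) if body else None
    return attributes, data

# Hypothetical headers, shaped like those for a Cloud Storage event.
headers = {
    "ce-id": "1234567890",
    "ce-source": "//storage.googleapis.com/projects/_/buckets/my-bucket",
    "ce-type": "google.cloud.storage.object.v1.finalized",
    "ce-specversion": "1.0",
    "Content-Type": "application/json",
}
attrs, data = parse_binary_cloudevent(headers, b'{"name": "photo.jpg"}')
```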


Backward compatibility

Eventarc considers the addition of the following attributes and fields backwards-compatible:

  • Optional filtering attributes or output-only attributes
  • Optional fields to the event payload

Eventarc triggers

Events occur whether or not a target destination reacts to them. You create a response to an event with a trigger. A trigger is a declaration that you are interested in a certain event or set of events. When you create a trigger, you specify filters for the trigger that let you capture and act on those specific events, including their routing from an event source to a target destination. For more information, see the REST representation of a trigger resource and Event providers and destinations.

Note that Pub/Sub subscriptions created for Eventarc persist regardless of activity and don't expire. To change the subscription properties, see Subscription properties.

Eventarc supports triggers for these event types:

Cloud Audit Logs (CAL) events

Description: Cloud Audit Logs provides Admin Activity and Data Access audit logs for each Cloud project, folder, and organization. Google Cloud services write entries to these logs. Eventarc supports project-level audit logs. For more information, see the specific serviceName and methodName values that are supported by Eventarc as event types.

Event filter type: Eventarc triggers with type=google.cloud.audit.log.v1.written send requests to your service or workflow when an audit log is created that matches the trigger's filter criteria.

Direct events

Description: Eventarc can be triggered by various direct events such as an update to a Cloud Storage bucket, an update to a Firebase Remote Config template, or changes to resources on Google Cloud services.

Eventarc can also be triggered by messages published to Pub/Sub topics. Pub/Sub is a globally distributed message bus that automatically scales as you need it. Because Eventarc can be invoked by messages on a Pub/Sub topic, you can integrate Eventarc with any other service that supports Pub/Sub as a destination.

Event filter type: Eventarc triggers with specific event filter types send requests to your service or workflow when an event occurs that matches the trigger's filter criteria; for example, type=google.cloud.storage.object.v1.finalized (when an object is created in a Cloud Storage bucket), or type=google.cloud.pubsub.topic.v1.messagePublished (when a message is published to the specified Pub/Sub topic).
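Conceptually, a trigger's event filters are attribute-value pairs that an event must match before Eventarc routes it. The sketch below illustrates that matching with plain exact-value comparison; this is a simplification (Eventarc performs the actual matching for you, and some attributes support path patterns), and the bucket name is hypothetical:

```python
def matches_trigger(event_attributes, event_filters):
    """Return True if an event matches every filter on a trigger.

    Conceptual sketch only: each filter key must be present on the
    event with an exactly matching value.
    """
    return all(
        event_attributes.get(key) == value
        for key, value in event_filters.items()
    )

# A trigger filtering for objects created in a specific bucket.
filters = {
    "type": "google.cloud.storage.object.v1.finalized",
    "bucket": "my-bucket",  # hypothetical bucket name
}
event = {
    "type": "google.cloud.storage.object.v1.finalized",
    "bucket": "my-bucket",
    "objectId": "photo.jpg",
}
```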

Trigger location

Google Cloud services such as Cloud Storage can be set up to be regional or multi-regional. Some services, such as Cloud Build, can be set up globally.

Eventarc lets you create regional triggers or, for some events, you can create a global trigger and receive events from all regions. For more information, see Understand Eventarc locations.

You should specify a location for the Eventarc trigger that matches the location of the Google Cloud service that is generating events; this avoids the performance and data residency issues that a global trigger can cause.

You can specify trigger locations using a --location flag with each command. For Cloud Run destinations, if a --destination-run-region flag isn't specified, it's assumed that the service is in the same region as the trigger. For more information, see the Google Cloud CLI reference.

Reliability and delivery

Delivery expectations are as follows:

  • Events using Cloud Audit Logs are delivered in under a minute. (Note that although a Cloud Audit Logs trigger is created immediately, it can take up to two minutes for a trigger to propagate and filter events.)
  • Events using Pub/Sub are delivered in seconds.

There is no in-order, first-in-first-out delivery guarantee. Note that enforcing strict ordering would undermine Eventarc's availability and scalability features, which match those of its transport layer, Pub/Sub. For more information, see Ordering messages.

Latency and throughput are best effort. They vary based on multiple factors, including whether the Eventarc trigger is regional, multi-regional, or global; the configuration of a particular service; and the network load on resources in a Google Cloud region.

Note that there are usage quotas and limits that apply generally to Eventarc. There are also usage quotas and limits that are specific to Workflows.

Event retry policy

The retry characteristics of Eventarc match those of its transport layer, Pub/Sub. For more information, see Retry requests and Push backoff.

The default message retention duration set by Eventarc is 24 hours with an exponential backoff delay.

You can update the retry policy through the Pub/Sub subscription associated with the Eventarc trigger:

  1. Open the Trigger details page.
  2. Click the topic.
  3. Click the Subscriptions tab.

Any subscription automatically created by Eventarc will have this format: projects/PROJECT_ID/subscriptions/eventarc-REGION-TRIGGER_ID-sub-SUBSCRIPTION_ID. For more information on subscription limits, see Pub/Sub resource limits.

If Pub/Sub attempts to deliver a message but the destination can't acknowledge it, Pub/Sub resends the message with a minimum exponential backoff of 10 seconds. If the destination still doesn't acknowledge the message, the retry delay grows with each attempt (up to a maximum of 600 seconds) and the message is resent to the destination.
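This backoff behavior can be sketched as follows. The exact growth curve between the 10-second minimum and the 600-second maximum is managed by Pub/Sub, so simple doubling is assumed here purely for illustration:

```python
def push_backoff_schedule(attempts, minimum=10, maximum=600):
    """Illustrative retry delays (in seconds) for an unacknowledged
    push message.

    Pub/Sub applies an exponential backoff between a 10-second minimum
    and a 600-second maximum; the doubling below is an assumption for
    illustration, not the service's exact algorithm.
    """
    return [min(maximum, minimum * 2 ** n) for n in range(attempts)]
```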

Note that Workflows acknowledges events as soon as the workflow execution starts.

Dead letter topics

If messages can't be delivered to the destination, you can forward the undelivered messages to a dead-letter topic (also known as a dead-letter queue). A dead-letter topic stores messages that the destination can't acknowledge. You must set the dead-letter topic when you create or update a Pub/Sub subscription, not when you create a Pub/Sub topic or when Eventarc creates a Pub/Sub topic. For more information, see Handle message failures.
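Because the dead-letter topic is a property of the Pub/Sub subscription, configuring it amounts to patching the subscription. The sketch below builds such a REST-style request body; the resource names are hypothetical, and the `deadLetterPolicy` fields follow the Pub/Sub REST API (`maxDeliveryAttempts` accepts 5 to 100):

```python
def dead_letter_patch(subscription, dead_letter_topic, max_attempts=5):
    """Build a REST-style Pub/Sub subscription patch that sets a
    dead-letter policy.

    Sketch only: Pub/Sub forwards a message to the dead-letter topic
    after it has been delivered, and not acknowledged, max_attempts
    times.
    """
    return {
        "subscription": {
            "name": subscription,
            "deadLetterPolicy": {
                "deadLetterTopic": dead_letter_topic,
                "maxDeliveryAttempts": max_attempts,
            },
        },
        "updateMask": "deadLetterPolicy",
    }
```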

Errors that don't warrant retries

When applications use Pub/Sub as the event source and an event is not delivered, the event is automatically retried, except for errors that don't warrant retries. Events to a workflow destination, from any source, aren't retried if the workflow doesn't execute. If the workflow execution starts but later fails, the execution isn't retried. To resolve such service issues, you should handle errors and retries within the workflow.

Duplicate events

Duplicate events might be delivered to event handlers. According to the CloudEvents specification, the combination of source and id attributes is considered unique, and therefore any events with the same combination are considered duplicates. You should implement idempotent event handlers as a general best practice.
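One way to make a handler idempotent is to use the (source, id) pair as a deduplication key. The sketch below wraps a handler with an in-memory cache; a production service would need durable, shared storage for the keys, because instances can restart and scale out:

```python
def make_idempotent(handler):
    """Wrap an event handler so duplicate deliveries are ignored.

    Per the CloudEvents specification, the combination of the source
    and id attributes uniquely identifies an event, so it serves as a
    deduplication key. The in-memory set here is a sketch only.
    """
    seen = set()

    def wrapper(event):
        key = (event["source"], event["id"])
        if key in seen:
            return "duplicate-skipped"
        seen.add(key)
        return handler(event)

    return wrapper
```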


Observability

Detailed logs for Eventarc, Cloud Run, GKE, Pub/Sub, and Workflows are available from Cloud Audit Logs.

Disaster recovery

You can take advantage of zones and regions to achieve reliability in the event of outages. To learn more about meeting your recovery time objective (RTO) and recovery point objective (RPO) targets for backup and recovery when using Eventarc, see Architecting disaster recovery for cloud infrastructure outages.

What's next