Retry events

The retry characteristics of Eventarc match those of its transport layer, Cloud Pub/Sub. For more information, see Retry requests and Push backoff.

By default, Eventarc sets the message retention duration to 24 hours and applies an exponential backoff delay between retries.

You can update the retry policy through the Pub/Sub subscription associated with the Eventarc trigger:

  1. Open the Trigger details page.
  2. Click the topic.
  3. Click the Subscriptions tab.

Any subscription automatically created by Eventarc has the following format: projects/PROJECT_ID/subscriptions/eventarc-REGION-TRIGGER_ID-sub-SUBSCRIPTION_ID. For more information on subscription limits, see Pub/Sub resource limits.

If Pub/Sub attempts to deliver a message but the destination can't acknowledge it, Pub/Sub resends the message with a minimum exponential backoff of 10 seconds. If the destination continues to not acknowledge the message, the backoff delay increases with each retry, up to a maximum of 600 seconds, and the message is redelivered to the destination.
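As a rough illustration, the backoff schedule described above can be sketched in Python. The doubling factor here is an assumption for illustration; Pub/Sub's actual growth rate and jitter are internal details.

```python
# Illustrative sketch of an exponential backoff schedule: delays start at
# a 10-second minimum, grow exponentially (doubling is assumed here), and
# are capped at 600 seconds.
MIN_BACKOFF_S = 10
MAX_BACKOFF_S = 600

def backoff_schedule(attempts: int) -> list[int]:
    """Return the delay (in seconds) before each redelivery attempt."""
    delays = []
    delay = MIN_BACKOFF_S
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * 2, MAX_BACKOFF_S)
    return delays

print(backoff_schedule(6))  # → [10, 20, 40, 80, 160, 320]
```

Note that however many retries occur, no single delay in this sketch exceeds the 600-second cap.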

Note that Workflows acknowledges events as soon as the workflow execution starts.

Dead letter topics

If the destination can't acknowledge a message, you can forward undelivered messages to a dead-letter topic (also known as a dead-letter queue). A dead-letter topic stores messages that the destination can't acknowledge. You must set a dead-letter topic when you create or update a Pub/Sub subscription, not when you create a Pub/Sub topic or when Eventarc creates a Pub/Sub topic. For more information, see Handle message failures.
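The dead-letter flow can be sketched as a local simulation. The delivery loop and attempt limit below are hypothetical; in Pub/Sub, the maximum number of delivery attempts is configured in the subscription's dead-letter policy.

```python
from collections import deque

# Sketch of dead-lettering: after a maximum number of delivery attempts,
# an unacknowledged message is forwarded to a dead-letter queue instead
# of being retried forever. Backoff between attempts is omitted here.
MAX_DELIVERY_ATTEMPTS = 5

def deliver(message: dict, handler, dead_letter_queue: deque) -> str:
    for _ in range(MAX_DELIVERY_ATTEMPTS):
        try:
            handler(message)
            return "acked"
        except Exception:
            continue  # not acknowledged: redeliver
    dead_letter_queue.append(message)
    return "dead-lettered"
```

A message whose handler keeps raising ends up in the queue, where it can be inspected or replayed later, while a successfully handled message is simply acknowledged.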

Errors that don't warrant retries

When applications use Pub/Sub as the event source and an event is not delivered, the event is automatically retried, except for errors that don't warrant retries. Events from any source to a Workflows destination are not retried if the workflow fails to execute. If the workflow execution starts but later fails, the execution itself is not retried. To address such issues, handle errors and retries within the workflow.
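As a sketch of handling retries within the workflow itself, a Workflows definition can wrap a step in a try/retry block. The endpoint URL and the retry parameters below are placeholders, not recommended values.

```yaml
main:
  steps:
    - callEndpoint:
        try:
          call: http.get
          args:
            url: https://example.com/api   # placeholder endpoint
          result: response
        retry:
          predicate: ${http.default_retry_predicate}
          max_retries: 5
          backoff:
            initial_delay: 2
            max_delay: 60
            multiplier: 2
    - done:
        return: ${response.body}
```

This way, a transient failure in a step is retried inside the execution rather than relying on event redelivery, which Workflows has already acknowledged.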

Duplicate events

Duplicate events might be delivered to event handlers. According to the CloudEvents specification, the combination of source and id attributes is considered unique, and therefore any events with the same combination are considered duplicates. You should implement idempotent event handlers as a general best practice.
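A minimal sketch of deduplication keyed on the (source, id) pair follows. The in-memory set is for illustration only; a real service would persist the seen keys so they survive restarts.

```python
# Sketch of duplicate detection keyed on the CloudEvents (source, id)
# pair, which the spec treats as unique.
seen_events: set[tuple[str, str]] = set()

def handle_once(event: dict, handler) -> bool:
    """Call handler unless this (source, id) pair was already seen."""
    key = (event["source"], event["id"])
    if key in seen_events:
        return False  # duplicate delivery: skip
    seen_events.add(key)
    handler(event)
    return True
```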

Make event handlers idempotent

Event handlers that can be retried should be idempotent. Follow these general guidelines:

  • Many external APIs let you supply an idempotency key as a parameter. If you are using such an API, you should use the event ID as the idempotency key.
  • Idempotency works well with at-least-once delivery, because it makes it safe to retry. So a general best practice for writing reliable code is to combine idempotency with retries.
  • Make sure that your code is internally idempotent. For example:
    • Make sure that mutations can happen more than once without changing the outcome.
    • Query database state in a transaction before mutating the state.
    • Make sure that all side effects are themselves idempotent.
  • Impose a transactional check outside your service, independent of the code. For example, persist state somewhere recording that a given event ID has already been processed.
  • Deal with duplicate calls out-of-band. For example, run a separate cleanup process that cleans up after duplicate calls.
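The transactional-check guideline above can be sketched with a uniqueness constraint on processed event IDs. The table and helper names are hypothetical, and an in-memory SQLite database stands in for a real persistent store.

```python
import sqlite3

# Sketch of a transactional check: record each processed event ID in a
# table with a primary-key constraint, so a duplicate delivery fails the
# insert and the side effect is skipped. If the side effect raises, the
# transaction rolls back and the marker is removed, allowing a retry.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE processed_events (event_id TEXT PRIMARY KEY)")

def process_event(event_id: str, side_effect) -> bool:
    """Run side_effect at most once per event_id; return True if it ran."""
    try:
        with conn:  # commits on success, rolls back on exception
            conn.execute("INSERT INTO processed_events VALUES (?)", (event_id,))
            side_effect()
        return True
    except sqlite3.IntegrityError:
        return False  # event ID already processed: duplicate delivery
```

Keeping the marker insert and the side effect in one transaction means a crash mid-handler leaves no record, so the retried delivery runs the handler again cleanly.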