# Exponential backoff
[Exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) is a standard error-handling
strategy for network applications in which a client periodically retries a
failed request with increasing delays between requests. Clients should use
exponential backoff for all requests to Memorystore for Valkey that return
HTTP `5xx` or `429` error response codes.
Understanding how exponential backoff works is important if you are building
client applications that use the Memorystore for Valkey [REST API](/memorystore/docs/valkey/reference/rest)
directly.
If you are using the [Google Cloud console](https://console.cloud.google.com/), the console sends
requests to Memorystore for Valkey on your behalf and handles any necessary
backoff.
## Example algorithm
An exponential backoff algorithm retries requests by exponentially increasing
the wait time between retries, up to a maximum backoff time. For example:
1. Make a request to Memorystore for Valkey.

2. If the request fails, wait 1 second plus `random_number_milliseconds`, then
   retry the request.

3. If the request fails, wait 2 seconds plus `random_number_milliseconds`, then
   retry the request.

4. If the request fails, wait 4 seconds plus `random_number_milliseconds`, then
   retry the request.

5. And so on, up to a `maximum_backoff` time.

6. Continue waiting and retrying up to some maximum number of retries, but
   do not increase the wait period between retries.
where:

- The wait time is `min((2^n + random_number_milliseconds), maximum_backoff)`,
  with `n` starting at 0 and incremented by 1 for each retry, so the base wait
  grows as 1, 2, 4, ... seconds.

- `random_number_milliseconds` is a random number of milliseconds less than or
  equal to 1000. This jitter helps avoid cases where many clients become
  synchronized by some event and all retry at once, sending requests in
  synchronized waves. Recalculate the value of `random_number_milliseconds`
  before each retry.

- `maximum_backoff` is typically 32 or 64 seconds. The appropriate value
  depends on the use case.
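
The following is a minimal Python sketch of this algorithm. The
`call_with_backoff` helper, the `make_request` callable, and the default limits
are illustrative assumptions, not part of the Memorystore for Valkey API:

```python
import random
import time

# HTTP status codes that warrant a retry: 429 and all 5xx errors.
RETRYABLE_STATUS_CODES = {429} | set(range(500, 600))

def call_with_backoff(make_request, max_backoff=64, max_retries=10):
    """Call make_request() with exponential backoff and jitter.

    make_request is a hypothetical callable that performs one
    Memorystore for Valkey REST request and returns an HTTP status code.
    """
    for n in range(max_retries):
        status = make_request()
        if status not in RETRYABLE_STATUS_CODES:
            return status  # Success, or an error that shouldn't be retried.
        # Wait min(2^n + random_number_milliseconds, maximum_backoff) seconds,
        # recomputing the jitter before every retry.
        random_number_milliseconds = random.randint(0, 1000)
        wait = min(2 ** n + random_number_milliseconds / 1000.0, max_backoff)
        time.sleep(wait)
    raise RuntimeError("Request failed after the maximum number of retries.")
```

Note that once `2^n` exceeds `max_backoff`, the computed wait stays pinned at
`max_backoff`, which matches the capped-retry behavior described next.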
It's okay to continue retrying once you reach the `maximum_backoff` time.
Retries after this point do not need to continue increasing the backoff time.
For example, if a client uses a `maximum_backoff` time of 64 seconds, then
after reaching this value, the client can retry every 64 seconds. At some
point, clients should be prevented from retrying indefinitely.
The maximum backoff and maximum number of retries that a client uses
depend on the use case and network conditions. For example, mobile
clients of an application might need to retry more times, and at longer
intervals, than desktop clients of the same application.
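
Continuing the hypothetical `call_with_backoff` sketch above, such tuning might
be expressed as different parameter values per client type; the numbers here
are illustrative, not recommendations:

```python
# Illustrative values only; tune for your application and network conditions.
status = call_with_backoff(make_request, max_backoff=64, max_retries=10)  # Mobile client.
status = call_with_backoff(make_request, max_backoff=32, max_retries=5)   # Desktop client.
```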
If requests continue to fail after the client exceeds the maximum number of
retries, report or log an error using one of the methods listed under
[Getting support](/memorystore/docs/valkey/getting-support).
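
As a final illustrative sketch building on `call_with_backoff`, a client could
log the terminal failure before escalating; the `logging` call and the message
text are assumptions:

```python
import logging

try:
    status = call_with_backoff(make_request, max_backoff=64, max_retries=10)
except RuntimeError:
    # All retries are exhausted: record the failure, then escalate or report it.
    logging.error("Memorystore for Valkey request failed after all retries")
    raise
```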
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-29 UTC."],[],[],null,["# Exponential backoff\n\n[Exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) is a standard error handling\nstrategy for network applications in which a client periodically retries a\nfailed request with increasing delays between requests. Clients should use\nexponential backoff for all requests to Memorystore for Valkey that return\nHTTP `5xx` and `429` response code errors.\n\nUnderstanding how exponential backoff works is important if you are building\nclient applications that use the Memorystore for Valkey [REST API](/memorystore/docs/valkey/reference/rest)\ndirectly.\n\nIf you are using the [Google Cloud console](https://console.cloud.google.com/), the console sends\nrequests to Memorystore for Valkey on your behalf and handles any necessary\nbackoff.\n\nExample algorithm\n-----------------\n\nAn exponential backoff algorithm retries requests by exponentially increasing\nthe wait time between retries up to a maximum backoff time. An example is:\n\n1. Make a request to Memorystore for Valkey.\n\n2. If the request fails, wait 1 + `random_number_milliseconds` seconds and retry\n the request.\n\n3. If the request fails, wait 2 + `random_number_milliseconds` seconds and retry\n the request.\n\n4. If the request fails, wait 4 + `random_number_milliseconds` seconds and retry\n the request.\n\n5. And so on, up to a `maximum_backoff` time.\n\n6. Continue waiting and retrying up to some maximum number of retries, but\n do not increase the wait period between retries.\n\nwhere:\n\n- The wait time is min(((2\\^`n`)+`random_number_milliseconds`), `maximum_backoff`),\n with `n` incremented by 1 for each iteration (request).\n\n- `random_number_milliseconds` is a random number of milliseconds less than or\n equal to 1000. This helps to avoid cases where many clients get synchronized by\n some situation and all retry at once, sending requests in synchronized\n waves. The value of `random_number_milliseconds` should be recalculated after\n each retry request.\n\n- `maximum_backoff` is typically 32 or 64 seconds. The appropriate value\n depends on the use case.\n\nIt's okay to continue retrying once you reach the `maximum_backoff` time.\nRetries after this point do not need to continue increasing backoff time. For\nexample, if a client uses an `maximum_backoff` time of 64 seconds, then after\nreaching this value, the client can retry every 64 seconds. At some point,\nclients should be prevented from retrying infinitely.\n\nThe maximum backoff and maximum number of retries that a client uses\ndepends on the use case and network conditions. For example, mobile\nclients of an application may need to retry more times and for longer intervals\nwhen compared to desktop clients of the same application.\n\nIf the retry requests fail after exceeding the maximum number of retries, report\nor log an error using one of the methods listed under [Getting support](/memorystore/docs/valkey/getting-support)."]]