Tips & Tricks
This document describes best practices for designing, implementing, testing, and deploying Cloud Run functions.
Correctness
This section describes general best practices for designing and implementing Cloud Run functions.
Write idempotent functions
Your functions should produce the same result even if they are called multiple times. This lets you retry an invocation if the previous invocation fails part way through your code. For more information, see retrying event-driven functions.
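As an illustration only (not from the official samples), the following Node.js sketch keys its write on the event ID, so a retried delivery overwrites the same document instead of creating a duplicate. The collection name and payload shape are assumptions:

const functions = require('@google-cloud/functions-framework');
const {Firestore} = require('@google-cloud/firestore');

const firestore = new Firestore();

// Hypothetical event-driven function: using the CloudEvent ID as the document
// ID makes the write idempotent, so replaying the same event has no extra effect.
functions.cloudEvent('recordOrder', async cloudEvent => {
  const order = cloudEvent.data;          // assumed payload shape
  await firestore
    .collection('orders')                 // hypothetical collection name
    .doc(cloudEvent.id)                   // event ID as the idempotency key
    .set(order);
});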
Ensure HTTP functions send an HTTP response
If your function is HTTP-triggered, remember to send an HTTP response, as shown below. Failing to do so can result in your function executing until timeout. If this occurs, you will be charged for the entire timeout time. Timeouts may also cause cold starts on subsequent invocations, resulting in unpredictable behavior or additional latency.
Code samples showing how to send a response are available for Node.js, Python, Go, Java, C#, Ruby, and PHP.
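For example, a minimal Node.js HTTP function (a sketch, not one of the official samples) always finishes by sending a response, even on the error path:

const functions = require('@google-cloud/functions-framework');

functions.http('helloHttp', async (req, res) => {
  try {
    // Hypothetical work; replace with your own logic.
    const message = `Hello ${req.query.name || 'World'}!`;
    // Sending the response ends the invocation instead of running to timeout.
    res.status(200).send(message);
  } catch (err) {
    // Always respond, even on failure, so the function does not hang until timeout.
    res.status(500).send('Internal error');
  }
});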
Do not start background activities
Background activity is anything that happens after your function has terminated. A function invocation finishes once the function returns or otherwise signals completion, such as by calling the callback argument in Node.js event-driven functions. Any code run after graceful termination cannot access the CPU and will not make any progress.
In addition, when a subsequent invocation is executed in the same environment, your background activity resumes, interfering with the new invocation. This may lead to unexpected behavior and errors that are hard to diagnose. Accessing the network after a function terminates usually leads to connections being reset (ECONNRESET error code).
Background activity can often be detected in logs from individual invocations, by finding anything that is logged after the line saying that the invocation finished. Background activity can sometimes be buried deeper in the code, especially when asynchronous operations such as callbacks or timers are present. Review your code to make sure all asynchronous operations finish before you terminate the function.
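As a sketch of this review (the helper name is illustrative), the following hypothetical Node.js function awaits its asynchronous work before responding; starting the same work without await would leave it running as background activity after the invocation finishes:

const functions = require('@google-cloud/functions-framework');

// Hypothetical helper standing in for any asynchronous work (an API call,
// a database write, and so on).
async function writeAuditRecord(entry) {
  /* ... */
}

functions.http('audited', async (req, res) => {
  // Await the asynchronous operation so it completes before the response is
  // sent. Calling writeAuditRecord(...) without await would leave it running
  // after termination, where it gets no CPU and may be cut off.
  await writeAuditRecord({path: req.path, at: Date.now()});
  res.status(200).send('OK');
});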
Always delete temporary files
Local disk storage in the temporary directory is an in-memory filesystem. Files that you write consume memory available to your function, and sometimes persist between invocations. Failing to explicitly delete these files may eventually lead to an out-of-memory error and a subsequent cold start.
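For example, a hypothetical Node.js sketch that removes its temporary file in a finally block, so the memory backing it is released even if processing fails:

const fs = require('fs/promises');
const os = require('os');
const path = require('path');

// Hypothetical processing step; the file and function names are illustrative.
async function processLocally(tmpFile) {
  /* ... */
}

async function handleUpload(buffer) {
  const tmpFile = path.join(os.tmpdir(), `upload-${Date.now()}.tmp`);
  await fs.writeFile(tmpFile, buffer);
  try {
    await processLocally(tmpFile);
  } finally {
    // The temporary directory is an in-memory filesystem, so deleting the
    // file frees memory for later invocations.
    await fs.unlink(tmpFile);
  }
}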
You can see the memory used by an individual function by selecting it in the list of functions in the Google Cloud console and choosing the Memory usage plot.
If you need access to long term storage, consider using Cloud Run volume mounts with Cloud Storage or NFS volumes.
You can reduce memory requirements when processing larger files using pipelining. For example, you can process a file on Cloud Storage by creating a read stream, passing it through a stream-based process, and writing the output stream directly to Cloud Storage.
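As a sketch of that pattern (assuming the @google-cloud/storage client; the bucket and object names are placeholders), a gzip transform can be streamed between a Cloud Storage read stream and write stream without buffering the whole file in memory:

const {Storage} = require('@google-cloud/storage');
const {pipeline} = require('stream/promises');
const zlib = require('zlib');

const storage = new Storage();

// Hypothetical example: compress an object without holding it in memory.
async function compressObject(bucketName, srcName, destName) {
  await pipeline(
    storage.bucket(bucketName).file(srcName).createReadStream(),
    zlib.createGzip(),  // stream-based transform step
    storage.bucket(bucketName).file(destName).createWriteStream()
  );
}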
Functions Framework
To ensure that the same dependencies are installed consistently across environments, we recommend that you include the Functions Framework library in your package manager and pin the dependency to a specific version of Functions Framework.
To do this, include your preferred version in the relevant lock file (for example, package-lock.json for Node.js, or requirements.txt for Python).
If Functions Framework is not explicitly listed as a dependency, it will automatically be added during the build process using the latest available version.
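For example, in Node.js the pin might look like the following package.json entry (the version number is only illustrative; use the version you have tested against):

{
  "dependencies": {
    "@google-cloud/functions-framework": "3.4.5"
  }
}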
Tools
This section provides guidelines on how to use tools to implement, test, and interact with Cloud Run functions.
Local development
Function deployment takes a bit of time, so it is often faster to test the code of your function locally.
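One common way to do this in Node.js (assuming the Functions Framework is already a dependency, and that helloHttp is a placeholder for your own target) is a start script that runs the function on a local port:

{
  "scripts": {
    "start": "functions-framework --target=helloHttp"
  }
}

Running npm start then serves the function locally (on port 8080 by default), so you can exercise it with curl or a browser before deploying.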
Error reporting
In languages that use exception handling, do not throw uncaught exceptions, because they force cold starts in future invocations. See the Error Reporting guide for information on how to properly report errors.
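A minimal Node.js sketch of this idea (the helper is illustrative): catch the failure, log it so it reaches your error reporting, and still return a response rather than letting the exception escape the handler:

const functions = require('@google-cloud/functions-framework');

// Illustrative helper representing work that may throw.
async function doRiskyWork(req) {
  /* ... */
  return 'ok';
}

functions.http('safeHandler', async (req, res) => {
  try {
    const result = await doRiskyWork(req);
    res.status(200).send(result);
  } catch (err) {
    // Log the error instead of letting it propagate as an uncaught exception.
    console.error(err);
    res.status(500).send('Something went wrong');
  }
});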
Do not manually exit
Manually exiting can cause unexpected behavior. Please use the following language-specific idioms instead:
Node.js
Do not use process.exit(). HTTP functions should send a response with res.status(200).send(message), and event-driven functions will exit once they return (either implicitly or explicitly).
Python
Do not use sys.exit(). HTTP functions should explicitly return a response as a string, and event-driven functions will exit once they return a value (either implicitly or explicitly).
Go
Do not use os.Exit(). HTTP functions should explicitly return a response as a string, and event-driven functions will exit once they return a value (either implicitly or explicitly).
Java
Do not use System.exit(). HTTP functions should send a response with response.getWriter().write(message), and event-driven functions will exit once they return (either implicitly or explicitly).
C#
Do not use System.Environment.Exit(). HTTP functions should send a response with context.Response.WriteAsync(message), and event-driven functions will exit once they return (either implicitly or explicitly).
Ruby
Do not use exit() or abort(). HTTP functions should explicitly return a response as a string, and event-driven functions will exit once they return a value (either implicitly or explicitly).
PHP
Do not use exit() or die(). HTTP functions should explicitly return a response as a string, and event-driven functions will exit once they return a value (either implicitly or explicitly).
Use SendGrid to send emails
Cloud Run functions does not allow outbound connections on port 25, so you cannot make non-secure connections to an SMTP server. The recommended way to send emails is to use a third party service such as SendGrid. You can find other options for sending email in the Sending Email from an Instance tutorial for Google Compute Engine.
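As a sketch only (assuming the @sendgrid/mail client and a SENDGRID_API_KEY environment variable; the addresses are placeholders), sending a message looks roughly like this:

const sgMail = require('@sendgrid/mail');

// API key supplied through an environment variable rather than hard-coded.
sgMail.setApiKey(process.env.SENDGRID_API_KEY);

async function sendNotification() {
  await sgMail.send({
    to: 'recipient@example.com',   // placeholder address
    from: 'sender@example.com',    // placeholder; must be a verified sender
    subject: 'Job finished',
    text: 'Your export completed successfully.',
  });
}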
Performance
This section describes best practices for optimizing performance.
Avoid low concurrency
Because cold starts are expensive, being able to reuse recently started instances during a spike is a valuable optimization for handling load. Limiting concurrency restricts how effectively existing instances can be leveraged, and therefore incurs more cold starts.
Increasing concurrency lets a single instance serve multiple requests at once, making spikes of load easier to handle. Note: 1st gen functions have concurrency limited to 1. We recommend that you migrate to Cloud Run functions.
Use dependencies wisely
Because functions are stateless, the execution environment is often initialized from scratch (during what is known as a cold start). When a cold start occurs, the global context of the function is evaluated.
If your functions import modules, the load time for those modules can add to the invocation latency during a cold start. You can reduce this latency, as well as the time needed to deploy your function, by loading dependencies correctly and not loading dependencies your function doesn't use.
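One hedged Node.js illustration (the bucket name and export path are placeholders): load a dependency that only a rare code path needs lazily, inside that path, so the common path's cold start stays fast:

const functions = require('@google-cloud/functions-framework');

functions.http('report', async (req, res) => {
  if (req.query.export === 'gcs') {
    // Load the Cloud Storage client lazily: only the export code path pays
    // the module-loading cost.
    const {Storage} = require('@google-cloud/storage');
    const storage = new Storage();
    await storage
      .bucket('my-reports-bucket')            // placeholder bucket name
      .file(`report-${Date.now()}.txt`)
      .save('report contents');               // illustrative payload
    res.status(200).send('exported');
    return;
  }
  res.status(200).send('plain-text report');
});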
Use global variables to reuse objects in future invocations
There is no guarantee that the state of a Cloud Run function will be preserved for future invocations. However, Cloud Run functions often recycles the execution environment of a previous invocation. If you declare a variable in global scope, its value can be reused in subsequent invocations without having to be recomputed.
This way you can cache objects that may be expensive to recreate on each function invocation. Moving such objects from the function body to global scope may result in significant performance improvements. The following example creates a heavy object only once per function instance, and shares it across all function invocations reaching the given instance:
Code samples illustrating this pattern are available in Node.js, Python, Go, Java, C#, Ruby, and PHP.
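As a minimal Node.js sketch of the pattern (not one of the official samples; the heavy object is only a stand-in for whatever is expensive to build):

const functions = require('@google-cloud/functions-framework');

// Stand-in for an object that is expensive to construct, such as an API
// client or a large lookup table.
function buildHeavyObject() {
  return {builtAt: Date.now()};
}

// Computed once per instance, at cold start, and reused by every invocation
// that reaches this instance.
const heavyObject = buildHeavyObject();

functions.http('cached', (req, res) => {
  // Only cheap, per-request work happens inside the handler.
  res.status(200).send(`Instance warmed at ${heavyObject.builtAt}`);
});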
It is particularly important to cache network connections, library references, and API client objects in global scope. See Optimize networking for examples.
Reduce cold starts by setting a minimum number of instances
By default, Cloud Run functions scales the number of instances based on the number of incoming requests. You can change this default behavior by setting a minimum number of instances that Cloud Run functions must keep ready to serve requests. Setting a minimum number of instances reduces cold starts of your application. We recommend setting a minimum number of instances, and completing initialization at load time, if your application is latency-sensitive.
To learn how to set a minimum number of instances, see Using minimum instances.
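For example (a sketch assuming the gcloud CLI and an already-deployed function named my-function; see the linked page for the authoritative flags), the minimum can be set when deploying or updating the function:

gcloud functions deploy my-function --min-instances=2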
Notes about cold start and initialization
Global initialization happens at load time. Without it, the first request would need to complete initialization and load modules, thereby incurring higher latency.
However, global initialization also has an impact on cold starts. To minimize this impact, initialize only what is needed for the first request, to keep the first request's latency as low as possible.
This is especially important if you configured min instances as described above for a latency-sensitive function. In that scenario, completing initialization at load time and caching useful data ensures that the first request doesn't need to do it and is served with low latency.
If you initialize variables in global scope, depending on the language, long initialization times can result in two behaviors:
- For some combinations of languages and async libraries, the functions framework can run asynchronously and return immediately, causing code to continue running in the background, which could cause issues such as not being able to access the CPU. To avoid this, you should block on module initialization as described below. This also ensures that requests are not served until the initialization is complete.
- On the other hand, if the initialization is synchronous, the long initialization time will cause longer cold starts, which could be an issue especially with low-concurrency functions during spikes of load.
Example of prewarming an async Node.js library
Node.js with Firestore is an example of an async Node.js library. To take advantage of min_instances, the following code completes loading and initialization at load time, blocking on the module loading.
Top-level await (TLA) is used, which requires ES modules: use an .mjs extension for the Node.js code or add "type": "module" to the package.json file.
{ "main": "main.js", "type": "module", "dependencies": { "@google-cloud/firestore": "^7.10.0", "@google-cloud/functions-framework": "^3.4.5" } }
Node.js
import Firestore from '@google-cloud/firestore';
import * as functions from '@google-cloud/functions-framework';

const firestore = new Firestore({preferRest: true});

// Pre-warm firestore connection pool, and preload our global config
// document in cache. In order to ensure no other request comes in,
// block the module loading with a synchronous global request:
const config = await firestore.collection('collection').doc('config').get();

functions.http('fetch', (req, res) => {
  // Do something with config and firestore client, which are now preloaded
  // and will execute at lower latency.
});
Examples of global initialization
Code samples showing global initialization are available in Node.js, Python, Go, Java, C#, Ruby, and PHP.
PHP functions cannot preserve variables between requests. The scopes sample above uses lazy loading to cache global variable values in a file.
This is particularly important if you define several functions in a single file, and different functions use different variables. Unless you use lazy initialization, you may waste resources on variables that are initialized but never used.
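A hedged Node.js sketch of lazy initialization (not one of the official samples; the expensive initialization is a stand-in): the global is left unset until the first invocation that needs it, so functions in the same file that never use it don't pay for it:

const functions = require('@google-cloud/functions-framework');

// Stand-in for a variable that is expensive to initialize.
function expensiveInitialization() {
  return {createdAt: Date.now()};
}

// Declared globally, but not initialized until first use.
let lazyGlobal;

functions.http('lazyUser', (req, res) => {
  // Initialized once, on the first request that actually needs it, then
  // reused by later invocations on the same instance.
  lazyGlobal = lazyGlobal || expensiveInitialization();
  res.status(200).send(`Initialized at ${lazyGlobal.createdAt}`);
});

functions.http('nonUser', (req, res) => {
  // This function never touches lazyGlobal, so it never pays the cost.
  res.status(200).send('Hello');
});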
Additional resources
Find out more about optimizing performance in the "Google Cloud Performance Atlas" video Cloud Run functions Cold Boot Time.