Serverless VPC Access
Serverless VPC Access makes it possible for you to connect directly to your Virtual Private Cloud (VPC) network from serverless environments such as Cloud Run, App Engine, or Cloud Run functions. Configuring Serverless VPC Access allows your serverless environment to send requests to your VPC network by using internal DNS and internal IP addresses (as defined by RFC 1918 and RFC 6598). The responses to these requests also use your internal network.
There are two main benefits to using Serverless VPC Access:
- Requests sent to your VPC network are never exposed to the internet.
- Communication through Serverless VPC Access can have lower latency than communication over the internet.
Serverless VPC Access sends internal traffic from your VPC network to your serverless environment only when that traffic is a response to a request that was sent from your serverless environment through the Serverless VPC Access connector. To learn about sending other internal traffic to your serverless environment, see Private Google Access.
To access resources across multiple VPC networks and Google Cloud projects, you must also configure Shared VPC or VPC Network Peering.
How it works
Serverless VPC Access is based on a resource called a connector. A connector handles traffic between your serverless environment and your VPC network. When you create a connector in your Google Cloud project, you attach it to a specific VPC network and region. You can then configure your serverless services to use the connector for outbound network traffic.
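As a sketch of that workflow, the following gcloud commands create a connector and then route a Cloud Run service's outbound traffic through it. The connector name, region, network, service name, and image are placeholder example values, not part of this document:

```shell
# Create a connector attached to a specific VPC network and region.
# "my-connector", "us-central1", "default", and the IP range are examples.
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 \
  --network=default \
  --range=10.8.0.0/28

# Configure a Cloud Run service to send outbound traffic through the connector.
gcloud run deploy my-service \
  --image=us-docker.pkg.dev/cloudrun/container/hello \
  --region=us-central1 \
  --vpc-connector=my-connector
```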
IP address ranges
There are two options for setting the IP address range for a connector:
- Subnet: You can specify an existing `/28` subnet if no resources already use the subnet.
- CIDR range: You can specify an unused `/28` CIDR range. When specifying this range, make sure that it doesn't overlap with any in-use CIDR ranges.
Traffic sent through the connector into your VPC network originates from the subnet or CIDR range that you specify.
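One way to avoid overlap before choosing a CIDR range is to list the ranges already in use in the target network. A minimal sketch, assuming a network named `my-network`:

```shell
# List the subnet ranges already in use in the target network, so you can
# pick an unused /28 CIDR range that doesn't overlap any of them.
gcloud compute networks subnets list \
  --network=my-network \
  --format="value(name,ipCidrRange)"
```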
Firewall rules
Firewall rules are necessary for the operation of the connector and its communication with other resources, including resources in your network.
Firewall rules for connectors in standalone VPC networks or Shared VPC host projects
If you create a connector in a standalone VPC network or in the host project of a Shared VPC network, Google Cloud creates all necessary firewall rules. These firewall rules exist only as long as the associated connector exists. They are visible in the Google Cloud console, but you cannot edit or delete them.
Firewall rule purpose | Name format | Type | Action | Priority | Protocols and ports |
---|---|---|---|---|---|
Allows traffic to the connector's VM instances from the health check probe ranges (`35.191.0.0/16`, `35.191.192.0/18`, `130.211.0.0/22`) on certain ports | `aet-CONNECTOR_REGION-CONNECTOR_NAME-hcfw` | Ingress | Allow | 100 | TCP:667 |
Allows traffic to the connector's VM instances from Google's underlying serverless infrastructure (`35.199.224.0/19`) on certain ports | `aet-CONNECTOR_REGION-CONNECTOR_NAME-rsgfw` | Ingress | Allow | 100 | TCP:667, UDP:665-666, ICMP |
Allows traffic from the connector's VM instances to Google's underlying serverless infrastructure (`35.199.224.0/19`) on certain ports | `aet-CONNECTOR_REGION-CONNECTOR_NAME-earfw` | Egress | Allow | 100 | TCP:667, UDP:665-666, ICMP |
Blocks traffic from the connector's VM instances to Google's underlying serverless infrastructure (`35.199.224.0/19`) on all other ports | `aet-CONNECTOR_REGION-CONNECTOR_NAME-egrfw` | Egress | Deny | 100 | TCP:1-666, TCP:668-65535, UDP:1-664, UDP:667-65535 |
Allows all traffic from the connector's VM instances (based on their IP address) to all resources in the connector's VPC network | `aet-CONNECTOR_REGION-CONNECTOR_NAME-sbntfw` | Ingress | Allow | 1000 | TCP, UDP, ICMP |
Allows all traffic from the connector's VM instances (based on their network tag) to all resources in the connector's VPC network | `aet-CONNECTOR_REGION-CONNECTOR_NAME-tagfw` | Ingress | Allow | 1000 | TCP, UDP, ICMP |
You can further restrict your connector's access to resources in its target VPC network by using VPC firewall rules or rules in firewall policies. When adding firewall rules, make sure they use a priority value greater than 100 so that they don't override the firewall rules that Google Cloud creates for the connector. For more information, see Restrict connector VM access to VPC network resources.
Firewall rules for connectors in Shared VPC service projects
If you create a connector in a service project and the connector targets a Shared VPC network in the host project, you need to add firewall rules to allow necessary traffic for the connector's operation.
You can also restrict your connector's access to resources in its target VPC network by using VPC firewall rules or rules in firewall policies. For more information, see Access to VPC resources.
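The required rules can be created in the host project with gcloud. The following is a hedged sketch using the source ranges and ports listed in the table for standalone VPC networks; the rule names and network name are placeholder values:

```shell
# Allow Google's underlying serverless infrastructure to reach connector
# instances, identified by the universal vpc-connector network tag.
gcloud compute firewall-rules create serverless-to-vpc-connector \
  --network=shared-vpc-network \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:667,udp:665-666,icmp \
  --source-ranges=35.199.224.0/19 \
  --target-tags=vpc-connector

# Allow health check probes to reach connector instances.
gcloud compute firewall-rules create serverless-health-checks \
  --network=shared-vpc-network \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:667 \
  --source-ranges=35.191.0.0/16,35.191.192.0/18,130.211.0.0/22 \
  --target-tags=vpc-connector
```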
Throughput and scaling
A Serverless VPC Access connector consists of connector instances. Connector instances can use one of several machine types. Larger machine types provide more throughput. You can view the estimated throughput and cost for each machine type in the Google Cloud console and in the following table.
Machine type | Estimated throughput range in Mbps* | Price (connector instance plus network outbound data transfer costs) |
---|---|---|
`f1-micro` | 100-500 | f1-micro pricing |
`e2-micro` | 200-1000 | e2-micro pricing |
`e2-standard-4` | 3200-16000 | e2 standard pricing |
* Maximum throughput ranges are estimates based on regular operation. Actual throughput depends on many factors. See VM network bandwidth.
You can set the minimum and maximum number of connector instances allowed for your connector. The minimum must be at least 2. The maximum can be at most 10 and must be larger than the minimum. If you don't specify the minimum and maximum number of instances for your connector, the default minimum of 2 and the default maximum of 10 apply.

A connector might temporarily exceed the value set for maximum instances when Google performs biweekly maintenance, such as security updates. During maintenance, additional instances might be added to ensure uninterrupted service. After maintenance, connectors return to the number of instances that they had before the maintenance period. Maintenance usually lasts for a few minutes. To reduce impact during maintenance, use connection pools and don't rely on connections lasting more than one minute; instances stop accepting requests one minute before shutdown.
Serverless VPC Access automatically scales out the number of instances in your connector as traffic increases. The instances added are of the type you specified for your connector. Connectors cannot mix machine types. Connectors don't scale in. To prevent connectors from scaling out more than you want, set the maximum number of instances to a low number. If your connector has scaled out and you prefer to have fewer instances, recreate the connector with the necessary number of instances.
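Both the machine type and the scaling bounds are set when the connector is created. A minimal sketch, with placeholder connector, region, and subnet names, that caps scale-out below the default maximum:

```shell
# Example values only: pin the machine type and cap scale-out at 4 instances.
# The minimum must be at least 2, and the maximum must be larger than the minimum.
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 \
  --subnet=my-subnet \
  --machine-type=e2-micro \
  --min-instances=2 \
  --max-instances=4
```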
Example
If you choose `f1-micro` as your machine type and use the default values for the minimum and maximum number of instances (2 and 10, respectively), the estimated throughput for your connector is 100 Mbps at the default minimum number of instances and 500 Mbps at the default maximum number of instances.
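The arithmetic behind this example can be sketched as follows, under the assumption (derived from the figures above, not stated explicitly in this document) that each `f1-micro` instance contributes roughly 50 Mbps:

```shell
# Assumed per-instance contribution: the f1-micro range of 100-500 Mbps
# across 2-10 instances implies roughly 50 Mbps per instance.
per_instance_mbps=50
min_instances=2
max_instances=10

echo "$((per_instance_mbps * min_instances)) Mbps at the default minimum"   # 100 Mbps
echo "$((per_instance_mbps * max_instances)) Mbps at the default maximum"   # 500 Mbps
```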
Throughput chart
You can monitor current throughput from the Connector details page in the Google Cloud console. The Throughput chart on this page displays a detailed view of the connector's throughput metrics.
Network tags
Serverless VPC Access network tags let you refer to VPC connectors in firewall rules and routes.
Every Serverless VPC Access connector automatically receives the following two network tags (sometimes called instance tags):
- Universal network tag (`vpc-connector`): Applies to all existing connectors and any connectors created in the future.
- Unique network tag (`vpc-connector-REGION-CONNECTOR_NAME`): Applies to the connector CONNECTOR_NAME in the region REGION.
These network tags cannot be deleted. New network tags cannot be added.
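For example, the unique network tag can serve as the source in a firewall rule so that the rule matches traffic from one specific connector's instances and no others. A sketch with placeholder names and a placeholder priority value:

```shell
# Deny this connector's instances access to resources in the network.
# Priority 990 takes precedence over the priority-1000 allow rules that
# Google Cloud creates, while leaving the priority-100 operational rules intact.
gcloud compute firewall-rules create deny-this-connector \
  --network=my-network \
  --direction=INGRESS \
  --action=DENY \
  --rules=tcp,udp,icmp \
  --source-tags=vpc-connector-us-central1-my-connector \
  --priority=990
```

You would typically pair a broad deny rule like this with narrower, higher-precedence allow rules for the specific resources the connector should reach.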
Use cases
You can use Serverless VPC Access to access Compute Engine VM instances, Memorystore instances, and any other resource with an internal DNS name or internal IP address. Some examples are:
- You use Memorystore to store data for a serverless service.
- Your serverless workloads use third-party software that you run on a Compute Engine VM.
- You run a backend service on a Managed Instance Group in Compute Engine and need your serverless environment to communicate with this backend without exposure to the internet.
- Your serverless environment needs to access data from your on-premises database through Cloud VPN.
Example
In this example, a Google Cloud project is running multiple services across the following serverless environments: App Engine, Cloud Run functions, and Cloud Run.
A Serverless VPC Access connector was created and assigned the IP range `10.8.0.0/28`. Therefore, the source IP address for any request sent from the connector is in this range.

There are two resources in the VPC network. One resource has the internal IP address `10.0.0.4`. The other resource has the internal IP address `10.1.0.2` and is in a different region than the Serverless VPC Access connector.

The connector handles sending and receiving both the requests and responses directly from these internal IP addresses. When the connector sends requests to the resource with internal IP address `10.1.0.2`, outbound data transfer costs apply because that resource is in a different region.
All requests and responses between the serverless environments and the resources in the VPC network travel internally.
Requests sent to external IP addresses still travel through the internet and don't use the Serverless VPC Access connector.
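For Cloud Run, this routing behavior is controlled by the service's VPC egress setting. A hedged sketch, with placeholder service and connector names:

```shell
# Default behavior: only requests to internal (private) IP ranges are
# routed through the connector; traffic to external IPs uses the internet.
gcloud run services update my-service \
  --region=us-central1 \
  --vpc-connector=my-connector \
  --vpc-egress=private-ranges-only

# Alternative: route all outbound traffic, including traffic to external
# IP addresses, through the connector and the VPC network.
gcloud run services update my-service \
  --region=us-central1 \
  --vpc-egress=all-traffic
```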
The following diagram shows this configuration.
Pricing
For Serverless VPC Access pricing, see Serverless VPC Access on the VPC pricing page.
Supported services
The following table shows which types of networks you can reach using Serverless VPC Access:
Connectivity service | Serverless VPC Access support |
---|---|
VPC | Supported |
Shared VPC | Supported |
Legacy networks | Not supported |
Networks connected to Cloud Interconnect | Supported |
Networks connected to Cloud VPN | Supported |
Networks connected to VPC Network Peering | Supported |
The following table shows which serverless environments support Serverless VPC Access:
Serverless environment | Serverless VPC Access support |
---|---|
Cloud Run | Supported |
Knative serving* | Not supported |
Cloud Run functions | Supported |
App Engine standard environment | All runtimes except PHP 5 |
App Engine flexible environment* | Not supported |
*If you want to use internal IP addresses when connecting from Knative serving or the App Engine flexible environment, you don't need to configure Serverless VPC Access. Just make sure your service is deployed in a VPC network that has connectivity to the resources you want to reach.
Supported networking protocols
The following table describes the networking protocols supported by Serverless VPC Access connectors.
Protocol | Route only requests to private IPs through the connector | Route all traffic through the connector |
---|---|---|
TCP | Supported | Supported |
UDP | Supported | Supported |
ICMP | Not supported | Supported only for external IP addresses |
Supported regions
Serverless VPC Access connectors are supported in every region that supports Cloud Run, Cloud Run functions, or App Engine standard environment.
To view available regions, run the following command:

```shell
gcloud compute networks vpc-access locations list
```
What's next
- To configure Serverless VPC Access, see Configure Serverless VPC Access.