If your organization uses Shared VPC, you can set up a Serverless VPC Access connector in either the service project or the host project. This guide shows how to set up a connector in the host project.
If you need to set up a connector in a service project, see Configure connectors in service projects. To learn about the advantages of each method, see Connecting to a Shared VPC network.
Before you begin
Check the Identity and Access Management (IAM) roles for the account you are currently using. The active account must have the IAM roles on the host project that are required to create Serverless VPC Access connectors and to manage the Shared VPC network.
Select the host project in your preferred environment.
Go to the Google Cloud console dashboard.
In the menu bar at the top of the dashboard, click the project dropdown menu and select the host project.
Set the default project in the gcloud CLI to the host project by running the following in your terminal:
    gcloud config set project HOST_PROJECT_ID
Replace HOST_PROJECT_ID with the ID of the Shared VPC host project.
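For example, assuming a hypothetical host project ID of my-host-project, the command would be:
    gcloud config set project my-host-project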
Create a Serverless VPC Access connector
To send requests to your VPC network and receive the corresponding responses, you must create a Serverless VPC Access connector. You can create a connector by using the Google Cloud console, Google Cloud CLI, or Terraform:
Enable the Serverless VPC Access API for your project.
Go to the Serverless VPC Access overview page.
Click Create connector.
In the Name field, enter a name for your connector. The name must follow the Compute Engine naming convention and be less than 21 characters. Hyphens (-) count as two characters.
In the Region field, select a region for your connector. This must match the region of your serverless service. If your service is in the region us-central or europe-west, use us-central1 or europe-west1.
In the Network field, select the VPC network to attach your connector to.
Click the Subnetwork pulldown menu and select an unused /28 subnet.
- Subnets must be used exclusively by the connector. They cannot be used by other resources such as VMs, Private Service Connect, or load balancers.
- To confirm that your subnet is not used for Private Service Connect or Cloud Load Balancing, check that the subnet purpose is PRIVATE by running the following command in the gcloud CLI:
    gcloud compute networks subnets describe SUBNET_NAME
  Replace SUBNET_NAME with the name of your subnet.
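For example, to print only the purpose field, you can add the --format flag; this is a minimal sketch that assumes a hypothetical subnet named my-connector-subnet in us-central1:
    gcloud compute networks subnets describe my-connector-subnet \
        --region=us-central1 \
        --format="get(purpose)"
The command prints PRIVATE for a subnet that is not reserved for Private Service Connect or load balancing.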
(Optional) To set scaling options for additional control over the connector, click Show Scaling Settings to display the scaling form.
- Set the minimum and maximum number of instances for your connector, or use the defaults, which are 2 (min) and 10 (max). The connector scales out to the maximum specified as traffic increases, but the connector does not scale back in when traffic decreases. You must use values between 2 and 10, and the MIN value must be less than the MAX value.
- In the Instance Type pulldown menu, choose the machine type to be used for the connector, or use the default e2-micro. Notice the cost sidebar on the right when you choose the instance type, which displays bandwidth and cost estimations.
Click Create.
A green check mark will appear next to the connector's name when it is ready to use.
Update gcloud components to the latest version:
    gcloud components update
Enable the Serverless VPC Access API for your project:
    gcloud services enable vpcaccess.googleapis.com
Create a Serverless VPC Access connector:
    gcloud compute networks vpc-access connectors create CONNECTOR_NAME \
        --region=REGION \
        --subnet=SUBNET \
        --subnet-project=HOST_PROJECT_ID \
        --min-instances=MIN \
        --max-instances=MAX \
        --machine-type=MACHINE_TYPE
The --min-instances, --max-instances, and --machine-type flags are optional. The defaults are 2 minimum instances, 10 maximum instances, and the e2-micro machine type. A filled-in example follows the list of replacements below.
Replace the following:
CONNECTOR_NAME: a name for your connector. The name must follow the Compute Engine naming convention and be less than 21 characters. Hyphens (-) count as two characters.
REGION: a region for your connector; this must match the region of your serverless service. If your service is in the region us-central or europe-west, use us-central1 or europe-west1.
SUBNET: the name of an unused /28 subnet.
- Subnets must be used exclusively by the connector. They cannot be used by other resources such as VMs, Private Service Connect, or load balancers.
- To confirm that your subnet is not used for Private Service Connect or Cloud Load Balancing, check that the subnet purpose is PRIVATE by running the following command in the gcloud CLI:
    gcloud compute networks subnets describe SUBNET_NAME
  Replace SUBNET_NAME with the name of your subnet.
HOST_PROJECT_ID: the ID of the host project.
MIN: the minimum number of instances to use for the connector. Use an integer between 2 and 9. Default is 2. To learn about connector scaling, see Throughput and scaling.
MAX: the maximum number of instances to use for the connector. Use an integer between 3 and 10. Default is 10. If traffic requires it, the connector scales out to MAX instances, but does not scale back in. To learn about connector scaling, see Throughput and scaling.
MACHINE_TYPE: f1-micro, e2-micro, or e2-standard-4. To learn about connector throughput, including machine type and scaling, see Throughput and scaling.
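For example, the following sketch creates a connector with hypothetical values; substitute your own connector name, region, subnet, and host project ID:
    gcloud compute networks vpc-access connectors create my-connector \
        --region=us-central1 \
        --subnet=my-connector-subnet \
        --subnet-project=my-host-project \
        --min-instances=2 \
        --max-instances=10 \
        --machine-type=e2-micro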
For more details and optional arguments, see the gcloud reference.
Verify that your connector is in the READY state before using it:
    gcloud compute networks vpc-access connectors describe CONNECTOR_NAME \
        --region=REGION
Replace the following:
CONNECTOR_NAME: the name of your connector; this is the name that you specified in the previous step.
REGION: the region of your connector; this is the region that you specified in the previous step.
The output should contain the line state: READY.
You can use a Terraform resource to enable the vpcaccess.googleapis.com API.
You can use Terraform modules to create a VPC network and subnet, and then create the connector.
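The following is a minimal Terraform sketch, not the exact configuration from this guide. It assumes hypothetical names (my-host-project, my-connector, my-connector-subnet, us-central1) and an existing /28 subnet in the host project, and it uses the google_project_service and google_vpc_access_connector resources from the Google provider; adapt the values to your own environment.

    # Enable the Serverless VPC Access API in the host project (hypothetical project ID).
    resource "google_project_service" "vpcaccess_api" {
      project = "my-host-project"
      service = "vpcaccess.googleapis.com"
    }

    # Create the connector on an existing /28 subnet in the host project.
    # All names below are placeholders for illustration.
    resource "google_vpc_access_connector" "connector" {
      name    = "my-connector"
      project = "my-host-project"
      region  = "us-central1"

      subnet {
        name       = "my-connector-subnet"
        project_id = "my-host-project"
      }

      min_instances = 2
      max_instances = 10
      machine_type  = "e2-micro"

      depends_on = [google_project_service.vpcaccess_api]
    }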
Provide access to the connector
Provide access to the connector by granting the Serverless VPC Access User IAM role on the host project to the principal that deploys your App Engine service.
Open the IAM page.
Click the project dropdown menu and select the host project.
Click Add.
In the New principals field, add the principal that deploys your App Engine service.
In the Role field, select Serverless VPC Access User.
Click Save.
Run the following in your terminal:
    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
        --member=PRINCIPAL \
        --role=roles/vpcaccess.user
Replace the following:
HOST_PROJECT_ID: the ID of the Shared VPC host project.
PRINCIPAL: the principal that deploys your App Engine service. Learn more about the --member flag.
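For example, assuming a hypothetical deployer service account, the --member value is a prefixed principal identifier:
    gcloud projects add-iam-policy-binding my-host-project \
        --member="serviceAccount:deployer@my-service-project.iam.gserviceaccount.com" \
        --role=roles/vpcaccess.user
For a human user, the member value would instead look like user:alice@example.com.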
Make the connector discoverable
To see the connector, principals need certain viewing roles on both the host project and the service project. To make your connector appear when principals view available connectors in the Google Cloud console or from their terminal, add IAM roles for principals who deploy App Engine services.
Grant IAM roles on the host project
On the host project, grant principals who deploy App Engine services the Serverless VPC Access Viewer (vpcaccess.viewer) role.
Open the IAM page.
Click the project dropdown menu and select the host project.
Click Add.
In the New principals field, enter the email address of the principal that should be able to see the connector from the service project. You can enter multiple email addresses in this field.
In the Role field, select Serverless VPC Access Viewer.
Click Save.
Run the following in your terminal:
    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
        --member=PRINCIPAL \
        --role=roles/vpcaccess.viewer
Replace the following:
HOST_PROJECT_ID: the ID of the Shared VPC host project.
PRINCIPAL: the principal who deploys App Engine services. Learn more about the --member flag.
Grant IAM roles on the service project
On the service project, grant principals who deploy App Engine services the Compute Network Viewer (compute.networkViewer) role.
Open the IAM page.
Click the project dropdown menu and select the service project.
Click Add.
In the New principals field, enter the email address of the principal that should be able to see the connector from the service project. You can enter multiple email addresses in this field.
In the Role field, select Compute Network Viewer.
Click Save.
Run the following in your terminal:
    gcloud projects add-iam-policy-binding SERVICE_PROJECT_ID \
        --member=PRINCIPAL \
        --role=roles/compute.networkViewer
Replace the following:
SERVICE_PROJECT_ID: the ID of the service project.
PRINCIPAL: the principal who deploys App Engine services. Learn more about the --member flag.
Configure your service to use a connector
For each App Engine service that requires access to your Shared VPC, you must specify the connector for the service. The following steps show how to configure your service to use a connector.
Discontinue any use of URLFetchService. Serverless VPC Access is not compatible with the URL Fetch service.
Add the <vpc-access-connector> element to your service's appengine-web.xml file:
    <vpc-access-connector>
        <name>projects/HOST_PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME</name>
    </vpc-access-connector>
Replace the following:
HOST_PROJECT_ID: the ID of the Shared VPC host project.
REGION: the region of your connector.
CONNECTOR_NAME: the name of your connector.
Deploy the service:
    gcloud app deploy WEB-INF/appengine-web.xml
After deploying, your service is able to send requests to your Shared VPC network and receive the corresponding responses.
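To check which connector a deployed version is configured with, you can inspect the version from the gcloud CLI. This is a sketch that assumes a hypothetical service named default and version ID 20240101t000000, and that the connector is reported in the version's vpcAccessConnector field:
    gcloud app versions describe 20240101t000000 \
        --service=default | grep -A 1 vpcAccessConnector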
Next steps
- Monitor admin activity with Serverless VPC Access audit logging.
- Protect resources and data by creating a service perimeter with VPC Service Controls.
- Learn about the Identity and Access Management (IAM) roles associated with Serverless VPC Access. See Serverless VPC Access roles in the IAM documentation for a list of permissions associated with each role.
- Learn how to connect to Memorystore from the App Engine standard environment.