Invoke a VPC Service Controls-compliant private endpoint

You can target a private endpoint for HTTP calls from your workflow execution by using Service Directory's service registry with Workflows. By creating a private endpoint within a Virtual Private Cloud (VPC) network, you can make the endpoint VPC Service Controls-compliant.

VPC Service Controls provides an extra layer of security defense that is independent of Identity and Access Management (IAM). While IAM enables granular identity-based access control, VPC Service Controls enables broader context-based perimeter security, including controlling data egress across the perimeter.

  • Service Directory is a service registry that stores information about registered network services, including their names, locations, and attributes. Regardless of the underlying infrastructure, you can register services automatically and capture their details. This allows you to discover, publish, and connect services at scale for all your service endpoints.

  • A VPC network provides connectivity for your virtual machine (VM) instances, and allows you to create private endpoints within your VPC network by using internal IP addresses. HTTP calls to a VPC network resource are sent over a private network while enforcing IAM and VPC Service Controls.

  • VPC Service Controls is a Google Cloud feature that allows you to set up a service perimeter and create a data transfer boundary. You can use VPC Service Controls with Workflows to help protect your services, and to reduce the risk of data exfiltration.

This document shows you how to register a VM in a VPC network as a Service Directory endpoint. This allows you to provide your workflow with a Service Directory service name. Your workflow execution uses the information retrieved from the service registry to send the appropriate HTTP request, without egressing to a public network.

This diagram provides an overview:

Sending an HTTP request to a port number on a VM instance using information from Service Directory

At a high level, you must do the following:

  1. Grant permissions to the Cloud Workflows service agent so that the service agent can view Service Directory resources and access VPC networks using Service Directory.
  2. Create a VPC network to provide networking functionality.
  3. Create a VPC firewall rule so that you can allow or deny traffic to or from VM instances in your VPC network.
  4. Create a VM instance in the VPC network. A Compute Engine VM instance is a virtual machine that is hosted on Google's infrastructure. The terms Compute Engine instance, VM instance, and VM are synonymous and are used interchangeably.
  5. Deploy an application on the VM. You can run an app on your VM instance and confirm that traffic is being served as expected.
  6. Configure Service Directory so that your workflow execution can invoke a Service Directory endpoint.
  7. Create and deploy your workflow. The private_service_name value in your workflow specifies the Service Directory endpoint that you registered in the previous step.

Grant permissions to the Cloud Workflows service agent

Some Google Cloud services have service agents that allow the services to access your resources. If an API requires a service agent, then Google creates the service agent after you activate and use the API.

  1. When you first deploy a workflow, the Cloud Workflows service agent is automatically created with the following format:

    service-PROJECT_NUMBER@gcp-sa-workflows.iam.gserviceaccount.com

    You can manually create the service agent in a project without any workflows with this command:

    gcloud beta services identity create \
        --service=workflows.googleapis.com \
        --project=PROJECT_ID

    Replace PROJECT_ID with your Google Cloud project ID.

  2. To view Service Directory resources, grant the Service Directory Viewer role (roles/servicedirectory.viewer) on the project to the Workflows service agent:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member=serviceAccount:service-PROJECT_NUMBER@gcp-sa-workflows.iam.gserviceaccount.com \
        --role=roles/servicedirectory.viewer

    Replace PROJECT_NUMBER with your Google Cloud project number. You can find your project number on the Welcome page of the Google Cloud console or by running the following command:

    gcloud projects describe PROJECT_ID --format='value(projectNumber)'
  3. To access VPC networks using Service Directory, grant the Private Service Connect Authorized Service role (roles/servicedirectory.pscAuthorizedService) on the project to the Workflows service agent:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member=serviceAccount:service-PROJECT_NUMBER@gcp-sa-workflows.iam.gserviceaccount.com \
        --role=roles/servicedirectory.pscAuthorizedService

Create a VPC network

A VPC network is a virtual version of a physical network that is implemented inside of Google's production network. It provides connectivity for your Compute Engine VM instances.

You can create an auto mode or custom mode VPC network. Each new network that you create must have a unique name within the same project.

For example, the following command creates an auto mode VPC network:

gcloud compute networks create NETWORK_NAME \
    --subnet-mode=auto

Replace NETWORK_NAME with a name for the VPC network.

For more information, see Create and manage VPC networks.
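
If you want to sanity-check the addressing that auto mode implies, the following Python sketch (for illustration only; it is not part of the setup) uses the standard ipaddress module to confirm that a regional auto mode subnet such as 10.128.0.0/20 falls inside the documented 10.128.0.0/9 auto mode block:

```python
import ipaddress

# Auto mode VPC networks draw one predefined /20 subnet per region
# from the 10.128.0.0/9 block.
auto_mode_block = ipaddress.ip_network("10.128.0.0/9")

# Example regional subnet carved from that block.
us_central1 = ipaddress.ip_network("10.128.0.0/20")

print(us_central1.subnet_of(auto_mode_block))  # True
print(us_central1.num_addresses)               # 4096 addresses per /20
```

Each /20 subnet gives you 4096 addresses, which is more than enough for the single VM used in this document.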

Create a VPC firewall rule

VPC firewall rules let you allow or deny traffic to or from VM instances in a VPC network based on port number, tag, or protocol.

VPC firewall rules are defined at the network level, and only apply to the network where they are created; however, the name you choose for a rule must be unique to the project.

For example, the following command creates a firewall rule for a specified VPC network and allows ingress traffic from any IPv4 address. The --rules flag value of all makes the rule applicable to all protocols and all destination ports.

gcloud compute firewall-rules create RULE_NAME \
    --network=projects/PROJECT_ID/global/networks/NETWORK_NAME \
    --direction=INGRESS \
    --action=ALLOW \
    --source-ranges=0.0.0.0/0 \
    --rules=all

Replace RULE_NAME with a name for the firewall rule.

For more information, see Use VPC firewall rules.

Create a VM instance in the VPC network

VM instances include Google Kubernetes Engine (GKE) clusters, App Engine flexible environment instances, and other Google Cloud products built on Compute Engine VMs. To support private network access, a VPC network resource can be a VM instance, Cloud Interconnect IP address, or a Layer 4 internal load balancer.

Compute Engine instances can run public images for Linux and Windows Server that Google provides, as well as private custom images that you can create or import from your existing systems. You can also deploy Docker containers.

You can choose the machine properties of your instances, such as the number of virtual CPUs and the amount of memory, by using a set of predefined machine types or by creating your own custom machine types.

For example, the following command creates a Linux VM instance from a public image with a network interface attached to the VPC network you created previously.

  1. Create and start a VM instance:

    gcloud compute instances create VM_NAME \
        --image-family=debian-11 \
        --image-project=debian-cloud \
        --machine-type=e2-micro \
        --network-interface network=projects/PROJECT_ID/global/networks/NETWORK_NAME

    Replace VM_NAME with a name for the VM.

  2. If you are prompted to confirm the zone for the instance, type y.

    After you create the VM instance, note the INTERNAL_IP address that is returned.

  3. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  4. In the Name column, click the name of the appropriate VM instance.

  5. If the VM is running, to stop the VM, click Stop.

  6. To edit the VM, click Edit.

  7. In the Networking > Firewalls section, to permit HTTP or HTTPS traffic to the VM, select Allow HTTP traffic or Allow HTTPS traffic.

    For this example, select the Allow HTTP traffic checkbox.

    Compute Engine adds a network tag to your VM which associates the firewall rule with the VM. It then creates the corresponding ingress firewall rule that allows all incoming traffic on tcp:80 (HTTP) or tcp:443 (HTTPS).

  8. To save your changes, click Save.

  9. To restart the VM, click Start/Resume.

For more information, see Create and start a VM instance.
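
The INTERNAL_IP address that you noted should be a private (RFC 1918) IPv4 address. The following Python sketch, using a hypothetical is_internal_ipv4 helper, shows the kind of check you can run before registering the address with Service Directory:

```python
import ipaddress

def is_internal_ipv4(ip: str) -> bool:
    """Return True if ip is a private (RFC 1918) IPv4 address, as
    expected for the internal IP of a VM in a VPC network."""
    addr = ipaddress.ip_address(ip)
    return addr.version == 4 and addr.is_private

print(is_internal_ipv4("10.128.0.2"))  # True: typical auto mode internal IP
print(is_internal_ipv4("8.8.8.8"))     # False: a public address
```

If the check returns False, you have probably copied the VM's external IP instead of its internal one.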

Deploy an application on the VM

To test the network configuration and to confirm that traffic is being served as expected, you can deploy a simple app on your VM that listens on a port.

For example, the following commands create a Node.js web service that listens on port 3000.

  1. Establish an SSH connection to your VM instance.

  2. Update your package repositories:

    sudo apt update
  3. Install NVM, Node.js, and npm.

    For more information, see Setting up a Node.js development environment.

  4. Interactively create a package.json file:

    npm init

    For example:

    {
      "name": "test",
      "version": "1.0.0",
      "description": "",
      "main": "index.js",
      "scripts": {
        "test": "hello"
      },
      "author": "",
      "license": "ISC"
    }
  5. Install Express, a web application framework for Node.js:

    npm install express
  6. Write the code for the test app:

    vim app.js

    The following sample creates an app that responds to GET requests to the root path (/) with the text "Hello, world!"

    const express = require('express');
    const app = express();

    app.get('/', (req, res) => {
      res.status(200).send('Hello, world!').end();
    });

    app.listen(3000, () => {
      console.log('Sample app listening on port 3000.');
    });

    Note the port that the app is listening on. The same port number must be used when configuring the endpoint for the Service Directory service.

  7. Confirm that the app is listening on port 3000:

    node app.js

Compute Engine offers a range of deployment options. For more information, see Choose a Compute Engine deployment strategy for your workload.
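
If you want to prototype the same behavior locally without Node.js, the following Python sketch stands in for the sample app (an illustration only, not part of the tutorial): it serves the same "Hello, world!" payload and fetches it once to confirm the server responds.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    """Respond to GET / with the same payload as the Node.js sample app."""
    def do_GET(self):
        body = b"Hello, world!"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, format, *args):
        pass  # keep the example output quiet

# Port 0 asks the OS for a free port; the tutorial app listens on 3000.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    payload = resp.read().decode()

server.shutdown()
print(payload)  # Hello, world!
```

Whichever implementation you use, the port the app listens on is what you later register as the Service Directory endpoint's port.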

Configure Service Directory

To support invoking a private endpoint from a workflow execution, you must set up a Service Directory namespace, register a service in the namespace, and add an endpoint to the service.

For example, the following commands create a namespace, a service, and an endpoint that specifies the VPC network and internal IP address of your VM instance.

  1. Create a namespace:

    gcloud service-directory namespaces create NAMESPACE \
        --location=REGION

    Replace the following:

    • NAMESPACE: the ID of the namespace or fully qualified identifier for the namespace.
    • REGION: the Google Cloud region that contains the namespace; for example, us-central1.
  2. Create a service:

    gcloud service-directory services create SERVICE \
        --namespace=NAMESPACE \
        --location=REGION

    Replace SERVICE with the name of the service that you are creating.

  3. Create an endpoint:

    gcloud service-directory endpoints create ENDPOINT \
        --namespace=NAMESPACE \
        --service=SERVICE \
        --network=projects/PROJECT_NUMBER/locations/global/networks/NETWORK_NAME \
        --port=PORT_NUMBER \
        --address=IP_ADDRESS \
        --location=REGION

    Replace the following:

    • ENDPOINT: the name of the endpoint that you are creating.
    • PORT_NUMBER: the port that the endpoint is running on; for example, 3000.
    • IP_ADDRESS: the IPv6 or IPv4 address of the endpoint; this is the internal IP address that you noted previously.

For more information, see Configure Service Directory and Configure private network access.
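
The namespace, service, and endpoint you created combine into a single fully qualified service name, which is what a workflow later passes as private_service_name. This Python sketch (with a hypothetical service_name helper; the format itself comes from the resources created above) shows how the pieces assemble:

```python
def service_name(project_id: str, region: str, namespace: str, service: str) -> str:
    """Build the fully qualified Service Directory service name that a
    workflow passes as the private_service_name value."""
    return (f"projects/{project_id}/locations/{region}"
            f"/namespaces/{namespace}/services/{service}")

# Hypothetical identifiers, for illustration only.
name = service_name("my-project", "us-central1", "my-namespace", "my-service")
print(name)
```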

Create and deploy your workflow

You call or invoke a private endpoint from Workflows through an HTTP request. The most common HTTP request methods have a call shortcut (such as http.get and http.post), but you can make any type of HTTP request by setting the call field to http.request and specifying the type of request using the method field. For more information, see Make an HTTP request.

  1. Create a source code file for your workflow:

    touch call-private-endpoint.JSON_OR_YAML

    Replace JSON_OR_YAML with yaml or json depending on the format of your workflow.

  2. In a text editor, copy the following workflow (which in this case uses the HTTP protocol for the url value) to your source code file:


    YAML

    main:
        steps:
            - checkHttp:
                call: http.get
                args:
                    url: http://IP_ADDRESS
                    private_service_name: "projects/PROJECT_ID/locations/REGION/namespaces/NAMESPACE/services/SERVICE"
                result: res
            - ret:
                return: ${res}

    JSON

    {
      "main": {
        "steps": [
          {
            "checkHttp": {
              "call": "http.get",
              "args": {
                "url": "http://IP_ADDRESS",
                "private_service_name": "projects/PROJECT_ID/locations/REGION/namespaces/NAMESPACE/services/SERVICE"
              },
              "result": "res"
            }
          },
          {
            "ret": {
              "return": "${res}"
            }
          }
        ]
      }
    }

    The private_service_name value must be a string that specifies a registered Service Directory service name with the following format:

    projects/PROJECT_ID/locations/REGION/namespaces/NAMESPACE/services/SERVICE

  3. Deploy the workflow. For test purposes, you can attach the Compute Engine default service account to the workflow to represent its identity:

    gcloud workflows deploy call-private-endpoint \
        --source=call-private-endpoint.JSON_OR_YAML \
        --location=REGION \
        --service-account=PROJECT_NUMBER-compute@developer.gserviceaccount.com
  4. Execute the workflow:

    gcloud workflows run call-private-endpoint \
        --location=REGION

    You should see a result similar to the following:

    argument: 'null'
    duration: 0.650784403s
    endTime: '2023-06-09T18:19:52.570690079Z'
    name: projects/968807934019/locations/us-central1/workflows/call-private-endpoint/executions/4aac88d3-0b54-419b-b364-b6eb973cc932
    result: '{"body":"Hello, world!","code":200,"headers":{"Connection":"keep-alive","Content-Length":"21","Content-Type":"text/html;
    charset=utf-8","Date":"Fri, 09 Jun 2023 18:19:52 GMT","Etag":"W/\"15-NFaeBgdti+9S7zm5kAdSuGJQm6Q\"","Keep-Alive":"timeout=5","X-Powered-By":"Express"}}'
    startTime: '2023-06-09T18:19:51.919905676Z'
    state: SUCCEEDED
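
The result field in that output is itself a JSON string describing the HTTP response that the workflow received. A short Python sketch (using a trimmed-down copy of the result above) shows how you might parse it:

```python
import json

# A shortened copy of the "result" string returned by the execution.
result = ('{"body":"Hello, world!","code":200,'
          '"headers":{"Content-Type":"text/html; charset=utf-8"}}')

response = json.loads(result)
print(response["code"])  # 200
print(response["body"])  # Hello, world!
```

A code of 200 and the "Hello, world!" body confirm that the workflow reached the app on your VM through the private endpoint.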

What's next