Best practices: Execute scheduled jobs in a VPC Service Controls perimeter
This page describes best practices for executing scheduled Cloud Run
jobs for Google Cloud projects when using a VPC Service Controls perimeter.
Cloud Scheduler cannot trigger jobs inside a VPC Service Controls perimeter. You must take additional steps to
set up scheduled jobs. In particular, you must proxy the request through another component. We recommend using a Cloud Run service as the proxy.
In this architecture, Cloud Scheduler sends an authenticated request to a Cloud Run proxy service inside the perimeter, and that proxy service triggers the Cloud Run job.
Before you begin
Set up Cloud Run for VPC Service Controls. This is a one-time setup that all
subsequent scheduled jobs use. You must also do some per-service setup later,
which is described in the instructions that follow.
Set up a scheduled job
To set up a scheduled job inside a VPC Service Controls perimeter:
Create a job, and note the name of your job.
Complete the per-job Cloud Run-specific VPC Service Controls setup. You need to connect your job to a VPC network and route all traffic through that network.
If you don't have an existing Cloud Run job that you want to trigger, test the feature by deploying the sample Cloud Run jobs container us-docker.pkg.dev/cloudrun/container/job:latest to Cloud Run.
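If you use the gcloud CLI, the following sketch deploys the sample container as a job; the job name sample-job and the placeholder values are assumptions, so substitute your own:

# Hypothetical job name and placeholders; replace with your own values.
gcloud run jobs create sample-job \
  --image=us-docker.pkg.dev/cloudrun/container/job:latest \
  --region=YOUR_JOB_REGION \
  --project=YOUR_PROJECT_ID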
Deploy the Cloud Run service that acts as a proxy. See Sample proxy service for a sample service that triggers a Cloud Run job in response to a request.
After deployment, the console displays the service's URL next to the text URL:.
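If you deployed the proxy service with the gcloud CLI instead, you can look up the URL at any time; this sketch assumes a service named job-runner-service in YOUR_REGION:

# Hypothetical service name; replace with your own values.
gcloud run services describe job-runner-service \
  --region=YOUR_REGION \
  --format='value(status.url)'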
Complete the per-service Cloud Run-specific VPC Service Controls setup. You need to connect the service to a
VPC network, and route all traffic through that network. Make
sure to set ingress to Internal.
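As a sketch, assuming a proxy service named job-runner-service and an existing VPC network and subnet, the equivalent gcloud update looks roughly like this (a Serverless VPC Access connector with --vpc-connector is an alternative to --network and --subnet):

# Hypothetical service, network, and subnet names; replace with your own values.
gcloud run services update job-runner-service \
  --region=YOUR_REGION \
  --network=YOUR_VPC_NETWORK \
  --subnet=YOUR_SUBNET \
  --vpc-egress=all-traffic \
  --ingress=internal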
Create a Cloud Scheduler cron job that triggers your Cloud Run proxy service (a gcloud equivalent is sketched after these steps):
Go to the Cloud Scheduler jobs page in the Google Cloud console.
Click Create Job.
Enter the values you want for the Name, Region, Frequency, and Timezone fields.
Click Configure the execution.
For Target type, select HTTP.
For URL, enter the Cloud Run proxy service URL that you noted earlier.
For HTTP method, select Get.
For Auth header, select Add OIDC token.
For Service Account, select Compute Engine default service account or a
custom service account that has the run.routes.invoke permission or the
Cloud Run Invoker role.
For Audience, enter the same Cloud Run proxy service URL that you noted earlier.
Leave all other fields blank.
Click Create to create the Cloud Scheduler cron job.
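If you prefer the gcloud CLI, the following sketch is roughly equivalent to the console steps above; the schedule, the scheduler job name job-runner-schedule, the service name job-runner-service, and the placeholder values are assumptions:

# Grant the chosen service account permission to invoke the proxy service.
gcloud run services add-iam-policy-binding job-runner-service \
  --region=YOUR_REGION \
  --member="serviceAccount:YOUR_SERVICE_ACCOUNT_EMAIL" \
  --role="roles/run.invoker"

# Create the cron job that calls the proxy service with an OIDC token.
gcloud scheduler jobs create http job-runner-schedule \
  --location=YOUR_REGION \
  --schedule="0 * * * *" \
  --uri="PROXY_SERVICE_URL" \
  --http-method=GET \
  --oidc-service-account-email="YOUR_SERVICE_ACCOUNT_EMAIL" \
  --oidc-token-audience="PROXY_SERVICE_URL"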
Sample proxy service
The following section shows a sample Python service that proxies requests and
triggers the Cloud Run job.
Create a file called main.py and paste the following code into it. Update the job name, region, and project ID to the values you need.
import os
from flask import Flask
app = Flask(__name__)

# pip install google-cloud-run
from google.cloud import run_v2

@app.route('/')
def hello():
    client = run_v2.JobsClient()

    # UPDATE TO YOUR JOB NAME, REGION, AND PROJECT ID
    job_name = 'projects/YOUR_PROJECT_ID/locations/YOUR_JOB_REGION/jobs/YOUR_JOB_NAME'

    print("Triggering job...")
    request = run_v2.RunJobRequest(name=job_name)
    operation = client.run_job(request=request)
    response = operation.result()
    print(response)
    return "Done!"

if __name__ == '__main__':
    app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
Create a file named requirements.txt and paste the following code into it:
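google-cloud-run
flask

Create a Dockerfile with the following contents:

FROM python:3.9-slim-buster

WORKDIR /app

COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python3", "main.py"]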
Build and deploy the container. Source-based deployments can be challenging
to set up in a VPC Service Controls environment, due to the need to set up
Cloud Build custom workers. If you have an existing build and deploy pipeline, use it to build the source code into a container and deploy the container as a Cloud Run service.
If you don't have an existing build and deploy setup, build the container locally and push it to Artifact Registry, for example:
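PROJECT_ID=YOUR_PROJECT_ID
REGION=YOUR_REGION
AR_REPO=YOUR_AR_REPO
CLOUD_RUN_SERVICE=job-runner-service

docker build -t $CLOUD_RUN_SERVICE .

docker tag $CLOUD_RUN_SERVICE $REGION-docker.pkg.dev/$PROJECT_ID/$AR_REPO/$CLOUD_RUN_SERVICE

docker push $REGION-docker.pkg.dev/$PROJECT_ID/$AR_REPO/$CLOUD_RUN_SERVICE

You can then deploy the pushed image as the Cloud Run proxy service. The following is a minimal sketch, assuming you deploy with gcloud; adjust the flags to match your VPC Service Controls setup:

gcloud run deploy $CLOUD_RUN_SERVICE \
  --image=$REGION-docker.pkg.dev/$PROJECT_ID/$AR_REPO/$CLOUD_RUN_SERVICE \
  --region=$REGION \
  --project=$PROJECT_ID

Note the service URL returned by the deploy command.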