Deploy an agent

To deploy an agent on Vertex AI Agent Engine, use the following steps:

  1. Configure your agent for deployment, including any optional configurations such as package requirements and resource metadata.
  2. Create an AgentEngine instance.
  3. Grant the deployed agent permissions.
  4. Get the agent resource ID.

You can also use Agent Starter Pack templates for deployment.

Before you begin

Before you deploy an agent, make sure you have completed the following tasks:

  1. Set up your environment.
  2. Develop an agent.

(Optional) Define the package requirements

Provide the set of packages required by the agent for deployment. The set of packages can either be a list of items to be installed by pip, or the path to a file that follows the Requirements File Format.

If the agent does not have any dependencies, you can set requirements to None:

requirements = None

If the agent uses a framework-specific template, specify the SDK version that you imported when developing the agent (such as 1.77.0).
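
If you are unsure which version you developed against, you can print the installed SDK version and pin it in your requirements. The following is a minimal sketch, assuming google-cloud-aiplatform is installed in your development environment:

from google.cloud import aiplatform

# Print the installed Vertex AI SDK version so you can pin it in requirements.
print(aiplatform.__version__)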

ADK

requirements = [
    "google-cloud-aiplatform[agent_engines,adk]",
    # any other dependencies
]

LangChain

requirements = [
    "google-cloud-aiplatform[agent_engines,langchain]",
    # any other dependencies
]

LangGraph

requirements = [
    "google-cloud-aiplatform[agent_engines,langgraph]",
    # any other dependencies
]

AG2

requirements = [
    "google-cloud-aiplatform[agent_engines,ag2]",
    # any other dependencies
]

LlamaIndex

The following instructions are for LlamaIndex Query Pipeline:

requirements = [
    "google-cloud-aiplatform[agent_engines,llama_index]",
    # any other dependencies
]

(Optional) Version constraints

To upper-bound or pin the version of a given package (such as google-cloud-aiplatform), specify the following:

requirements = [
    # See https://pypi.org/project/google-cloud-aiplatform for the latest version.
    "google-cloud-aiplatform[agent_engines,adk]==1.88.0",
]

You can add additional packages and constraints to the list:

requirements = [
    "google-cloud-aiplatform[agent_engines,adk]==1.88.0",
    "cloudpickle==3.0", # new
]

(Optional) Define a development branch

You can point to the version of a package that is on a GitHub branch or pull request. For example:

requirements = [
    "google-cloud-aiplatform[agent_engines,adk] @ git+https://github.com/googleapis/python-aiplatform.git@BRANCH_NAME", # new
    "cloudpickle==3.0",
]

(Optional) Define requirements in a file

You can maintain the list of requirements in a file (such as path/to/requirements.txt):

requirements = "path/to/requirements.txt"

where path/to/requirements.txt is a text file that follows the Requirements File Format. For example:

google-cloud-aiplatform[agent_engines,adk]
cloudpickle==3.0

(Optional) Define additional packages

You can include local files or directories that contain the required Python source files. Compared to package requirements, this lets you use private utilities you have developed that aren't otherwise available on PyPI or GitHub.

If the agent does not require any extra packages, you can set extra_packages to None:

extra_packages = None

(Optional) Define files and directories

To include a single file (such as agents/agent.py), add it to the extra_packages list:

extra_packages = ["agents/agent.py"]

To include the set of files in an entire directory (for example, agents/), specify the directory:

extra_packages = ["agents"] # directory that includes agents/agent.py

(Optional) Define wheel binaries

You can specify Python wheel binaries (for example, path/to/python_package.whl) in the package requirements:

requirements = [
    "google-cloud-aiplatform[agent_engines,adk]",
    "cloudpickle==3.0",
    "python_package.whl",  # install from the whl file that was uploaded
]
extra_packages = ["path/to/python_package.whl"]  # bundle the whl file for uploading

(Optional) Define a Cloud Storage directory

If necessary, you can specify the sub-bucket (a folder in a Cloud Storage bucket) used to stage the artifacts. Staging artifacts are overwritten if they are written to an existing sub-bucket. You can set gcs_dir_name to None if you don't mind potentially overwriting the files in the default sub-bucket:

gcs_dir_name = None

To avoid overwriting the files (for example, when you have different environments such as development, staging, and production), set up corresponding sub-buckets and specify the sub-bucket to stage the artifact under:

gcs_dir_name = "dev" # or "staging" or "prod"

If you need to avoid collisions, you can generate a random UUID:

import uuid
gcs_dir_name = str(uuid.uuid4())

(Optional) Configure resource metadata

You can set metadata on the ReasoningEngine resource that gets created in Vertex AI:

display_name = "Currency Exchange Rate Agent (Staging)"

description = """
An agent that has access to tools for looking up the exchange rate.

If you run into any issues, please contact the dev team.
"""

For a full set of the parameters, see the API reference.

Create an AgentEngine instance

To deploy the agent on Vertex AI, use agent_engines.create and pass in the local_agent object as a parameter:

remote_agent = agent_engines.create(
    local_agent,                    # Required.
    requirements=requirements,      # Optional.
    extra_packages=extra_packages,  # Optional.
    gcs_dir_name=gcs_dir_name,      # Optional.
    display_name=display_name,      # Optional.
    description=description,        # Optional.
)
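
Putting the pieces together, a minimal end-to-end sketch might look like the following. This assumes you initialized the SDK with a staging bucket as described in Set up your environment, and that PROJECT_ID, LOCATION, and STAGING_BUCKET are placeholders for your own values:

import vertexai
from vertexai import agent_engines

# Initialize the SDK with a staging bucket for the deployment artifacts.
vertexai.init(
    project="PROJECT_ID",                  # your Google Cloud project
    location="LOCATION",                   # for example, "us-central1"
    staging_bucket="gs://STAGING_BUCKET",  # Cloud Storage bucket for staging
)

# Deploy the locally developed agent with the configurations defined earlier.
remote_agent = agent_engines.create(
    local_agent,
    requirements=["google-cloud-aiplatform[agent_engines,adk]"],
    gcs_dir_name="dev",
    display_name="Currency Exchange Rate Agent (Staging)",
)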

Deployment takes a few minutes, during which the following steps happen in the background:

  1. A bundle of the following artifacts is generated locally:

    • *.pkl: a pickle file corresponding to local_agent.
    • requirements.txt: a text file containing the package requirements.
    • dependencies.tar.gz: a tar file containing any extra packages.
  2. The bundle is uploaded to Cloud Storage (under the corresponding sub-bucket) for staging the artifacts.

  3. The Cloud Storage URIs for the respective artifacts are specified in the PackageSpec.

  4. The Vertex AI Agent Engine service receives the request, builds containers, and starts HTTP servers on the backend.

Deployment latency depends on the total time it takes to install the required packages. Once deployed, remote_agent corresponds to an instance of local_agent that is running on Vertex AI and can be queried or deleted. It is separate from local instances of the agent.
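
Once remote_agent is available, you can inspect it from the same session. The following is a minimal sketch, assuming the methods exposed by the returned AgentEngine object (the exact query operations depend on the template you used):

# List the operations that the deployed agent exposes.
print(remote_agent.operation_schemas())

# Clean up the deployed agent when you no longer need it.
remote_agent.delete()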

(Optional) Grant the deployed agent permissions

If the deployed agent needs to be granted any additional permissions, you can follow the instructions in Set up your service agent permissions.

ADK

Deployed ADK agents need to be granted the following permissions to use managed sessions:

  • Vertex AI User (roles/aiplatform.user)

Get the agent resource ID

Each deployed agent has a unique identifier. You can get the resource_name identifier for your deployed agent as follows:

remote_agent.resource_name

The response should look like the following string:

"projects/PROJECT_NUMBER/locations/LOCATION/reasoningEngines/RESOURCE_ID"

where

  • PROJECT_NUMBER is the Google Cloud project number where the deployed agent runs.

  • LOCATION is the region where the deployed agent runs.

  • RESOURCE_ID is the ID of the deployed agent as a reasoningEngine resource.
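
To reconnect to the deployed agent later (for example, from another process or session), you can pass this resource name to agent_engines.get. A minimal sketch, assuming the same SDK import as above:

from vertexai import agent_engines

# Retrieve an existing deployed agent by its full resource name.
remote_agent = agent_engines.get(
    "projects/PROJECT_NUMBER/locations/LOCATION/reasoningEngines/RESOURCE_ID"
)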

Deploy in production with Agent Starter Pack

The Agent Starter Pack is a collection of production-ready generative AI agent templates built for Vertex AI Agent Engine. It accelerates deployment by providing:

  • Pre-built Agent Templates: ReAct, RAG, multi-agent, and more.
  • Interactive Playground: Test and interact with your agent.
  • Automated Infrastructure: Uses Terraform for streamlined resource management.
  • CI/CD Pipelines: Automated deployment workflows leveraging Cloud Build.
  • Observability: Includes built-in support for Cloud Trace and Cloud Logging.

To get started, see the Quickstart.

Best practices for deployment

  1. Pin your package versions (for reproducible builds). Common packages to keep track of include the following: google-cloud-aiplatform, cloudpickle, langchain, langchain-core, langchain-google-vertexai, and pydantic.

  2. Minimize the number of dependencies in your agent. This reduces the number of breaking changes when updating your dependencies and makes it easier to update your agent over time for newer features.

What's next