Vertex AI SDK

The vertexai module.

vertexai.init(*, project: Optional[str] = None, location: Optional[str] = None, experiment: Optional[str] = None, experiment_description: Optional[str] = None, experiment_tensorboard: Optional[Union[str, google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, bool]] = None, staging_bucket: Optional[str] = None, credentials: Optional[google.auth.credentials.Credentials] = None, encryption_spec_key_name: Optional[str] = None, network: Optional[str] = None, service_account: Optional[str] = None, api_endpoint: Optional[str] = None, api_key: Optional[str] = None, api_transport: Optional[str] = None, request_metadata: Optional[Sequence[Tuple[str, str]]] = None)

Updates common initialization parameters with provided options.

  • Parameters

    • project (str) – The default project to use when making API calls.

    • location (str) – The default location to use when making API calls. If not set, defaults to us-central1.

    • experiment (str) – Optional. The experiment name.

    • experiment_description (str) – Optional. The description of the experiment.

    • experiment_tensorboard (Union[str, tensorboard_resource.Tensorboard, bool]) – Optional. The Vertex AI TensorBoard instance, Tensorboard resource name, or Tensorboard resource ID to use as a backing Tensorboard for the provided experiment.

      Example tensorboard resource name format: “projects/123/locations/us-central1/tensorboards/456”

      If experiment_tensorboard is provided and experiment is not, the provided experiment_tensorboard will be set as the global Tensorboard. Any subsequent calls to aiplatform.init() with experiment and without experiment_tensorboard will automatically assign the global Tensorboard to the experiment.

      If experiment_tensorboard is omitted or set to True or None, the global Tensorboard will be assigned to the experiment. If a global Tensorboard is not set, the default Tensorboard instance will be used, and created if it does not exist.

      To disable creating and using Tensorboard with experiment, set experiment_tensorboard to False. Any subsequent calls to aiplatform.init() should include this setting as well.

    • staging_bucket (str) – The default staging bucket to use to stage artifacts when making API calls. In the form gs://…

    • credentials (google.auth.credentials.Credentials) – The default custom credentials to use when making API calls. If not provided, credentials will be ascertained from the environment.

    • encryption_spec_key_name (Optional[str]) – Optional. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.

      If set, this resource and all sub-resources will be secured by this key.

    • network (str) – Optional. The full name of the Compute Engine network to which jobs and resources should be peered. E.g. “projects/12345/global/networks/myVPC”. Private services access must already be configured for the network. If specified, all eligible jobs and resources created will be peered with this VPC.

    • service_account (str) – Optional. The service account used to launch jobs and deploy models. Jobs that use service_account: BatchPredictionJob, CustomJob, PipelineJob, HyperparameterTuningJob, CustomTrainingJob, CustomPythonPackageTrainingJob, CustomContainerTrainingJob, ModelEvaluationJob.

    • api_endpoint (str) – Optional. The desired API endpoint, e.g., us-central1-aiplatform.googleapis.com

    • api_key (str) – Optional. The API key to use for service calls. NOTE: Not all services support API keys.

    • api_transport (str) – Optional. The transport method which is either ‘grpc’ or ‘rest’. NOTE: “rest” transport functionality is currently in a beta state (preview).

    • request_metadata – Optional. Additional gRPC metadata to send with every client request.

  • Raises

    ValueError – If experiment_description is provided but experiment is not.
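
Usage (a minimal sketch using only the parameters documented above; the project ID, region, and bucket are placeholders):

```python
import vertexai

# Placeholder project, region, and staging bucket; substitute your own values.
vertexai.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",
)
```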

Classes for working with the Gemini models.

class vertexai.generative_models.Candidate()

Bases: object

A response candidate generated by the model.

class vertexai.generative_models.ChatSession(model: vertexai.generative_models._generative_models._GenerativeModel, *, history: Optional[List[vertexai.generative_models._generative_models.Content]] = None, response_validation: bool = True)

Bases: object

Chat session holds the chat history.

send_message(content: Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, generation_config: Optional[Union[vertexai.generative_models._generative_models.GenerationConfig, Dict[str, Any]]] = None, safety_settings: Optional[Union[List[vertexai.generative_models._generative_models.SafetySetting], Dict[google.cloud.aiplatform_v1beta1.types.content.HarmCategory, google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold]]] = None, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None, stream: Literal[False] = False)

send_message(content: Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, generation_config: Optional[Union[vertexai.generative_models._generative_models.GenerationConfig, Dict[str, Any]]] = None, safety_settings: Optional[Union[List[vertexai.generative_models._generative_models.SafetySetting], Dict[google.cloud.aiplatform_v1beta1.types.content.HarmCategory, google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold]]] = None, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None, stream: Literal[True] = True)

Generates content.

  • Parameters

    • content – Content to send to the model. Supports a value that can be converted to a Part or a list of such values: str, Image, Part, or List[Union[str, Image, Part]].

    • generation_config – Parameters for the generation.

    • safety_settings – Safety settings as a mapping from HarmCategory to HarmBlockThreshold.

    • tools – A list of tools (functions) that the model can try calling.

    • stream – Whether to stream the response.

  • Returns

    A single GenerationResponse object if stream == False; a stream of GenerationResponse objects if stream == True.

  • Raises

    ResponseValidationError – If the response was blocked or is incomplete.
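
Usage (a sketch of both return shapes; the model name and prompts are illustrative, and the response is assumed to expose a text accessor as in the SDK):

```python
from vertexai.generative_models import GenerativeModel

chat = GenerativeModel("gemini-pro").start_chat()

# stream=False (default): a single GenerationResponse.
response = chat.send_message("Why is sky blue?")
print(response.text)

# stream=True: an iterable of partial GenerationResponse chunks.
for chunk in chat.send_message("Tell me more.", stream=True):
    print(chunk.text, end="")
```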

send_message_async(content: Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, generation_config: Optional[Union[vertexai.generative_models._generative_models.GenerationConfig, Dict[str, Any]]] = None, safety_settings: Optional[Union[List[vertexai.generative_models._generative_models.SafetySetting], Dict[google.cloud.aiplatform_v1beta1.types.content.HarmCategory, google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold]]] = None, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None, stream: Literal[False] = False)

send_message_async(content: Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, generation_config: Optional[Union[vertexai.generative_models._generative_models.GenerationConfig, Dict[str, Any]]] = None, safety_settings: Optional[Union[List[vertexai.generative_models._generative_models.SafetySetting], Dict[google.cloud.aiplatform_v1beta1.types.content.HarmCategory, google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold]]] = None, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None, stream: Literal[True] = True)

Generates content asynchronously.

  • Parameters

    • content – Content to send to the model. Supports a value that can be converted to a Part or a list of such values: str, Image, Part, or List[Union[str, Image, Part]].

    • generation_config – Parameters for the generation.

    • safety_settings – Safety settings as a mapping from HarmCategory to HarmBlockThreshold.

    • tools – A list of tools (functions) that the model can try calling.

    • stream – Whether to stream the response.

  • Returns

    An awaitable for a single GenerationResponse object if stream == False; an awaitable for a stream of GenerationResponse objects if stream == True.

  • Raises

    ResponseValidationError – If the response was blocked or is incomplete.
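
Usage (a sketch of the asynchronous variant, run inside an event loop; the model name and prompt are illustrative):

```python
import asyncio

from vertexai.generative_models import GenerativeModel

async def main() -> None:
    chat = GenerativeModel("gemini-pro").start_chat()
    # Awaitable for a single GenerationResponse when stream is False.
    response = await chat.send_message_async("Why is sky blue?")
    print(response.text)

asyncio.run(main())
```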

class vertexai.generative_models.Content(*, parts: Optional[List[vertexai.generative_models._generative_models.Part]] = None, role: Optional[str] = None)

Bases: object

The multi-part content of a message.

Usage:

```python
response = model.generate_content(contents=[
    Content(role="user", parts=[Part.from_text("Why is sky blue?")])
])
```

class vertexai.generative_models.FinishReason(value)

Bases: proto.enums.Enum

The reason why the model stopped generating tokens. If empty, the model has not stopped generating the tokens.

Values:

FINISH_REASON_UNSPECIFIED (0):

    The finish reason is unspecified.

STOP (1):

    Token generation reached a natural stopping
    point or a configured stop sequence.

MAX_TOKENS (2):

    Token generation reached the configured
    maximum output tokens.

SAFETY (3):

    Token generation stopped because the content potentially
    contains safety violations. NOTE: When streaming,
    [content][google.cloud.aiplatform.v1beta1.Candidate.content]
    is empty if content filters block the output.

RECITATION (4):

    Token generation stopped because the content
    potentially contains copyright violations.

OTHER (5):

    All other reasons that stopped the token
    generation.

BLOCKLIST (6):

    Token generation stopped because the content
    contains forbidden terms.

PROHIBITED_CONTENT (7):

    Token generation stopped for potentially
    containing prohibited content.

SPII (8):

    Token generation stopped because the content
    potentially contains Sensitive Personally
    Identifiable Information (SPII).

MALFORMED_FUNCTION_CALL (9):

    The function call generated by the model is
    invalid.
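
Usage (a sketch of inspecting a candidate's finish reason; assumes the response exposes a candidates list, as in the SDK):

```python
from vertexai.generative_models import FinishReason, GenerativeModel

model = GenerativeModel("gemini-pro")
response = model.generate_content("Why is sky blue?")

candidate = response.candidates[0]
if candidate.finish_reason == FinishReason.MAX_TOKENS:
    print("Output was truncated at the configured token limit.")
elif candidate.finish_reason != FinishReason.STOP:
    print(f"Generation stopped early: {candidate.finish_reason.name}")
```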

class vertexai.generative_models.FunctionDeclaration(*, name: str, parameters: Dict[str, Any], description: Optional[str] = None)

Bases: object

A representation of a function declaration.

Usage:

Create function declaration and tool:


```python
get_current_weather_func = generative_models.FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
                "type": "string",
                "enum": [
                    "celsius",
                    "fahrenheit",
                ]
            }
        },
        "required": [
            "location"
        ]
    },
)
weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```

Use tool in GenerativeModel.generate_content:


```python
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```

Use tool in chat:


```python
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```

Constructs a FunctionDeclaration.

  • Parameters

    • name – The name of the function that the model can call.

    • parameters – Describes the parameters to this function in JSON Schema Object format.

    • description – Description and purpose of the function. Model uses it to decide how and whether to call the function.

class vertexai.generative_models.GenerationConfig(*, temperature: Optional[float] = None, top_p: Optional[float] = None, top_k: Optional[int] = None, candidate_count: Optional[int] = None, max_output_tokens: Optional[int] = None, stop_sequences: Optional[List[str]] = None, presence_penalty: Optional[float] = None, frequency_penalty: Optional[float] = None, response_mime_type: Optional[str] = None, response_schema: Optional[Dict[str, Any]] = None, seed: Optional[int] = None, routing_config: Optional[google.cloud.aiplatform_v1beta1.types.content.GenerationConfig.RoutingConfig] = None)

Bases: object

Parameters for the generation.

Constructs a GenerationConfig object.

  • Parameters

    • temperature – Controls the randomness of predictions. Range: [0.0, 1.0]

    • top_p – If specified, nucleus sampling will be used. Range: (0.0, 1.0]

    • top_k – If specified, top-k sampling will be used.

    • candidate_count – Number of candidates to generate.

    • seed – Random seed for the generation.

    • max_output_tokens – The maximum number of output tokens to generate per message.

    • stop_sequences – A list of stop sequences.

    • presence_penalty – Positive values penalize tokens that have appeared in the generated text, thus increasing the possibility of generating more diverse topics. Range: [-2.0, 2.0]

    • frequency_penalty – Positive values penalize tokens that repeatedly appear in the generated text, thus decreasing the possibility of repeating the same content. Range: [-2.0, 2.0]

    • response_mime_type – Output response mimetype of the generated candidate text. Supported mimetypes:

      • text/plain: (default) Text output.

      • application/json: JSON response in the candidates.

      The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined.

    • response_schema – Output response schema of the generated candidate text. Only valid when response_mime_type is application/json.

    • routing_config – Model routing preference set in the request.

Usage:

```python
response = model.generate_content(
    "Why is sky blue?",
    generation_config=GenerationConfig(
        temperature=0.1,
        top_p=0.95,
        top_k=20,
        candidate_count=1,
        max_output_tokens=100,
        stop_sequences=["\n\n\n"],
        seed=5,
    )
)
```

class vertexai.generative_models.GenerationResponse()

Bases: object

The response from the model.

class vertexai.generative_models.GenerativeModel(model_name: str, *, generation_config: Optional[Union[vertexai.generative_models._generative_models.GenerationConfig, Dict[str, Any]]] = None, safety_settings: Optional[Union[List[vertexai.generative_models._generative_models.SafetySetting], Dict[google.cloud.aiplatform_v1beta1.types.content.HarmCategory, google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold]]] = None, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None, tool_config: Optional[vertexai.generative_models._generative_models.ToolConfig] = None, system_instruction: Optional[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]]] = None)

Bases: vertexai.generative_models._generative_models._GenerativeModel

Initializes GenerativeModel.

Usage:

```python
model = GenerativeModel("gemini-pro")
print(model.generate_content("Hello"))
```

  • Parameters

    • model_name – Model Garden model resource name. Alternatively, a tuned model endpoint resource name can be provided.

    • generation_config – Default generation config to use in generate_content.

    • safety_settings – Default safety settings to use in generate_content.

    • tools – Default tools to use in generate_content.

    • tool_config – Default tool config to use in generate_content.

    • system_instruction – Default system instruction to use in generate_content. Note: Only text should be used in parts. Content of each part will become a separate paragraph.

compute_tokens(contents: Union[List[vertexai.generative_models._generative_models.Content], List[Dict[str, Any]], str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]])

Computes tokens.

  • Parameters

    contents – Contents to send to the model. Supports either a list of Content objects (passing a multi-turn conversation) or a value that can be converted to a single Content object (passing a single message): str, Image, Part, List[Union[str, Image, Part]], or List[Content].

  • Returns

    tokens_info: Lists of tokens_info from the input.

      The input contents (ContentsType) can contain multiple string instances, and each tokens_info item represents one string instance. Each token info consists of a tokens list, a token_ids list, and a role.
    
  • Return type

    A ComputeTokensResponse object that has the following attributes
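
Usage (a sketch of iterating tokens_info; field names follow the description above):

```python
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-pro")
token_response = model.compute_tokens("Why is sky blue?")

# Each tokens_info entry carries a tokens list, a token_ids list, and a role.
for info in token_response.tokens_info:
    print(info.role, len(info.tokens), list(info.token_ids)[:5])
```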

async compute_tokens_async(contents: Union[List[vertexai.generative_models._generative_models.Content], List[Dict[str, Any]], str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]])

Computes tokens asynchronously.

  • Parameters

    contents – Contents to send to the model. Supports either a list of Content objects (passing a multi-turn conversation) or a value that can be converted to a single Content object (passing a single message): str, Image, Part, List[Union[str, Image, Part]], or List[Content].

  • Returns

    tokens_info: Lists of tokens_info from the input.

      The input contents (ContentsType) can contain multiple string instances, and each tokens_info item represents one string instance. Each token info consists of a tokens list, a token_ids list, and a role.
    
  • Return type

    An awaitable for a ComputeTokensResponse object that has the following attributes

count_tokens(contents: Union[List[vertexai.generative_models._generative_models.Content], List[Dict[str, Any]], str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None)

Counts tokens.

  • Parameters

    • contents – Contents to send to the model. Supports either a list of Content objects (passing a multi-turn conversation) or a value that can be converted to a single Content object (passing a single message): str, Image, Part, List[Union[str, Image, Part]], or List[Content].

    • tools – A list of tools (functions) that the model can try calling.

  • Returns

    total_tokens: The total number of tokens counted across all instances from the request.
    total_billable_characters: The total number of billable characters counted across all instances from the request.

  • Return type

    A CountTokensResponse object that has the following attributes
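
Usage (a sketch; attribute names follow the description above):

```python
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-pro")
count = model.count_tokens("Why is sky blue?")
print(count.total_tokens, count.total_billable_characters)
```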

async count_tokens_async(contents: Union[List[vertexai.generative_models._generative_models.Content], List[Dict[str, Any]], str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None)

Counts tokens asynchronously.

  • Parameters

    • contents – Contents to send to the model. Supports either a list of Content objects (passing a multi-turn conversation) or a value that can be converted to a single Content object (passing a single message): str, Image, Part, List[Union[str, Image, Part]], or List[Content].

    • tools – A list of tools (functions) that the model can try calling.

  • Returns

    total_tokens: The total number of tokens counted across all instances from the request.
    total_billable_characters: The total number of billable characters counted across all instances from the request.

  • Return type

    An awaitable for a CountTokensResponse object that has the following attributes

generate_content(contents: Union[List[vertexai.generative_models._generative_models.Content], List[Dict[str, Any]], str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, generation_config: Optional[Union[vertexai.generative_models._generative_models.GenerationConfig, Dict[str, Any]]] = None, safety_settings: Optional[Union[List[vertexai.generative_models._generative_models.SafetySetting], Dict[google.cloud.aiplatform_v1beta1.types.content.HarmCategory, google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold]]] = None, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None, tool_config: Optional[vertexai.generative_models._generative_models.ToolConfig] = None, stream: bool = False)

Generates content.

  • Parameters

    • contents – Contents to send to the model. Supports either a list of Content objects (passing a multi-turn conversation) or a value that can be converted to a single Content object (passing a single message): str, Image, Part, List[Union[str, Image, Part]], or List[Content].

    • generation_config – Parameters for the generation.

    • safety_settings – Safety settings as a mapping from HarmCategory to HarmBlockThreshold.

    • tools – A list of tools (functions) that the model can try calling.

    • tool_config – Config shared for all tools provided in the request.

    • stream – Whether to stream the response.

  • Returns

    A single GenerationResponse object if stream == False; a stream of GenerationResponse objects if stream == True.
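
Usage (a sketch contrasting the two return shapes; the prompt and config values are illustrative):

```python
from vertexai.generative_models import GenerationConfig, GenerativeModel

model = GenerativeModel("gemini-pro")

# stream=False (default): one GenerationResponse.
response = model.generate_content(
    "Why is sky blue?",
    generation_config=GenerationConfig(temperature=0.2, max_output_tokens=256),
)
print(response.text)

# stream=True: an iterable of partial GenerationResponse objects.
for chunk in model.generate_content("Why is sky blue?", stream=True):
    print(chunk.text, end="")
```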

async generate_content_async(contents: Union[List[vertexai.generative_models._generative_models.Content], List[Dict[str, Any]], str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, generation_config: Optional[Union[vertexai.generative_models._generative_models.GenerationConfig, Dict[str, Any]]] = None, safety_settings: Optional[Union[List[vertexai.generative_models._generative_models.SafetySetting], Dict[google.cloud.aiplatform_v1beta1.types.content.HarmCategory, google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold]]] = None, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None, tool_config: Optional[vertexai.generative_models._generative_models.ToolConfig] = None, stream: bool = False)

Generates content asynchronously.

  • Parameters

    • contents – Contents to send to the model. Supports either a list of Content objects (passing a multi-turn conversation) or a value that can be converted to a single Content object (passing a single message): str, Image, Part, List[Union[str, Image, Part]], or List[Content].

    • generation_config – Parameters for the generation.

    • safety_settings – Safety settings as a mapping from HarmCategory to HarmBlockThreshold.

    • tools – A list of tools (functions) that the model can try calling.

    • tool_config – Config shared for all tools provided in the request.

    • stream – Whether to stream the response.

  • Returns

    An awaitable for a single GenerationResponse object if stream == False; an awaitable for a stream of GenerationResponse objects if stream == True.

start_chat(*, history: Optional[List[vertexai.generative_models._generative_models.Content]] = None, response_validation: bool = True)

Creates a stateful chat session.

  • Parameters

    • history – Previous history to initialize the chat session.

    • response_validation – Whether to validate responses before adding them to chat history. By default, send_message will raise an error if the request or response is blocked or if the response is incomplete due to going over the max token limit. If set to False, the chat session history will always accumulate the request and response messages even if the response is blocked or incomplete. This can result in an unusable chat session state.

  • Returns

    A ChatSession object.
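
Usage (a sketch of seeding a session with prior history; the messages are illustrative):

```python
from vertexai.generative_models import Content, GenerativeModel, Part

model = GenerativeModel("gemini-pro")
chat = model.start_chat(
    history=[
        Content(role="user", parts=[Part.from_text("Hello!")]),
        Content(role="model", parts=[Part.from_text("Hello! How can I help you?")]),
    ],
)
print(chat.send_message("What did I just say?").text)
```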

class vertexai.generative_models.HarmBlockThreshold(value)

Bases: proto.enums.Enum

Probability-based threshold levels for blocking.

Values:

HARM_BLOCK_THRESHOLD_UNSPECIFIED (0):

    Unspecified harm block threshold.

BLOCK_LOW_AND_ABOVE (1):

    Block low threshold and above (i.e. block
    more).

BLOCK_MEDIUM_AND_ABOVE (2):

    Block medium threshold and above.

BLOCK_ONLY_HIGH (3):

    Block only high threshold (i.e. block less).

BLOCK_NONE (4):

    Block none.

class vertexai.generative_models.HarmCategory(value)

Bases: proto.enums.Enum

Harm categories that will block the content.

Values:

HARM_CATEGORY_UNSPECIFIED (0):

    The harm category is unspecified.

HARM_CATEGORY_HATE_SPEECH (1):

    The harm category is hate speech.

HARM_CATEGORY_DANGEROUS_CONTENT (2):

    The harm category is dangerous content.

HARM_CATEGORY_HARASSMENT (3):

    The harm category is harassment.

HARM_CATEGORY_SEXUALLY_EXPLICIT (4):

    The harm category is sexually explicit
    content.

class vertexai.generative_models.Image()

Bases: object

The image that can be sent to a generative model.

property data: bytes

Returns the image data.

static from_bytes(data: bytes)

Loads image from image bytes.

  • Parameters

    data – Image bytes.

  • Returns

    Loaded image as an Image object.

static load_from_file(location: str)

Loads image from file.

  • Parameters

    location – Local path from where to load the image.

  • Returns

    Loaded image as an Image object.
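
Usage (a sketch; the file path is a placeholder):

```python
from vertexai.generative_models import Image

# Load from a local file (placeholder path).
image = Image.load_from_file("image.jpg")

# Or load from raw bytes already in memory.
with open("image.jpg", "rb") as f:
    same_image = Image.from_bytes(f.read())

print(len(image.data))  # The data property returns the raw image bytes.
```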

class vertexai.generative_models.Part()

Bases: object

A part of a multi-part Content message.

Usage:

```python
text_part = Part.from_text("Why is sky blue?")
image_part = Part.from_image(Image.load_from_file("image.jpg"))
video_part = Part.from_uri(uri="gs://…/video.mp4", mime_type="video/mp4")
function_response_part = Part.from_function_response(
    name="get_current_weather",
    response={
        "content": {"weather_there": "super nice"},
    }
)

response1 = model.generate_content([text_part, image_part])
response2 = model.generate_content(video_part)
response3 = chat.send_message(function_response_part)
```



exception vertexai.generative_models.ResponseValidationError(message: str, request_contents: List[vertexai.generative_models._generative_models.Content], responses: List[vertexai.generative_models._generative_models.GenerationResponse])

Bases: vertexai.generative_models._generative_models.ResponseBlockedError

with_traceback()

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.

class vertexai.generative_models.SafetySetting(*, category: google.cloud.aiplatform_v1beta1.types.content.HarmCategory, threshold: google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold, method: Optional[google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockMethod] = None)

Bases: object

Safety settings.

  • Parameters

    • category – Harm category.

    • threshold – The harm block threshold.

    • method – Specify if the threshold is used for probability or severity score. If not specified, the threshold is used for probability score.
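
Usage (a sketch of passing explicit safety settings to a request; the category and threshold choices are illustrative):

```python
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
    SafetySetting,
)

safety_settings = [
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        # method is omitted, so the threshold applies to the probability score.
    ),
]

model = GenerativeModel("gemini-pro")
response = model.generate_content(
    "Why is sky blue?", safety_settings=safety_settings
)
```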

class HarmBlockMethod(value)

Bases: proto.enums.Enum

Probability vs severity.

Values:

HARM_BLOCK_METHOD_UNSPECIFIED (0):

    The harm block method is unspecified.

SEVERITY (1):

    The harm block method uses both probability
    and severity scores.

PROBABILITY (2):

    The harm block method uses the probability
    score.

class HarmBlockThreshold(value)

Bases: proto.enums.Enum

Probability-based threshold levels for blocking.

Values:

HARM_BLOCK_THRESHOLD_UNSPECIFIED (0):

    Unspecified harm block threshold.

BLOCK_LOW_AND_ABOVE (1):

    Block low threshold and above (i.e. block
    more).

BLOCK_MEDIUM_AND_ABOVE (2):

    Block medium threshold and above.

BLOCK_ONLY_HIGH (3):

    Block only high threshold (i.e. block less).

BLOCK_NONE (4):

    Block none.

class HarmCategory(value)

Bases: proto.enums.Enum

Harm categories that will block the content.

Values:

HARM_CATEGORY_UNSPECIFIED (0):

    The harm category is unspecified.

HARM_CATEGORY_HATE_SPEECH (1):

    The harm category is hate speech.

HARM_CATEGORY_DANGEROUS_CONTENT (2):

    The harm category is dangerous content.

HARM_CATEGORY_HARASSMENT (3):

    The harm category is harassment.

HARM_CATEGORY_SEXUALLY_EXPLICIT (4):

    The harm category is sexually explicit
    content.

class vertexai.generative_models.Tool(function_declarations: List[vertexai.generative_models._generative_models.FunctionDeclaration])

Bases: object

A collection of functions that the model may use to generate a response.

Usage:

Create tool from function declarations:


```python
get_current_weather_func = generative_models.FunctionDeclaration(…)
weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```

Use tool in GenerativeModel.generate_content:


```python
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```

Use tool in chat:


```python
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```

class vertexai.generative_models.ToolConfig(function_calling_config: vertexai.generative_models._generative_models.ToolConfig.FunctionCallingConfig)

Bases: object

Config shared for all tools provided in the request.

Usage:

Create ToolConfig:

```python
tool_config = ToolConfig(
    function_calling_config=ToolConfig.FunctionCallingConfig(
        mode=ToolConfig.FunctionCallingConfig.Mode.ANY,
        allowed_function_names=["get_current_weather_func"],
    )
)
```

Use ToolConfig in GenerativeModel.generate_content:


```python
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
    tool_config=tool_config,
))
```

Use ToolConfig in chat:


```python
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
    tool_config=tool_config,
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```

class vertexai.generative_models.grounding()

Bases: object

Grounding namespace.

class GoogleSearchRetrieval()

Bases: object

Tool to retrieve public web data for grounding, powered by Google Search.

Initializes a Google Search Retrieval tool.
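
Usage (a sketch; Tool.from_google_search_retrieval is assumed to be available on Tool, as in recent SDK releases, since it is not documented in this section):

```python
from vertexai.generative_models import GenerativeModel, Tool, grounding

# Wrap the retrieval in a Tool (helper assumed; see note above).
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

model = GenerativeModel("gemini-pro")
response = model.generate_content("Why is sky blue?", tools=[search_tool])
print(response.text)
```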

Classes for working with the Gemini models.

class vertexai.preview.generative_models.AutomaticFunctionCallingResponder(max_automatic_function_calls: int = 1)

Bases: object

Responder that automatically responds to model’s function calls.

Initializes the responder.

  • Parameters

    max_automatic_function_calls – Maximum number of automatic function calls.

class vertexai.preview.generative_models.CallableFunctionDeclaration(name: str, function: Callable[[...], Any], parameters: Dict[str, Any], description: Optional[str] = None)

Bases: vertexai.generative_models._generative_models.FunctionDeclaration

A function declaration plus a function.

Constructs a FunctionDeclaration.

  • Parameters

    • name – The name of the function that the model can call.

    • parameters – Describes the parameters to this function in JSON Schema Object format.

    • description – Description and purpose of the function. Model uses it to decide how and whether to call the function.

classmethod from_func(func: Callable[[...], Any])

Automatically creates a CallableFunctionDeclaration from a Python function.

The function parameter schema is automatically extracted.

  • Parameters

    func – The function from which to extract the schema.

  • Returns

    CallableFunctionDeclaration.
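
Usage (a sketch combining from_func with the AutomaticFunctionCallingResponder documented above; the weather function is an illustrative stub):

```python
from vertexai.preview.generative_models import (
    AutomaticFunctionCallingResponder,
    CallableFunctionDeclaration,
    GenerativeModel,
    Tool,
)

def get_current_weather(location: str) -> dict:
    """Get the current weather in a given location."""
    return {"weather_there": "super nice"}  # Illustrative stub.

# The parameter schema is extracted from the function signature.
weather_func = CallableFunctionDeclaration.from_func(get_current_weather)
weather_tool = Tool(function_declarations=[weather_func])

model = GenerativeModel("gemini-pro", tools=[weather_tool])
chat = model.start_chat(
    responder=AutomaticFunctionCallingResponder(max_automatic_function_calls=1)
)
print(chat.send_message("What is the weather like in Boston?").text)
```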

class vertexai.preview.generative_models.Candidate()

Bases: object

A response candidate generated by the model.

class vertexai.preview.generative_models.ChatSession(model: vertexai.generative_models._generative_models._GenerativeModel, *, history: Optional[List[vertexai.generative_models._generative_models.Content]] = None, response_validation: bool = True, responder: Optional[vertexai.generative_models._generative_models.AutomaticFunctionCallingResponder] = None, raise_on_blocked: Optional[bool] = None)

Bases: vertexai.generative_models._generative_models._PreviewChatSession

Chat session holds the chat history.

send_message(content: Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, generation_config: Optional[Union[vertexai.generative_models._generative_models.GenerationConfig, Dict[str, Any]]] = None, safety_settings: Optional[Union[List[vertexai.generative_models._generative_models.SafetySetting], Dict[google.cloud.aiplatform_v1beta1.types.content.HarmCategory, google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold]]] = None, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None, stream: bool = False)

Generates content.

  • Parameters

    • content – Content to send to the model. Supports a value that can be converted to a Part or a list of such values: str, Image, Part, or List[Union[str, Image, Part]].

    • generation_config – Parameters for the generation.

    • safety_settings – Safety settings as a mapping from HarmCategory to HarmBlockThreshold.

    • tools – A list of tools (functions) that the model can try calling.

    • stream – Whether to stream the response.

  • Returns

    A single GenerationResponse object if stream == False; a stream of GenerationResponse objects if stream == True.

  • Raises

    ResponseValidationError – If the response was blocked or is incomplete.

send_message_async(content: Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, generation_config: Optional[Union[vertexai.generative_models._generative_models.GenerationConfig, Dict[str, Any]]] = None, safety_settings: Optional[Union[List[vertexai.generative_models._generative_models.SafetySetting], Dict[google.cloud.aiplatform_v1beta1.types.content.HarmCategory, google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold]]] = None, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None, stream: bool = False)

Generates content asynchronously.

  • Parameters

    • content – Content to send to the model. Supports a value that can be converted to a Part or a list of such values: str, Image, Part, or List[Union[str, Image, Part]].

    • generation_config – Parameters for the generation.

    • safety_settings – Safety settings as a mapping from HarmCategory to HarmBlockThreshold.

    • tools – A list of tools (functions) that the model can try calling.

    • stream – Whether to stream the response.

  • Returns

    An awaitable for a single GenerationResponse object if stream == False; an awaitable for a stream of GenerationResponse objects if stream == True.

  • Raises

    ResponseValidationError – If the response was blocked or is incomplete.

class vertexai.preview.generative_models.Content(*, parts: Optional[List[vertexai.generative_models._generative_models.Part]] = None, role: Optional[str] = None)

Bases: object

The multi-part content of a message.

Usage:

```python
response = model.generate_content(contents=[
    Content(role="user", parts=[Part.from_text("Why is sky blue?")])
])
```

class vertexai.preview.generative_models.FinishReason(value)

Bases: proto.enums.Enum

The reason why the model stopped generating tokens. If empty, the model has not stopped generating the tokens.

Values:

FINISH_REASON_UNSPECIFIED (0):

    The finish reason is unspecified.

STOP (1):

    Token generation reached a natural stopping
    point or a configured stop sequence.

MAX_TOKENS (2):

    Token generation reached the configured
    maximum output tokens.

SAFETY (3):

    Token generation stopped because the content potentially
    contains safety violations. NOTE: When streaming,
    [content][google.cloud.aiplatform.v1beta1.Candidate.content]
    is empty if content filters block the output.

RECITATION (4):

    Token generation stopped because the content
    potentially contains copyright violations.

OTHER (5):

    All other reasons that stopped the token
    generation.

BLOCKLIST (6):

    Token generation stopped because the content
    contains forbidden terms.

PROHIBITED_CONTENT (7):

    Token generation stopped for potentially
    containing prohibited content.

SPII (8):

    Token generation stopped because the content
    potentially contains Sensitive Personally
    Identifiable Information (SPII).

MALFORMED_FUNCTION_CALL (9):

    The function call generated by the model is
    invalid.

class vertexai.preview.generative_models.FunctionDeclaration(*, name: str, parameters: Dict[str, Any], description: Optional[str] = None)

Bases: object

A representation of a function declaration.

Usage:

Create function declaration and tool:


```python
get_current_weather_func = generative_models.FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
                "type": "string",
                "enum": [
                    "celsius",
                    "fahrenheit",
                ]
            }
        },
        "required": [
            "location"
        ]
    },
)
weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```

Use tool in GenerativeModel.generate_content:


```python
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```

Use tool in chat:


```python
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```

Constructs a FunctionDeclaration.

  • Parameters

    • name – The name of the function that the model can call.

    • parameters – Describes the parameters to this function in JSON Schema Object format.

    • description – Description and purpose of the function. Model uses it to decide how and whether to call the function.

class vertexai.preview.generative_models.GenerationConfig(*, temperature: Optional[float] = None, top_p: Optional[float] = None, top_k: Optional[int] = None, candidate_count: Optional[int] = None, max_output_tokens: Optional[int] = None, stop_sequences: Optional[List[str]] = None, presence_penalty: Optional[float] = None, frequency_penalty: Optional[float] = None, response_mime_type: Optional[str] = None, response_schema: Optional[Dict[str, Any]] = None, seed: Optional[int] = None, routing_config: Optional[google.cloud.aiplatform_v1beta1.types.content.GenerationConfig.RoutingConfig] = None)

Bases: object

Parameters for the generation.

Constructs a GenerationConfig object.

  • Parameters

    • temperature – Controls the randomness of predictions. Range: [0.0, 1.0]

    • top_p – If specified, nucleus sampling will be used. Range: (0.0, 1.0]

    • top_k – If specified, top-k sampling will be used.

    • candidate_count – Number of candidates to generate.

    • seed – Random seed for the generation.

    • max_output_tokens – The maximum number of output tokens to generate per message.

    • stop_sequences – A list of stop sequences.

    • presence_penalty – Positive values penalize tokens that have appeared in the generated text, thus increasing the possibility of generating more diverse topics. Range: [-2.0, 2.0]

    • frequency_penalty – Positive values penalize tokens that repeatedly appear in the generated text, thus decreasing the possibility of repeating the same content. Range: [-2.0, 2.0]

    • response_mime_type – Output response mimetype of the generated candidate text. Supported mimetypes:

      • text/plain: (default) Text output.

      • application/json: JSON response in the candidates.

      The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined.

    • response_schema – Output response schema of the generated candidate text. Only valid when response_mime_type is application/json.

    • routing_config – Model routing preference set in the request.

Usage:

```python
response = model.generate_content(
    "Why is sky blue?",
    generation_config=GenerationConfig(
        temperature=0.1,
        top_p=0.95,
        top_k=20,
        candidate_count=1,
        max_output_tokens=100,
        stop_sequences=["\n\n\n"],
        seed=5,
    )
)
```

class vertexai.preview.generative_models.GenerationResponse()

Bases: object

The response from the model.

class vertexai.preview.generative_models.GenerativeModel(model_name: str, *, generation_config: Optional[Union[vertexai.generative_models._generative_models.GenerationConfig, Dict[str, Any]]] = None, safety_settings: Optional[Union[List[vertexai.generative_models._generative_models.SafetySetting], Dict[google.cloud.aiplatform_v1beta1.types.content.HarmCategory, google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold]]] = None, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None, tool_config: Optional[vertexai.generative_models._generative_models.ToolConfig] = None, system_instruction: Optional[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]]] = None)

Bases: vertexai.preview.generative_models._PreviewGenerativeModel

Initializes GenerativeModel.

Usage:

```python
model = GenerativeModel("gemini-pro")
print(model.generate_content("Hello"))
```

  • Parameters

    • model_name – Model Garden model resource name. Alternatively, a tuned model endpoint resource name can be provided.

    • generation_config – Default generation config to use in generate_content.

    • safety_settings – Default safety settings to use in generate_content.

    • tools – Default tools to use in generate_content.

    • tool_config – Default tool config to use in generate_content.

    • system_instruction – Default system instruction to use in generate_content. Note: Only text should be used in parts. Content of each part will become a separate paragraph.

compute_tokens(contents: Union[List[vertexai.generative_models._generative_models.Content], List[Dict[str, Any]], str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]])

Computes tokens.

  • Parameters

    contents – Contents to send to the model. Supports either a list of Content objects (passing a multi-turn conversation) or a value that can be converted to a single Content object (passing a single message): str, Image, Part, List[Union[str, Image, Part]], or List[Content].

  • Returns

    tokens_info: Lists of tokens_info from the input.

      The input contents (ContentsType) can contain multiple string instances, and each tokens_info item represents one string instance. Each token info consists of a tokens list, a token_ids list, and a role.
    
  • Return type

    A ComputeTokensResponse object that has the following attributes

async compute_tokens_async(contents: Union[List[vertexai.generative_models._generative_models.Content], List[Dict[str, Any]], str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]])

Computes tokens asynchronously.

  • Parameters

    contents – Contents to send to the model. Supports either a list of Content objects (passing a multi-turn conversation) or a value that can be converted to a single Content object (passing a single message): str, Image, Part, List[Union[str, Image, Part]], or List[Content].

  • Returns

    tokens_info: Lists of tokens_info from the input.

      The input contents (ContentsType) can contain multiple string instances, and each tokens_info item represents one string instance. Each token info consists of a tokens list, a token_ids list, and a role.
    
  • Return type

    An awaitable for a ComputeTokensResponse object that has the following attributes

count_tokens(contents: Union[List[vertexai.generative_models._generative_models.Content], List[Dict[str, Any]], str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None)

Counts tokens.

  • Parameters

    • contents – Contents to send to the model. Supports either a list of Content objects (passing a multi-turn conversation) or a value that can be converted to a single Content object (passing a single message): str, Image, Part, List[Union[str, Image, Part]], or List[Content].

    • tools – A list of tools (functions) that the model can try calling.

  • Returns

    total_tokens: The total number of tokens counted across all instances from the request.
    total_billable_characters: The total number of billable characters counted across all instances from the request.

  • Return type

    A CountTokensResponse object that has the following attributes

async count_tokens_async(contents: Union[List[vertexai.generative_models._generative_models.Content], List[Dict[str, Any]], str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None)

Counts tokens asynchronously.

  • Parameters

    • contents – Contents to send to the model. Supports either a list of Content objects (passing a multi-turn conversation) or a value that can be converted to a single Content object (passing a single message): str, Image, Part, List[Union[str, Image, Part]], or List[Content].

    • tools – A list of tools (functions) that the model can try calling.

  • Returns

    total_tokens: The total number of tokens counted across all instances from the request.
    total_billable_characters: The total number of billable characters counted across all instances from the request.

  • Return type

    An awaitable for a CountTokensResponse object that has the following attributes

classmethod from_cached_content(cached_content: Union[str, caching.CachedContent], *, generation_config: Optional[Union[GenerationConfig, Dict[str, Any]]] = None, safety_settings: Optional[Union[List[SafetySetting], Dict[google.cloud.aiplatform_v1beta1.types.content.HarmCategory, google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold]]] = None)

Creates a model from cached content.

Creates a model instance with an existing cached content. The cached content becomes the prefix of the requesting contents.

  • Parameters

    • cached_content – The cached content resource name or object.

    • generation_config – The generation config to use for this model.

    • safety_settings – The safety settings to use for this model.

  • Returns

    A model instance with the cached content as the prefix of all its requests.
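
Usage (a sketch; the cached content resource name below is a placeholder):

```python
from vertexai.preview.generative_models import GenerativeModel

# Placeholder resource name of an existing cached content entry.
cached_content_name = "projects/123/locations/us-central1/cachedContents/456"

model = GenerativeModel.from_cached_content(cached_content=cached_content_name)
# The cached content becomes the prefix of every request from this model.
response = model.generate_content("Summarize the cached document.")
print(response.text)
```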

generate_content(contents: Union[List[vertexai.generative_models._generative_models.Content], List[Dict[str, Any]], str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, generation_config: Optional[Union[vertexai.generative_models._generative_models.GenerationConfig, Dict[str, Any]]] = None, safety_settings: Optional[Union[List[vertexai.generative_models._generative_models.SafetySetting], Dict[google.cloud.aiplatform_v1beta1.types.content.HarmCategory, google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold]]] = None, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None, tool_config: Optional[vertexai.generative_models._generative_models.ToolConfig] = None, stream: bool = False)

Generates content.

  • Parameters

    • contents – Contents to send to the model. Supports either a list of Content objects (passing a multi-turn conversation) or a value that can be converted to a single Content object (passing a single message): str, Image, Part, List[Union[str, Image, Part]], or List[Content].

    • generation_config – Parameters for the generation.

    • safety_settings – Safety settings as a mapping from HarmCategory to HarmBlockThreshold.

    • tools – A list of tools (functions) that the model can try calling.

    • tool_config – Config shared for all tools provided in the request.

    • stream – Whether to stream the response.

  • Returns

    A single GenerationResponse object if stream == False; a stream of GenerationResponse objects if stream == True.

async generate_content_async(contents: Union[List[vertexai.generative_models._generative_models.Content], List[Dict[str, Any]], str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part, List[Union[str, vertexai.generative_models._generative_models.Image, vertexai.generative_models._generative_models.Part]]], *, generation_config: Optional[Union[vertexai.generative_models._generative_models.GenerationConfig, Dict[str, Any]]] = None, safety_settings: Optional[Union[List[vertexai.generative_models._generative_models.SafetySetting], Dict[google.cloud.aiplatform_v1beta1.types.content.HarmCategory, google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold]]] = None, tools: Optional[List[vertexai.generative_models._generative_models.Tool]] = None, tool_config: Optional[vertexai.generative_models._generative_models.ToolConfig] = None, stream: bool = False)

Generates content asynchronously.

  • Parameters

    • contents – Contents to send to the model. Supports either a list of Content objects (passing a multi-turn conversation) or a value that can be converted to a single Content object (passing a single message): str, Image, Part, List[Union[str, Image, Part]], or List[Content].

    • generation_config – Parameters for the generation.

    • safety_settings – Safety settings as a mapping from HarmCategory to HarmBlockThreshold.

    • tools – A list of tools (functions) that the model can try calling.

    • tool_config – Config shared for all tools provided in the request.

    • stream – Whether to stream the response.

  • Returns

    An awaitable for a single GenerationResponse object if stream == False; an awaitable for a stream of GenerationResponse objects if stream == True.

start_chat(*, history: Optional[List[vertexai.generative_models._generative_models.Content]] = None, response_validation: bool = True, responder: Optional[vertexai.generative_models._generative_models.AutomaticFunctionCallingResponder] = None)

Creates a stateful chat session.

  • Parameters

    • history – Previous history to initialize the chat session.

    • response_validation – Whether to validate responses before adding them to chat history. By default, send_message will raise an error if the request or response is blocked or if the response is incomplete due to going over the max token limit. If set to False, the chat session history will always accumulate the request and response messages even if the response is blocked or incomplete. This can result in an unusable chat session state.

    • responder – A responder object that can automatically respond to some model messages. Supported responder classes: AutomaticFunctionCallingResponder.

  • Returns

    A ChatSession object.

class vertexai.preview.generative_models.HarmBlockThreshold(value)

Bases: proto.enums.Enum

Probability-based threshold levels for blocking.

Values:

HARM_BLOCK_THRESHOLD_UNSPECIFIED (0):

    Unspecified harm block threshold.

BLOCK_LOW_AND_ABOVE (1):

    Block low threshold and above (i.e. block
    more).

BLOCK_MEDIUM_AND_ABOVE (2):

    Block medium threshold and above.

BLOCK_ONLY_HIGH (3):

    Block only high threshold (i.e. block less).

BLOCK_NONE (4):

    Block none.

class vertexai.preview.generative_models.HarmCategory(value)

Bases: proto.enums.Enum

Harm categories that will block the content.

Values:

HARM_CATEGORY_UNSPECIFIED (0):

    The harm category is unspecified.

HARM_CATEGORY_HATE_SPEECH (1):

    The harm category is hate speech.

HARM_CATEGORY_DANGEROUS_CONTENT (2):

    The harm category is dangerous content.

HARM_CATEGORY_HARASSMENT (3):

    The harm category is harassment.

HARM_CATEGORY_SEXUALLY_EXPLICIT (4):

    The harm category is sexually explicit
    content.

class vertexai.preview.generative_models.Image()

Bases: object

The image that can be sent to a generative model.

property data: bytes

Returns the image data.

static from_bytes(data: bytes)

Loads image from image bytes.

  • Parameters

    data – Image bytes.

  • Returns

    Loaded image as an Image object.

static load_from_file(location: str)

Loads image from file.

  • Parameters

    location – Local path from where to load the image.

  • Returns

    Loaded image as an Image object.
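
For example, a short sketch combining both loaders ("image.jpg" is a placeholder path, and "gemini-pro-vision" is an assumed multimodal model name):

```python
from vertexai.preview.generative_models import GenerativeModel, Image

model = GenerativeModel("gemini-pro-vision")  # assumed model name
image = Image.load_from_file("image.jpg")     # placeholder local path
# Equivalent, starting from raw bytes:
# image = Image.from_bytes(open("image.jpg", "rb").read())

response = model.generate_content(["Describe this picture:", image])
print(response.text)
```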

class vertexai.preview.generative_models.Part()

Bases: object

A part of a multi-part Content message.

Usage:

```python
text_part = Part.from_text("Why is sky blue?")
image_part = Part.from_image(Image.load_from_file("image.jpg"))
video_part = Part.from_uri(uri="gs://…/video.mp4", mime_type="video/mp4")
function_response_part = Part.from_function_response(
    name="get_current_weather",
    response={
        "content": {"weather_there": "super nice"},
    },
)

response1 = model.generate_content([text_part, image_part])
response2 = model.generate_content(video_part)
response3 = chat.send_message(function_response_part)
```

exception vertexai.preview.generative_models.ResponseBlockedError(message: str, request_contents: List[vertexai.generative_models._generative_models.Content], responses: List[vertexai.generative_models._generative_models.GenerationResponse])

Bases: Exception

with_traceback()

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.

exception vertexai.preview.generative_models.ResponseValidationError(message: str, request_contents: List[vertexai.generative_models._generative_models.Content], responses: List[vertexai.generative_models._generative_models.GenerationResponse])

Bases: vertexai.generative_models._generative_models.ResponseBlockedError

with_traceback()

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
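
A sketch of handling these exceptions around a chat call (assuming a chat session created with response_validation=True, the default, and that the constructor's responses list is stored on the exception instance):

```python
from vertexai.preview.generative_models import ResponseValidationError

try:
    response = chat.send_message("Tell me something risky")
except ResponseValidationError as err:
    # The blocked or incomplete exchange was not added to chat history;
    # the offending candidate responses are assumed inspectable here.
    for blocked in err.responses:
        print(blocked)
```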

class vertexai.preview.generative_models.SafetySetting(*, category: google.cloud.aiplatform_v1beta1.types.content.HarmCategory, threshold: google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold, method: Optional[google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockMethod] = None)

Bases: object

Safety settings.

  • Parameters

    • category – Harm category.

    • threshold – The harm block threshold.

    • method – Specify if the threshold is used for probability or severity score. If not specified, the threshold is used for probability score.

class HarmBlockMethod(value)

Bases: proto.enums.Enum

Probability vs severity.

Values:

HARM_BLOCK_METHOD_UNSPECIFIED (0):

    The harm block method is unspecified.

SEVERITY (1):

    The harm block method uses both probability
    and severity scores.

PROBABILITY (2):

    The harm block method uses the probability
    score.

class HarmBlockThreshold(value)

Bases: proto.enums.Enum

Probability-based threshold levels for blocking.

Values:

HARM_BLOCK_THRESHOLD_UNSPECIFIED (0):

    Unspecified harm block threshold.

BLOCK_LOW_AND_ABOVE (1):

    Block low threshold and above (i.e. block
    more).

BLOCK_MEDIUM_AND_ABOVE (2):

    Block medium threshold and above.

BLOCK_ONLY_HIGH (3):

    Block only high threshold (i.e. block less).

BLOCK_NONE (4):

    Block none.

class HarmCategory(value)

Bases: proto.enums.Enum

Harm categories that will block the content.

Values:

HARM_CATEGORY_UNSPECIFIED (0):

    The harm category is unspecified.

HARM_CATEGORY_HATE_SPEECH (1):

    The harm category is hate speech.

HARM_CATEGORY_DANGEROUS_CONTENT (2):

    The harm category is dangerous content.

HARM_CATEGORY_HARASSMENT (3):

    The harm category is harassment.

HARM_CATEGORY_SEXUALLY_EXPLICIT (4):

    The harm category is sexually explicit
    content.
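
Putting the nested enums together, a sketch of the list form of safety_settings (model name and prompt are placeholders):

```python
from vertexai.preview.generative_models import GenerativeModel, SafetySetting

safety_settings = [
    SafetySetting(
        category=SafetySetting.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        # method is optional; when omitted, the threshold applies to the
        # probability score.
        method=SafetySetting.HarmBlockMethod.SEVERITY,
    ),
]

model = GenerativeModel("gemini-pro")
response = model.generate_content(
    "Why is sky blue?",
    safety_settings=safety_settings,
)
```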

class vertexai.preview.generative_models.Tool(function_declarations: List[vertexai.generative_models._generative_models.FunctionDeclaration])

Bases: object

A collection of functions that the model may use to generate a response.

Usage:

Create tool from function declarations:


```python
get_current_weather_func = generative_models.FunctionDeclaration(…)
weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```

Use tool in GenerativeModel.generate_content:

```python
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```

Use tool in chat:

```python
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        },
    ),
))
```

class vertexai.preview.generative_models.ToolConfig(function_calling_config: vertexai.generative_models._generative_models.ToolConfig.FunctionCallingConfig)

Bases: object

Config shared for all tools provided in the request.

Usage:

Create ToolConfig:

```python
tool_config = ToolConfig(
    function_calling_config=ToolConfig.FunctionCallingConfig(
        mode=ToolConfig.FunctionCallingConfig.Mode.ANY,
        allowed_function_names=["get_current_weather_func"],
    ),
)
```

Use ToolConfig in GenerativeModel.generate_content:

```python
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
    tool_config=tool_config,
))
```

Use ToolConfig in chat:

```python
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
    tool_config=tool_config,
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        },
    ),
))
```

vertexai.preview.generative_models.grounding()

alias of vertexai.preview.generative_models.preview_grounding

Classes for working with language models.

class vertexai.language_models.ChatMessage(content: str, author: str)

Bases: object

A chat message.

content()

Content of the message.

author()

Author of the message.
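
For example, a message_history sketch for ChatModel.start_chat (the "user"/"bot" author strings are an assumption, not fixed by this reference):

```python
from vertexai.language_models import ChatMessage, ChatModel

chat_model = ChatModel.from_pretrained("chat-bison@001")
chat = chat_model.start_chat(
    message_history=[
        ChatMessage(content="What is 2 + 2?", author="user"),  # assumed author label
        ChatMessage(content="2 + 2 = 4.", author="bot"),       # assumed author label
    ],
)
print(chat.send_message("And 3 + 3?").text)
```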

class vertexai.language_models.ChatModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai.language_models._language_models._ChatModelBase, vertexai.language_models._language_models._TunableChatModelMixin, vertexai.language_models._language_models._RlhfTunableModelMixin

ChatModel represents a language model that is capable of chat.

Examples:

```python
chat_model = ChatModel.from_pretrained("chat-bison@001")

chat = chat_model.start_chat(
    context="My name is Ned. You are my personal assistant. My favorite movies are Lord of the Rings and Hobbit.",
    examples=[
        InputOutputTextPair(
            input_text="Who do you work for?",
            output_text="I work for Ned.",
        ),
        InputOutputTextPair(
            input_text="What do I like?",
            output_text="Ned likes watching movies.",
        ),
    ],
    temperature=0.3,
)

chat.send_message("Do you know any cool events this weekend?")
```

Creates a LanguageModel.

This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Vertex LLM. Example: "text-bison@001"

    • endpoint_name – Vertex Endpoint resource name for the model

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

classmethod get_tuned_model(tuned_model_name: str)

Loads the specified tuned language model.

list_tuned_model_names()

Lists the names of tuned models.

  • Returns

    A list of tuned models that can be used with the get_tuned_model method.

start_chat(*, context: Optional[str] = None, examples: Optional[List[vertexai.language_models.InputOutputTextPair]] = None, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, message_history: Optional[List[vertexai.language_models.ChatMessage]] = None, stop_sequences: Optional[List[str]] = None)

Starts a chat session with the model.

  • Parameters

    • context – Context shapes how the model responds throughout the conversation. For example, you can use context to specify words the model can or cannot use, topics to focus on or avoid, or the response format or style.

    • examples – List of structured messages to the model to learn how to respond to the conversation. A list of InputOutputTextPair objects.

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1024].

    • temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0.

    • top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40.

    • top_p – The cumulative probability of parameter highest probability vocabulary tokens to keep for nucleus sampling. Range: [0, 1]. Default: 0.95.

    • message_history – A list of previously sent and received messages.

    • stop_sequences – Customized stop sequences to stop the decoding process.

  • Returns

    A ChatSession object.

tune_model(training_data: Union[str, pandas.core.frame.DataFrame], *, train_steps: Optional[int] = None, learning_rate_multiplier: Optional[float] = None, tuning_job_location: Optional[str] = None, tuned_model_location: Optional[str] = None, model_display_name: Optional[str] = None, default_context: Optional[str] = None, accelerator_type: Optional[Literal['TPU', 'GPU']] = None, tuning_evaluation_spec: Optional[vertexai.language_models.TuningEvaluationSpec] = None)

Tunes a model based on training data.

This method launches and returns an asynchronous model tuning job. Usage:

```python
tuning_job = model.tune_model(...)
# ... do some other work
tuned_model = tuning_job.get_tuned_model()  # Blocks until tuning is complete
```

  • Parameters

    • training_data – A Pandas DataFrame or a URI pointing to data in JSON lines format. The dataset schema is model-specific. See https://cloud.google.com/vertex-ai/docs/generative-ai/models/tune-models#dataset_format

    • train_steps – Number of training batches to tune on (batch size is 8 samples).

    • learning_rate – Deprecated. Use learning_rate_multiplier instead. Learning rate to use in tuning.

    • learning_rate_multiplier – Learning rate multiplier to use in tuning.

    • tuning_job_location – GCP location where the tuning job should be run.

    • tuned_model_location – GCP location where the tuned model should be deployed.

    • model_display_name – Custom display name for the tuned model.

    • default_context – The context to use for all training samples by default.

    • accelerator_type – Type of accelerator to use. Can be “TPU” or “GPU”.

    • tuning_evaluation_spec – Specification for the model evaluation during tuning.

  • Returns

    A LanguageModelTuningJob object that represents the tuning job. Calling job.result() blocks until the tuning is complete and returns a LanguageModel object.

  • Raises

    • ValueError – If the “tuning_job_location” value is not supported

    • ValueError – If the “tuned_model_location” value is not supported

    • RuntimeError – If the model does not support tuning

    • AttributeError – If any attribute in the “tuning_evaluation_spec” is not supported

tune_model_rlhf(*, prompt_data: Union[str, pandas.core.frame.DataFrame], preference_data: Union[str, pandas.core.frame.DataFrame], model_display_name: Optional[str] = None, prompt_sequence_length: Optional[int] = None, target_sequence_length: Optional[int] = None, reward_model_learning_rate_multiplier: Optional[float] = None, reinforcement_learning_rate_multiplier: Optional[float] = None, reward_model_train_steps: Optional[int] = None, reinforcement_learning_train_steps: Optional[int] = None, kl_coeff: Optional[float] = None, default_context: Optional[str] = None, tuning_job_location: Optional[str] = None, accelerator_type: Optional[Literal['TPU', 'GPU']] = None, tuning_evaluation_spec: Optional[vertexai.language_models.TuningEvaluationSpec] = None)

Tunes a model using reinforcement learning from human feedback.

This method launches and returns an asynchronous model tuning job. Usage:

```python
tuning_job = model.tune_model_rlhf(...)
# ... do some other work
tuned_model = tuning_job.get_tuned_model()  # Blocks until tuning is complete
```

  • Parameters

    • prompt_data – A Pandas DataFrame or a URI pointing to data in JSON lines format. The dataset schema is model-specific. See https://cloud.google.com/vertex-ai/docs/generative-ai/models/tune-text-models-rlhf#prompt-dataset

    • preference_data – A Pandas DataFrame or a URI pointing to data in JSON lines format. The dataset schema is model-specific. See https://cloud.google.com/vertex-ai/docs/generative-ai/models/tune-text-models-rlhf#human-preference-dataset

    • model_display_name – Custom display name for the tuned model. If not provided, a default name will be created.

    • prompt_sequence_length – Maximum tokenized sequence length for input text. Higher values increase memory overhead. This value should be at most 8192. Default value is 512.

    • target_sequence_length – Maximum tokenized sequence length for target text. Higher values increase memory overhead. This value should be at most 1024. Default value is 64.

    • reward_model_learning_rate_multiplier – Constant used to adjust the base learning rate used when training a reward model. Multiply by a number > 1 to increase the magnitude of updates applied at each training step or multiply by a number < 1 to decrease the magnitude of updates. Default value is 1.0.

    • reinforcement_learning_rate_multiplier – Constant used to adjust the base learning rate used during reinforcement learning. Multiply by a number > 1 to increase the magnitude of updates applied at each training step or multiply by a number < 1 to decrease the magnitude of updates. Default value is 1.0.

    • reward_model_train_steps – Number of steps to use when training a reward model. Default value is 1000.

    • reinforcement_learning_train_steps – Number of reinforcement learning steps to perform when tuning a base model. Default value is 1000.

    • kl_coeff – Coefficient for KL penalty. This regularizes the policy model and penalizes if it diverges from its initial distribution. If set to 0, the reference language model is not loaded into memory. Default value is 0.1.

    • default_context – This field lets the model know what task to perform. Base models have been trained over a large set of varied instructions. You can give a simple and intuitive description of the task and the model will follow it, e.g. “Classify this movie review as positive or negative” or “Translate this sentence to Danish”. Do not specify this if your dataset already prepends the instruction to the inputs field.

    • tuning_job_location – GCP location where the tuning job should be run.

    • accelerator_type – Type of accelerator to use. Can be “TPU” or “GPU”.

    • tuning_evaluation_spec – Evaluation settings to use during tuning.

  • Returns

    A LanguageModelTuningJob object that represents the tuning job. Calling job.result() blocks until the tuning is complete and returns a LanguageModel object.

  • Raises

    • ValueError – If the “tuning_job_location” value is not supported

    • RuntimeError – If the model does not support tuning

class vertexai.language_models.ChatSession(model: vertexai.language_models.ChatModel, context: Optional[str] = None, examples: Optional[List[vertexai.language_models.InputOutputTextPair]] = None, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, message_history: Optional[List[vertexai.language_models.ChatMessage]] = None, stop_sequences: Optional[List[str]] = None)

Bases: vertexai.language_models._language_models._ChatSessionBase

ChatSession represents a chat session with a language model.

Within a chat session, the model keeps context and remembers the previous conversation.

property message_history: List[vertexai.language_models.ChatMessage]

List of previous messages.

send_message(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None, candidate_count: Optional[int] = None, grounding_source: Optional[Union[vertexai.language_models._language_models.WebSearch, vertexai.language_models._language_models.VertexAISearch, vertexai.language_models._language_models.InlineContext]] = None)

Sends a message to the language model and gets a response.

  • Parameters

    • message – Message to send to the model

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1024]. Uses the value specified when calling ChatModel.start_chat by default.

    • temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0. Uses the value specified when calling ChatModel.start_chat by default.

    • top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40. Uses the value specified when calling ChatModel.start_chat by default.

    • top_p – The cumulative probability of parameter highest probability vocabulary tokens to keep for nucleus sampling. Range: [0, 1]. Default: 0.95. Uses the value specified when calling ChatModel.start_chat by default.

    • stop_sequences – Customized stop sequences to stop the decoding process.

    • candidate_count – Number of candidates to return.

    • grounding_source – If specified, grounding feature will be enabled using the grounding source. Default: None.

  • Returns

    A MultiCandidateTextGenerationResponse object that contains the text produced by the model.

async send_message_async(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None, candidate_count: Optional[int] = None, grounding_source: Optional[Union[vertexai.language_models._language_models.WebSearch, vertexai.language_models._language_models.VertexAISearch, vertexai.language_models._language_models.InlineContext]] = None)

Asynchronously sends a message to the language model and gets a response.

  • Parameters

    • message – Message to send to the model

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1024]. Uses the value specified when calling ChatModel.start_chat by default.

    • temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0. Uses the value specified when calling ChatModel.start_chat by default.

    • top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40. Uses the value specified when calling ChatModel.start_chat by default.

    • top_p – The cumulative probability of parameter highest probability vocabulary tokens to keep for nucleus sampling. Range: [0, 1]. Default: 0.95. Uses the value specified when calling ChatModel.start_chat by default.

    • stop_sequences – Customized stop sequences to stop the decoding process.

    • candidate_count – Number of candidates to return.

    • grounding_source – If specified, grounding feature will be enabled using the grounding source. Default: None.

  • Returns

    A MultiCandidateTextGenerationResponse object that contains the text produced by the model.

send_message_streaming(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None)

Sends a message to the language model and gets a streamed response.

The response is only added to the history once it’s fully read.

  • Parameters

    • message – Message to send to the model

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1024]. Uses the value specified when calling ChatModel.start_chat by default.

    • temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0. Uses the value specified when calling ChatModel.start_chat by default.

    • top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40. Uses the value specified when calling ChatModel.start_chat by default.

    • top_p – The cumulative probability of parameter highest probability vocabulary tokens to keep for nucleus sampling. Range: [0, 1]. Default: 0.95. Uses the value specified when calling ChatModel.start_chat by default.

    • stop_sequences – Customized stop sequences to stop the decoding process. Uses the value specified when calling ChatModel.start_chat by default.

  • Yields

    A stream of TextGenerationResponse objects that contain partial responses produced by the model.
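
A minimal consumption sketch (the prompt is a placeholder):

```python
for partial in chat.send_message_streaming("Tell me a short story"):
    print(partial.text, end="")
print()
```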

async send_message_streaming_async(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None)

Asynchronously sends a message to the language model and gets a streamed response.

The response is only added to the history once it’s fully read.

  • Parameters

    • message – Message to send to the model

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1024]. Uses the value specified when calling ChatModel.start_chat by default.

    • temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0. Uses the value specified when calling ChatModel.start_chat by default.

    • top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40. Uses the value specified when calling ChatModel.start_chat by default.

    • top_p – The cumulative probability of parameter highest probability vocabulary tokens to keep for nucleus sampling. Range: [0, 1]. Default: 0.95. Uses the value specified when calling ChatModel.start_chat by default.

    • stop_sequences – Customized stop sequences to stop the decoding process. Uses the value specified when calling ChatModel.start_chat by default.

  • Yields

    A stream of TextGenerationResponse objects that contain partial responses produced by the model.
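
The async variant is consumed with async for inside a coroutine, for example:

```python
async def stream_reply(chat):
    async for partial in chat.send_message_streaming_async("Tell me a short story"):
        print(partial.text, end="")
    print()
```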

class vertexai.language_models.CodeChatModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai.language_models._language_models._ChatModelBase, vertexai.language_models._language_models._TunableChatModelMixin

CodeChatModel represents a model that is capable of completing code.

Examples:

```python
code_chat_model = CodeChatModel.from_pretrained("codechat-bison@001")

code_chat = code_chat_model.start_chat(
    context="I'm writing a large-scale enterprise application.",
    max_output_tokens=128,
    temperature=0.2,
)

code_chat.send_message("Please help write a function to calculate the min of two numbers")
```

Creates a LanguageModel.

This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Vertex LLM. Example: "text-bison@001"

    • endpoint_name – Vertex Endpoint resource name for the model

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

classmethod get_tuned_model(tuned_model_name: str)

Loads the specified tuned language model.

list_tuned_model_names()

Lists the names of tuned models.

  • Returns

    A list of tuned models that can be used with the get_tuned_model method.

start_chat(*, context: Optional[str] = None, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, message_history: Optional[List[vertexai.language_models.ChatMessage]] = None, stop_sequences: Optional[List[str]] = None)

Starts a chat session with the code chat model.

  • Parameters

    • context – Context shapes how the model responds throughout the conversation. For example, you can use context to specify words the model can or cannot use, topics to focus on or avoid, or the response format or style.

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1000].

    • temperature – Controls the randomness of predictions. Range: [0, 1].

    • stop_sequences – Customized stop sequences to stop the decoding process.

  • Returns

    A ChatSession object.

tune_model(training_data: Union[str, pandas.core.frame.DataFrame], *, train_steps: Optional[int] = None, learning_rate_multiplier: Optional[float] = None, tuning_job_location: Optional[str] = None, tuned_model_location: Optional[str] = None, model_display_name: Optional[str] = None, default_context: Optional[str] = None, accelerator_type: Optional[Literal['TPU', 'GPU']] = None, tuning_evaluation_spec: Optional[vertexai.language_models.TuningEvaluationSpec] = None)

Tunes a model based on training data.

This method launches and returns an asynchronous model tuning job. Usage:

```python
tuning_job = model.tune_model(...)
# ... do some other work
tuned_model = tuning_job.get_tuned_model()  # Blocks until tuning is complete
```

  • Parameters

    • training_data – A Pandas DataFrame or a URI pointing to data in JSON lines format. The dataset schema is model-specific. See https://cloud.google.com/vertex-ai/docs/generative-ai/models/tune-models#dataset_format

    • train_steps – Number of training batches to tune on (batch size is 8 samples).

    • learning_rate – Deprecated. Use learning_rate_multiplier instead. Learning rate to use in tuning.

    • learning_rate_multiplier – Learning rate multiplier to use in tuning.

    • tuning_job_location – GCP location where the tuning job should be run.

    • tuned_model_location – GCP location where the tuned model should be deployed.

    • model_display_name – Custom display name for the tuned model.

    • default_context – The context to use for all training samples by default.

    • accelerator_type – Type of accelerator to use. Can be “TPU” or “GPU”.

    • tuning_evaluation_spec – Specification for the model evaluation during tuning.

  • Returns

    A LanguageModelTuningJob object that represents the tuning job. Calling job.result() blocks until the tuning is complete and returns a LanguageModel object.

  • Raises

    • ValueError – If the “tuning_job_location” value is not supported

    • ValueError – If the “tuned_model_location” value is not supported

    • RuntimeError – If the model does not support tuning

    • AttributeError – If any attribute in the “tuning_evaluation_spec” is not supported

class vertexai.language_models.CodeChatSession(model: vertexai.language_models.CodeChatModel, context: Optional[str] = None, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, message_history: Optional[List[vertexai.language_models.ChatMessage]] = None, stop_sequences: Optional[List[str]] = None)

Bases: vertexai.language_models._language_models._ChatSessionBase

CodeChatSession represents a chat session with code chat language model.

Within a code chat session, the model keeps context and remembers the previous conversation.

property message_history: List[vertexai.language_models.ChatMessage]

List of previous messages.

send_message(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None, candidate_count: Optional[int] = None)

Sends a message to the code chat model and gets a response.

  • Parameters

    • message – Message to send to the model

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1000]. Uses the value specified when calling CodeChatModel.start_chat by default.

    • temperature – Controls the randomness of predictions. Range: [0, 1]. Uses the value specified when calling CodeChatModel.start_chat by default.

    • stop_sequences – Customized stop sequences to stop the decoding process.

    • candidate_count – Number of candidates to return.

  • Returns

    A MultiCandidateTextGenerationResponse object that contains the text produced by the model.

async send_message_async(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, candidate_count: Optional[int] = None)

Asynchronously sends a message to the code chat model and gets a response.

  • Parameters

    • message – Message to send to the model

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1000]. Uses the value specified when calling CodeChatModel.start_chat by default.

    • temperature – Controls the randomness of predictions. Range: [0, 1]. Uses the value specified when calling CodeChatModel.start_chat by default.

    • candidate_count – Number of candidates to return.

  • Returns

    A MultiCandidateTextGenerationResponse object that contains the text produced by the model.

send_message_streaming(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None)

Sends a message to the language model and gets a streamed response.

The response is only added to the history once it’s fully read.

  • Parameters

    • message – Message to send to the model

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1024]. Uses the value specified when calling CodeChatModel.start_chat by default.

    • temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0. Uses the value specified when calling CodeChatModel.start_chat by default.

    • stop_sequences – Customized stop sequences to stop the decoding process. Uses the value specified when calling CodeChatModel.start_chat by default.

  • Yields

    A stream of TextGenerationResponse objects that contain partial responses produced by the model.

async send_message_streaming_async(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None)

Asynchronously sends a message to the language model and gets a streamed response.

The response is only added to the history once it’s fully read.

  • Parameters

    • message – Message to send to the model

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1024]. Uses the value specified when calling CodeChatModel.start_chat by default.

    • temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0. Uses the value specified when calling CodeChatModel.start_chat by default.

    • stop_sequences – Customized stop sequences to stop the decoding process. Uses the value specified when calling CodeChatModel.start_chat by default.

  • Yields

    A stream of TextGenerationResponse objects that contain partial responses produced by the model.

class vertexai.language_models.CodeGenerationModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai.language_models._CodeGenerationModel, vertexai.language_models._language_models._TunableTextModelMixin, vertexai.language_models._language_models._ModelWithBatchPredict

Creates a LanguageModel.

This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Vertex LLM. Example: "text-bison@001"

    • endpoint_name – Vertex Endpoint resource name for the model

batch_predict(*, dataset: Union[str, List[str]], destination_uri_prefix: str, model_parameters: Optional[Dict] = None)

Starts a batch prediction job with the model.

  • Parameters

    • dataset – The location of the dataset. gs:// and bq:// URIs are supported.

    • destination_uri_prefix – The URI prefix for the prediction. gs:// and bq:// URIs are supported.

    • model_parameters – Model-specific parameters to send to the model.

  • Returns

    A BatchPredictionJob object

  • Raises

    ValueError – When source or destination URI is not supported.
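
A hedged sketch (bucket names and file layout are placeholders):

```python
from vertexai.language_models import CodeGenerationModel

model = CodeGenerationModel.from_pretrained("code-bison@001")
job = model.batch_predict(
    dataset="gs://my-bucket/prompts.jsonl",           # placeholder URI
    destination_uri_prefix="gs://my-bucket/results",  # placeholder URI
    model_parameters={"temperature": 0.2},
)
```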

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

classmethod get_tuned_model(tuned_model_name: str)

Loads the specified tuned language model.

list_tuned_model_names()

Lists the names of tuned models.

  • Returns

    A list of tuned models that can be used with the get_tuned_model method.

predict(prefix: str, suffix: Optional[str] = None, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None, candidate_count: Optional[int] = None)

Gets a model response for a single prompt.

  • Parameters

    • prefix – Code before the current point.

    • suffix – Code after the current point.

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1000].

    • temperature – Controls the randomness of predictions. Range: [0, 1].

    • stop_sequences – Customized stop sequences to stop the decoding process.

    • candidate_count – Number of response candidates to return.

  • Returns

    A MultiCandidateTextGenerationResponse object that contains the text produced by the model.
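
For example, a completion sketch using prefix and suffix (the model name "code-gecko@001" is an assumption):

```python
from vertexai.language_models import CodeGenerationModel

model = CodeGenerationModel.from_pretrained("code-gecko@001")  # assumed model name
response = model.predict(
    prefix="def min_of_two(a, b):\n    ",
    suffix="\nprint(min_of_two(1, 2))",
    max_output_tokens=64,
)
print(response.text)
```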

async predict_async(prefix: str, suffix: Optional[str] = None, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None, candidate_count: Optional[int] = None)

Asynchronously gets a model response for a single prompt.

  • Parameters

    • prefix – Code before the current point.

    • suffix – Code after the current point.

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1000].

    • temperature – Controls the randomness of predictions. Range: [0, 1].

    • stop_sequences – Customized stop sequences to stop the decoding process.

    • candidate_count – Number of response candidates to return.

  • Returns

    A MultiCandidateTextGenerationResponse object that contains the text produced by the model.

predict_streaming(prefix: str, suffix: Optional[str] = None, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None)

Predicts the code based on previous code.

The result is a stream (generator) of partial responses.

  • Parameters

    • prefix – Code before the current point.

    • suffix – Code after the current point.

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1000].

    • temperature – Controls the randomness of predictions. Range: [0, 1].

    • stop_sequences – Customized stop sequences to stop the decoding process.

  • Yields

    A stream of TextGenerationResponse objects that contain partial responses produced by the model.

async predict_streaming_async(prefix: str, suffix: Optional[str] = None, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None)

Asynchronously predicts the code based on previous code.

The result is a stream (generator) of partial responses.

  • Parameters

    • prefix – Code before the current point.

    • suffix – Code after the current point.

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1000].

    • temperature – Controls the randomness of predictions. Range: [0, 1].

    • stop_sequences – Customized stop sequences to stop the decoding process.

  • Yields

    A stream of TextGenerationResponse objects that contain partial responses produced by the model.

tune_model(training_data: Union[str, pandas.core.frame.DataFrame], *, train_steps: Optional[int] = None, learning_rate_multiplier: Optional[float] = None, tuning_job_location: Optional[str] = None, tuned_model_location: Optional[str] = None, model_display_name: Optional[str] = None, tuning_evaluation_spec: Optional[vertexai.language_models.TuningEvaluationSpec] = None, accelerator_type: Optional[Literal['TPU', 'GPU']] = None, max_context_length: Optional[str] = None)

Tunes a model based on training data.

This method launches and returns an asynchronous model tuning job. Usage:

```python
tuning_job = model.tune_model(...)
# ... do some other work
tuned_model = tuning_job.get_tuned_model()  # Blocks until tuning is complete
```

  • Parameters

    • training_data – A Pandas DataFrame or a URI pointing to data in JSON lines format. The dataset schema is model-specific. See https://cloud.google.com/vertex-ai/docs/generative-ai/models/tune-models#dataset_format

    • train_steps – Number of training batches to tune on (batch size is 8 samples).

    • learning_rate_multiplier – Learning rate multiplier to use in tuning.

    • tuning_job_location – GCP location where the tuning job should be run.

    • tuned_model_location – GCP location where the tuned model should be deployed.

    • model_display_name – Custom display name for the tuned model.

    • tuning_evaluation_spec – Specification for the model evaluation during tuning.

    • accelerator_type – Type of accelerator to use. Can be “TPU” or “GPU”.

    • max_context_length – The max context length used for tuning. Can be either ‘8k’ or ‘32k’.

  • Returns

    A LanguageModelTuningJob object that represents the tuning job. Calling job.result() blocks until the tuning is complete and returns a LanguageModel object.

  • Raises

    • ValueError – If the “tuning_job_location” value is not supported

    • ValueError – If the “tuned_model_location” value is not supported

    • RuntimeError – If the model does not support tuning

class vertexai.language_models.GroundingSource()

Bases: object

class InlineContext(inline_context: str)

Bases: vertexai.language_models._language_models._GroundingSourceBase

InlineContext represents a grounding source using provided inline context.

inline_context()

The content used as inline context.

  • Type

    str

class VertexAISearch(data_store_id: str, location: str, project: Optional[str] = None, disable_attribution: bool = False)

Bases: vertexai.language_models._language_models._GroundingSourceBase

VertexAISearch represents a grounding source using a Vertex AI Search datastore.

data_store_id()

Data store ID of the Vertex AI Search datastore.

  • Type

    str

location()

GCP multi region where you have set up your Vertex AI Search data store. Possible values can be global, us, eu, etc. Learn more about Vertex AI Search location here: https://cloud.google.com/generative-ai-app-builder/docs/locations

project()

The project where you have set up your Vertex AI Search. If not specified, it is assumed that your Vertex AI Search is within your current project.

  • Type

    Optional[str]

disable_attribution()

If set to True, skips finding claim attributions (i.e., does not generate grounding citations). Default: False.

  • Type

    bool

class WebSearch(disable_attribution: bool = False)

Bases: vertexai.language_models._language_models._GroundingSourceBase

WebSearch represents a grounding source using public web search.

disable_attribution()

If set to True, skips finding claim attributions (i.e., does not generate grounding citations). Default: False.

  • Type

    bool
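
For example, a grounding sketch using the nested classes above (the data store ID is a placeholder, and chat is an existing ChatSession):

```python
from vertexai.language_models import GroundingSource

# Public web search grounding:
web_grounding = GroundingSource.WebSearch()

# Vertex AI Search datastore grounding:
datastore_grounding = GroundingSource.VertexAISearch(
    data_store_id="my-data-store-id",  # placeholder
    location="global",
)

response = chat.send_message(
    "When is the next total solar eclipse in US?",
    grounding_source=web_grounding,
)
```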

class vertexai.language_models.InputOutputTextPair(input_text: str, output_text: str)

Bases: object

InputOutputTextPair represents a pair of input and output texts.

class vertexai.language_models.TextEmbedding(values: List[float], statistics: Optional[vertexai.language_models.TextEmbeddingStatistics] = None, _prediction_response: Optional[google.cloud.aiplatform.models.Prediction] = None)

Bases: object

Text embedding vector and statistics.

class vertexai.language_models.TextEmbeddingInput(text: str, task_type: Optional[str] = None, title: Optional[str] = None)

Bases: object

Structural text embedding input.

text()

The main text content to embed.

task_type()

The name of the downstream task the embeddings will be used for. Valid values:

RETRIEVAL_QUERY

    Specifies the given text is a query in a search/retrieval setting.

RETRIEVAL_DOCUMENT

    Specifies the given text is a document from the corpus being searched.

SEMANTIC_SIMILARITY

    Specifies the given text will be used for STS (semantic textual similarity).

CLASSIFICATION

    Specifies that the given text will be classified.

CLUSTERING

    Specifies that the embeddings will be used for clustering.

QUESTION_ANSWERING

    Specifies that the embeddings will be used for question answering.

FACT_VERIFICATION

    Specifies that the embeddings will be used for fact verification.

CODE_RETRIEVAL_QUERY

    Specifies that the embeddings will be used for code retrieval.

  • Type

    Optional[str]

title()

Optional identifier of the text content.

  • Type

    Optional[str]
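
For example, a sketch constructing structured embedding inputs:

```python
from vertexai.language_models import TextEmbeddingInput

query_input = TextEmbeddingInput(
    text="How do solar panels work?",
    task_type="RETRIEVAL_QUERY",
)
document_input = TextEmbeddingInput(
    text="Solar panels convert sunlight into electricity using photovoltaic cells.",
    task_type="RETRIEVAL_DOCUMENT",
    title="Solar panel basics",  # optional identifier of the text content
)
```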

class vertexai.language_models.TextEmbeddingModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai.language_models._TextEmbeddingModel, vertexai.language_models._language_models._TunableTextEmbeddingModelMixin, vertexai.language_models._language_models._CountTokensMixin

Creates a LanguageModel.

This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Vertex LLM. Example: "text-bison@001"

    • endpoint_name – Vertex Endpoint resource name for the model

count_tokens(prompts: List[str])

Counts the tokens and billable characters for a given prompt.

Note: this does not make a prediction request to the model; it only counts the tokens in the request.

  • Parameters

    prompts (List[str]) – Required. A list of prompts to ask the model. For example: [“What should I do today?”, “How’s it going?”]

  • Returns

    A CountTokensResponse object that contains the number of tokens in the text and the number of billable characters.
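
A short sketch (the field names total_tokens and total_billable_characters on the response are assumptions):

```python
from vertexai.language_models import TextEmbeddingModel

model = TextEmbeddingModel.from_pretrained("text-embedding-004")
token_info = model.count_tokens(["What should I do today?", "How's it going?"])
print(token_info.total_tokens, token_info.total_billable_characters)  # assumed fields
```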

classmethod deploy_tuned_model(tuned_model_name: str, machine_type: Optional[str] = None, accelerator: Optional[str] = None, accelerator_count: Optional[int] = None)

Loads the specified tuned language model.

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

get_embeddings(texts: List[Union[str, vertexai.language_models.TextEmbeddingInput]], *, auto_truncate: bool = True, output_dimensionality: Optional[int] = None)

Calculates embeddings for the given texts.

  • Parameters

    • texts – A list of texts or TextEmbeddingInput objects to embed.

    • auto_truncate – Whether to automatically truncate long texts. Default: True.

    • output_dimensionality – Optional dimensions of embeddings. Range: [1, 768]. Default: None.

  • Returns

    A list of TextEmbedding objects.
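
For example, a minimal embedding sketch:

```python
from vertexai.language_models import TextEmbeddingModel

model = TextEmbeddingModel.from_pretrained("text-embedding-004")
embeddings = model.get_embeddings(
    ["What is life?"],
    output_dimensionality=256,  # optional; must be in [1, 768]
)
for embedding in embeddings:
    print(len(embedding.values))  # one vector of 256 floats per input text
```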

async get_embeddings_async(texts: List[Union[str, vertexai.language_models.TextEmbeddingInput]], *, auto_truncate: bool = True, output_dimensionality: Optional[int] = None)

Asynchronously calculates embeddings for the given texts.

  • Parameters

    • texts – A list of texts or TextEmbeddingInput objects to embed.

    • auto_truncate – Whether to automatically truncate long texts. Default: True.

    • output_dimensionality – Optional dimensions of embeddings. Range: [1, 768]. Default: None.

  • Returns

    A list of TextEmbedding objects.

classmethod get_tuned_model(*args, **kwargs)

Loads the specified tuned language model.

list_tuned_model_names()

Lists the names of tuned models.

  • Returns

    A list of tuned models that can be used with the get_tuned_model method.

tune_model(*, training_data: Optional[str] = None, corpus_data: Optional[str] = None, queries_data: Optional[str] = None, test_data: Optional[str] = None, validation_data: Optional[str] = None, batch_size: Optional[int] = None, train_steps: Optional[int] = None, tuned_model_location: Optional[str] = None, model_display_name: Optional[str] = None, task_type: Optional[str] = None, machine_type: Optional[str] = None, accelerator: Optional[str] = None, accelerator_count: Optional[int] = None, output_dimensionality: Optional[int] = None, learning_rate_multiplier: Optional[float] = None)

Tunes a model based on training data.

This method launches and returns an asynchronous model tuning job. Usage:

```python
tuning_job = model.tune_model(...)
# ... do some other work
tuned_model = tuning_job.deploy_tuned_model()  # Blocks until tuning is complete
```

  • Parameters

    • training_data – URI pointing to training data in TSV format.

    • corpus_data – URI pointing to data in JSON lines format.

    • queries_data – URI pointing to data in JSON lines format.

    • test_data – URI pointing to data in TSV format.

    • validation_data – URI pointing to data in TSV format.

    • batch_size – The training batch size.

    • train_steps – The number of steps to perform model tuning. Must be greater than 30.

    • tuned_model_location – GCP location where the tuned model should be deployed.

    • model_display_name – Custom display name for the tuned model.

    • task_type – The task type expected to be used during inference. Valid values are DEFAULT, RETRIEVAL_QUERY, RETRIEVAL_DOCUMENT, SEMANTIC_SIMILARITY, CLASSIFICATION, CLUSTERING, FACT_VERIFICATION, and QUESTION_ANSWERING.

    • machine_type – The machine type to use for training. For information about selecting the machine type that matches the accelerator type and count you have selected, see https://cloud.google.com/compute/docs/gpus.

    • accelerator – The accelerator type to use for tuning, for example NVIDIA_TESLA_V100. For possible values, see https://cloud.google.com/vertex-ai/generative-ai/docs/models/tune-embeddings#using-accelerators.

    • accelerator_count – The number of accelerators to use when training. Using a greater number of accelerators may make training faster, but has no effect on quality.

    • output_dimensionality – The desired embedding dimension of your tuned model, up to 768. This is only supported for models text-embedding-004 and text-multilingual-embedding-002.

    • learning_rate_multiplier – A multiplier to apply to the recommended learning rate during tuning.

  • Returns

    A LanguageModelTuningJob object that represents the tuning job. Calling job.result() blocks until the tuning is complete and returns a LanguageModel object.

  • Raises

    • ValueError – If the provided parameter combinations or values are not supported.

    • RuntimeError – If the model does not support tuning

class vertexai.language_models.TextGenerationModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai.language_models._language_models._TextGenerationModel, vertexai.language_models._language_models._TunableTextModelMixin, vertexai.language_models._language_models._ModelWithBatchPredict, vertexai.language_models._language_models._RlhfTunableModelMixin

Creates a LanguageModel.

This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Vertex LLM. Example: "text-bison@001"

    • endpoint_name – Vertex Endpoint resource name for the model

batch_predict(*, dataset: Union[str, List[str]], destination_uri_prefix: str, model_parameters: Optional[Dict] = None)

Starts a batch prediction job with the model.

  • Parameters

    • dataset – The location of the dataset. gs:// and bq:// URIs are supported.

    • destination_uri_prefix – The URI prefix for the prediction. gs:// and bq:// URIs are supported.

    • model_parameters – Model-specific parameters to send to the model.

  • Returns

    A BatchPredictionJob object

  • Raises

    ValueError – When source or destination URI is not supported.

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

classmethod get_tuned_model(tuned_model_name: str)

Loads the specified tuned language model.

list_tuned_model_names()

Lists the names of tuned models.

  • Returns

    A list of tuned models that can be used with the get_tuned_model method.

predict(prompt: str, *, max_output_tokens: Optional[int] = 128, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None, candidate_count: Optional[int] = None, grounding_source: Optional[Union[vertexai.language_models._language_models.WebSearch, vertexai.language_models._language_models.VertexAISearch, vertexai.language_models._language_models.InlineContext]] = None, logprobs: Optional[int] = None, presence_penalty: Optional[float] = None, frequency_penalty: Optional[float] = None, logit_bias: Optional[Dict[str, float]] = None, seed: Optional[int] = None)

Gets a model response for a single prompt.

  • Parameters

    • prompt – Question to ask the model.

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1024].

    • temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0.

    • top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40.

    • top_p – The cumulative probability of parameter highest probability vocabulary tokens to keep for nucleus sampling. Range: [0, 1]. Default: 0.95.

    • stop_sequences – Customized stop sequences to stop the decoding process.

    • candidate_count – Number of response candidates to return.

    • grounding_source – If specified, grounding feature will be enabled using the grounding source. Default: None.

    • logprobs – Returns the top logprobs most likely candidate tokens with their log probabilities at each generation step. The chosen tokens and their log probabilities at each step are always returned. The chosen token may or may not be in the top logprobs most likely candidates. The minimum value for logprobs is 0, which means only the chosen tokens and their log probabilities are returned. The maximum value for logprobs is 5.

    • presence_penalty – Positive values penalize tokens that have already appeared in the generated text, increasing the likelihood of generating more diverse topics. Range: [-2.0, 2.0]

    • frequency_penalty – Positive values penalize tokens that repeatedly appear in the generated text, decreasing the likelihood of repeating the same content. Range: [-2.0, 2.0]

    • logit_bias – Mapping from token IDs (integers) to their bias values (floats). The bias values are added to the logits before sampling. Larger positive bias increases the probability of choosing the token. Smaller negative bias decreases the probability of choosing the token. Range: [-100.0, 100.0]

    • seed – The decoder generates random noise with a pseudo-random number generator (PRNG), and temperature * noise is added to the logits before sampling. The PRNG produces the same output for the same seed. If seed is not set, the generated noise is not deterministic; if seed is set, the generated noise is deterministic.

  • Returns

    A MultiCandidateTextGenerationResponse object that contains the text produced by the model.
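
For example, a minimal prediction sketch:

```python
from vertexai.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison@001")
response = model.predict(
    "Give me ten interview questions for the role of program manager.",
    max_output_tokens=256,
    temperature=0.2,
)
print(response.text)
```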

async predict_async(prompt: str, *, max_output_tokens: Optional[int] = 128, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None, candidate_count: Optional[int] = None, grounding_source: Optional[Union[vertexai.language_models._language_models.WebSearch, vertexai.language_models._language_models.VertexAISearch, vertexai.language_models._language_models.InlineContext]] = None, logprobs: Optional[int] = None, presence_penalty: Optional[float] = None, frequency_penalty: Optional[float] = None, logit_bias: Optional[Dict[str, float]] = None, seed: Optional[int] = None)

Asynchronously gets a model response for a single prompt.

  • Parameters

    • prompt – Question to ask the model.

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1024].

    • temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0.

    • top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40.

    • top_p – The cumulative probability of parameter highest probability vocabulary tokens to keep for nucleus sampling. Range: [0, 1]. Default: 0.95.

    • stop_sequences – Customized stop sequences to stop the decoding process.

    • candidate_count – Number of response candidates to return.

    • grounding_source – If specified, grounding feature will be enabled using the grounding source. Default: None.

    • logprobs – Returns the top logprobs most likely candidate tokens with their log probabilities at each generation step. The chosen tokens and their log probabilities at each step are always returned. The chosen token may or may not be in the top logprobs most likely candidates. The minimum value for logprobs is 0, which means only the chosen tokens and their log probabilities are returned. The maximum value for logprobs is 5.

    • presence_penalty – Positive values penalize tokens that have already appeared in the generated text, increasing the likelihood of generating more diverse topics. Range: [-2.0, 2.0]

    • frequency_penalty – Positive values penalize tokens that repeatedly appear in the generated text, decreasing the likelihood of repeating the same content. Range: [-2.0, 2.0]

    • logit_bias – Mapping from token IDs (integers) to their bias values (floats). The bias values are added to the logits before sampling. Larger positive bias increases the probability of choosing the token. Smaller negative bias decreases the probability of choosing the token. Range: [-100.0, 100.0]

    • seed – The decoder generates random noise with a pseudo-random number generator (PRNG), and temperature * noise is added to the logits before sampling. The PRNG produces the same output for the same seed. If seed is not set, the generated noise is not deterministic; if seed is set, the generated noise is deterministic.

  • Returns

    A MultiCandidateTextGenerationResponse object that contains the text produced by the model.

predict_streaming(prompt: str, *, max_output_tokens: int = 128, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None, logprobs: Optional[int] = None, presence_penalty: Optional[float] = None, frequency_penalty: Optional[float] = None, logit_bias: Optional[Dict[str, float]] = None, seed: Optional[int] = None)

Gets a streaming model response for a single prompt.

The result is a stream (generator) of partial responses.

  • Parameters

    • prompt – Question to ask the model.

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1024].

    • temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0.

    • top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40.

    • top_p – The cumulative probability of parameter highest probability vocabulary tokens to keep for nucleus sampling. Range: [0, 1]. Default: 0.95.

    • stop_sequences – Customized stop sequences to stop the decoding process.

    • logprobs – Returns the top logprobs most likely candidate tokens with their log probabilities at each generation step. The chosen tokens and their log probabilities at each step are always returned. The chosen token may or may not be in the top logprobs most likely candidates. The minimum value for logprobs is 0, which means only the chosen tokens and their log probabilities are returned. The maximum value for logprobs is 5.

    • presence_penalty – Positive values penalize tokens that have appeared in the generated text, thus increasing the possibility of generating more diversed topics. Range: [-2.0, 2.0]

    • frequency_penalty – Positive values penalize tokens that repeatedly appear in the generated text, decreasing the likelihood of repeating the same content. Range: [-2.0, 2.0]

    • logit_bias – Mapping from token IDs (integers) to their bias values (floats). The bias values are added to the logits before sampling. Larger positive bias increases the probability of choosing the token. Smaller negative bias decreases the probability of choosing the token. Range: [-100.0, 100.0]

    • seed – The decoder generates random noise with a pseudo-random number generator (PRNG), and temperature * noise is added to the logits before sampling. The PRNG produces the same output for the same seed, so setting seed makes the generated random noise deterministic; if seed is not set, the noise is non-deterministic.

  • Yields

    A stream of TextGenerationResponse objects that contain partial responses produced by the model.
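Example (a minimal sketch; the model name is illustrative). Each yielded object is a partial TextGenerationResponse:

from vertexai.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison@001")

# Print partial responses as they arrive.
for chunk in model.predict_streaming(
    "Explain nucleus sampling in one paragraph.",
    max_output_tokens=256,
    temperature=0.2,
):
    print(chunk.text, end="")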

async predict_streaming_async(prompt: str, *, max_output_tokens: int = 128, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None, logprobs: Optional[int] = None, presence_penalty: Optional[float] = None, frequency_penalty: Optional[float] = None, logit_bias: Optional[Dict[str, float]] = None, seed: Optional[int] = None)

Asynchronously gets a streaming model response for a single prompt.

The result is a stream (generator) of partial responses.

  • Parameters

    • prompt – Question to ask the model.

    • max_output_tokens – Max length of the output text in tokens. Range: [1, 1024].

    • temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0.

    • top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40.

    • top_p – The cumulative probability threshold for nucleus sampling: the smallest set of highest-probability vocabulary tokens whose probabilities sum to top_p is kept. Range: [0, 1]. Default: 0.95.

    • stop_sequences – Customized stop sequences to stop the decoding process.

    • logprobs – Returns the top logprobs most likely candidate tokens with their log probabilities at each generation step. The chosen tokens and their log probabilities at each step are always returned. The chosen token may or may not be in the top logprobs most likely candidates. The minimum value for logprobs is 0, which means only the chosen tokens and their log probabilities are returned. The maximum value for logprobs is 5.

    • presence_penalty – Positive values penalize tokens that have already appeared in the generated text, increasing the likelihood of generating more diverse topics. Range: [-2.0, 2.0]

    • frequency_penalty – Positive values penalize tokens that repeatedly appear in the generated text, decreasing the likelihood of repeating the same content. Range: [-2.0, 2.0]

    • logit_bias – Mapping from token IDs (integers) to their bias values (floats). The bias values are added to the logits before sampling. Larger positive bias increases the probability of choosing the token. Smaller negative bias decreases the probability of choosing the token. Range: [-100.0, 100.0]

    • seed – The decoder generates random noise with a pseudo-random number generator (PRNG), and temperature * noise is added to the logits before sampling. The PRNG produces the same output for the same seed, so setting seed makes the generated random noise deterministic; if seed is not set, the noise is non-deterministic.

  • Yields

    A stream of TextGenerationResponse objects that contain partial responses produced by the model.
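Example (a minimal sketch; the model name is illustrative). The async variant is consumed with async for:

import asyncio

from vertexai.language_models import TextGenerationModel

async def main():
    model = TextGenerationModel.from_pretrained("text-bison@001")
    # Consume partial responses as they arrive.
    async for chunk in model.predict_streaming_async(
        "Explain nucleus sampling in one paragraph.",
        max_output_tokens=256,
    ):
        print(chunk.text, end="")

asyncio.run(main())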

tune_model(training_data: Union[str, pandas.core.frame.DataFrame], *, train_steps: Optional[int] = None, learning_rate_multiplier: Optional[float] = None, tuning_job_location: Optional[str] = None, tuned_model_location: Optional[str] = None, model_display_name: Optional[str] = None, tuning_evaluation_spec: Optional[vertexai.language_models.TuningEvaluationSpec] = None, accelerator_type: Optional[Literal['TPU', 'GPU']] = None, max_context_length: Optional[str] = None)

Tunes a model based on training data.

This method launches and returns an asynchronous model tuning job. Usage:

tuning_job = model.tune_model(...)
# ... do some other work
tuned_model = tuning_job.get_tuned_model()  # Blocks until tuning is complete

  • Parameters

    • training_data – A Pandas DataFrame or a URI pointing to data in JSON lines format. The dataset schema is model-specific. See https://cloud.google.com/vertex-ai/docs/generative-ai/models/tune-models#dataset_format

    • train_steps – Number of training batches to tune on (batch size is 8 samples).

    • learning_rate_multiplier – Learning rate multiplier to use in tuning.

    • tuning_job_location – GCP location where the tuning job should be run.

    • tuned_model_location – GCP location where the tuned model should be deployed.

    • model_display_name – Custom display name for the tuned model.

    • tuning_evaluation_spec – Specification for the model evaluation during tuning.

    • accelerator_type – Type of accelerator to use. Can be “TPU” or “GPU”.

    • max_context_length – The max context length used for tuning. Can be either ‘8k’ or ‘32k’.

  • Returns

    A LanguageModelTuningJob object that represents the tuning job. Calling job.result() blocks until the tuning is complete and returns a LanguageModel object.

  • Raises

    • ValueError – If the “tuning_job_location” value is not supported

    • ValueError – If the “tuned_model_location” value is not supported

    • RuntimeError – If the model does not support tuning
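Example (a minimal sketch; the bucket paths, locations, and step count are illustrative):

from vertexai.language_models import TextGenerationModel, TuningEvaluationSpec

model = TextGenerationModel.from_pretrained("text-bison@001")

tuning_job = model.tune_model(
    training_data="gs://my-bucket/train.jsonl",  # hypothetical dataset path
    train_steps=100,
    tuning_job_location="europe-west4",
    tuned_model_location="us-central1",
    tuning_evaluation_spec=TuningEvaluationSpec(
        evaluation_data="gs://my-bucket/eval.jsonl",  # hypothetical dataset path
    ),
)
tuned_model = tuning_job.get_tuned_model()  # Blocks until tuning is complete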

tune_model_rlhf(*, prompt_data: Union[str, pandas.core.frame.DataFrame], preference_data: Union[str, pandas.core.frame.DataFrame], model_display_name: Optional[str] = None, prompt_sequence_length: Optional[int] = None, target_sequence_length: Optional[int] = None, reward_model_learning_rate_multiplier: Optional[float] = None, reinforcement_learning_rate_multiplier: Optional[float] = None, reward_model_train_steps: Optional[int] = None, reinforcement_learning_train_steps: Optional[int] = None, kl_coeff: Optional[float] = None, default_context: Optional[str] = None, tuning_job_location: Optional[str] = None, accelerator_type: Optional[Literal['TPU', 'GPU']] = None, tuning_evaluation_spec: Optional[vertexai.language_models.TuningEvaluationSpec] = None)

Tunes a model using reinforcement learning from human feedback.

This method launches and returns an asynchronous model tuning job. Usage:

tuning_job = model.tune_model_rlhf(...)
# ... do some other work
tuned_model = tuning_job.get_tuned_model()  # Blocks until tuning is complete

  • Parameters

    • prompt_data – A Pandas DataFrame or a URI pointing to data in JSON lines format. The dataset schema is model-specific. See https://cloud.google.com/vertex-ai/docs/generative-ai/models/tune-text-models-rlhf#prompt-dataset

    • preference_data – A Pandas DataFrame or a URI pointing to data in JSON lines format. The dataset schema is model-specific. See https://cloud.google.com/vertex-ai/docs/generative-ai/models/tune-text-models-rlhf#human-preference-dataset

    • model_display_name – Custom display name for the tuned model. If not provided, a default name will be created.

    • prompt_sequence_length – Maximum tokenized sequence length for input text. Higher values increase memory overhead. This value should be at most 8192. Default value is 512.

    • target_sequence_length – Maximum tokenized sequence length for target text. Higher values increase memory overhead. This value should be at most 1024. Default value is 64.

    • reward_model_learning_rate_multiplier – Constant used to adjust the base learning rate used when training a reward model. Multiply by a number > 1 to increase the magnitude of updates applied at each training step or multiply by a number < 1 to decrease the magnitude of updates. Default value is 1.0.

    • reinforcement_learning_rate_multiplier – Constant used to adjust the base learning rate used during reinforcement learning. Multiply by a number > 1 to increase the magnitude of updates applied at each training step or multiply by a number < 1 to decrease the magnitude of updates. Default value is 1.0.

    • reward_model_train_steps – Number of steps to use when training a reward model. Default value is 1000.

    • reinforcement_learning_train_steps – Number of reinforcement learning steps to perform when tuning a base model. Default value is 1000.

    • kl_coeff – Coefficient for KL penalty. This regularizes the policy model and penalizes if it diverges from its initial distribution. If set to 0, the reference language model is not loaded into memory. Default value is 0.1.

    • default_context – This field lets the model know what task to perform. Base models have been trained over a large set of varied instructions. You can give a simple and intuitive description of the task and the model will follow it, e.g. “Classify this movie review as positive or negative” or “Translate this sentence to Danish”. Do not specify this if your dataset already prepends the instruction to the inputs field.

    • tuning_job_location – GCP location where the tuning job should be run.

    • accelerator_type – Type of accelerator to use. Can be “TPU” or “GPU”.

    • tuning_evaluation_spec – Evaluation settings to use during tuning.

  • Returns

    A LanguageModelTuningJob object that represents the tuning job. Calling job.result() blocks until the tuning is complete and returns a LanguageModel object.

  • Raises

    • ValueError – If the “tuning_job_location” value is not supported

    • RuntimeError – If the model does not support tuning
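Example (a minimal sketch; the dataset paths, step counts, and location are illustrative):

from vertexai.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison@001")

tuning_job = model.tune_model_rlhf(
    prompt_data="gs://my-bucket/prompts.jsonl",          # hypothetical dataset path
    preference_data="gs://my-bucket/preferences.jsonl",  # hypothetical dataset path
    reward_model_train_steps=1000,
    reinforcement_learning_train_steps=1000,
    kl_coeff=0.1,
    tuning_job_location="europe-west4",
)
tuned_model = tuning_job.get_tuned_model()  # Blocks until tuning is complete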

class vertexai.language_models.TextGenerationResponse(text: str, _prediction_response: typing.Any, is_blocked: bool = False, errors: typing.Tuple[int] = (), safety_attributes: typing.Dict[str, float] = <factory>)

Bases: object

TextGenerationResponse represents a response of a language model.

text()

The generated text.

  • Type

    str

is_blocked()

Whether the request was blocked.

errors()

The error codes that indicate why the response was blocked. Learn more about safety errors here: https://cloud.google.com/vertex-ai/docs/generative-ai/learn/responsible-ai#safety_errors

  • Type

    Tuple[int]

safety_attributes()

Scores for safety attributes. Learn more about the safety attributes here: https://cloud.google.com/vertex-ai/docs/generative-ai/learn/responsible-ai#safety_attribute_descriptions

grounding_metadata()

Metadata for grounding.

  • Type

    Optional[vertexai.language_models._language_models.GroundingMetadata]

property raw_prediction_response: google.cloud.aiplatform.models.Prediction

Raw prediction response.
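Example (a minimal sketch of inspecting a response; assumes model is a TextGenerationModel, and the prompt is illustrative):

response = model.predict("Is the sky blue? Why?")
print(response.text)

if response.is_blocked:
    # The error codes explain why the response was blocked.
    print("Blocked:", response.errors)

print(response.safety_attributes)   # scores keyed by safety attribute name
print(response.grounding_metadata)  # None unless grounding was requested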

Classes for working with language models.

class vertexai.language_models._language_models._TunableModelMixin(model_id: str, endpoint_name: Optional[str] = None)

Model that can be tuned with supervised fine tuning (SFT).

Creates a LanguageModel.

This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Vertex LLM. Example: “text-bison@001”

    • endpoint_name – Vertex Endpoint resource name for the model

tune_model(training_data: Union[str, pandas.core.frame.DataFrame], *, corpus_data: Optional[str] = None, queries_data: Optional[str] = None, test_data: Optional[str] = None, validation_data: Optional[str] = None, batch_size: Optional[int] = None, train_steps: Optional[int] = None, learning_rate: Optional[float] = None, learning_rate_multiplier: Optional[float] = None, tuning_job_location: Optional[str] = None, tuned_model_location: Optional[str] = None, model_display_name: Optional[str] = None, tuning_evaluation_spec: Optional[vertexai.language_models.TuningEvaluationSpec] = None, default_context: Optional[str] = None, task_type: Optional[str] = None, machine_type: Optional[str] = None, accelerator: Optional[str] = None, accelerator_count: Optional[int] = None, accelerator_type: Optional[Literal['TPU', 'GPU']] = None, max_context_length: Optional[str] = None, output_dimensionality: Optional[int] = None)

Tunes a model based on training data.

This method launches and returns an asynchronous model tuning job. Usage:

tuning_job = model.tune_model(...)
# ... do some other work
tuned_model = tuning_job.get_tuned_model()  # Blocks until tuning is complete

  • Parameters

    • training_data – A URI to training data in TSV (for embedding models), or JSON lines format, or a Pandas DataFrame.

    • corpus_data – A URI to corpus in JSON lines format.

    • queries_data – A URI to queries in JSON lines format.

    • test_data – A URI to test data in TSV format.

    • validation_data – A URI to validation data in TSV format.

    • batch_size – Size of batch (for embedding models).

    • train_steps – Number of training batches to tune on (batch size is 8 samples).

    • learning_rate – Deprecated. Use learning_rate_multiplier instead. Learning rate to use in tuning.

    • learning_rate_multiplier – Learning rate multiplier to use in tuning.

    • tuning_job_location – GCP location where the tuning job should be run.

    • tuned_model_location – GCP location where the tuned model should be deployed.

    • model_display_name – Custom display name for the tuned model.

    • tuning_evaluation_spec – Specification for the model evaluation during tuning.

    • default_context – The context to use for all training samples by default.

    • task_type – Type of task. Can be “RETRIEVAL_QUERY”, “RETRIEVAL_DOCUMENT”, “SEMANTIC_SIMILARITY”, “CLASSIFICATION”, “CLUSTERING”, “QUESTION_ANSWERING”, or “FACT_VERIFICATION”.

    • machine_type – Machine type. E.g., “a2-highgpu-1g”. See also: https://cloud.google.com/vertex-ai/docs/training/configure-compute.

    • accelerator – Kind of accelerator. E.g., “NVIDIA_TESLA_A100”. See also: https://cloud.google.com/vertex-ai/docs/training/configure-compute.

    • accelerator_count – Count of accelerators.

    • accelerator_type – Type of accelerator to use. Type can be “TPU” or “GPU”. Type is ignored if accelerator is specified.

    • max_context_length – The max context length used for tuning. Can be either ‘8k’ or ‘32k’.

    • output_dimensionality – The output dimensionality of the tuned model, for text embedding tuning.

  • Returns

    A LanguageModelTuningJob object that represents the tuning job. Calling job.result() blocks until the tuning is complete and returns a LanguageModel object.

  • Raises

    • ValueError – If the “tuning_job_location” value is not supported

    • ValueError – If the “tuned_model_location” value is not supported

    • RuntimeError – If the model does not support tuning
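Example (a minimal embedding-tuning sketch, assuming a model that exposes this mixin's tune_model; paths and values are illustrative):

from vertexai.language_models import TextEmbeddingModel

model = TextEmbeddingModel.from_pretrained("textembedding-gecko@003")

tuning_job = model.tune_model(
    training_data="gs://my-bucket/train.tsv",     # hypothetical dataset path
    corpus_data="gs://my-bucket/corpus.jsonl",    # hypothetical dataset path
    queries_data="gs://my-bucket/queries.jsonl",  # hypothetical dataset path
    batch_size=32,
    train_steps=500,
    output_dimensionality=256,
)
tuned_model = tuning_job.get_tuned_model()  # Blocks until tuning is complete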

vertexai.preview.end_run(state: google.cloud.aiplatform_v1.types.execution.Execution.State = State.COMPLETE)

Ends the current experiment run.

aiplatform.start_run('my-run')
...
aiplatform.end_run()

vertexai.preview.get_experiment_df(experiment: Optional[str] = None, *, include_time_series: bool = True)

Returns a Pandas DataFrame of the parameters and metrics associated with one experiment.

Example:

aiplatform.init(experiment='exp-1')
aiplatform.start_run(run='run-1')
aiplatform.log_params({'learning_rate': 0.1})
aiplatform.log_metrics({'accuracy': 0.9})

aiplatform.start_run(run='run-2')
aiplatform.log_params({'learning_rate': 0.2})
aiplatform.log_metrics({'accuracy': 0.95})

aiplatform.get_experiment_df()

Will result in the following DataFrame:

experiment_name | run_name | param.learning_rate | metric.accuracy
exp-1           | run-1    | 0.1                 | 0.9
exp-1           | run-2    | 0.2                 | 0.95

  • Parameters

    • experiment (str) – Name of the Experiment to filter results. If not set, return results of current active experiment.

    • include_time_series (bool) – Optional. Whether or not to include time series metrics in df. Default is True. Setting to False will largely improve execution time and reduce quota contributing calls. Recommended when time series metrics are not needed or number of runs in Experiment is large. For time series metrics consider querying a specific run using get_time_series_data_frame.

  • Returns

    Pandas Dataframe of Experiment with metrics and parameters.

  • Raises

    • NotFound – If the experiment does not exist.

    • ValueError – If the given experiment is associated with a wrong schema.

vertexai.preview.log_classification_metrics(*, labels: Optional[List[str]] = None, matrix: Optional[List[List[int]]] = None, fpr: Optional[List[float]] = None, tpr: Optional[List[float]] = None, threshold: Optional[List[float]] = None, display_name: Optional[str] = None)

Creates an artifact for classification metrics and logs it to the ExperimentRun. Currently supports confusion matrix and ROC curve.

my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
classification_metrics = my_run.log_classification_metrics(
    display_name='my-classification-metrics',
    labels=['cat', 'dog'],
    matrix=[[9, 1], [1, 9]],
    fpr=[0.1, 0.5, 0.9],
    tpr=[0.1, 0.7, 0.9],
    threshold=[0.9, 0.5, 0.1],
)

  • Parameters

    • labels (List[str]) – Optional. List of label names for the confusion matrix. Must be set if ‘matrix’ is set.

    • matrix (List[List[int]]) – Optional. Values for the confusion matrix. Must be set if ‘labels’ is set.

    • fpr (List[float]) – Optional. List of false positive rates for the ROC curve. Must be set if ‘tpr’ or ‘thresholds’ is set.

    • tpr (List[float]) – Optional. List of true positive rates for the ROC curve. Must be set if ‘fpr’ or ‘thresholds’ is set.

    • threshold (List[float]) – Optional. List of thresholds for the ROC curve. Must be set if ‘fpr’ or ‘tpr’ is set.

    • display_name (str) – Optional. The user-defined name for the classification metric artifact.

  • Raises

    ValueError – If ‘labels’ and ‘matrix’ are not set together, if ‘labels’ and ‘matrix’ do not have the same length, if ‘fpr’, ‘tpr’ and ‘threshold’ are not set together, or if ‘fpr’, ‘tpr’ and ‘threshold’ do not have the same length.

vertexai.preview.log_metrics(metrics: Dict[str, Union[float, int, str]])

Logs single or multiple metrics with specified key and value pairs.

Metrics with the same key will be overwritten.

aiplatform.start_run('my-run', experiment='my-experiment')
aiplatform.log_metrics({'accuracy': 0.9, 'recall': 0.8})

vertexai.preview.log_params(params: Dict[str, Union[float, int, str]])

Logs single or multiple parameters with specified key and value pairs.

Parameters with the same key will be overwritten.

aiplatform.start_run('my-run')
aiplatform.log_params({'learning_rate': 0.1, 'dropout_rate': 0.2})

vertexai.preview.log_time_series_metrics(metrics: Dict[str, float], step: Optional[int] = None, wall_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None)

Logs time series metrics to this Experiment Run.

Requires that the experiment or experiment run have a backing Vertex Tensorboard resource.

my_tensorboard = aiplatform.Tensorboard(...)
aiplatform.init(experiment='my-experiment', experiment_tensorboard=my_tensorboard)
aiplatform.start_run('my-run')

# increments steps as logged
for i in range(10):
    aiplatform.log_time_series_metrics({'loss': loss})

# explicitly log steps
for i in range(10):
    aiplatform.log_time_series_metrics({'loss': loss}, step=i)

  • Parameters

    • metrics (Dict[str, Union[str, float]]) – Required. Dictionary where keys are metric names and values are metric values.

    • step (int) – Optional. Step index of this data point within the run.

      If not provided, the latest step amongst all time series metrics already logged will be used.

    • wall_time (timestamp_pb2.Timestamp) – Optional. Wall clock timestamp when this data point is generated by the end user.

      If not provided, this will be generated based on the value from time.time()

  • Raises

    RuntimeError – If current experiment run doesn’t have a backing Tensorboard resource.

vertexai.preview.start_run(run: str, *, tensorboard: Optional[Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]] = None, resume=False)

Starts a run in the current session.

aiplatform.init(experiment='my-experiment')
aiplatform.start_run('my-run')
aiplatform.log_params({'learning_rate': 0.1})

Use as context manager. Run will be ended on context exit:

aiplatform.init(experiment='my-experiment')
with aiplatform.start_run('my-run') as my_run:
    my_run.log_params({'learning_rate': 0.1})

Resume a previously started run:

aiplatform.init(experiment='my-experiment')
with aiplatform.start_run('my-run', resume=True) as my_run:
    my_run.log_params({'learning_rate': 0.1})

  • Parameters

    • run (str) – Required. Name of the run to assign current session with.

    • tensorboard (Union[str, tensorboard_resource.Tensorboard]) – Optional. Backing Tensorboard resource to enable and store time series metrics logged to this Experiment Run using log_time_series_metrics.

      If not provided, the default backing Tensorboard of the currently set experiment will be used.

    • resume (bool) – Whether to resume this run. If False a new run will be created.

  • Raises

    ValueError – If the experiment is not set, or if the run execution or metrics artifact was already created with a different schema.

Classes for working with language models.

class vertexai.preview.language_models.ChatMessage(content: str, author: str)

Bases: object

A chat message.

content()

Content of the message.

author()

Author of the message.

vertexai.preview.language_models.ChatModel()

alias of vertexai.preview.language_models._PreviewChatModel

vertexai.preview.language_models.ChatSession()

alias of vertexai.preview.language_models._PreviewChatSession

vertexai.preview.language_models.CodeChatModel()

alias of vertexai.preview.language_models._PreviewCodeChatModel

vertexai.preview.language_models.CodeChatSession()

alias of vertexai.preview.language_models._PreviewCodeChatSession

vertexai.preview.language_models.CodeGenerationModel()

alias of vertexai.preview.language_models._PreviewCodeGenerationModel

class vertexai.preview.language_models.CountTokensResponse(total_tokens: int, total_billable_characters: int, _count_tokens_response: Any)

Bases: object

The response from a count_tokens request.

total_tokens()

The total number of tokens counted across all instances passed to the request.

  • Type

    int

total_billable_characters()

The total number of billable characters counted across all instances from the request.
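Example (a minimal sketch, assuming the preview model classes expose a count_tokens method returning this response; the model name is illustrative):

from vertexai.preview.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison@001")

count = model.count_tokens(["How many tokens is this prompt?"])
print(count.total_tokens)
print(count.total_billable_characters)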

class vertexai.preview.language_models.EvaluationClassificationMetric(label_name: Optional[str] = None, auPrc: Optional[float] = None, auRoc: Optional[float] = None, logLoss: Optional[float] = None, confidenceMetrics: Optional[List[Dict[str, Any]]] = None, confusionMatrix: Optional[Dict[str, Any]] = None)

Bases: vertexai.language_models._evaluatable_language_models._EvaluationMetricBase

The evaluation metric response for classification metrics.

  • Parameters

    • label_name (str) – Optional. The name of the label associated with the metrics. This is only returned when only_summary_metrics=False is passed to evaluate().

    • auPrc (float) – Optional. The area under the precision recall curve.

    • auRoc (float) – Optional. The area under the receiver operating characteristic curve.

    • logLoss (float) – Optional. Logarithmic loss.

    • confidenceMetrics (List[Dict[str, **Any]]) – Optional. This is only returned when only_summary_metrics=False is passed to evaluate().

    • confusionMatrix (Dict[str, **Any]) – Optional. This is only returned when only_summary_metrics=False is passed to evaluate().

property input_dataset_paths: str

The Google Cloud Storage paths to the dataset used for this evaluation.

property task_name: str

The type of evaluation task for the evaluation.

class vertexai.preview.language_models.EvaluationMetric(bleu: Optional[float] = None, rougeLSum: Optional[float] = None)

Bases: vertexai.language_models._evaluatable_language_models._EvaluationMetricBase

The evaluation metric response.

  • Parameters

    • bleu (float) – Optional. BLEU (Bilingual Evaluation Understudy). Scores based on the sacrebleu implementation.

    • rougeLSum (float) – Optional. ROUGE-L (Longest Common Subsequence) scoring at summary level.

property input_dataset_paths: str

The Google Cloud Storage paths to the dataset used for this evaluation.

property task_name: str

The type of evaluation task for the evaluation.

class vertexai.preview.language_models.EvaluationQuestionAnsweringSpec(ground_truth_data: Union[List[str], str, pandas.core.frame.DataFrame], task_name: str = 'question-answering')

Bases: vertexai.language_models._evaluatable_language_models._EvaluationTaskSpec

Spec for question answering model evaluation tasks.

class vertexai.preview.language_models.EvaluationTextClassificationSpec(ground_truth_data: Union[List[str], str, pandas.core.frame.DataFrame], target_column_name: str, class_names: List[str])

Bases: vertexai.language_models._evaluatable_language_models._EvaluationTaskSpec

Spec for text classification model evaluation tasks.

  • Parameters

    • target_column_name (str) – Required. The label column in the dataset provided in ground_truth_data. Required when task_name=’text-classification’.

    • class_names (List[str]) – Required. A list of all possible label names in your dataset. Required when task_name=’text-classification’.
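Example (a minimal sketch, assuming the preview model exposes an evaluate method that accepts this spec; the dataset path and labels are illustrative):

from vertexai.preview.language_models import (
    EvaluationTextClassificationSpec,
    TextGenerationModel,
)

model = TextGenerationModel.from_pretrained("text-bison@001")

spec = EvaluationTextClassificationSpec(
    ground_truth_data="gs://my-bucket/ground_truth.jsonl",  # hypothetical path
    target_column_name="label",
    class_names=["positive", "negative"],
)
metrics = model.evaluate(task_spec=spec)
print(metrics.auPrc, metrics.auRoc)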

class vertexai.preview.language_models.EvaluationTextGenerationSpec(ground_truth_data: Union[List[str], str, pandas.core.frame.DataFrame])

Bases: vertexai.language_models._evaluatable_language_models._EvaluationTaskSpec

Spec for text generation model evaluation tasks.

class vertexai.preview.language_models.EvaluationTextSummarizationSpec(ground_truth_data: Union[List[str], str, pandas.core.frame.DataFrame], task_name: str = 'summarization')

Bases: vertexai.language_models._evaluatable_language_models._EvaluationTaskSpec

Spec for text summarization model evaluation tasks.

class vertexai.preview.language_models.InputOutputTextPair(input_text: str, output_text: str)

Bases: object

InputOutputTextPair represents a pair of input and output texts.

class vertexai.preview.language_models.TextEmbedding(values: List[float], statistics: Optional[vertexai.language_models.TextEmbeddingStatistics] = None, _prediction_response: Optional[google.cloud.aiplatform.models.Prediction] = None)

Bases: object

Text embedding vector and statistics.

class vertexai.preview.language_models.TextEmbeddingInput(text: str, task_type: Optional[str] = None, title: Optional[str] = None)

Bases: object

Structural text embedding input.

text()

The main text content to embed.

task_type()

The name of the downstream task the embeddings will be used for. Valid values:

  • RETRIEVAL_QUERY – Specifies the given text is a query in a search/retrieval setting.

  • RETRIEVAL_DOCUMENT – Specifies the given text is a document from the corpus being searched.

  • SEMANTIC_SIMILARITY – Specifies the given text will be used for Semantic Textual Similarity (STS).

  • CLASSIFICATION – Specifies that the given text will be classified.

  • CLUSTERING – Specifies that the embeddings will be used for clustering.

  • QUESTION_ANSWERING – Specifies that the embeddings will be used for question answering.

  • FACT_VERIFICATION – Specifies that the embeddings will be used for fact verification.

  • CODE_RETRIEVAL_QUERY – Specifies that the embeddings will be used for code retrieval.

  • Type

    Optional[str]

title()

Optional identifier of the text content.

  • Type

    Optional[str]
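Example (a minimal sketch; the model name is illustrative, and it assumes get_embeddings accepts TextEmbeddingInput instances):

from vertexai.preview.language_models import TextEmbeddingInput, TextEmbeddingModel

model = TextEmbeddingModel.from_pretrained("textembedding-gecko@003")

inputs = [
    TextEmbeddingInput(
        text="What is a quantum computer?",
        task_type="RETRIEVAL_QUERY",
    ),
    TextEmbeddingInput(
        text="Quantum computers perform computation using qubits.",
        task_type="RETRIEVAL_DOCUMENT",
        title="Intro to quantum computing",  # optional identifier
    ),
]
embeddings = model.get_embeddings(inputs)
for embedding in embeddings:
    print(len(embedding.values))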

vertexai.preview.language_models.TextEmbeddingModel()

alias of vertexai.preview.language_models._PreviewTextEmbeddingModel

vertexai.preview.language_models.TextGenerationModel()

alias of vertexai.preview.language_models._PreviewTextGenerationModel

class vertexai.preview.language_models.TextGenerationResponse(text: str, _prediction_response: typing.Any, is_blocked: bool = False, errors: typing.Tuple[int] = (), safety_attributes: typing.Dict[str, float] = <factory>)

Bases: object

TextGenerationResponse represents a response of a language model.

text()

The generated text.

  • Type

    str

is_blocked()

Whether the request was blocked.

errors()

The error codes that indicate why the response was blocked. Learn more about safety errors here: https://cloud.google.com/vertex-ai/docs/generative-ai/learn/responsible-ai#safety_errors

  • Type

    Tuple[int]

safety_attributes()

Scores for safety attributes. Learn more about the safety attributes here: https://cloud.google.com/vertex-ai/docs/generative-ai/learn/responsible-ai#safety_attribute_descriptions

grounding_metadata()

Metadata for grounding.

  • Type

    Optional[vertexai.language_models._language_models.GroundingMetadata]

property raw_prediction_response: google.cloud.aiplatform.models.Prediction

Raw prediction response.

class vertexai.preview.language_models.TuningEvaluationSpec(evaluation_data: Optional[str] = None, evaluation_interval: Optional[int] = None, enable_early_stopping: Optional[bool] = None, enable_checkpoint_selection: Optional[bool] = None, tensorboard: Optional[Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]] = None)

Bases: object

Specification for model evaluation to perform during tuning.

evaluation_data()

GCS URI of the evaluation dataset. This will run model evaluation as part of the tuning job.

  • Type

    Optional[str]

evaluation_interval()

The evaluation will run at every evaluation_interval tuning steps. Default: 20.

  • Type

    Optional[int]

enable_early_stopping()

If True, the tuning may stop early before completing all the tuning steps. Requires evaluation_data.

  • Type

    Optional[bool]

enable_checkpoint_selection()

If set to True, the tuning process returns the best model checkpoint (based on model evaluation). If set to False, the latest model checkpoint is returned. If unset, the selection is only enabled for *-bison@001 models.

  • Type

    Optional[bool]

tensorboard()

The Vertex Tensorboard to write the evaluation metrics to. The Tensorboard must be in the same location as the tuning job.

  • Type

    Optional[Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]]
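Example (a minimal sketch of constructing the spec; the paths and Tensorboard resource name are illustrative):

from vertexai.preview.language_models import TuningEvaluationSpec

eval_spec = TuningEvaluationSpec(
    evaluation_data="gs://my-bucket/eval.jsonl",  # hypothetical dataset path
    evaluation_interval=20,
    enable_early_stopping=True,
    tensorboard="projects/123/locations/us-central1/tensorboards/456",
)
# Pass eval_spec as tuning_evaluation_spec to tune_model(...).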

Classes for working with vision models.

class vertexai.vision_models.GeneratedImage(image_bytes: Optional[bytes], generation_parameters: Dict[str, Any], gcs_uri: Optional[str] = None)

Bases: vertexai.vision_models.Image

Generated image.

Creates a GeneratedImage object.

  • Parameters

    • image_bytes – Image file bytes. Image can be in PNG or JPEG format.

    • generation_parameters – Image generation parameter values.

    • gcs_uri – Image file Google Cloud Storage uri.

property generation_parameters()

Image generation parameters as a dictionary.

static load_from_file(location: str)

Loads image from file.

  • Parameters

    location – Local path from where to load the image.

  • Returns

    Loaded image as a GeneratedImage object.

save(location: str, include_generation_parameters: bool = True)

Saves image to a file.

  • Parameters

    • location – Local path where to save the image.

    • include_generation_parameters – Whether to include the image generation parameters in the image’s EXIF metadata.

show()

Shows the image.

This method only works when in a notebook environment.

class vertexai.vision_models.Image(image_bytes: Optional[bytes] = None, gcs_uri: Optional[str] = None)

Bases: object

Image.

Creates an Image object.

  • Parameters

    • image_bytes – Image file bytes. Image can be in PNG or JPEG format.

    • gcs_uri – Image URI in Google Cloud Storage.

static load_from_file(location: str)

Loads image from local file or Google Cloud Storage.

  • Parameters

    location – Local path or Google Cloud Storage uri from where to load the image.

  • Returns

    Loaded image as an Image object.

save(location: str)

Saves image to a file.

  • Parameters

    location – Local path where to save the image.

show()

Shows the image.

This method only works when in a notebook environment.

class vertexai.vision_models.ImageCaptioningModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai._model_garden._model_garden_models._ModelGardenModel

Generates captions from an image.

Examples:

model = ImageCaptioningModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)

Creates a _ModelGardenModel.

This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Model Garden Model. Example: “text-bison@001”

    • endpoint_name – Vertex Endpoint resource name for the model

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

get_captions(image: vertexai.vision_models.Image, *, number_of_results: int = 1, language: str = 'en', output_gcs_uri: Optional[str] = None)

Generates captions for a given image.

  • Parameters

    • image – The image to get captions for. Size limit: 10 MB.

    • number_of_results – Number of captions to produce. Range: 1-3.

    • language – Language to use for captions. Supported languages: “en”, “fr”, “de”, “it”, “es”

    • output_gcs_uri – Google Cloud Storage uri to store the captioned images.

  • Returns

    A list of image caption strings.

class vertexai.vision_models.ImageGenerationModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai._model_garden._model_garden_models._ModelGardenModel

Generates images from text prompt.

Examples:

model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(
    prompt="Astronaut riding a horse",
    # Optional:
    number_of_images=1,
    seed=0,
)
response[0].show()
response[0].save("image1.png")

Creates a _ModelGardenModel.

This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Model Garden Model. Example: “text-bison@001”

    • endpoint_name – Vertex Endpoint resource name for the model

edit_image(*, prompt: str, base_image: vertexai.vision_models.Image, mask: Optional[vertexai.vision_models.Image] = None, negative_prompt: Optional[str] = None, number_of_images: int = 1, guidance_scale: Optional[float] = None, edit_mode: Optional[Literal['inpainting-insert', 'inpainting-remove', 'outpainting', 'product-image']] = None, mask_mode: Optional[Literal['background', 'foreground', 'semantic']] = None, segmentation_classes: Optional[List[str]] = None, mask_dilation: Optional[float] = None, product_position: Optional[Literal['fixed', 'reposition']] = None, output_mime_type: Optional[Literal['image/png', 'image/jpeg']] = None, compression_quality: Optional[float] = None, language: Optional[str] = None, seed: Optional[int] = None, output_gcs_uri: Optional[str] = None, safety_filter_level: Optional[Literal['block_most', 'block_some', 'block_few', 'block_fewest']] = None, person_generation: Optional[Literal['dont_allow', 'allow_adult', 'allow_all']] = None)

Edits an existing image based on text prompt.

  • Parameters

    • prompt – Text prompt for the image.

    • base_image – Base image from which to generate the new image.

    • mask – Mask for the base image.

    • negative_prompt – A description of what you want to omit in the generated images.

    • number_of_images – Number of images to generate. Range: 1..8.

    • guidance_scale – Controls the strength of the prompt. Suggested values are: * 0-9 (low strength) * 10-20 (medium strength) * 21+ (high strength)

    • edit_mode – Describes the editing mode for the request. Supported values are: * inpainting-insert: fills the mask area based on the text prompt (requires mask and text) * inpainting-remove: removes the object(s) in the mask area. (requires mask) * outpainting: extend the image based on the mask area. (Requires mask) * product-image: Changes the background for the predominant product or subject in the image

    • mask_mode – Solicits generation of the mask (v/s providing mask as an input). Supported values are: * background: Automatically generates a mask for all regions except the primary subject(s) of the image * foreground: Automatically generates a mask for the primary subjects(s) of the image. * semantic: Segment one or more of the segmentation classes using class ID

    • segmentation_classes – List of class IDs for segmentation. Max of 5 IDs

    • mask_dilation – Defines the dilation percentage of the mask provided. Float between 0 and 1. Defaults to 0.03

    • product_position – Defines whether the product should stay fixed or be repositioned. Supported Values: * fixed: Fixed position * reposition: Can be moved (default)

    • output_mime_type – Which image format should the output be saved as. Supported values: * image/png: Save as a PNG image * image/jpeg: Save as a JPEG image

    • compression_quality – Level of compression if the output mime type is selected to be image/jpeg. Float between 0 and 100.

    • language – Language of the text prompt for the image. Default: None. Supported values are “en” for English, “hi” for Hindi, “ja” for Japanese, “ko” for Korean, and “auto” for automatic language detection.

    • seed – Image generation random seed.

    • output_gcs_uri – Google Cloud Storage uri to store the edited images.

    • safety_filter_level – Adds a filter level to Safety filtering. Supported values are: * “block_most” : Strongest filtering level, most strict blocking * “block_some” : Block some problematic prompts and responses * “block_few” : Block fewer problematic prompts and responses * “block_fewest” : Block very few problematic prompts and responses

    • person_generation – Allow generation of people by the model Supported values are: * “dont_allow” : Block generation of people * “allow_adult” : Generate adults, but not children * “allow_all” : Generate adults and children

  • Returns

    An ImageGenerationResponse object.
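Example (a minimal inpainting sketch; the file names are illustrative, and model is assumed to be created as in the class example above):

from vertexai.vision_models import Image

base = Image.load_from_file("room.png")       # hypothetical file
mask = Image.load_from_file("room-mask.png")  # hypothetical file

response = model.edit_image(
    prompt="A potted plant on the table",
    base_image=base,
    mask=mask,
    edit_mode="inpainting-insert",
    number_of_images=1,
)
response[0].save("room-edited.png")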

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

generate_images(prompt: str, *, negative_prompt: Optional[str] = None, number_of_images: int = 1, aspect_ratio: Optional[Literal['1:1', '9:16', '16:9', '4:3', '3:4']] = None, guidance_scale: Optional[float] = None, language: Optional[str] = None, seed: Optional[int] = None, output_gcs_uri: Optional[str] = None, add_watermark: Optional[bool] = True, safety_filter_level: Optional[Literal['block_most', 'block_some', 'block_few', 'block_fewest']] = None, person_generation: Optional[Literal['dont_allow', 'allow_adult', 'allow_all']] = None)

Generates images from text prompt.

  • Parameters

    • prompt – Text prompt for the image.

    • negative_prompt – A description of what you want to omit in the generated images.

    • number_of_images – Number of images to generate. Range: 1..8.

    • aspect_ratio – Changes the aspect ratio of the generated image Supported values are: * “1:1” : 1:1 aspect ratio * “9:16” : 9:16 aspect ratio * “16:9” : 16:9 aspect ratio * “4:3” : 4:3 aspect ratio * “3:4” : 3:4 aspect_ratio

    • guidance_scale – Controls the strength of the prompt. Suggested values are: * 0-9 (low strength) * 10-20 (medium strength) * 21+ (high strength)

    • language – Language of the text prompt for the image. Default: None. Supported values are “en” for English, “hi” for Hindi, “ja” for Japanese, “ko” for Korean, and “auto” for automatic language detection.

    • seed – Image generation random seed.

    • output_gcs_uri – Google Cloud Storage uri to store the generated images.

    • add_watermark – Add a watermark to the generated image

    • safety_filter_level – Adds a filter level to Safety filtering. Supported values are: * “block_most” : Strongest filtering level, most strict blocking * “block_some” : Block some problematic prompts and responses * “block_few” : Block fewer problematic prompts and responses * “block_fewest” : Block very few problematic prompts and responses

    • person_generation – Allow generation of people by the model Supported values are: * “dont_allow” : Block generation of people * “allow_adult” : Generate adults, but not children * “allow_all” : Generate adults and children

  • Returns

    An ImageGenerationResponse object.

upscale_image(image: Union[vertexai.vision_models.Image, vertexai.preview.vision_models.GeneratedImage], new_size: Optional[int] = 2048, upscale_factor: Optional[Literal['x2', 'x4']] = None, output_mime_type: Optional[Literal['image/png', 'image/jpeg']] = 'image/png', output_compression_quality: Optional[int] = None, output_gcs_uri: Optional[str] = None)

Upscales an image.

This supports upscaling images generated through the generate_images() method, or upscaling a new image.

Examples:

# Upscale a generated image
model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(
    prompt="Astronaut riding a horse",
)
model.upscale_image(image=response[0])

# Upscale a new 1024x1024 image
my_image = Image.load_from_file("my-image.png")
model.upscale_image(image=my_image)

# Upscale a new arbitrary sized image using a x2 or x4 upscaling factor
my_image = Image.load_from_file("my-image.png")
model.upscale_image(image=my_image, upscale_factor="x2")

# Upscale an image and get the result in JPEG format
my_image = Image.load_from_file("my-image.png")
model.upscale_image(
    image=my_image,
    output_mime_type="image/jpeg",
    output_compression_quality=90,
)

  • Parameters

    • image (Union[GeneratedImage, **Image]) – Required. The generated image to upscale.

    • new_size (int) – The size of the biggest dimension of the upscaled image. Only 2048 and 4096 are currently supported. Results in a 2048x2048 or 4096x4096 image. Defaults to 2048 if not provided.

    • upscale_factor – The upscaling factor. Supported values are “x2” and “x4”. Defaults to None.

    • output_mime_type – The mime type of the output image. Supported values are “image/png” and “image/jpeg”. Defaults to “image/png”.

    • output_compression_quality – The compression quality of the output image as an int (0-100). Only applicable if the output mime type is “image/jpeg”. Defaults to None.

    • output_gcs_uri – Google Cloud Storage uri to store the upscaled images.

  • Returns

    An Image object.

class vertexai.vision_models.ImageGenerationResponse(images: List[GeneratedImage])

Bases: object

Image generation response.

images()

The list of generated images.

  • Type

    List[vertexai.preview.vision_models.GeneratedImage]

__getitem__(idx: int)

Gets the generated image by index.

__iter__()

Iterates through the generated images.

class vertexai.vision_models.ImageQnAModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai._model_garden._model_garden_models._ModelGardenModel

Answers questions about an image.

Examples:

model = ImageQnAModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)

Creates a _ModelGardenModel.

This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Model Garden Model. Example: “text-bison@001”

    • endpoint_name – Vertex Endpoint resource name for the model

ask_question(image: vertexai.vision_models.Image, question: str, *, number_of_results: int = 1)

Answers questions about an image.

  • Parameters

    • image – The image to ask the question about. Size limit: 10 MB.

    • question – Question to ask about the image.

    • number_of_results – Number of answers to produce. Range: 1-3.

  • Returns

    A list of answers.

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

class vertexai.vision_models.ImageTextModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai.vision_models.ImageCaptioningModel, vertexai.vision_models.ImageQnAModel

Generates text from images.

Examples:

model = ImageTextModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")

captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)

answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)

Creates a _ModelGardenModel.

This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Model Garden Model. Example: “text-bison@001”

    • endpoint_name – Vertex Endpoint resource name for the model

ask_question(image: vertexai.vision_models.Image, question: str, *, number_of_results: int = 1)

Answers questions about an image.

  • Parameters

    • image – The image to ask the question about. Size limit: 10 MB.

    • question – Question to ask about the image.

    • number_of_results – Number of answers to produce. Range: 1-3.

  • Returns

    A list of answers.

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

get_captions(image: vertexai.vision_models.Image, *, number_of_results: int = 1, language: str = 'en', output_gcs_uri: Optional[str] = None)

Generates captions for a given image.

  • Parameters

    • image – The image to get captions for. Size limit: 10 MB.

    • number_of_results – Number of captions to produce. Range: 1-3.

    • language – Language to use for captions. Supported languages: “en”, “fr”, “de”, “it”, “es”

    • output_gcs_uri – Google Cloud Storage uri to store the captioned images.

  • Returns

    A list of image caption strings.

class vertexai.vision_models.MultiModalEmbeddingModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai._model_garden._model_garden_models._ModelGardenModel

Generates embedding vectors from images and videos.

Examples:

model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
image = Image.load_from_file("image.png")
video = Video.load_from_file("video.mp4")

embeddings = model.get_embeddings(
    image=image,
    video=video,
    contextual_text="Hello world",
)
image_embedding = embeddings.image_embedding
video_embeddings = embeddings.video_embeddings
text_embedding = embeddings.text_embedding

Creates a _ModelGardenModel.

This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Model Garden Model. Example: “text-bison@001”

    • endpoint_name – Vertex Endpoint resource name for the model

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

get_embeddings(image: Optional[vertexai.vision_models.Image] = None, video: Optional[vertexai.vision_models.Video] = None, contextual_text: Optional[str] = None, dimension: Optional[int] = None, video_segment_config: Optional[vertexai.vision_models.VideoSegmentConfig] = None)

Gets embedding vectors from the provided image.

  • Parameters

    • image (Image) – Optional. The image to generate embeddings for. One of image, video, or contextual_text is required.

    • video (Video) – Optional. The video to generate embeddings for. One of image, video or contextual_text is required.

    • contextual_text (str) – Optional. Contextual text for your input image or video. If provided, the model will also generate an embedding vector for the provided contextual text. The returned image and text embedding vectors are in the same semantic space with the same dimensionality, and the vectors can be used interchangeably for use cases like searching image by text or searching text by image. One of image, video or contextual_text is required.

    • dimension (int) – Optional. The number of embedding dimensions. Lower values offer decreased latency when using these embeddings for subsequent tasks, while higher values offer better accuracy. Available values: 128, 256, 512, and 1408 (default).

    • video_segment_config (VideoSegmentConfig) – Optional. The specific video segments (in seconds) the embeddings are generated for.

  • Returns

    The image and text embedding vectors.

  • Return type

    MultiModalEmbeddingResponse
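Example (a minimal video-embedding sketch; the URI and segment values are illustrative):

from vertexai.vision_models import MultiModalEmbeddingModel, Video, VideoSegmentConfig

model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
video = Video.load_from_file("gs://my-bucket/video.mp4")  # hypothetical URI

embeddings = model.get_embeddings(
    video=video,
    contextual_text="city traffic at night",
    video_segment_config=VideoSegmentConfig(
        start_offset_sec=0,
        end_offset_sec=60,
        interval_sec=10,
    ),
)
for video_embedding in embeddings.video_embeddings:
    print(video_embedding.start_offset_sec, video_embedding.end_offset_sec)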

class vertexai.vision_models.MultiModalEmbeddingResponse(_prediction_response: Any, image_embedding: Optional[List[float]] = None, video_embeddings: Optional[List[vertexai.vision_models.VideoEmbedding]] = None, text_embedding: Optional[List[float]] = None)

Bases: object

The multimodal embedding response.

image_embedding()

Optional. The embedding vector generated from your image.

video_embeddings()

Optional. The embedding vectors generated from your video.

  • Type

    List[VideoEmbedding]

text_embedding()

Optional. The embedding vector generated from the contextual text provided for your image or video.

class vertexai.vision_models.Video(video_bytes: Optional[bytes] = None, gcs_uri: Optional[str] = None)

Bases: object

Video.

Creates a Video object.

  • Parameters

    • video_bytes – Video file bytes. Video can be in AVI, FLV, MKV, MOV, MP4, MPEG, MPG, WEBM, and WMV formats.

    • gcs_uri – Video URI in Google Cloud Storage.

static load_from_file(location: str)

Loads video from local file or Google Cloud Storage.

  • Parameters

    location – Local path or Google Cloud Storage uri from where to load the video.

  • Returns

    Loaded video as a Video object.

save(location: str)

Saves video to a file.

  • Parameters

    location – Local path where to save the video.

class vertexai.vision_models.VideoEmbedding(start_offset_sec: int, end_offset_sec: int, embedding: List[float])

Bases: object

Embeddings generated from video with offset times.

Creates a VideoEmbedding object.

  • Parameters

    • start_offset_sec – Start time offset (in seconds) of generated embeddings.

    • end_offset_sec – End time offset (in seconds) of generated embeddings.

    • embedding – Generated embedding for interval.

class vertexai.vision_models.VideoSegmentConfig(start_offset_sec: int = 0, end_offset_sec: int = 120, interval_sec: int = 16)

Bases: object

The specific video segments (in seconds) the embeddings are generated for.

Creates a VideoSegmentConfig object.

  • Parameters

    • start_offset_sec – Start time offset (in seconds) to generate embeddings for.

    • end_offset_sec – End time offset (in seconds) to generate embeddings for.

    • interval_sec – Interval to divide video for generated embeddings.

Classes for working with vision models.

class vertexai.preview.vision_models.GeneratedImage(image_bytes: Optional[bytes], generation_parameters: Dict[str, Any], gcs_uri: Optional[str] = None)

Bases: vertexai.vision_models.Image

Generated image.

Creates a GeneratedImage object.

  • Parameters

    • image_bytes – Image file bytes. Image can be in PNG or JPEG format.

    • generation_parameters – Image generation parameter values.

    • gcs_uri – Image file Google Cloud Storage uri.

property generation_parameters()

Image generation parameters as a dictionary.

static load_from_file(location: str)

Loads image from file.

  • Parameters

    location – Local path from where to load the image.

  • Returns

    Loaded image as a GeneratedImage object.

save(location: str, include_generation_parameters: bool = True)

Saves image to a file.

  • Parameters

    • location – Local path where to save the image.

    • include_generation_parameters – Whether to include the image generation parameters in the image’s EXIF metadata.

show()

Shows the image.

This method only works when in a notebook environment.

class vertexai.preview.vision_models.Image(image_bytes: Optional[bytes] = None, gcs_uri: Optional[str] = None)

Bases: object

Image.

Creates an Image object.

  • Parameters

    • image_bytes – Image file bytes. Image can be in PNG or JPEG format.

    • gcs_uri – Image URI in Google Cloud Storage.

static load_from_file(location: str)

Loads image from local file or Google Cloud Storage.

  • Parameters

    location – Local path or Google Cloud Storage uri from where to load the image.

  • Returns

    Loaded image as an Image object.

save(location: str)

Saves image to a file.

  • Parameters

    location – Local path where to save the image.

show()

Shows the image.

This method only works when in a notebook environment.

class vertexai.preview.vision_models.ImageCaptioningModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai._model_garden._model_garden_models._ModelGardenModel

Generates captions from an image.

Examples:

model = ImageCaptioningModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)

Creates a _ModelGardenModel.

This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Model Garden Model. Example: “text-bison@001”

    • endpoint_name – Vertex Endpoint resource name for the model

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

get_captions(image: vertexai.vision_models.Image, *, number_of_results: int = 1, language: str = 'en', output_gcs_uri: Optional[str] = None)

Generates captions for a given image.

  • Parameters

    • image – The image to get captions for. Size limit: 10 MB.

    • number_of_results – Number of captions to produce. Range: 1-3.

    • language – Language to use for captions. Supported languages: “en”, “fr”, “de”, “it”, “es”

    • output_gcs_uri – Google Cloud Storage uri to store the captioned images.

  • Returns

    A list of image caption strings.

class vertexai.preview.vision_models.ImageGenerationModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai._model_garden._model_garden_models._ModelGardenModel

Generates images from text prompt.

Examples:

model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(
    prompt="Astronaut riding a horse",
    # Optional:
    number_of_images=1,
    seed=0,
)
response[0].show()
response[0].save("image1.png")

Creates a _ModelGardenModel.

This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Model Garden Model. Example: “text-bison@001”

    • endpoint_name – Vertex Endpoint resource name for the model

edit_image(*, prompt: str, base_image: vertexai.vision_models.Image, mask: Optional[vertexai.vision_models.Image] = None, negative_prompt: Optional[str] = None, number_of_images: int = 1, guidance_scale: Optional[float] = None, edit_mode: Optional[Literal['inpainting-insert', 'inpainting-remove', 'outpainting', 'product-image']] = None, mask_mode: Optional[Literal['background', 'foreground', 'semantic']] = None, segmentation_classes: Optional[List[str]] = None, mask_dilation: Optional[float] = None, product_position: Optional[Literal['fixed', 'reposition']] = None, output_mime_type: Optional[Literal['image/png', 'image/jpeg']] = None, compression_quality: Optional[float] = None, language: Optional[str] = None, seed: Optional[int] = None, output_gcs_uri: Optional[str] = None, safety_filter_level: Optional[Literal['block_most', 'block_some', 'block_few', 'block_fewest']] = None, person_generation: Optional[Literal['dont_allow', 'allow_adult', 'allow_all']] = None)

Edits an existing image based on text prompt.

  • Parameters

    • prompt – Text prompt for the image.

    • base_image – Base image from which to generate the new image.

    • mask – Mask for the base image.

    • negative_prompt – A description of what you want to omit in the generated images.

    • number_of_images – Number of images to generate. Range: 1..8.

    • guidance_scale – Controls the strength of the prompt. Suggested values are: * 0-9 (low strength) * 10-20 (medium strength) * 21+ (high strength)

    • edit_mode – Describes the editing mode for the request. Supported values are: * inpainting-insert: fills the mask area based on the text prompt (requires mask and text) * inpainting-remove: removes the object(s) in the mask area (requires mask) * outpainting: extends the image based on the mask area (requires mask) * product-image: changes the background for the predominant product or subject in the image

    • mask_mode – Requests automatic generation of the mask (as opposed to providing the mask as an input). Supported values are: * background: Automatically generates a mask for all regions except the primary subject(s) of the image * foreground: Automatically generates a mask for the primary subject(s) of the image * semantic: Segments one or more of the segmentation classes using class ID

    • segmentation_classes – List of class IDs for segmentation. Max of 5 IDs

    • mask_dilation – Defines the dilation percentage of the mask provided. Float between 0 and 1. Defaults to 0.03

    • product_position – Defines whether the product should stay fixed or be repositioned. Supported Values: * fixed: Fixed position * reposition: Can be moved (default)

    • output_mime_type – The image format to save the output as. Supported values: * image/png: Save as a PNG image * image/jpeg: Save as a JPEG image

    • compression_quality – Level of compression if the output MIME type is image/jpeg. Float between 0 and 100.

    • language – Language of the text prompt for the image. Default: None. Supported values are “en” for English, “hi” for Hindi, “ja” for Japanese, “ko” for Korean, and “auto” for automatic language detection.

    • seed – Image generation random seed.

    • output_gcs_uri – Google Cloud Storage uri to store the edited images.

    • safety_filter_level – Adds a filter level to Safety filtering. Supported values are: * “block_most” : Strongest filtering level, most strict blocking * “block_some” : Block some problematic prompts and responses * “block_few” : Block fewer problematic prompts and responses * “block_fewest” : Block very few problematic prompts and responses

    • person_generation – Allow generation of people by the model. Supported values are: * “dont_allow” : Block generation of people * “allow_adult” : Generate adults, but not children * “allow_all” : Generate adults and children

  • Returns

    An ImageGenerationResponse object.
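
A minimal inpainting sketch based on the parameters above; the file names, prompt, and mask are illustrative:

model = ImageGenerationModel.from_pretrained("imagegeneration@002")
base_image = Image.load_from_file("room.png")  # illustrative local file
mask_image = Image.load_from_file("room_mask.png")  # illustrative mask file
response = model.edit_image(
    prompt="A vase of flowers on the table",
    base_image=base_image,
    mask=mask_image,
    edit_mode="inpainting-insert",
    number_of_images=1,
)
response[0].save("edited_room.png")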

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

generate_images(prompt: str, *, negative_prompt: Optional[str] = None, number_of_images: int = 1, aspect_ratio: Optional[Literal['1:1', '9:16', '16:9', '4:3', '3:4']] = None, guidance_scale: Optional[float] = None, language: Optional[str] = None, seed: Optional[int] = None, output_gcs_uri: Optional[str] = None, add_watermark: Optional[bool] = True, safety_filter_level: Optional[Literal['block_most', 'block_some', 'block_few', 'block_fewest']] = None, person_generation: Optional[Literal['dont_allow', 'allow_adult', 'allow_all']] = None)

Generates images from text prompt.

  • Parameters

    • prompt – Text prompt for the image.

    • negative_prompt – A description of what you want to omit in the generated images.

    • number_of_images – Number of images to generate. Range: 1..8.

    • aspect_ratio – Changes the aspect ratio of the generated image. Supported values are: * “1:1” : 1:1 aspect ratio * “9:16” : 9:16 aspect ratio * “16:9” : 16:9 aspect ratio * “4:3” : 4:3 aspect ratio * “3:4” : 3:4 aspect ratio

    • guidance_scale – Controls the strength of the prompt. Suggested values are: * 0-9 (low strength) * 10-20 (medium strength) * 21+ (high strength)

    • language – Language of the text prompt for the image. Default: None. Supported values are “en” for English, “hi” for Hindi, “ja” for Japanese, “ko” for Korean, and “auto” for automatic language detection.

    • seed – Image generation random seed.

    • output_gcs_uri – Google Cloud Storage uri to store the generated images.

    • add_watermark – Add a watermark to the generated image

    • safety_filter_level – Adds a filter level to Safety filtering. Supported values are: * “block_most” : Strongest filtering level, most strict blocking * “block_some” : Block some problematic prompts and responses * “block_few” : Block fewer problematic prompts and responses * “block_fewest” : Block very few problematic prompts and responses

    • person_generation – Allow generation of people by the model. Supported values are: * “dont_allow” : Block generation of people * “allow_adult” : Generate adults, but not children * “allow_all” : Generate adults and children

  • Returns

    An ImageGenerationResponse object.

upscale_image(image: Union[vertexai.vision_models.Image, vertexai.preview.vision_models.GeneratedImage], new_size: Optional[int] = 2048, upscale_factor: Optional[Literal['x2', 'x4']] = None, output_mime_type: Optional[Literal['image/png', 'image/jpeg']] = 'image/png', output_compression_quality: Optional[int] = None, output_gcs_uri: Optional[str] = None)

Upscales an image.

This supports upscaling images generated through the generate_images() method, or upscaling a new image.

Examples:

# Upscale a generated image
model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(
    prompt="Astronaut riding a horse",
)
model.upscale_image(image=response[0])

# Upscale a new 1024x1024 image
my_image = Image.load_from_file("my-image.png")
model.upscale_image(image=my_image)

# Upscale a new arbitrary sized image using a x2 or x4 upscaling factor
my_image = Image.load_from_file("my-image.png")
model.upscale_image(image=my_image, upscale_factor="x2")

# Upscale an image and get the result in JPEG format
my_image = Image.load_from_file("my-image.png")
model.upscale_image(image=my_image, output_mime_type="image/jpeg",
    output_compression_quality=90)

  • Parameters

    • image (Union[GeneratedImage, Image]) – Required. The image to upscale.

    • new_size (int) – The size of the biggest dimension of the upscaled image. Only 2048 and 4096 are currently supported. Results in a 2048x2048 or 4096x4096 image. Defaults to 2048 if not provided.

    • upscale_factor – The upscaling factor. Supported values are “x2” and “x4”. Defaults to None.

    • output_mime_type – The mime type of the output image. Supported values are “image/png” and “image/jpeg”. Defaults to “image/png”.

    • output_compression_quality – The compression quality of the output image as an int (0-100). Only applicable if the output mime type is “image/jpeg”. Defaults to None.

    • output_gcs_uri – Google Cloud Storage uri to store the upscaled images.

  • Returns

    An Image object.

class vertexai.preview.vision_models.ImageGenerationResponse(images: List[GeneratedImage])

Bases: object

Image generation response.

images()

The list of generated images.

  • Type

    List[vertexai.preview.vision_models.GeneratedImage]

__getitem__(idx: int)

Gets the generated image by index.

__iter__()

Iterates through the generated images.
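
A short sketch of both access patterns, assuming model is an ImageGenerationModel as in the example above:

response = model.generate_images(
    prompt="Astronaut riding a horse",
    number_of_images=2,
)
first_image = response[0]  # __getitem__: index into the generated images
for i, image in enumerate(response):  # __iter__: iterate over all images
    image.save(f"image_{i}.png")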

class vertexai.preview.vision_models.ImageQnAModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai._model_garden._model_garden_models._ModelGardenModel

Answers questions about an image.

Examples:

model = ImageQnAModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)

Creates a _ModelGardenModel.

This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Model Garden Model. Example: “text-bison@001”

    • endpoint_name – Vertex Endpoint resource name for the model

ask_question(image: vertexai.vision_models.Image, question: str, *, number_of_results: int = 1)

Answers questions about an image.

  • Parameters

    • image – The image to ask the question about. Size limit: 10 MB.

    • question – Question to ask about the image.

    • number_of_results – Number of answers to produce. Range: 1-3.

  • Returns

    A list of answers.

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

class vertexai.preview.vision_models.ImageTextModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai.vision_models.ImageCaptioningModel, vertexai.vision_models.ImageQnAModel

Generates text from images.

Examples:

model = ImageTextModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")

captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)

answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)

Creates a _ModelGardenModel.

This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Model Garden Model. Example: “text-bison@001”

    • endpoint_name – Vertex Endpoint resource name for the model

ask_question(image: vertexai.vision_models.Image, question: str, *, number_of_results: int = 1)

Answers questions about an image.

  • Parameters

    • image – The image to ask the question about. Size limit: 10 MB.

    • question – Question to ask about the image.

    • number_of_results – Number of answers to produce. Range: 1-3.

  • Returns

    A list of answers.

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

get_captions(image: vertexai.vision_models.Image, *, number_of_results: int = 1, language: str = 'en', output_gcs_uri: Optional[str] = None)

Generates captions for a given image.

  • Parameters

    • image – The image to get captions for. Size limit: 10 MB.

    • number_of_results – Number of captions to produce. Range: 1-3.

    • language – Language to use for captions. Supported languages: “en”, “fr”, “de”, “it”, “es”

    • output_gcs_uri – Google Cloud Storage uri to store the captioned images.

  • Returns

    A list of image caption strings.

class vertexai.preview.vision_models.MultiModalEmbeddingModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai._model_garden._model_garden_models._ModelGardenModel

Generates embedding vectors from images and videos.

Examples:

model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
image = Image.load_from_file("image.png")
video = Video.load_from_file("video.mp4")

embeddings = model.get_embeddings(
    image=image,
    video=video,
    contextual_text="Hello world",
)
image_embedding = embeddings.image_embedding
video_embeddings = embeddings.video_embeddings
text_embedding = embeddings.text_embedding

Creates a _ModelGardenModel.

This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Model Garden Model. Example: “text-bison@001”

    • endpoint_name – Vertex Endpoint resource name for the model

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

get_embeddings(image: Optional[vertexai.vision_models.Image] = None, video: Optional[vertexai.vision_models.Video] = None, contextual_text: Optional[str] = None, dimension: Optional[int] = None, video_segment_config: Optional[vertexai.vision_models.VideoSegmentConfig] = None)

Gets embedding vectors from the provided image, video, and/or contextual text.

  • Parameters

    • image (Image) – Optional. The image to generate embeddings for. One of image, video, or contextual_text is required.

    • video (Video) – Optional. The video to generate embeddings for. One of image, video or contextual_text is required.

    • contextual_text (str) – Optional. Contextual text for your input image or video. If provided, the model will also generate an embedding vector for the provided contextual text. The returned image and text embedding vectors are in the same semantic space with the same dimensionality, and the vectors can be used interchangeably for use cases like searching image by text or searching text by image. One of image, video or contextual_text is required.

    • dimension (int) – Optional. The number of embedding dimensions. Lower values offer decreased latency when using these embeddings for subsequent tasks, while higher values offer better accuracy. Available values: 128, 256, 512, and 1408 (default).

    • video_segment_config (VideoSegmentConfig) – Optional. The specific video segments (in seconds) the embeddings are generated for.

  • Returns

    The image and text embedding vectors.

  • Return type

    MultiModalEmbeddingResponse
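
A minimal sketch of video embeddings with the optional video_segment_config parameter; the file name and offsets are illustrative:

model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
video = Video.load_from_file("video.mp4")
embeddings = model.get_embeddings(
    video=video,
    video_segment_config=VideoSegmentConfig(
        start_offset_sec=0,
        end_offset_sec=60,
        interval_sec=10,
    ),
)
for video_embedding in embeddings.video_embeddings:
    # One embedding per segment, with its time offsets.
    print(video_embedding.start_offset_sec, video_embedding.end_offset_sec)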

class vertexai.preview.vision_models.MultiModalEmbeddingResponse(_prediction_response: Any, image_embedding: Optional[List[float]] = None, video_embeddings: Optional[List[vertexai.vision_models.VideoEmbedding]] = None, text_embedding: Optional[List[float]] = None)

Bases: object

The multimodal embedding response.

image_embedding()

Optional. The embedding vector generated from your image.

video_embeddings()

Optional. The embedding vectors generated from your video.

  • Type

    List[VideoEmbedding]

text_embedding()

Optional. The embedding vector generated from the contextual text provided for your image or video.

class vertexai.preview.vision_models.Video(video_bytes: Optional[bytes] = None, gcs_uri: Optional[str] = None)

Bases: object

Video.

Creates a Video object.

  • Parameters

    • video_bytes – Video file bytes. Video can be in AVI, FLV, MKV, MOV, MP4, MPEG, MPG, WEBM, and WMV formats.

    • gcs_uri – Video URI in Google Cloud Storage.

static load_from_file(location: str)

Loads video from local file or Google Cloud Storage.

  • Parameters

    location – Local path or Google Cloud Storage URI from which to load the video.

  • Returns

    Loaded video as a Video object.

save(location: str)

Saves video to a file.

  • Parameters

    location – Local path to save the video to.

class vertexai.preview.vision_models.VideoEmbedding(start_offset_sec: int, end_offset_sec: int, embedding: List[float])

Bases: object

Embeddings generated from video with offset times.

Creates a VideoEmbedding object.

  • Parameters

    • start_offset_sec – Start time offset (in seconds) of generated embeddings.

    • end_offset_sec – End time offset (in seconds) of generated embeddings.

    • embedding – Generated embedding for interval.

class vertexai.preview.vision_models.VideoSegmentConfig(start_offset_sec: int = 0, end_offset_sec: int = 120, interval_sec: int = 16)

Bases: object

The specific video segments (in seconds) the embeddings are generated for.

Creates a VideoSegmentConfig object.

  • Parameters

    • start_offset_sec – Start time offset (in seconds) to generate embeddings for.

    • end_offset_sec – End time offset (in seconds) to generate embeddings for.

    • interval_sec – Interval (in seconds) at which to divide the video for generating embeddings.

class vertexai.preview.vision_models.WatermarkVerificationModel(model_id: str, endpoint_name: Optional[str] = None)

Bases: vertexai._model_garden._model_garden_models._ModelGardenModel

Verifies if an image has a watermark.

Creates a _ModelGardenModel.

This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.

  • Parameters

    • model_id – Identifier of a Model Garden Model. Example: “text-bison@001”

    • endpoint_name – Vertex Endpoint resource name for the model

classmethod from_pretrained(model_name: str)

Loads a _ModelGardenModel.

  • Parameters

    model_name – Name of the model.

  • Returns

    An instance of a class derived from _ModelGardenModel.

verify_image(image: vertexai.vision_models.Image)

Verifies the watermark of an image.

  • Parameters

    image – The image to verify.

  • Returns

    A WatermarkVerificationResponse, containing the confidence level of the image being watermarked.
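
A minimal sketch based on the methods above; the model ID shown is an assumption and should be checked against the current Model Garden listing:

model = WatermarkVerificationModel.from_pretrained("imageverification@001")  # model ID is illustrative
image = Image.load_from_file("image.png")
response = model.verify_image(image)
print(response.watermark_verification_result)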

class vertexai.preview.vision_models.WatermarkVerificationResponse(_prediction_response: Any, watermark_verification_result: Optional[str] = None)

Bases: object

Classes for tuning models.

class vertexai.preview.tuning.TuningJob(tuning_job_name: str)

Bases: google.cloud.aiplatform.base._VertexAiResourceNounPlus

Represents a TuningJob that runs with Google-owned models.

Initializes class with project, location, and api_client.

  • Parameters

    • project (str) – Project of the resource noun.

    • location (str) – The location of the resource noun.

    • credentials (google.auth.credentials.Credentials) – Optional custom credentials to use when interacting with the resource noun.

    • resource_name (str) – A fully-qualified resource name or ID.

client_class()

alias of vertexai.tuning._tuning.TuningJobClientWithOverride

property create_time: datetime.datetime

Time this resource was created.

property display_name: str

Display name of this resource.

property encryption_spec: Optional[google.cloud.aiplatform_v1.types.encryption_spec.EncryptionSpec]

Customer-managed encryption key options for this Vertex AI resource.

If this is set, then all resources created by this Vertex AI resource will be encrypted with the provided encryption key.

property gca_resource: proto.message.Message

The underlying resource proto representation.

property labels: Dict[str, str]

User-defined labels containing metadata about this resource.

Read more about labels at https://goo.gl/xmQnxf

classmethod list(filter: Optional[str] = None)

Lists TuningJobs.

  • Parameters

    filter – The standard list filter.

  • Returns

    A list of TuningJob objects.

property name: str

Name of this resource.

refresh()

Refreshes the tuning job from the service.

property resource_name: str

Fully-qualified resource name.

to_dict()

Returns the resource proto as a dictionary.

property update_time: datetime.datetime

Time this resource was last updated.

Classes for supervised tuning.

class vertexai.preview.tuning.sft.SupervisedTuningJob(tuning_job_name: str)

Bases: vertexai.tuning._tuning.TuningJob

Initializes class with project, location, and api_client.

  • Parameters

    • project (str) – Project of the resource noun.

    • location (str) – The location of the resource noun.

    • credentials (google.auth.credentials.Credentials) – Optional custom credentials to use when interacting with the resource noun.

    • resource_name (str) – A fully-qualified resource name or ID.

client_class()

alias of vertexai.tuning._tuning.TuningJobClientWithOverride

property create_time: datetime.datetime

Time this resource was created.

property display_name: str

Display name of this resource.

property encryption_spec: Optional[google.cloud.aiplatform_v1.types.encryption_spec.EncryptionSpec]

Customer-managed encryption key options for this Vertex AI resource.

If this is set, then all resources created by this Vertex AI resource will be encrypted with the provided encryption key.

property gca_resource: proto.message.Message

The underlying resource proto representation.

property labels: Dict[str, str]

User-defined labels containing metadata about this resource.

Read more about labels at https://goo.gl/xmQnxf

classmethod list(filter: Optional[str] = None)

Lists TuningJobs.

  • Parameters

    filter – The standard list filter.

  • Returns

    A list of TuningJob objects.

property name: str

Name of this resource.

refresh()

Refreshes the tuning job from the service.

property resource_name: str

Fully-qualified resource name.

to_dict()

Returns the resource proto as a dictionary.

property update_time: datetime.datetime

Time this resource was last updated.

vertexai.preview.tuning.sft.train(*, source_model: Union[str, vertexai.generative_models.GenerativeModel], train_dataset: str, validation_dataset: Optional[str] = None, tuned_model_display_name: Optional[str] = None, epochs: Optional[int] = None, learning_rate_multiplier: Optional[float] = None, adapter_size: Optional[Literal[1, 4, 8, 16]] = None, labels: Optional[Dict[str, str]] = None)

Tunes a model using supervised training.

  • Parameters

    • source_model (str) – Model name for tuning, e.g., “gemini-1.0-pro-002”.

    • train_dataset – Cloud Storage path to file containing training dataset for tuning. The dataset should be in JSONL format.

    • validation_dataset – Cloud Storage path to file containing validation dataset for tuning. The dataset should be in JSONL format.

    • tuned_model_display_name – The display name of the [TunedModel][google.cloud.aiplatform.v1.Model]. The name can be up to 128 characters long and can consist of any UTF-8 characters.

    • epochs – Number of training epochs for this tuning job.

    • learning_rate_multiplier – Learning rate multiplier for tuning.

    • adapter_size – Adapter size for tuning.

    • labels – User-defined metadata to be associated with trained models

  • Returns

    A TuningJob object.
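
A minimal sketch of a supervised tuning call based on the parameters above; the Cloud Storage paths and display name are illustrative:

from vertexai.preview.tuning import sft

sft_tuning_job = sft.train(
    source_model="gemini-1.0-pro-002",
    train_dataset="gs://my-bucket/train.jsonl",  # illustrative bucket path
    # Optional:
    validation_dataset="gs://my-bucket/validation.jsonl",
    epochs=4,
    tuned_model_display_name="my-tuned-model",
)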

Vertex Gen AI Evaluation Service Module.

class vertexai.evaluation.CustomMetric(name: str, metric_function: Callable[[Dict[str, Any]], Dict[str, Any]])

Bases: vertexai.evaluation.metrics._base._Metric

The custom evaluation metric.

A fully customized CustomMetric that can be used to evaluate a single model by defining a metric function for a computation-based metric. The CustomMetric is computed on the client side using the user-defined metric function in the SDK only, not by the Vertex Gen AI Evaluation Service.

Attributes:

name: The name of the metric.
metric_function: The user-defined evaluation function to compute a metric
    score. Must use the dataset row dictionary as the metric function
    input and return a per-instance metric result as a dictionary output.
    The metric score must be mapped to the name of the CustomMetric as key.

Initializes the evaluation metric.
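
A minimal sketch of a computation-based metric, assuming the evaluation dataset has a “response” column; the metric name and logic are illustrative:

def word_count(instance: dict) -> dict:
    # instance is one dataset row as a dictionary.
    response = instance["response"]
    # The result key must match the CustomMetric name.
    return {"word_count": len(response.split())}

word_count_metric = CustomMetric(
    name="word_count",
    metric_function=word_count,
)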

class vertexai.evaluation.EvalResult(summary_metrics: Dict[str, float], metrics_table: Optional[pd.DataFrame] = None, metadata: Optional[Dict[str, str]] = None)

Bases: object

Evaluation result.

summary_metrics()

The summary evaluation metrics for an evaluation run.

metrics_table()

A table containing eval inputs, ground truth, and metric results per row.

  • Type

    Optional[pd.DataFrame]

metadata()

The metadata for the evaluation run.

  • Type

    Optional[Dict[str, str]]

class vertexai.evaluation.EvalTask(*, dataset: Union[pd.DataFrame, str, Dict[str, Any]], metrics: List[Union[Literal['exact_match', 'bleu', 'rouge_1', 'rouge_2', 'rouge_l', 'rouge_l_sum', 'tool_call_valid', 'tool_name_match', 'tool_parameter_key_match', 'tool_parameter_kv_match'], vertexai.evaluation.CustomMetric, vertexai.evaluation.metrics._base._AutomaticMetric, vertexai.evaluation.metrics.pointwise_metric.PointwiseMetric, vertexai.evaluation.metrics.pairwise_metric.PairwiseMetric]], experiment: Optional[str] = None, metric_column_mapping: Optional[Dict[str, str]] = None, output_uri_prefix: Optional[str] = '')

Bases: object

A class representing an EvalTask.

An evaluation task is defined to measure a model’s ability to perform a certain task in response to specific prompts or inputs. Evaluation tasks must contain an evaluation dataset and a list of metrics to evaluate. Evaluation tasks help developers compare prompt templates, track experiments, compare models and their settings, and assess the quality of the model’s generated text.

Dataset Details:

Default dataset column names:

* prompt_column_name: “prompt”

* reference_column_name: “reference”

* response_column_name: “response”

* baseline_model_response_column_name: “baseline_model_response”

Requirements for different use cases:

* Bring-your-own-response: A response column is required. The response
  column name can be customized by providing the response_column_name
  parameter. If a pairwise metric is used and a baseline model is not
  provided, a baseline_model_response column is required. The baseline
  model response column name can be customized by providing the
  baseline_model_response_column_name parameter. If the response column
  or the baseline_model_response column is present while the
  corresponding model is specified, an error will be raised.

* Perform model inference without a prompt template: A prompt column in
  the evaluation dataset representing the input prompt to the model is
  required and is used directly as input to the model.

* Perform model inference with a prompt template: The evaluation dataset
  must contain column names corresponding to the variable names in the
  prompt template. For example, if the prompt template is
  “Instruction: {instruction}, context: {context}”, the dataset must
  contain instruction and context columns.

Metrics Details:

Descriptions of the supported metrics, rating rubrics, and the required input variables can be found on the Vertex AI public documentation page Evaluation methods and metrics.

Usage Examples:

  1. To perform bring-your-own-response (BYOR) evaluation, provide the model responses in the response column in the dataset. If a pairwise metric is used for BYOR evaluation, provide the baseline model responses in the baseline_model_response column.

eval_dataset = pd.DataFrame({
    "prompt": [...],
    "reference": [...],
    "response": [...],
    "baseline_model_response": [...],
})
eval_task = EvalTask(
    dataset=eval_dataset,
    metrics=[
        "bleu",
        "rouge_l_sum",
        MetricPromptTemplateExamples.Pointwise.FLUENCY,
        MetricPromptTemplateExamples.Pairwise.SAFETY,
    ],
    experiment="my-experiment",
)
eval_result = eval_task.evaluate(experiment_run_name="eval-experiment-run")

  2. To perform evaluation with Gemini model inference, specify the model parameter with a GenerativeModel instance. The input column name to the model is prompt and must be present in the dataset.

eval_dataset = pd.DataFrame({
    "reference": [...],
    "prompt": [...],
})
result = EvalTask(
    dataset=eval_dataset,
    metrics=["exact_match", "bleu", "rouge_1", "rouge_l_sum"],
    experiment="my-experiment",
).evaluate(
    model=GenerativeModel("gemini-1.5-pro"),
    experiment_run_name="gemini-eval-run",
)

  3. If a prompt_template is specified, the prompt column is not required. Prompts can be assembled from the evaluation dataset, and all prompt template variable names must be present in the dataset columns.

eval_dataset = pd.DataFrame({
    "context": [...],
    "instruction": [...],
})
result = EvalTask(
    dataset=eval_dataset,
    metrics=[MetricPromptTemplateExamples.Pointwise.SUMMARIZATION_QUALITY],
).evaluate(
    model=GenerativeModel("gemini-1.5-pro"),
    prompt_template="{instruction}. Article: {context}. Summary:",
)

  4. To perform evaluation with custom model inference, specify the model parameter with a custom inference function. The input column name to the custom inference function is prompt and must be present in the dataset.

from openai import OpenAI

client = OpenAI()

def custom_model_fn(input: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": input},
        ],
    )
    return response.choices[0].message.content

eval_dataset = pd.DataFrame({
    "prompt": [...],
    "reference": [...],
})
result = EvalTask(
    dataset=eval_dataset,
    metrics=[MetricPromptTemplateExamples.Pointwise.SAFETY],
    experiment="my-experiment",
).evaluate(
    model=custom_model_fn,
    experiment_run_name="gpt-eval-run",
)

  5. To perform pairwise metric evaluation with a model inference step, specify the baseline_model input to a PairwiseMetric instance and the candidate model input to the EvalTask.evaluate() function. The input column name to both models is prompt and must be present in the dataset.

baseline_model = GenerativeModel("gemini-1.0-pro")
candidate_model = GenerativeModel("gemini-1.5-pro")

pairwise_groundedness = PairwiseMetric(
    metric="pairwise_groundedness",
    metric_prompt_template=MetricPromptTemplateExamples.get_prompt_template(
        "pairwise_groundedness"
    ),
    baseline_model=baseline_model,
)
eval_dataset = pd.DataFrame({
    "prompt": [...],
})
result = EvalTask(
    dataset=eval_dataset,
    metrics=[pairwise_groundedness],
    experiment="my-pairwise-experiment",
).evaluate(
    model=candidate_model,
    experiment_run_name="gemini-pairwise-eval-run",
)

Initializes an EvalTask.

  • Parameters

    • dataset – The dataset to be evaluated. Supports the following dataset formats:

      * pandas.DataFrame: Used directly for evaluation.

      * Dict: Converted to a pandas DataFrame before evaluation.

      * str: Interpreted as a file path or URI. Supported formats include:

        * Local JSONL or CSV files: Loaded from the local filesystem.

        * GCS JSONL or CSV files: Loaded from Google Cloud Storage (e.g., “gs://bucket/data.csv”).

        * BigQuery table URI: Loaded from Google Cloud BigQuery (e.g., “bq://project-id.dataset.table_name”).

    • metrics – The list of metric names, or Metric instances to evaluate. A prompt template is required for PairwiseMetric.

    • experiment – The name of the experiment to log the evaluations to.

    • metric_column_mapping – An optional dictionary column mapping that overrides the metric prompt template input variable names with the mapped evaluation dataset column names, used during evaluation. For example, if the input_variables of the metric prompt template are [“context”, “reference”], the metric_column_mapping can be

      {
          "context": "news_context",
          "reference": "ground_truth",
          "response": "model_1_response"
      }

      if the dataset has columns “news_context”, “ground_truth”, and “model_1_response”.

    • output_uri_prefix – GCS location to store the metrics_table from evaluation results.

property dataset: pd.DataFrame

Returns evaluation dataset.

display_runs()

Displays experiment runs associated with this EvalTask.

evaluate(*, model: Optional[Union[vertexai.generative_models.GenerativeModel, Callable[[str], str]]] = None, prompt_template: Optional[str] = None, experiment_run_name: Optional[str] = None, response_column_name: Optional[str] = None, baseline_model_response_column_name: Optional[str] = None, evaluation_service_qps: Optional[float] = None, retry_timeout: float = 600.0, output_file_name: Optional[str] = None)

Runs an evaluation for the EvalTask.

  • Parameters

    • model – A GenerativeModel instance or a custom model function to generate responses to evaluate. If not provided, the evaluation is computed with the response column in the dataset.

    • prompt_template – The prompt template to use for the evaluation. If not set, the prompt template that was used to create the EvalTask will be used.

    • experiment_run_name – The name of the experiment run to log the evaluation to if an experiment is set for this EvalTask. If not provided, a random unique experiment run name is used.

    • response_column_name – The column name of model response in the dataset. If provided, this will override the response_column_name of the EvalTask.

    • baseline_model_response_column_name – The column name of baseline model response in the dataset for pairwise metrics.

    • evaluation_service_qps – The custom QPS limit for the evaluation service.

    • retry_timeout – How long to keep retrying the evaluation requests for the whole evaluation dataset, in seconds.

    • output_file_name – The file name, with a .csv suffix, to store the output metrics_table.

  • Returns

    The evaluation result.

property experiment: Optional[str]

Returns experiment name.

property metrics: List[Union[str, vertexai.evaluation.CustomMetric]]

Returns metrics.

class vertexai.evaluation.MetricPromptTemplateExamples()

Bases: object

Examples of metric prompt templates for model-based evaluation.

class Pairwise()

Bases: object

Example PairwiseMetric instances.

class Pointwise()

Bases: object

Example PointwiseMetric instances.

classmethod get_prompt_template(metric_name: str)

Returns the prompt template for the given metric name.

classmethod list_example_metric_names()

Returns a list of all example metric names.

class vertexai.evaluation.PairwiseMetric(*, metric: str, metric_prompt_template: Union[vertexai.evaluation.metrics.metric_prompt_template.PairwiseMetricPromptTemplate, str], baseline_model: Optional[Union[vertexai.generative_models.GenerativeModel, Callable[[str], str]]] = None)

Bases: vertexai.evaluation.metrics._base._ModelBasedMetric

A Model-based Pairwise Metric.

A model-based evaluation metric that compares two generative models’ responses side-by-side, and allows users to A/B test their generative models to determine which model is performing better.

For more details on when to use pairwise metrics, see Evaluation methods and metrics.

Result Details:

  • In EvalResult.summary_metrics, win rates for both the baseline and candidate model are computed. The win rate is computed as the proportion of wins of one model’s responses to total attempts, as a decimal value between 0 and 1.

  • In EvalResult.metrics_table, a pairwise metric produces two evaluation results per dataset row:

    * pairwise_choice: The choice shows whether the candidate model or the baseline model performs better, or if they are equally good.

    * explanation: The rationale behind each verdict using chain-of-thought reasoning. The explanation helps users scrutinize the judgment and builds appropriate trust in the decisions.

See the documentation page for more details on understanding the metric results.

Usage Examples:

baseline_model = GenerativeModel("gemini-1.0-pro")
candidate_model = GenerativeModel("gemini-1.5-pro")

pairwise_groundedness = PairwiseMetric(
    metric="pairwise_groundedness",
    metric_prompt_template=MetricPromptTemplateExamples.get_prompt_template(
        "pairwise_groundedness"
    ),
    baseline_model=baseline_model,
)
eval_dataset = pd.DataFrame({
    "prompt": [...],
})
pairwise_task = EvalTask(
    dataset=eval_dataset,
    metrics=[pairwise_groundedness],
    experiment="my-pairwise-experiment",
)
pairwise_result = pairwise_task.evaluate(
    model=candidate_model,
    experiment_run_name="gemini-pairwise-eval-run",
)

Initializes a pairwise evaluation metric.

  • Parameters

    • metric – The pairwise evaluation metric name.

    • metric_prompt_template – Pairwise metric prompt template for performing the pairwise model-based evaluation. A freeform string is also accepted.

    • baseline_model – The baseline model for side-by-side comparison. If not specified, baseline_model_response column is required in the dataset to perform bring-your-own-response(BYOR) evaluation.

class vertexai.evaluation.PairwiseMetricPromptTemplate(*, criteria: Dict[str, str], rating_rubric: Dict[str, str], input_variables: Optional[List[str]] = None, instruction: Optional[str] = None, metric_definition: Optional[str] = None, evaluation_steps: Optional[Dict[str, str]] = None, few_shot_examples: Optional[List[str]] = None)

Bases: vertexai.evaluation.metrics.metric_prompt_template._MetricPromptTemplate

Pairwise metric prompt template for pairwise model-based metrics.

Initializes a pairwise metric prompt template.

  • Parameters

    • criteria – The standards and measures used to evaluate the model responses. It is a dictionary of criterion names and criterion definitions.

    • rating_rubric – A dictionary mapping of rating name and rating definition, used to assign ratings or scores based on specific criteria.

    • input_variables – An optional list of input fields to use in the metric prompt template for generating model-based evaluation results. Candidate model “response” column and “baseline_model_response” column are included by default. If metric_column_mapping is provided, the mapping values of the input fields will be used to retrieve data from the evaluation dataset.

    • instruction – The general instruction to the model that performs the evaluation. If not provided, a default pairwise metric instruction will be used.

    • metric_definition – The optional metric definition. It is a string describing the metric to be evaluated at a high level. If not provided, this field will not be included in the prompt template.

    • evaluation_steps – The optional guidelines for evaluation steps. A dictionary of evaluation step names and evaluation step definitions. If not provided, default pairwise metric evaluation steps will be used.

    • few_shot_examples – The optional list of few-shot examples to be used in the prompt, to provide the model with demonstrations of how to perform the evaluation, and improve the evaluation accuracy. If not provided, this field will not be included in the prompt template.

__str__()

Serializes the pairwise metric prompt template to a string.

assemble(**kwargs)

Replaces only the provided variables in the template with specific values.

  • Parameters

    **kwargs – Keyword arguments where keys are placeholder names and values are the replacements.

  • Returns

    A new PromptTemplate instance with the updated template string.

get_default_pairwise_evaluation_steps()

Returns the default evaluation steps for the metric prompt template.

get_default_pairwise_instruction()

Returns the default instruction for the metric prompt template.

class vertexai.evaluation.PointwiseMetric(*, metric: str, metric_prompt_template: Union[vertexai.evaluation.metrics.metric_prompt_template.PointwiseMetricPromptTemplate, str])

Bases: vertexai.evaluation.metrics._base._ModelBasedMetric

A Model-based Pointwise Metric.

A model-based evaluation metric that evaluates a single generative model’s response.

For more details on when to use model-based pointwise metrics, see Evaluation methods and metrics.

Usage Examples:

candidate_model = GenerativeModel("gemini-1.5-pro")
eval_dataset = pd.DataFrame({
    "prompt": [...],
})
fluency_metric = PointwiseMetric(
    metric="fluency",
    metric_prompt_template=MetricPromptTemplateExamples.get_prompt_template("fluency"),
)
pointwise_eval_task = EvalTask(
    dataset=eval_dataset,
    metrics=[
        fluency_metric,
        MetricPromptTemplateExamples.Pointwise.GROUNDEDNESS,
    ],
)
pointwise_result = pointwise_eval_task.evaluate(
    model=candidate_model,
)

Initializes a pointwise evaluation metric.

  • Parameters

    • metric – The pointwise evaluation metric name.

    • metric_prompt_template – Pointwise metric prompt template for performing the model-based evaluation. A freeform string is also accepted.

class vertexai.evaluation.PointwiseMetricPromptTemplate(*, criteria: Dict[str, str], rating_rubric: Dict[str, str], input_variables: Optional[List[str]] = None, instruction: Optional[str] = None, metric_definition: Optional[str] = None, evaluation_steps: Optional[Dict[str, str]] = None, few_shot_examples: Optional[List[str]] = None)

Bases: vertexai.evaluation.metrics.metric_prompt_template._MetricPromptTemplate

Pointwise metric prompt template for pointwise model-based metrics.

Initializes a pointwise metric prompt template.

  • Parameters

    • criteria – The standards and measures used to evaluate the model responses. It is a dictionary of criterion names and criterion definitions.

    • rating_rubric – A dictionary mapping of rating name and rating definition, used to assign ratings or scores based on specific criteria.

    • input_variables – An optional list of input fields to use in the metric prompt template for generating model-based evaluation results. Model “response” column is included by default. If metric_column_mapping is provided, the mapping values of the input fields will be used to retrieve data from the evaluation dataset.

    • instruction – The general instruction to the model that performs the evaluation. If not provided, a default pointwise metric instruction will be used.

    • metric_definition – The optional metric definition. It is a string describing the metric to be evaluated at a high level. If not provided, this field will not be included in the prompt template.

    • evaluation_steps – The optional guidelines for evaluation steps. A dictionary of evaluation step names and evaluation step definitions. If not provided, default pointwise metric evaluation steps will be used.

    • few_shot_examples – The optional list of few-shot examples to be used in the prompt, to provide the model with demonstrations of how to perform the evaluation, and improve the evaluation accuracy. If not provided, this field will not be included in the prompt template.

__str__()

Serializes the pointwise metric prompt template to a string.

assemble(**kwargs)

Replaces only the provided variables in the template with specific values.

  • Parameters

    **kwargs – Keyword arguments where keys are placeholder names and values are the replacements.

  • Returns

    A new PromptTemplate instance with the updated template string.

get_default_pointwise_evaluation_steps()

Returns the default evaluation steps for the metric prompt template.

get_default_pointwise_instruction()

Returns the default instruction for the metric prompt template.
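
A minimal construction sketch; the criteria and rubric text are illustrative:

text_quality_template = PointwiseMetricPromptTemplate(
    criteria={
        "coherence": "The response presents ideas in a clear, logical order.",
    },
    rating_rubric={
        "1": "The response follows the criteria.",
        "0": "The response does not follow the criteria.",
    },
)
text_quality = PointwiseMetric(
    metric="text_quality",
    metric_prompt_template=text_quality_template,
)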

class vertexai.evaluation.PromptTemplate(template: str)

Bases: object

A prompt template for creating prompts with variables.

The PromptTemplate class allows users to define a template string with variables represented in curly braces {variable}. The variable names cannot contain spaces. These variables can be replaced with specific values using the assemble method, providing flexibility in generating dynamic prompts.

Usage:

template_str = "Hello, {name}! Today is {day}. How are you?"
prompt_template = PromptTemplate(template_str)
completed_prompt = prompt_template.assemble(name="John", day="Monday")
print(completed_prompt)

Initializes the PromptTemplate with a given template.

  • Parameters

    template – The template string with variables. Variables should be represented in curly braces {variable}.

__repr__()

Returns a string representation of the PromptTemplate.

__str__()

Returns the template string.

assemble(**kwargs)

Replaces only the provided variables in the template with specific values.

  • Parameters

    **kwargs – Keyword arguments where keys are placeholder names and values are the replacements.

  • Returns

    A new PromptTemplate instance with the updated template string.

class vertexai.evaluation.Rouge(*, rouge_type: Literal['rouge1', 'rouge2', 'rouge3', 'rouge4', 'rouge5', 'rouge6', 'rouge7', 'rouge8', 'rouge9', 'rougeL', 'rougeLsum'], use_stemmer: bool = False, split_summaries: bool = False)

Bases: vertexai.evaluation.metrics._base._AutomaticMetric

The ROUGE Metric.

Calculates the recall of n-grams in the prediction as compared to the reference and returns a score ranging between 0 and 1. Supported ROUGE types are rouge1 through rouge9, rougeL, and rougeLsum.

Initializes the ROUGE metric.

  • Parameters

    • rouge_type – Supported ROUGE types are rouge1 through rouge9, rougeL, and rougeLsum.

    • use_stemmer – Whether to use a stemmer to compute the ROUGE score.

    • split_summaries – Whether to split summaries while using ‘rougeLsum’ to compute the ROUGE score.
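
A minimal sketch of using the ROUGE metric in an EvalTask; eval_dataset is assumed to be a pandas DataFrame with “response” and “reference” columns:

rouge_metric = Rouge(
    rouge_type="rougeLsum",
    use_stemmer=True,
    split_summaries=True,
)
result = EvalTask(
    dataset=eval_dataset,  # assumed to contain "response" and "reference"
    metrics=[rouge_metric],
).evaluate()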

Classes for working with reasoning engines.

class vertexai.preview.reasoning_engines.LangchainAgent(model: str, *, system_instruction: Optional[str] = None, prompt: Optional[RunnableSerializable] = None, tools: Optional[Sequence[_ToolLike]] = None, output_parser: Optional[RunnableSerializable] = None, chat_history: Optional[GetSessionHistoryCallable] = None, model_kwargs: Optional[Mapping[str, Any]] = None, model_tool_kwargs: Optional[Mapping[str, Any]] = None, agent_executor_kwargs: Optional[Mapping[str, Any]] = None, runnable_kwargs: Optional[Mapping[str, Any]] = None, model_builder: Optional[Callable] = None, runnable_builder: Optional[Callable] = None, enable_tracing: bool = False)

Bases: object

A Langchain Agent.

References:

* Agent: https://python.langchain.com/docs/modules/agents/concepts

* Memory: https://python.langchain.com/docs/expression_language/how_to/message_history

Initializes the LangchainAgent.

Under the hood, assuming .set_up() is called, this will correspond to

model = model_builder(model_name=model, model_kwargs=model_kwargs)
runnable = runnable_builder(
    prompt=prompt,
    model=model,
    tools=tools,
    output_parser=output_parser,
    chat_history=chat_history,
    agent_executor_kwargs=agent_executor_kwargs,
    runnable_kwargs=runnable_kwargs,
)

When everything is based on their default values, this corresponds to

# model_builder
from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model_name=model, **model_kwargs)

# runnable_builder
from langchain import agents
from langchain_core.runnables.history import RunnableWithMessageHistory

llm_with_tools = llm.bind_tools(tools=tools, **model_tool_kwargs)
agent_executor = agents.AgentExecutor(
    agent=prompt | llm_with_tools | output_parser,
    tools=tools,
    **agent_executor_kwargs,
)
runnable = RunnableWithMessageHistory(
    runnable=agent_executor,
    get_session_history=chat_history,
    **runnable_kwargs,
)

  • Parameters

    • model (str) – Optional. The name of the model (e.g. “gemini-1.0-pro”).

    • system_instruction (str) – Optional. The system instruction to use for the agent. This argument should not be specified if prompt is specified.

    • prompt (langchain_core.runnables.RunnableSerializable) – Optional. The prompt template for the model. Defaults to a ChatPromptTemplate.

    • tools (Sequence[langchain_core.tools.BaseTool, Callable]) – Optional. The tools for the agent to be able to use. All input callables (e.g. function or class method) will be converted to a langchain.tools.base.StructuredTool. Defaults to None.

    • output_parser (langchain_core.runnables.RunnableSerializable) – Optional. The output parser for the model. Defaults to an output parser that works with Gemini function-calling.

    • chat_history (langchain_core.runnables.history.GetSessionHistoryCallable) – Optional. Callable that returns a new BaseChatMessageHistory. Defaults to None, i.e. chat_history is not preserved.

    • model_kwargs (Mapping[str, Any]) – Optional. Additional keyword arguments for the constructor of chat_models.ChatVertexAI. An example would be

      {
          # temperature (float): Sampling temperature, it controls the
          # degree of randomness in token selection.
          "temperature": 0.28,
          # max_output_tokens (int): Token limit determines the maximum
          # amount of text output from one prompt.
          "max_output_tokens": 1000,
          # top_p (float): Tokens are selected from most probable to
          # least, until the sum of their probabilities equals the
          # top_p value.
          "top_p": 0.95,
          # top_k (int): How the model selects tokens for output, the
          # next token is selected from among the top_k most probable
          # tokens.
          "top_k": 40,
      }

    • model_tool_kwargs (Mapping[str, Any]) – Optional. Additional keyword arguments when binding tools to the model using model.bind_tools().

    • agent_executor_kwargs (Mapping[str, Any]) – Optional. Additional keyword arguments for the constructor of langchain.agents.AgentExecutor. An example would be

      {
          # Whether to return the agent's trajectory of intermediate
          # steps at the end in addition to the final output.
          "return_intermediate_steps": False,
          # The maximum number of steps to take before ending the
          # execution loop.
          "max_iterations": 15,
          # The method to use for early stopping if the agent never
          # returns AgentFinish. Either 'force' or 'generate'.
          "early_stopping_method": "force",
          # How to handle errors raised by the agent's output parser.
          # Defaults to False, which raises the error.
          "handle_parsing_errors": False,
      }

    • runnable_kwargs (Mapping[str, Any]) – Optional. Additional keyword arguments for the constructor of langchain.runnables.history.RunnableWithMessageHistory if chat_history is specified. If chat_history is None, this will be ignored.

    • model_builder (Callable) – Optional. Callable that returns a new language model. Defaults to a callable that returns ChatVertexAI based on model, model_kwargs, and the parameters in vertexai.init.

    • runnable_builder (Callable) – Optional. Callable that returns a new runnable. This can be used for customizing the orchestration logic of the Agent based on the model returned by model_builder and the rest of the input arguments.

    • enable_tracing (bool) – Optional. Whether to enable tracing in Cloud Trace. Defaults to False.

  • Raises

    • ValueError – If both prompt and system_instruction are specified.

    • TypeError – If there is an invalid tool (e.g. a function with an input that did not specify its type).

clone()

Returns a clone of the LangchainAgent.

query(*, input: Union[str, Mapping[str, Any]], config: Optional[RunnableConfig] = None, **kwargs: Any)

Queries the Agent with the given input and config.

  • Parameters

    • input (Union[str, Mapping[str, Any]]) – Required. The input to be passed to the Agent.

    • config (langchain_core.runnables.RunnableConfig) – Optional. The config (if any) to be used for invoking the Agent.

    • **kwargs – Optional. Any additional keyword arguments to be passed to the .invoke() method of the corresponding AgentExecutor.

  • Returns

    The output of querying the Agent with the given input and config.

set_up()

Sets up the agent for execution of queries at runtime.

It initializes the model, binds the model with tools, and connects it with the prompt template and output parser.

This method should not be called for an object that is being passed to the ReasoningEngine service for deployment, as it initializes clients that cannot be serialized.
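
A minimal usage sketch for local execution; the tool function and question are illustrative:

from vertexai.preview.reasoning_engines import LangchainAgent

def get_current_time(timezone: str) -> str:
    """Returns the current time in the given IANA timezone (illustrative tool)."""
    from datetime import datetime
    from zoneinfo import ZoneInfo
    return datetime.now(ZoneInfo(timezone)).isoformat()

agent = LangchainAgent(
    model="gemini-1.0-pro",
    tools=[get_current_time],  # converted to a StructuredTool
)
agent.set_up()  # local execution only; do not call before deployment
response = agent.query(input="What time is it in America/New_York?")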

class vertexai.preview.reasoning_engines.Queryable(*args, **kwargs)

Bases: Protocol

Protocol for Reasoning Engine applications that can be queried.

abstract query(**kwargs)

Runs the Reasoning Engine to serve the user query.

class vertexai.preview.reasoning_engines.ReasoningEngine(reasoning_engine_name: str)

Bases: google.cloud.aiplatform.base.VertexAiResourceNounWithFutureManager, vertexai.reasoning_engines._reasoning_engines.Queryable

Represents a Vertex AI Reasoning Engine resource.

Retrieves a Reasoning Engine resource.

  • Parameters

    reasoning_engine_name (str) – Required. A fully-qualified resource name or ID such as “projects/123/locations/us-central1/reasoningEngines/456” or “456” when project and location are initialized or passed.

client_class()

alias of google.cloud.aiplatform.utils.ReasoningEngineClientWithOverride

classmethod create(reasoning_engine: vertexai.reasoning_engines._reasoning_engines.Queryable, *, requirements: Optional[Union[str, Sequence[str]]] = None, reasoning_engine_name: Optional[str] = None, display_name: Optional[str] = None, description: Optional[str] = None, gcs_dir_name: str = 'reasoning_engine', sys_version: Optional[str] = None, extra_packages: Optional[Sequence[str]] = None)

Creates a new ReasoningEngine.

The Reasoning Engine will be an instance of the reasoning_engine that was passed in, running remotely on Vertex AI.

Sample src_dir contents (e.g. ./user_src_dir):

user_src_dir/
|-- main.py
|-- requirements.txt
|-- user_code/
|   |-- utils.py
|   |-- ...
|-- ...

To build a Reasoning Engine:

remote_app = ReasoningEngine.create(
    local_app,
    requirements=[
        # I.e. the PyPI dependencies listed in requirements.txt
        "google-cloud-aiplatform==1.25.0",
        "langchain==0.0.242",
        ...
    ],
    extra_packages=[
        "./user_src_dir/main.py", # a single file
        "./user_src_dir/user_code", # a directory
        ...
    ],
)
  • Parameters

    • reasoning_engine (ReasoningEngineInterface) – Required. The Reasoning Engine to be created.

    • requirements (Union[str, Sequence[str]]) – Optional. The set of PyPI dependencies needed. It can either be the path to a single file (requirements.txt), or an ordered list of strings corresponding to each line of the requirements file.

    • reasoning_engine_name (str) – Optional. A fully-qualified resource name or ID such as “projects/123/locations/us-central1/reasoningEngines/456” or “456” when project and location are initialized or passed. If specifying the ID, it should be 4-63 characters. Valid characters are lowercase letters, numbers and hyphens (“-“), and it should start with a number or a lower-case letter. If not provided, Vertex AI will generate a value for this ID.

    • display_name (str) – Optional. The user-defined name of the Reasoning Engine. The name can be up to 128 characters long and can comprise any UTF-8 character.

    • description (str) – Optional. The description of the Reasoning Engine.

    • gcs_dir_name (str) – Optional. The GCS bucket directory under staging_bucket to use for staging the artifacts needed.

    • sys_version (str) – Optional. The Python system version used. Currently supports any of “3.8”, “3.9”, “3.10”, “3.11”. If not specified, it defaults to the “{major}.{minor}” attributes of sys.version_info.

    • extra_packages (Sequence[str]) – Optional. The set of extra user-provided packages (if any).

  • Returns

    The Reasoning Engine that was created.

  • Return type

    ReasoningEngine

  • Raises

    • ValueError – If sys.version is not supported by ReasoningEngine.

    • ValueError – If the project was not set using vertexai.init.

    • ValueError – If the location was not set using vertexai.init.

    • ValueError – If the staging_bucket was not set using vertexai.init.

    • ValueError – If the staging_bucket does not start with “gs://”.

    • FileNotFoundError – If extra_packages includes a file or directory that does not exist.

    • IOError – If requirements is a string that corresponds to a nonexistent file.

property create_time: datetime.datetime

Time this resource was created.

delete(sync: bool = True)

Deletes this Vertex AI resource. WARNING: This deletion is permanent.

  • Parameters

    sync (bool) – Whether to execute this deletion synchronously. If False, this method will be executed in concurrent Future and any downstream object will be immediately returned and synced when the Future has completed.

property display_name: str

Display name of this resource.

property encryption_spec: Optional[google.cloud.aiplatform_v1.types.encryption_spec.EncryptionSpec]

Customer-managed encryption key options for this Vertex AI resource.

If this is set, then all resources created by this Vertex AI resource will be encrypted with the provided encryption key.

property gca_resource: [proto.message.Message](https://proto-plus-python.readthedocs.io/en/latest/reference/message.html#proto.message.Message)

The underlying resource proto representation.

property labels: Dict[str, str]

User-defined labels containing metadata about this resource.

Read more about labels at https://goo.gl/xmQnxf

classmethod list(filter: Optional[str] = None, order_by: Optional[str] = None, project: Optional[str] = None, location: Optional[str] = None, credentials: Optional[google.auth.credentials.Credentials] = None, parent: Optional[str] = None)

List all instances of this Vertex AI Resource.

Example Usage:

aiplatform.BatchPredictionJob.list(
    filter='state="JOB_STATE_SUCCEEDED" AND display_name="my_job"',
)

aiplatform.Model.list(order_by="create_time desc, display_name")

  • Parameters

    • filter (str) – Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.

    • order_by (str) – Optional. A comma-separated list of fields to order by, sorted in ascending order. Use “desc” after a field name for descending. Supported fields: display_name, create_time, update_time

    • project (str) – Optional. Project to retrieve list from. If not set, project set in aiplatform.init will be used.

    • location (str) – Optional. Location to retrieve list from. If not set, location set in aiplatform.init will be used.

    • credentials (auth_credentials.Credentials) – Optional. Custom credentials to use to retrieve list. Overrides credentials set in aiplatform.init.

    • parent (str) – Optional. The parent resource name if any to retrieve list from.

  • Returns

    List[VertexAiResourceNoun] - A list of SDK resource objects

property name: [str](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)

Name of this resource.

operation_schemas()

Returns the (Open)API schemas for the Reasoning Engine.

query(**kwargs)

Runs the Reasoning Engine to serve the user query.

This will be based on the .query(…) method of the Python object that was passed in when creating the Reasoning Engine.

  • Parameters

    **kwargs – Optional. The arguments of the .query(…) method.

  • Returns

    The response from serving the user query.

  • Return type

    dict[str, Any]
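
For illustration, a minimal sketch of serving a query; the resource name is a placeholder, and the input keyword is an assumption that depends on the .query(…) signature of the object you deployed:

# Hypothetical resource name; replace with the one returned by ReasoningEngine.create(...).
remote_app = ReasoningEngine("projects/123/locations/us-central1/reasoningEngines/456")

# Keyword arguments are forwarded to the .query(...) method of the deployed object.
response = remote_app.query(input="What is the exchange rate from US dollars to SEK today?")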

property resource_name: [str](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)

Fully-qualified resource name.

to_dict()

Returns the resource proto as a dictionary.

update(*, reasoning_engine: Optional[vertexai.reasoning_engines._reasoning_engines.Queryable] = None, requirements: Optional[Union[str, Sequence[str]]] = None, display_name: Optional[str] = None, description: Optional[str] = None, gcs_dir_name: str = 'reasoning_engine', sys_version: Optional[str] = None, extra_packages: Optional[Sequence[str]] = None)

Updates an existing ReasoningEngine.

This method updates the configuration of an existing ReasoningEngine running remotely, identified by its resource name. Unlike the create method, which requires a reasoning_engine object, all arguments here are optional. You can modify individual aspects of the configuration by providing any of the optional arguments, but you must provide at least one argument other than sys_version.

  • Parameters

    • reasoning_engine (ReasoningEngineInterface) – Optional. The Reasoning Engine to be replaced. If it is not specified, the existing Reasoning Engine will be used.

    • requirements (Union[str, Sequence[str]]) – Optional. The set of PyPI dependencies needed. It can either be the path to a single file (requirements.txt), or an ordered list of strings corresponding to each line of the requirements file. If it is not specified, the existing requirements will be used. If it is set to an empty string or list, the existing requirements will be removed.

    • display_name (str) – Optional. The user-defined name of the Reasoning Engine. The name can be up to 128 characters long and can comprise any UTF-8 character.

    • description (str) – Optional. The description of the Reasoning Engine.

    • gcs_dir_name (str) – Optional. The GCS bucket directory under staging_bucket to use for staging the artifacts needed.

    • sys_version (str) – Optional. The Python system version used. Currently updating sys version is not supported.

    • extra_packages (Sequence[str]) – Optional. The set of extra user-provided packages (if any). If it is not specified, the existing extra packages will be used. If it is set to an empty list, the existing extra packages will be removed.

  • Returns

    The Reasoning Engine that was updated.

  • Return type

    ReasoningEngine

  • Raises

    • ValueError – If sys.version is updated.

    • ValueError – If the staging_bucket was not set using vertexai.init.

    • ValueError – If the staging_bucket does not start with “gs://”.

    • FileNotFoundError – If extra_packages includes a file or directory that does not exist.

    • ValueError – If none of display_name, description, requirements, extra_packages, or reasoning_engine is specified.

    • IOError – If requirements is a string that corresponds to a nonexistent file.
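
A minimal sketch of updating only the display name and requirements of an existing engine; the resource name and version pins are placeholders, and all other configuration is preserved:

remote_app = ReasoningEngine("projects/123/locations/us-central1/reasoningEngines/456")
remote_app.update(
    display_name="my-updated-engine",
    requirements=[
        "google-cloud-aiplatform==1.25.0",
        "langchain==0.0.242",
    ],
)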

property update_time: [datetime.datetime](https://python.readthedocs.io/en/latest/library/datetime.html#datetime.datetime)

Time this resource was last updated.

wait()

Helper method that blocks until all futures are complete.

The vertexai resources module.

The vertexai resources preview module.

class vertexai.resources.preview.ml_monitoring.ModelMonitor(model_monitor_name: str, project: Optional[str] = None, location: Optional[str] = None, credentials: Optional[google.auth.credentials.Credentials] = None)

Bases: google.cloud.aiplatform.base.VertexAiResourceNounWithFutureManager

Initializer for ModelMonitor.

  • Parameters

    • model_monitor_name (str) – Required. A fully-qualified model monitor resource name or model monitor ID. Example: “projects/123/locations/us-central1/modelMonitors/456” or “456” when project and location are initialized or passed.

    • project (str) – Optional. Project to retrieve model monitor from. If not set, project set in aiplatform.init will be used.

    • location (str) – Optional. Location to retrieve model monitor from. If not set, location set in aiplatform.init will be used.

    • credentials (auth_credentials.Credentials) – Optional. Custom credentials to use to retrieve this model monitor. Overrides credentials set in aiplatform.init.

Initializes class with project, location, and api_client.

  • Parameters

    • project (str) – Optional. Project of the resource noun.

    • location (str) – Optional. The location of the resource noun.

    • credentials (google.auth.credentials.Credentials) – Optional. Custom credentials to use when interacting with the resource noun.

    • resource_name (str) – A fully-qualified resource name or ID.

client_class()

alias of google.cloud.aiplatform.utils.ModelMonitoringClientWithOverride

classmethod create(model_name: str, model_version_id: str, training_dataset: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.MonitoringInput] = None, display_name: Optional[str] = None, model_monitoring_schema: Optional[vertexai.resources.preview.ml_monitoring.spec.schema.ModelMonitoringSchema] = None, tabular_objective_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.TabularObjective] = None, output_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.output.OutputSpec] = None, notification_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.notification.NotificationSpec] = None, explanation_spec: Optional[google.cloud.aiplatform_v1beta1.types.explanation.ExplanationSpec] = None, project: Optional[str] = None, location: Optional[str] = None, credentials: Optional[google.auth.credentials.Credentials] = None, model_monitor_id: Optional[str] = None)

Creates a new ModelMonitor.

  • Parameters

    • model_name (str) – Required. A model resource name as model monitoring target. Format: projects/{project}/locations/{location}/models/{model}

    • model_version_id (str) – Required. Model version id.

    • training_dataset (objective.MonitoringInput) – Optional. Training dataset used to train the model. It can serve as a baseline dataset to identify changes in production.

    • display_name (str) – Optional. The user-defined name of the ModelMonitor. The name can be up to 128 characters long and can comprise any UTF-8 character.

    • model_monitoring_schema (schema.ModelMonitoringSchema) – Required for most models, but optional for Vertex AI AutoML Tables unless the schema information is not available. The Monitoring Schema specifies the model’s features, prediction outputs and ground truth properties. It is used to extract pertinent data from the dataset and to process features based on their properties. Make sure that the schema aligns with your dataset; if it does not, Vertex AI will be unable to extract data from the dataset.

    • tabular_objective_spec (objective.TabularObjective) – Optional. The default tabular monitoring objective spec for the model monitor. It can be overridden in the ModelMonitoringJob objective spec.

    • output_spec (output.OutputSpec) – Optional. The default monitoring metrics/logs export spec; it can be overridden in the ModelMonitoringJob output spec. If not specified, a default Google Cloud Storage bucket will be created under your project.

    • notification_spec (notification.NotificationSpec) – Optional. The default notification spec for the monitoring result. It can be overridden in the ModelMonitoringJob notification spec.

    • explanation_spec (explanation.ExplanationSpec) – Optional. The default explanation spec for feature attribution monitoring. It can be overridden in the ModelMonitoringJob explanation spec.

    • project (str) – Optional. Project to retrieve model monitor from. If not set, project set in aiplatform.init will be used.

    • location (str) – Optional. Location to retrieve model monitor from. If not set, location set in aiplatform.init will be used.

    • credentials (auth_credentials.Credentials) – Optional. Custom credentials to use to create this model monitor. Overrides credentials set in aiplatform.init.

    • model_monitor_id (str) – Optional. The unique ID of the model monitor, which will become the final component of the model monitor resource name. If not specified, it will be generated by Vertex AI.

  • Returns

    The model monitor that was created.

  • Return type

    ModelMonitor
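
A minimal sketch of creating a model monitor, assuming vertexai.init(…) has already been called; the model resource name and my_schema are placeholders (see ModelMonitoringSchema below for how a schema is built):

from vertexai.resources.preview import ml_monitoring

my_monitor = ml_monitoring.ModelMonitor.create(
    model_name="projects/123/locations/us-central1/models/456",
    model_version_id="1",
    display_name="my-model-monitor",
    # A ml_monitoring.spec.ModelMonitoringSchema describing feature and prediction fields.
    model_monitoring_schema=my_schema,
)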

create_schedule(cron: str, target_dataset: vertexai.resources.preview.ml_monitoring.spec.objective.MonitoringInput, display_name: Optional[str] = None, model_monitoring_job_display_name: Optional[str] = None, start_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None, end_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None, tabular_objective_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.TabularObjective] = None, baseline_dataset: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.MonitoringInput] = None, output_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.output.OutputSpec] = None, notification_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.notification.NotificationSpec] = None, explanation_spec: Optional[google.cloud.aiplatform_v1beta1.types.explanation.ExplanationSpec] = None)

Creates a new scheduled run for the model monitoring job.

  • Parameters

    • cron (str) – Required. Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: “CRON_TZ=${IANA_TIME_ZONE}” or “TZ=${IANA_TIME_ZONE}”. The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database. For example, “CRON_TZ=America/New_York 1 * * * *” or “TZ=America/New_York 1 * * * *”.

    • target_dataset (objective.MonitoringInput) – Required. The target dataset for analysis.

    • display_name (str) – Optional. The user-defined name of the Schedule. The name can be up to 128 characters long and can consist of any UTF-8 characters.

    • model_monitoring_job_display_name (str) – Optional. The user-defined name of the ModelMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

    • start_time (timestamp_pb2.Timestamp) – Optional. Timestamp after which the first run can be scheduled. Defaults to the Schedule create time if not specified.

    • end_time (timestamp_pb2.Timestamp) – Optional. Timestamp after which no new runs can be scheduled. If specified, the schedule will be completed when end_time is reached. If not specified, new runs will keep getting scheduled until this Schedule is paused or deleted. Already scheduled runs will be allowed to complete.

    • tabular_objective_spec (objective.TabularObjective) – Optional. The tabular monitoring objective spec. If not set, the default tabular objective spec in ModelMonitor will be used. You must either set it here or set the default one in the ModelMonitor.

    • baseline_dataset (objective.MonitoringInput) – Optional. The baseline dataset for monitoring job. If not set, the training dataset in ModelMonitor will be used as baseline dataset.

    • output_spec (output.OutputSpec) – Optional. The monitoring metrics/logs export spec. If not set, will use the default output_spec defined in ModelMonitor.

    • notification_spec (notification.NotificationSpec) – Optional. The notification spec for monitoring result. If not set, will use the default notification_spec defined in ModelMonitor.

    • explanation_spec (explanation.ExplanationSpec) – Optional. The explanation spec for feature attribution monitoring. If not set, will use the default explanation_spec defined in ModelMonitor.

  • Returns

    The created schedule.

  • Return type

    Schedule
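
For example, a weekly scheduled run over the most recent day of endpoint traffic might look like this sketch (the cron string, endpoint resource name, and my_monitor from the create sketch above are placeholders):

schedule = my_monitor.create_schedule(
    cron="TZ=America/New_York 0 8 * * 1",  # every Monday at 08:00 New York time
    target_dataset=ml_monitoring.spec.MonitoringInput(
        endpoints=["projects/123/locations/us-central1/endpoints/789"],
        window="1d",  # analyze the day of data before each scheduled run
    ),
    display_name="weekly-drift-check",
)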

property create_time: [datetime.datetime](https://python.readthedocs.io/en/latest/library/datetime.html#datetime.datetime)

Time this resource was created.

delete(force: bool = False, sync: bool = True)

Deletes the model monitor.

  • Parameters

    • force (bool) – Optional. If force is set to True, all schedules on this ModelMonitor will be deleted first. Default is False.

    • sync (bool) – Whether to execute this method synchronously. If False, this method will be executed in concurrent Future and any downstream object will be immediately returned and synced when the Future has completed. Default is True.

delete_model_monitoring_job(model_monitoring_job_name: str)

Delete a model monitoring job.

  • Parameters

    model_monitoring_job_name (str) – Required. The resource name of the model monitoring job that needs to be deleted. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}/modelMonitoringJobs/{model_monitoring_job} or {model_monitoring_job}

delete_schedule(schedule_name: str)

Deletes an existing Schedule.

  • Parameters

    schedule_name (str) – Required. The resource name of schedule that needs to be deleted. Format: projects/{project}/locations/{location}/schedules/{schedule} or {schedule}

property display_name: [str](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)

Display name of this resource.

property encryption_spec: Optional[google.cloud.aiplatform_v1.types.encryption_spec.EncryptionSpec]

Customer-managed encryption key options for this Vertex AI resource.

If this is set, then all resources created by this Vertex AI resource will be encrypted with the provided encryption key.

property gca_resource: [proto.message.Message](https://proto-plus-python.readthedocs.io/en/latest/reference/message.html#proto.message.Message)

The underlying resource proto representation.

get_model_monitoring_job(model_monitoring_job_name: str)

Get the specified ModelMonitoringJob.

  • Parameters

    model_monitoring_job_name (str) – Required. The resource name of the ModelMonitoringJob that is needed. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}/modelMonitoringJobs/{model_monitoring_job} or {model_monitoring_job}

  • Returns

The requested model monitoring job.

  • Return type

    ModelMonitoringJob

get_schedule(schedule_name: str)

Gets an existing Schedule.

  • Parameters

    schedule_name (str) – Required. The resource name of schedule that needs to be fetched. Format: projects/{project}/locations/{location}/schedules/{schedule} or {schedule}

  • Returns

    The schedule requested.

  • Return type

    Schedule

get_schema()

Get the schema of the model monitor.

property labels: Dict[str, str]

User-defined labels containing metadata about this resource.

Read more about labels at https://goo.gl/xmQnxf

classmethod list(filter: Optional[str] = None, order_by: Optional[str] = None, project: Optional[str] = None, location: Optional[str] = None, credentials: Optional[google.auth.credentials.Credentials] = None, parent: Optional[str] = None)

List all instances of this Vertex AI Resource.

Example Usage:

aiplatform.BatchPredictionJob.list(
    filter='state="JOB_STATE_SUCCEEDED" AND display_name="my_job"',
)

aiplatform.Model.list(order_by="create_time desc, display_name")

  • Parameters

    • filter (str) – Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.

    • order_by (str) – Optional. A comma-separated list of fields to order by, sorted in ascending order. Use “desc” after a field name for descending. Supported fields: display_name, create_time, update_time

    • project (str) – Optional. Project to retrieve list from. If not set, project set in aiplatform.init will be used.

    • location (str) – Optional. Location to retrieve list from. If not set, location set in aiplatform.init will be used.

    • credentials (auth_credentials.Credentials) – Optional. Custom credentials to use to retrieve list. Overrides credentials set in aiplatform.init.

    • parent (str) – Optional. The parent resource name if any to retrieve list from.

  • Returns

    List[VertexAiResourceNoun] - A list of SDK resource objects

list_jobs(page_size: Optional[int] = None, page_token: Optional[str] = None)

List ModelMonitoringJobs.

  • Parameters

    • page_size (int) – Optional. The standard page list size.

    • page_token (str) – Optional. A page token received from a previous call.

  • Returns

The list of model monitoring jobs.

  • Return type

    ListJobsResponse.list_jobs

list_schedules(filter: Optional[str] = None, page_size: Optional[int] = None, page_token: Optional[str] = None)

List Schedules.

  • Parameters

    • filter (str) – Optional. Lists the Schedules that match the filter expression. The following fields are supported:

      • display_name: Supports =, != comparisons, and : wildcard.

      • state: Supports = and != comparisons.

      • request: Supports existence of the <request_type> check. (e.g. create_pipeline_job_request:* –> Schedule has create_pipeline_job_request).

      • create_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.

      • start_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.

      • end_time: Supports =, !=, <, >, <=, >= comparisons and :* existence check. Values must be in RFC 3339 format.

      • next_run_time: Supports =, !=, <, >, <=, and >= comparisons. Values must be in RFC 3339 format.

      Filter expressions can be combined together using logical operators (NOT, AND & OR). The syntax to define filter expression is based on https://google.aip.dev/160.

    • page_size (int) – Optional. The standard page list size.

    • page_token (str) – Optional. A page token received from a previous call.

  • Returns

    The list of schedules that match the filter.

  • Return type

    ListSchedulesResponse.list_schedules
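
For instance, a sketch of a filter that combines a display-name match with a state check (the values and my_monitor are placeholders):

response = my_monitor.list_schedules(
    filter='display_name="weekly-drift-check" AND state="ACTIVE"',
    page_size=10,
)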

property name: [str](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)

Name of this resource.

pause_schedule(schedule_name: str)

Pauses an existing Schedule.

  • Parameters

    schedule_name (str) – Required. The resource name of schedule that needs to be paused. Format: projects/{project}/locations/{location}/schedules/{schedule} or {schedule}

property resource_name: [str](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)

Fully-qualified resource name.

resume_schedule(schedule_name: str)

Resumes an existing Schedule.

  • Parameters

    schedule_name (str) – Required. The resource name of schedule that needs to be resumed. Format: projects/{project}/locations/{location}/schedules/{schedule} or {schedule}

run(target_dataset: vertexai.resources.preview.ml_monitoring.spec.objective.MonitoringInput, display_name: Optional[str] = None, model_monitoring_job_id: Optional[str] = None, sync: Optional[bool] = False, tabular_objective_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.TabularObjective] = None, baseline_dataset: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.MonitoringInput] = None, output_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.output.OutputSpec] = None, notification_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.notification.NotificationSpec] = None, explanation_spec: Optional[google.cloud.aiplatform_v1beta1.types.explanation.ExplanationSpec] = None)

Creates a new ModelMonitoringJob.

  • Parameters

    • target_dataset (objective.MonitoringInput) – Required. The target dataset for analysis.

    • display_name (str) – Optional. The user-defined name of the ModelMonitoringJob. The name can be up to 128 characters long and can comprise any UTF-8 character.

    • model_monitoring_job_id (str) – Optional. The unique ID of the model monitoring job run, which will become the final component of the model monitoring job resource name. The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/. If not specified, it will be generated by Vertex AI.

    • sync (bool) – Whether to execute this method synchronously. If False, this method will be executed in concurrent Future and any downstream object will be immediately returned and synced when the Future has completed. Default is False.

    • tabular_objective_spec (objective.TabularObjective) – Optional. The tabular monitoring objective spec for the model monitoring job.

    • baseline_dataset (objective.MonitoringInput) – Optional. The baseline dataset for monitoring job. If not set, the training dataset in ModelMonitor will be used as baseline dataset.

    • output_spec (output.OutputSpec) – Optional. The monitoring metrics/logs export spec. If not set, will use the default output_spec defined in ModelMonitor.

    • notification_spec (notification.NotificationSpec) – Optional. The notification spec for monitoring result. If not set, will use the default notification_spec defined in ModelMonitor.

    • explanation_spec (explanation.ExplanationSpec) – Optional. The explanation spec for feature attribution monitoring. If not set, will use the default explanation_spec defined in ModelMonitor.

  • Returns

    The model monitoring job that was created.

  • Return type

    ModelMonitoringJob
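
A minimal sketch of an on-demand monitoring run that compares a day of endpoint traffic against the monitor’s training dataset (the endpoint resource name and my_monitor are placeholders):

job = my_monitor.run(
    target_dataset=ml_monitoring.spec.MonitoringInput(
        endpoints=["projects/123/locations/us-central1/endpoints/789"],
        window="1d",
    ),
    display_name="adhoc-drift-run",
    sync=True,  # block until the monitoring job completes
)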

search_alerts(stats_name: Optional[str] = None, objective_type: Optional[str] = None, model_monitoring_job_name: Optional[str] = None, start_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None, end_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None, page_size: Optional[int] = None, page_token: Optional[str] = None)

Search ModelMonitoringAlerts.

  • Parameters

    • stats_name (str) – Optional. The stats name filter for the search; if not set, all stats will be returned. For tabular models, provide the name of the feature to return alerts from.

    • objective_type (str) – Optional. Return alerts from one of the supported monitoring objectives: raw-feature-drift, prediction-output-drift, feature-attribution.

    • model_monitoring_job_name (str) – Optional. The resource name of a particular model monitoring job that the user wants to search metrics result from. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}/modelMonitoringJobs/{model_monitoring_job}

    • start_time (timestamp_pb2.Timestamp) – Optional. Inclusive start of the time interval for which alerts should be returned.

    • end_time (timestamp_pb2.Timestamp) – Optional. Exclusive end of the time interval for which alerts should be returned.

    • page_size (int) – Optional. The standard page list size.

    • page_token (str) – Optional. A page token received from a previous call.

  • Returns

    The model monitoring alerts results.

  • Return type

    AlertsSearchResponse

search_metrics(stats_name: Optional[str] = None, objective_type: Optional[str] = None, model_monitoring_job_name: Optional[str] = None, schedule_name: Optional[str] = None, algorithm: Optional[str] = None, start_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None, end_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None, page_size: Optional[int] = None, page_token: Optional[str] = None)

Search ModelMonitoringStats.

  • Parameters

    • stats_name (str) – Optional. The stats name filter for the search; if not set, all stats will be returned. For tabular models, it’s the feature name.

    • objective_type (str) – Optional. One of the supported monitoring objectives: raw-feature-drift, prediction-output-drift, feature-attribution.

    • model_monitoring_job_name (str) – Optional. The resource name of a particular model monitoring job that the user wants to search metrics result from. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}/modelMonitoringJobs/{model_monitoring_job}

    • schedule_name (str) – Optional. The resource name of a particular model monitoring schedule that the user wants to search metrics result from. Format: projects/{project}/locations/{location}/schedules/{schedule}

    • algorithm (str) – Optional. The algorithm type filter for the search, e.g. jensen_shannon_divergence, l_infinity.

    • start_time (timestamp_pb2.Timestamp) – Optional. Inclusive start of the time interval for which results should be returned.

    • end_time (timestamp_pb2.Timestamp) – Optional. Exclusive end of the time interval for which results should be returned.

    • page_size (int) – Optional. The standard page list size.

    • page_token (str) – Optional. A page token received from a previous call.

  • Returns

    The model monitoring stats results.

  • Return type

    MetricsSearchResponse
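
For example, a sketch of pulling the Jensen-Shannon drift metrics for a single feature from one monitoring job (all names are placeholders):

metrics = my_monitor.search_metrics(
    stats_name="feature1",
    objective_type="raw-feature-drift",
    algorithm="jensen_shannon_divergence",
    model_monitoring_job_name="projects/123/locations/us-central1/modelMonitors/456/modelMonitoringJobs/789",
)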

show_feature_attribution_drift_stats(model_monitoring_job_name: str)

Visualizes the feature attribution drift result from a model monitoring job as a histogram chart and a table.

  • Parameters

    model_monitoring_job_name (str) – Required. The resource name of model monitoring job to show the feature attribution drift stats from. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}/modelMonitoringJobs/{model_monitoring_job} or {model_monitoring_job}

show_feature_drift_stats(model_monitoring_job_name: str)

Visualizes the feature drift result from a model monitoring job as a histogram chart and a table.

  • Parameters

    model_monitoring_job_name (str) – Required. The resource name of model monitoring job to show the drift stats from. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}/modelMonitoringJobs/{model_monitoring_job} or {model_monitoring_job}

show_output_drift_stats(model_monitoring_job_name: str)

Visualizes the prediction output drift result from a model monitoring job as a histogram chart and a table.

  • Parameters

    model_monitoring_job_name (str) – Required. The resource name of model monitoring job to show the drift stats from. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}/modelMonitoringJobs/{model_monitoring_job} or {model_monitoring_job}

to_dict()

Returns the resource proto as a dictionary.

update(display_name: Optional[str] = None, training_dataset: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.MonitoringInput] = None, model_monitoring_schema: Optional[vertexai.resources.preview.ml_monitoring.spec.schema.ModelMonitoringSchema] = None, tabular_objective_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.TabularObjective] = None, output_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.output.OutputSpec] = None, notification_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.notification.NotificationSpec] = None, explanation_spec: Optional[google.cloud.aiplatform_v1beta1.types.explanation.ExplanationSpec] = None)

Updates an existing ModelMonitor.

  • Parameters

    • display_name (str) – Optional. The user-defined name of the ModelMonitor. The name can be up to 128 characters long and can comprise any UTF-8 character.

    • training_dataset (objective.MonitoringInput) – Optional. Training dataset used to train the model. It can serve as a baseline dataset to identify changes in production.

    • model_monitoring_schema (schema.ModelMonitoringSchema) – Optional. The Monitoring Schema specifies the model’s features, prediction outputs and ground truth properties. It is used to extract pertinent data from the dataset and to process features based on their properties. Make sure that the schema aligns with your dataset; if it does not, Vertex AI will be unable to extract data from the dataset.

    • tabular_objective_spec (objective.TabularObjective) – Optional. The default tabular monitoring objective spec for the model monitor. It can be overridden in the ModelMonitoringJob objective spec.

    • output_spec (output.OutputSpec) – Optional. The default monitoring metrics/logs export spec; it can be overridden in the ModelMonitoringJob output spec.

    • notification_spec (notification.NotificationSpec) – Optional. The default notification spec for the monitoring result. It can be overridden in the ModelMonitoringJob notification spec.

    • explanation_spec (explanation.ExplanationSpec) – Optional. The default explanation spec for feature attribution monitoring. It can be overridden in the ModelMonitoringJob explanation spec.

  • Returns

    The updated model monitor.

  • Return type

    ModelMonitor

update_schedule(schedule_name: str, display_name: Optional[str] = None, model_monitoring_job_display_name: Optional[str] = None, cron: Optional[str] = None, baseline_dataset: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.MonitoringInput] = None, target_dataset: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.MonitoringInput] = None, tabular_objective_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.TabularObjective] = None, output_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.output.OutputSpec] = None, notification_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.notification.NotificationSpec] = None, explanation_spec: Optional[google.cloud.aiplatform_v1beta1.types.explanation.ExplanationSpec] = None, end_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None)

Updates an existing Schedule.

  • Parameters

    • schedule_name (str) – Required. The resource name of schedule that needs to be updated. Format: projects/{project}/locations/{location}/schedules/{schedule} or {schedule}

    • display_name (str) – Optional. The user-defined name of the Schedule. The name can be up to 128 characters long and can consist of any UTF-8 characters.

    • model_monitoring_job_display_name (str) – Optional. The user-defined display name of the ModelMonitoringJob that needs to be updated.

    • cron (str) – Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) to launch scheduled runs. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: “CRON_TZ=${IANA_TIME_ZONE}” or “TZ=${IANA_TIME_ZONE}”. The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database. For example, “CRON_TZ=America/New_York 1 * * * *” or “TZ=America/New_York 1 * * * *”.

    • baseline_dataset (objective.MonitoringInput) – Optional. The baseline dataset for monitoring job.

    • target_dataset (objective.MonitoringInput) – Optional. The target dataset for analysis.

    • tabular_objective_spec (objective.TabularObjective) – Optional. The tabular monitoring objective spec.

    • output_spec (output.OutputSpec) – Optional. The monitoring metrics/logs export spec.

    • notification_spec (notification.NotificationSpec) – Optional. The notification spec for monitoring result.

    • explanation_spec (explanation.ExplanationSpec) – Optional. The explanation spec for feature attribution monitoring.

    • end_time (timestamp_pb2.Timestamp) – Optional. Timestamp after which no new runs can be scheduled.

  • Returns

    The updated schedule.

  • Return type

    Schedule

property update_time: [datetime.datetime](https://python.readthedocs.io/en/latest/library/datetime.html#datetime.datetime)

Time this resource was last updated.

wait()

Helper method that blocks until all futures are complete.

class vertexai.resources.preview.ml_monitoring.ModelMonitoringJob(model_monitoring_job_name: str, model_monitor_id: Optional[str] = None, project: Optional[str] = None, location: Optional[str] = None, credentials: Optional[google.auth.credentials.Credentials] = None)

Bases: google.cloud.aiplatform.base.VertexAiStatefulResource

Initializer for ModelMonitoringJob.

Example Usage:

my_monitoring_job = aiplatform.ModelMonitoringJob(
    model_monitoring_job_name='projects/123/locations/us-central1/modelMonitors/my_model_monitor_id/modelMonitoringJobs/my_monitoring_job_id'
)

or

my_monitoring_job = aiplatform.ModelMonitoringJob(
    model_monitoring_job_name='my_monitoring_job_id',
    model_monitor_id='my_model_monitor_id',
)

  • Parameters

    • model_monitoring_job_name (str) – Required. The resource name for the Model Monitoring Job if provided alone, or the model monitoring job id if provided with model_monitor_id.

    • model_monitor_id (str) – Optional. The model monitor ID. Required if model_monitoring_job_name is provided as a job ID rather than a fully-qualified resource name.

    • project (str) – Optional. Project to retrieve the model monitoring job from. If not set, project set in aiplatform.init will be used.

    • location (str) – Optional. Location to retrieve the model monitoring job from. If not set, location set in aiplatform.init will be used.

    • credentials (auth_credentials.Credentials) – Optional. Custom credentials to use to init model monitoring job. Overrides credentials set in aiplatform.init.

Initializes class with project, location, and api_client.

  • Parameters

    • project (str) – Optional. Project of the resource noun.

    • location (str) – Optional. The location of the resource noun.

    • credentials (google.auth.credentials.Credentials) – Optional. Custom credentials to use when interacting with the resource noun.

    • resource_name (str) – A fully-qualified resource name or ID.

client_class()

alias of google.cloud.aiplatform.utils.ModelMonitoringClientWithOverride

classmethod create(model_monitor_name: Optional[str] = None, target_dataset: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.MonitoringInput] = None, display_name: Optional[str] = None, model_monitoring_job_id: Optional[str] = None, project: Optional[str] = None, location: Optional[str] = None, credentials: Optional[google.auth.credentials.Credentials] = None, baseline_dataset: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.MonitoringInput] = None, tabular_objective_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.TabularObjective] = None, output_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.output.OutputSpec] = None, notification_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.notification.NotificationSpec] = None, explanation_spec: Optional[google.cloud.aiplatform_v1beta1.types.explanation.ExplanationSpec] = None, sync: bool = False)

Creates a new ModelMonitoringJob.

  • Parameters

    • model_monitor_name (str) – Required. The parent model monitor resource name. Format: projects/{project}/locations/{location}/modelMonitors/{model_monitor}

    • target_dataset (objective.MonitoringInput) – Required. The target dataset for analysis.

    • display_name (str) – Optional. The user-defined name of the ModelMonitoringJob. The name can be up to 128 characters long and can comprise any UTF-8 character.

    • model_monitoring_job_id (str) – Optional. The unique ID of the model monitoring job run, which will become the final component of the model monitoring job resource name. The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/. If not specified, it will be generated by Vertex AI.

    • project (str) – Optional. Project to create the model monitoring job in. If not set, project set in aiplatform.init will be used.

    • location (str) – Optional. Location to create the model monitoring job in. If not set, location set in aiplatform.init will be used.

    • credentials (auth_credentials.Credentials) – Optional. Custom credentials to use to create model monitoring job. Overrides credentials set in aiplatform.init.

    • baseline_dataset (objective.MonitoringInput) – Optional. The baseline dataset for monitoring job. If not set, the training dataset in ModelMonitor will be used as baseline dataset.

    • tabular_objective_spec (objective.TabularObjective) – Optional. The tabular monitoring objective spec for the model monitoring job.

    • output_spec (output.OutputSpec) – Optional. The monitoring metrics/logs export spec. If not set, will use the default output_spec defined in ModelMonitor.

    • notification_spec (notification.NotificationSpec) – Optional. The notification spec for monitoring result. If not set, will use the default notification_spec defined in ModelMonitor.

    • explanation_spec (explanation.ExplanationSpec) – Optional. The explanation spec for feature attribution monitoring. If not set, will use the default explanation_spec defined in ModelMonitor.

    • sync (bool) – Optional. Whether to execute this method synchronously. If False, this method will be executed in concurrent Future and any downstream object will be immediately returned and synced when the Future has completed. Default is False.

  • Returns

    The model monitoring job that was created.

  • Return type

    ModelMonitoringJob

property create_time: [datetime.datetime](https://python.readthedocs.io/en/latest/library/datetime.html#datetime.datetime)

Time this resource was created.

delete()

Deletes a Model Monitoring Job.

property display_name: [str](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)

Display name of this resource.

done()

Method indicating whether a job has completed.

  • Returns

    True if the job has completed.

property encryption_spec: Optional[google.cloud.aiplatform_v1.types.encryption_spec.EncryptionSpec]

Customer-managed encryption key options for this Vertex AI resource.

If this is set, then all resources created by this Vertex AI resource will be encrypted with the provided encryption key.

property gca_resource: [proto.message.Message](https://proto-plus-python.readthedocs.io/en/latest/reference/message.html#proto.message.Message)

The underlying resource proto representation.

property labels: Dict[str, str]

User-defined labels containing metadata about this resource.

Read more about labels at https://goo.gl/xmQnxf

classmethod list(filter: Optional[str] = None, order_by: Optional[str] = None, project: Optional[str] = None, location: Optional[str] = None, credentials: Optional[google.auth.credentials.Credentials] = None, parent: Optional[str] = None)

List all instances of this Vertex AI Resource.

Example Usage:

aiplatform.BatchPredictionJob.list(
    filter='state="JOB_STATE_SUCCEEDED" AND display_name="my_job"',
)

aiplatform.Model.list(order_by="create_time desc, display_name")

  • Parameters

    • filter (str) – Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.

    • order_by (str) – Optional. A comma-separated list of fields to order by, sorted in ascending order. Use “desc” after a field name for descending. Supported fields: display_name, create_time, update_time

    • project (str) – Optional. Project to retrieve list from. If not set, project set in aiplatform.init will be used.

    • location (str) – Optional. Location to retrieve list from. If not set, location set in aiplatform.init will be used.

    • credentials (auth_credentials.Credentials) – Optional. Custom credentials to use to retrieve list. Overrides credentials set in aiplatform.init.

    • parent (str) – Optional. The parent resource name if any to retrieve list from.

  • Returns

    List[VertexAiResourceNoun] - A list of SDK resource objects

property name: [str](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)

Name of this resource.

property resource_name: [str](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)

Fully-qualified resource name.

property state: google.cloud.aiplatform_v1beta1.types.job_state.JobState

Fetch Job again and return the current JobState.

  • Returns

    Enum that describes the state of a Model Monitoring Job.

  • Return type

    state (job_state.JobState)

to_dict()

Returns the resource proto as a dictionary.

property update_time: [datetime.datetime](https://python.readthedocs.io/en/latest/library/datetime.html#datetime.datetime)

Time this resource was last updated.

wait()

Helper method that blocks until all futures are complete.

class vertexai.resources.preview.ml_monitoring.spec.DataDriftSpec(features: Optional[List[str]] = None, categorical_metric_type: Optional[str] = 'l_infinity', numeric_metric_type: Optional[str] = 'jensen_shannon_divergence', default_categorical_alert_threshold: Optional[float] = None, default_numeric_alert_threshold: Optional[float] = None, feature_alert_thresholds: Optional[Dict[str, float]] = None)

Bases: object

Data drift monitoring spec.

Data drift measures the distribution distance between the current dataset and a baseline dataset. A typical use case is to detect data drift between the recent production serving dataset and the training dataset, or to compare the recent production dataset with a dataset from a previous period.

Example

feature_drift_spec = DataDriftSpec(
    features=["feature1"],
    categorical_metric_type="l_infinity",
    numeric_metric_type="jensen_shannon_divergence",
    default_categorical_alert_threshold=0.01,
    default_numeric_alert_threshold=0.02,
    feature_alert_thresholds={"feature1": 0.02, "feature2": 0.01},
)

features()

Optional. Feature names / prediction output names to monitor. These should be a subset of the input feature names or prediction output names specified in the monitoring schema. If not specified, all features / prediction outputs outlined in the monitoring schema will be used.

  • Type

    List[str]

categorical_metric_type()

Optional. Supported metric types: l_infinity, jensen_shannon_divergence

numeric_metric_type()

Optional. Supported metric type: jensen_shannon_divergence

default_categorical_alert_threshold()

Optional. Default alert threshold for all the categorical features.

default_numeric_alert_threshold()

Optional. Default alert threshold for all the numeric features.

feature_alert_thresholds()

Optional. Per-feature alert thresholds; these override the default alert threshold.

class vertexai.resources.preview.ml_monitoring.spec.FeatureAttributionSpec(features: Optional[List[str]] = None, default_alert_threshold: Optional[float] = None, feature_alert_thresholds: Optional[Dict[str, float]] = None, batch_dedicated_resources: Optional[google.cloud.aiplatform_v1beta1.types.machine_resources.BatchDedicatedResources] = None)

Bases: object

Feature attribution spec.

Example

feature_attribution_spec = FeatureAttributionSpec(
    features=["feature1"],
    default_alert_threshold=0.01,
    feature_alert_thresholds={"feature1": 0.02, "feature2": 0.01},
    batch_dedicated_resources=BatchDedicatedResources(
        starting_replica_count=1,
        max_replica_count=2,
        machine_spec=my_machine_spec,
    ),
)

features()

Optional. Input feature names to monitor. These should be a subset of the input feature names specified in the monitoring schema. If not specified, all features outlined in the monitoring schema will be used.

  • Type

    List[str]

default_alert_threshold()

Optional. Default alert threshold for all the features.

feature_alert_thresholds()

Optional. Per-feature alert thresholds; these override the default alert threshold.

batch_dedicated_resources()

Optional. The configuration of resources used by Model Monitoring during batch explanation for non-AutoML models. If not set, the n1-standard-2 machine type will be used by default.

  • Type

    machine_resources.BatchDedicatedResources

class vertexai.resources.preview.ml_monitoring.spec.FieldSchema(name: str, data_type: str, repeated: Optional[bool] = False)

Bases: object

Field Schema.

The class identifies the data type of a single feature; instances combine to form the schema for the different fields in ModelMonitoringSchema.

name()

Required. Field name.

data_type()

Required. Supported data types are: float, integer, boolean, string, categorical.

repeated()

Optional. Describes whether the schema field is an array of the given data type.

class vertexai.resources.preview.ml_monitoring.spec.ModelMonitoringSchema(feature_fields: MutableSequence[vertexai.resources.preview.ml_monitoring.spec.schema.FieldSchema], ground_truth_fields: Optional[MutableSequence[vertexai.resources.preview.ml_monitoring.spec.schema.FieldSchema]] = None, prediction_fields: Optional[MutableSequence[vertexai.resources.preview.ml_monitoring.spec.schema.FieldSchema]] = None)

Bases: object

Initializer for ModelMonitoringSchema.

  • Parameters

    • feature_fields (MutableSequence[FieldSchema]) – Required. Feature names of the model. Vertex AI will try to match the features from your dataset as follows:

      • For ‘csv’ files, the header names are required, and we will extract the corresponding feature values when the header names align with the feature names.

      • For ‘jsonl’ files, we will extract the corresponding feature values if the key names match the feature names. Note: Nested features are not supported, so please ensure your features are flattened. Ensure the feature values are scalar or an array of scalars.

      • For ‘bigquery’ datasets, we will extract the corresponding feature values if the column names match the feature names. Note: The column type can be a scalar or an array of scalars. STRUCT or JSON types are not supported. You may use SQL queries to select or aggregate the relevant features from your original table. However, ensure that the ‘schema’ of the query results meets our requirements.

      • For the Vertex AI Endpoint Request Response Logging table or Vertex AI Batch Prediction Job results: if the prediction instance format is an array, ensure that the sequence in feature_fields matches the order of features in the prediction instance. We will match the feature with the array in the order specified in feature_fields.

    • prediction_fields (MutableSequence[FieldSchema]) – Optional. Prediction output names of the model. The requirements are the same as for feature_fields. For AutoML Tables, the prediction output name presented in the schema will be predicted_{target_column}, where target_column is the one you specified when you trained the model. For prediction output drift analysis:

      • For AutoML Classification, the distribution of the argmax label will be analyzed.

      • For AutoML Regression, the distribution of the value will be analyzed.

    • ground_truth_fields (MutableSequence[FieldSchema]) – Optional. Target / ground truth names of the model.

to_json(output_dir: Optional[str] = None)

Transforms the ModelMonitoringSchema to JSON format.

  • Parameters

    output_dir (str) – Optional. The output directory that the transformed JSON file will be written to.
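
Putting the field requirements above together, a minimal schema for a tabular model might be sketched as follows (the field names are placeholders, and the import path shown is assumed from the class paths in this reference):

from vertexai.resources.preview.ml_monitoring import spec

my_schema = spec.ModelMonitoringSchema(
    feature_fields=[
        spec.FieldSchema(name="age", data_type="float"),
        spec.FieldSchema(name="country", data_type="categorical"),
    ],
    prediction_fields=[
        spec.FieldSchema(name="predicted_churn", data_type="categorical"),
    ],
)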

class vertexai.resources.preview.ml_monitoring.spec.MonitoringInput(vertex_dataset: Optional[str] = None, gcs_uri: Optional[str] = None, data_format: Optional[str] = None, table_uri: Optional[str] = None, query: Optional[str] = None, timestamp_field: Optional[str] = None, batch_prediction_job: Optional[str] = None, endpoints: Optional[List[str]] = None, start_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None, end_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None, offset: Optional[str] = None, window: Optional[str] = None)

Bases: object

Model monitoring data input spec.

vertex_dataset()

Optional. Resource name of the Vertex AI managed dataset. Format: projects/{project}/locations/{location}/datasets/{dataset}. At least one dataset source should be provided; if one of the fields is set, there is no need to set the other sources (vertex_dataset, gcs_uri, table_uri, query, batch_prediction_job, endpoints).

gcs_uri()

Optional. Google Cloud Storage URI to the input file(s). May contain wildcards.

data_format()

Optional. Data format of the Google Cloud Storage file(s). Should be provided if gcs_uri is set. Supported formats: “csv”, “jsonl”, “tf-record”.

table_uri()

Optional. BigQuery URI to a table, up to 2000 characters long. All the columns in the table will be selected. Accepted forms:

  • BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.

  • Type

    str

query()

Optional. Standard SQL for BigQuery to be used instead of the table_uri.

timestamp_field()

Optional. The timestamp field in the dataset. The timestamp_field must be specified if you’d like to use start_time, end_time, offset, or window. If you use query to specify the dataset, make sure the timestamp_field is among the selected fields.

batch_prediction_job()

Optional. Vertex AI Batch Prediction Job resource name. Format: projects/{project}/locations/{location}/batchPredictionJobs/{batch_prediction_job}

endpoints()

Optional. List of Vertex AI Endpoint resource names. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

  • Type

    List[str]

start_time()

Optional. Inclusive start of the time interval for which results should be returned. Should be set together with end_time.

  • Type

    timestamp_pb2.Timestamp

end_time()

Optional. Exclusive end of the time interval for which results should be returned. Should be set together with start_time.

  • Type

    timestamp_pb2.Timestamp

offset()

Optional. Offset is the time difference from the cut-off time. For scheduled jobs, the cut-off time is the scheduled time. For non-scheduled jobs, it’s the time when the job was created. Currently we support the following formats: ‘w|W’: week, ‘d|D’: day, ‘h|H’: hour. E.g. ‘1h’ stands for 1 hour, ‘2d’ stands for 2 days.

window()

Optional. Window refers to the scope of data selected for analysis; it allows you to specify the quantity of data you wish to examine. It is the data time window prior to the cut-off time, or prior to the cut-off time minus the offset. Currently we support the following formats: ‘w|W’: week, ‘d|D’: day, ‘h|H’: hour. E.g. ‘1h’ stands for 1 hour, ‘2d’ stands for 2 days.
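
As a sketch of the offset/window semantics above: a scheduled job with offset=“1h” and window=“1d” analyzes the day of data ending one hour before each scheduled run. A BigQuery-backed input might look like the following (the table and field names are placeholders):

target = spec.MonitoringInput(
    table_uri="bq://my-project.serving_logs.predictions",
    timestamp_field="request_time",  # required when using offset/window
    offset="1h",
    window="1d",
)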

class vertexai.resources.preview.ml_monitoring.spec.NotificationSpec(user_emails: Optional[List[str]] = None, notification_channels: Optional[List[str]] = None, enable_cloud_logging: Optional[bool] = False)

Bases: object

Initializer for NotificationSpec.

  • Parameters

    • user_emails (List[str]) – Optional. The email addresses to send the alert to.

    • notification_channels (List[str]) – Optional. The notification channels to send the alert to. Format: projects/{project}/notificationChannels/{channel}

    • enable_cloud_logging (bool) – Optional. Whether to dump the anomalies to Cloud Logging. The anomalies will be put into the JSON payload, which can be further sinked to Pub/Sub or any other services supported by Cloud Logging.

class vertexai.resources.preview.ml_monitoring.spec.ObjectiveSpec(baseline_dataset: vertexai.resources.preview.ml_monitoring.spec.objective.MonitoringInput, target_dataset: vertexai.resources.preview.ml_monitoring.spec.objective.MonitoringInput, tabular_objective: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.TabularObjective] = None, explanation_spec: Optional[google.cloud.aiplatform_v1beta1.types.explanation.ExplanationSpec] = None)

Bases: object

Initializer for ObjectiveSpec.

  • Parameters

    • baseline_dataset (MonitoringInput) – Required. Baseline datasets that are used by all the monitoring objectives. It could be the training dataset or production serving dataset from a previous period.

    • target_dataset (MonitoringInput) – Required. Target dataset for monitoring analysis, it’s used by all the monitoring objectives.

    • tabular_objective (TabularObjective) – Optional. The tabular monitoring objective.

    • explanation_spec (explanation.ExplanationSpec) – Optional. The explanation spec. This spec is required when the objectives spec includes feature attribution objectives.

class vertexai.resources.preview.ml_monitoring.spec.OutputSpec(gcs_base_dir: str)

Bases: object

Initializer for OutputSpec.

  • Parameters

    gcs_base_dir (str) – Required. Google Cloud Storage base folder path for metrics, error logs, etc.

class vertexai.resources.preview.ml_monitoring.spec.TabularObjective(feature_drift_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.DataDriftSpec] = None, prediction_output_drift_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.DataDriftSpec] = None, feature_attribution_spec: Optional[vertexai.resources.preview.ml_monitoring.spec.objective.FeatureAttributionSpec] = None)

Bases: object

Initializer for TabularObjective.

feature_drift_spec()

Optional. Input feature distribution drift monitoring spec.

  • Type

    DataDriftSpec

prediction_output_drift_spec()

Optional. Prediction output distribution drift monitoring spec.

  • Type

    DataDriftSpec

feature_attribution_spec()

Optional. Feature attribution monitoring spec.

  • Type

    FeatureAttributionSpec
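
Tying the spec classes together, a hedged sketch of a complete tabular objective with drift thresholds and feature attribution monitoring (the threshold values are placeholders):

objective = spec.TabularObjective(
    feature_drift_spec=spec.DataDriftSpec(
        default_categorical_alert_threshold=0.01,
        default_numeric_alert_threshold=0.02,
    ),
    prediction_output_drift_spec=spec.DataDriftSpec(
        default_categorical_alert_threshold=0.01,
    ),
    feature_attribution_spec=spec.FeatureAttributionSpec(
        default_alert_threshold=0.05,
    ),
)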