Vertex AI V1 API - Class Google::Cloud::AIPlatform::V1::LlmUtilityService::Rest::Client (v0.46.0)

Reference documentation and code samples for the Vertex AI V1 API class Google::Cloud::AIPlatform::V1::LlmUtilityService::Rest::Client.

REST client for the LlmUtilityService service.

Service for LLM related utility functions.

Inherits

  • Object

Methods

.configure

def self.configure() { |config| ... } -> Client::Configuration

Configure the LlmUtilityService Client class.

See Configuration for a description of the configuration fields.

Yields
  • (config) — Configure the Client.
Yield Parameter
Example
# Modify the configuration for all LlmUtilityService clients
::Google::Cloud::AIPlatform::V1::LlmUtilityService::Rest::Client.configure do |config|
  config.timeout = 10.0
end

#compute_tokens

def compute_tokens(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::ComputeTokensResponse
def compute_tokens(endpoint: nil, instances: nil, model: nil, contents: nil) -> ::Google::Cloud::AIPlatform::V1::ComputeTokensResponse

Return a list of tokens based on the input text.

Overloads
def compute_tokens(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::ComputeTokensResponse
Pass arguments to compute_tokens via a request object, either of type ComputeTokensRequest or an equivalent Hash.
Parameters
  • request (::Google::Cloud::AIPlatform::V1::ComputeTokensRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
  • options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def compute_tokens(endpoint: nil, instances: nil, model: nil, contents: nil) -> ::Google::Cloud::AIPlatform::V1::ComputeTokensResponse
Pass arguments to compute_tokens via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
Parameters
  • endpoint (::String) — Required. The name of the Endpoint requested to get lists of tokens and token ids.
  • instances (::Array<::Google::Protobuf::Value, ::Hash>) — Optional. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.
  • model (::String) — Optional. The name of the publisher model requested to serve the prediction. Format: projects/{project}/locations/{location}/publishers/*/models/*
  • contents (::Array<::Google::Cloud::AIPlatform::V1::Content, ::Hash>) — Optional. Input content.
Yields
  • (result, operation) — Access the result along with the TransportOperation object
Yield Parameters
Raises
  • (::Google::Cloud::Error) — if the REST call is aborted.
Example

Basic example

require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::LlmUtilityService::Rest::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::AIPlatform::V1::ComputeTokensRequest.new

# Call the compute_tokens method.
result = client.compute_tokens request

# The returned object is of type Google::Cloud::AIPlatform::V1::ComputeTokensResponse.
p result
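The same call can also be made with keyword arguments instead of a request object. A minimal sketch of the keyword form; the endpoint and model resource names below are placeholders, and running it requires valid Google Cloud credentials:

```ruby
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::LlmUtilityService::Rest::Client.new

# Pass the call parameters as keyword arguments. Substitute your own
# project, location, endpoint, and publisher model names.
result = client.compute_tokens(
  endpoint: "projects/my-project/locations/us-central1/endpoints/my-endpoint",
  model: "projects/my-project/locations/us-central1/publishers/google/models/my-model",
  contents: [
    { role: "user", parts: [{ text: "Hello, world!" }] }
  ]
)

# The response contains TokensInfo entries pairing tokens with token ids.
result.tokens_info.each { |info| p info.token_ids }
```

Note that Content values can be passed either as protobuf message objects or as equivalent plain Hashes, as shown here.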

#configure

def configure() { |config| ... } -> Client::Configuration

Configure the LlmUtilityService Client instance.

The configuration is set to the derived mode, meaning that values can be changed, but structural changes (adding new fields, etc.) are not allowed. Structural changes should be made on Client.configure.

See Configuration for a description of the configuration fields.

Yields
  • (config) — Configure the Client.
Yield Parameter
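Unlike the class-level Client.configure, this method adjusts a single client instance after it has been created. A minimal sketch, assuming the gem is installed and default credentials are available:

```ruby
require "google/cloud/ai_platform/v1"

# Create a client with the default configuration.
client = Google::Cloud::AIPlatform::V1::LlmUtilityService::Rest::Client.new

# Adjust settings on this instance only; other clients are unaffected,
# and structural changes are not permitted in this derived mode.
client.configure do |config|
  config.timeout = 30.0
end
```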

#count_tokens

def count_tokens(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::CountTokensResponse
def count_tokens(endpoint: nil, model: nil, instances: nil, contents: nil, system_instruction: nil, tools: nil) -> ::Google::Cloud::AIPlatform::V1::CountTokensResponse

Perform token counting.

Overloads
def count_tokens(request, options = nil) -> ::Google::Cloud::AIPlatform::V1::CountTokensResponse
Pass arguments to count_tokens via a request object, either of type CountTokensRequest or an equivalent Hash.
Parameters
  • request (::Google::Cloud::AIPlatform::V1::CountTokensRequest, ::Hash) — A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.
  • options (::Gapic::CallOptions, ::Hash) — Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.
def count_tokens(endpoint: nil, model: nil, instances: nil, contents: nil, system_instruction: nil, tools: nil) -> ::Google::Cloud::AIPlatform::V1::CountTokensResponse
Pass arguments to count_tokens via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).
Parameters
  • endpoint (::String) — Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
  • model (::String) — Optional. The name of the publisher model requested to serve the prediction. Format: projects/{project}/locations/{location}/publishers/*/models/*
  • instances (::Array<::Google::Protobuf::Value, ::Hash>) — Optional. The instances that are the input to the token counting call. The schema is identical to the prediction schema of the underlying model.
  • contents (::Array<::Google::Cloud::AIPlatform::V1::Content, ::Hash>) — Optional. Input content.
  • system_instruction (::Google::Cloud::AIPlatform::V1::Content, ::Hash) — Optional. The user-provided system instructions for the model. Note: only text should be used in parts, and content in each part will be in a separate paragraph.
  • tools (::Array<::Google::Cloud::AIPlatform::V1::Tool, ::Hash>) — Optional. A list of Tools the model may use to generate the next response.

    A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model.

Yields
  • (result, operation) — Access the result along with the TransportOperation object
Yield Parameters
Raises
  • (::Google::Cloud::Error) — if the REST call is aborted.
Example

Basic example

require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::LlmUtilityService::Rest::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::AIPlatform::V1::CountTokensRequest.new

# Call the count_tokens method.
result = client.count_tokens request

# The returned object is of type Google::Cloud::AIPlatform::V1::CountTokensResponse.
p result
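The keyword-argument overload can be sketched as follows; the endpoint and model resource names are placeholders, and the call requires valid Google Cloud credentials:

```ruby
require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::LlmUtilityService::Rest::Client.new

# Pass the call parameters as keyword arguments. Substitute your own
# project, location, endpoint, and publisher model names.
result = client.count_tokens(
  endpoint: "projects/my-project/locations/us-central1/endpoints/my-endpoint",
  model: "projects/my-project/locations/us-central1/publishers/google/models/my-model",
  contents: [
    { role: "user", parts: [{ text: "How many tokens is this sentence?" }] }
  ],
  system_instruction: { role: "system", parts: [{ text: "Answer briefly." }] }
)

# CountTokensResponse reports the total token count and the billable
# character count for the input.
p result.total_tokens
p result.total_billable_characters
```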

#iam_policy_client

def iam_policy_client() -> Google::Iam::V1::IAMPolicy::Rest::Client

Get the associated client for the IAMPolicy mix-in.

Returns
  • (Google::Iam::V1::IAMPolicy::Rest::Client)

#initialize

def initialize() { |config| ... } -> Client

Create a new LlmUtilityService REST client object.

Yields
  • (config) — Configure the LlmUtilityService client.
Yield Parameter
Returns
  • (Client) — a new instance of Client
Example
# Create a client using the default configuration
client = ::Google::Cloud::AIPlatform::V1::LlmUtilityService::Rest::Client.new

# Create a client using a custom configuration
client = ::Google::Cloud::AIPlatform::V1::LlmUtilityService::Rest::Client.new do |config|
  config.timeout = 10.0
end

#location_client

def location_client() -> Google::Cloud::Location::Locations::Rest::Client

Get the associated client for the Locations mix-in.

Returns
  • (Google::Cloud::Location::Locations::Rest::Client)

#universe_domain

def universe_domain() -> String

The effective universe domain.

Returns
  • (String)