Call Vertex AI models by using the OpenAI library

With the Chat Completions API, you can use the OpenAI libraries for Python and REST to send requests to Vertex AI models. If you're already using the OpenAI libraries, you can use this API to switch between calling OpenAI models and Vertex AI hosted models to compare output, cost, and scalability, without changing your existing code. If you aren't already using the OpenAI libraries, we recommend that you call the Gemini API directly.

Supported models

The Chat Completions API supports both Gemini models and select self-deployed models from Model Garden.

Gemini models

The following table shows the supported Gemini models:

Model                   Versions
Gemini 1.5 Flash        google/gemini-1.5-flash
Gemini 1.5 Pro          google/gemini-1.5-pro
Gemini 1.0 Pro Vision   google/gemini-1.0-pro-vision
                        google/gemini-1.0-pro-vision-001
Gemini 1.0 Pro          google/gemini-1.0-pro-002
                        google/gemini-1.0-pro-001
                        google/gemini-1.0-pro

Self-deployed models from Model Garden

The Hugging Face Text Generation Interface (HF TGI) and Vertex AI Model Garden prebuilt vLLM containers support the Chat Completions API. However, not every model deployed to these containers supports the Chat Completions API. The following table includes the most popular supported models by container:

HF TGI

vLLM

Authentication

To use the OpenAI Python library, install the OpenAI SDK:

pip install openai

To authenticate with the Chat Completions API, you can either modify your client setup or change your environment configuration to use Google authentication and a Vertex AI endpoint. Choose whichever method is easier, and follow the corresponding setup steps depending on whether you want to call Gemini models or self-deployed Model Garden models.

Certain models in Model Garden and supported Hugging Face models need to be deployed to a Vertex AI endpoint before they can serve requests. When you call these self-deployed models from the Chat Completions API, you need to specify the endpoint ID. To list your existing Vertex AI endpoints, use the gcloud ai endpoints list command.
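
If you prefer to stay in Python, you can list endpoints with the Vertex AI SDK instead of the gcloud CLI. The following is a minimal sketch, assuming the google-cloud-aiplatform package is installed; PROJECT_ID and LOCATION are placeholders for your own values:

from google.cloud import aiplatform

# PROJECT_ID and LOCATION are placeholders; replace them with your values.
aiplatform.init(project="PROJECT_ID", location="LOCATION")

# Each endpoint's name attribute is the numeric endpoint ID used later in this guide.
for endpoint in aiplatform.Endpoint.list():
    print(endpoint.display_name, endpoint.name)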

Client setup

To programmatically get Google credentials in Python, you can use the google-auth Python SDK:

pip install google-auth
pip install requests

Update the OpenAI SDK to point at the Vertex AI Chat Completions endpoint:

import openai

import google.auth
import google.auth.transport.requests

# Programmatically get an access token
creds, project = google.auth.default()
auth_req = google.auth.transport.requests.Request()
creds.refresh(auth_req)
# Note: the credential lives for 1 hour by default (https://cloud.google.com/docs/authentication/token-types#at-lifetime); after expiration, it must be refreshed.

# Pass the Vertex endpoint and authentication to the OpenAI SDK
PROJECT_ID = 'PROJECT_ID'
LOCATION = 'LOCATION'

##############################
# Choose one of the following:
##############################

# If you are calling a Gemini model, set the MODEL_ID variable and set
# your client's base URL to use openapi.
MODEL_ID = 'MODEL_ID'
client = openai.OpenAI(
    base_url = f'https://{LOCATION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/openapi',
    api_key = creds.token)

# If you are calling a self-deployed model from Model Garden, set the
# ENDPOINT variable to your endpoint ID and set your client's base URL to
# use your endpoint.
ENDPOINT = 'ENDPOINT_ID'
client = openai.OpenAI(
    base_url = f'https://{LOCATION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/{ENDPOINT}',
    api_key = creds.token)

By default, access tokens last for 1 hour. You can extend the life of your access token, or periodically refresh your token and update the openai.api_key variable.

Environment variables

Install the Google Cloud CLI. The OpenAI library can read the OPENAI_API_KEY and OPENAI_BASE_URL environment variables to change the authentication and endpoint in its default client. Set the following variables:

$ export PROJECT_ID=PROJECT_ID
$ export LOCATION=LOCATION
$ export OPENAI_API_KEY="$(gcloud auth application-default print-access-token)"

To call a Gemini model, set the MODEL_ID variable and use the openapi endpoint:

$ export MODEL_ID=MODEL_ID
$ export OPENAI_BASE_URL="https://${LOCATION}-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/endpoints/openapi"

To call a self-deployed model from Model Garden, set the ENDPOINT variable and use it in the URL instead:

$ export ENDPOINT=ENDPOINT_ID
$ export OPENAI_BASE_URL="https://${LOCATION}-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/endpoints/${ENDPOINT}"

Next, initialize the client:

client = openai.OpenAI()
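
Continuing from this client, a request looks the same as in the client-setup flow. The following is a minimal sketch, assuming the MODEL_ID variable was exported for a Gemini model as shown above:

import os

response = client.chat.completions.create(
    # Gemini models are addressed as "google/<MODEL_ID>" on the openapi endpoint.
    model=f"google/{os.environ['MODEL_ID']}",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.choices[0].message.content)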

The Gemini Chat Completions API uses OAuth to authenticate with short-lived access tokens. By default, access tokens last for 1 hour. You can extend the life of your access token, or periodically refresh your token and update the OPENAI_API_KEY environment variable.

Call Gemini with the Chat Completions API

The following sample shows you how to send non-streaming requests:

curl

  curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
  https://${LOCATION}-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/endpoints/openapi/chat/completions \
  -d '{
    "model": "google/${MODEL_ID}",
    "messages": [{
      "role": "user",
      "content": "Write a story about a magic backpack."
    }]
  }'
  

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.

import vertexai
import openai

from google.auth import default, transport

# TODO(developer): Update and un-comment below line
# PROJECT_ID = "your-project-id"
location = "us-central1"

vertexai.init(project=PROJECT_ID, location=location)

# Programmatically get an access token
credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
auth_request = transport.requests.Request()
credentials.refresh(auth_request)

# OpenAI Client
client = openai.OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT_ID}/locations/{location}/endpoints/openapi",
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model="google/gemini-1.5-flash-002",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)

print(response.choices[0].message.content)
# Example response:
# The sky is blue due to a phenomenon called **Rayleigh scattering**.
# Sunlight is made up of all the colors of the rainbow.
# As sunlight enters the Earth's atmosphere ...

The following sample shows you how to send streaming requests to a Gemini model by using the Chat Completions API:

curl

  curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
  https://${LOCATION}-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/${LOCATION}/endpoints/openapi/chat/completions \
  -d '{
    "model": "google/${MODEL_ID}",
    "stream": true,
    "messages": [{
      "role": "user",
      "content": "Write a story about a magic backpack."
    }]
  }'
  

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.

import vertexai
import openai

from google.auth import default, transport

# TODO(developer): Update and un-comment below line
# PROJECT_ID = "your-project-id"
location = "us-central1"

vertexai.init(project=PROJECT_ID, location=location)

# Programmatically get an access token
credentials, _ = default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
auth_request = transport.requests.Request()
credentials.refresh(auth_request)

# OpenAI Client
client = openai.OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT_ID}/locations/{location}/endpoints/openapi",
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model="google/gemini-1.5-flash-002",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content)
# Example response:
# The sky is blue due to a phenomenon called **Rayleigh scattering**. Sunlight is
# made up of all the colors of the rainbow. When sunlight enters the Earth 's atmosphere,
# it collides with tiny air molecules (mostly nitrogen and oxygen). ...

Call a self-deployed model with the Chat Completions API

The following sample shows you how to send non-streaming requests:

  curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
  https://us-central1-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/us-central1/endpoints/${ENDPOINT}/chat/completions \
  -d '{
    "messages": [{
      "role": "user",
      "content": "Write a story about a magic backpack."
    }]
  }'

The following sample shows you how to send streaming requests to a self-deployed model by using the Chat Completions API:

  curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
  https://us-central1-aiplatform.googleapis.com/v1beta1/projects/${PROJECT_ID}/locations/us-central1/endpoints/${ENDPOINT}/chat/completions \
  -d '{
    "stream": true,
    "messages": [{
      "role": "user",
      "content": "Write a story about a magic backpack."
    }]
  }'
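
These requests can also be sent from Python by pointing the OpenAI client at your endpoint, as in the client-setup section. The following is a minimal sketch; PROJECT_ID and ENDPOINT_ID are placeholders, and because the OpenAI SDK requires a model argument (the curl examples omit it), the empty string below is an assumption — pass whatever name your serving container expects:

import openai

import google.auth
import google.auth.transport.requests

# Programmatically get an access token
creds, _ = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
creds.refresh(google.auth.transport.requests.Request())

PROJECT_ID = "PROJECT_ID"
LOCATION = "us-central1"
ENDPOINT = "ENDPOINT_ID"

client = openai.OpenAI(
    base_url=f"https://{LOCATION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/{ENDPOINT}",
    api_key=creds.token,
)

response = client.chat.completions.create(
    model="",  # Assumption: the deployed model is implied by the endpoint.
    messages=[{"role": "user", "content": "Write a story about a magic backpack."}],
)
print(response.choices[0].message.content)

# Passing stream=True yields the streaming variant shown in the second curl example.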

Supported parameters

For Google models, the Chat Completions API supports the following OpenAI parameters. For a description of each parameter, see OpenAI's documentation on creating chat completions. Parameter support for third-party models varies by model. To see which parameters are supported, consult the model's documentation.

messages
  • System message
  • User message: The text and image_url types are supported. The image_url type supports images stored as a Cloud Storage URI or as a base64 encoding in the form "data:<MIME-TYPE>;base64,<BASE64-ENCODED-BYTES>". To learn how to create a Cloud Storage bucket and upload a file to it, see Discover object storage. The detail option is not supported.
  • Assistant message
  • Tool message
  • Function message: This field is deprecated, but supported for backwards compatibility.
model
max_tokens
n
frequency_penalty
presence_penalty
response_format
  • json_object: Interpreted as passing "application/json" to the Gemini API.
  • text: Interpreted as passing "text/plain" to the Gemini API.
  • Any other MIME type is passed as is to the model, such as passing "application/json" directly.
stop
stream
temperature
top_p
tools (see the sketch after this list for an example that uses tools and tool_choice)
  • type
  • function
    • name
    • description
    • parameters: Specify parameters by using the OpenAPI specification. This differs from the OpenAI parameters field, which is described as a JSON Schema object. To learn about keyword differences between OpenAPI and JSON Schema, see the OpenAPI guide.
tool_choice
  • none
  • auto
  • required: Corresponds to the ANY mode in FunctionCallingConfig.
function_call: This field is deprecated, but supported for backwards compatibility.
functions: This field is deprecated, but supported for backwards compatibility.

If you pass any unsupported parameter, it is ignored.
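
As a concrete illustration of the tools and tool_choice parameters, the following is a minimal sketch that reuses the Gemini client from the examples above and declares a hypothetical get_weather function with an OpenAPI-style schema:

# Hypothetical tool declaration; the schema follows the OpenAPI style described above.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="google/gemini-1.5-flash-002",
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)

# If the model decided to call the function, the call appears on the message.
print(response.choices[0].message.tool_calls)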

Refresh your credentials

The following example shows how to refresh your credentials automatically as needed:

Python

from typing import Any

import google.auth
import google.auth.transport.requests
import openai


class OpenAICredentialsRefresher:
    def __init__(self, **kwargs: Any) -> None:
        # Set a dummy key here
        self.client = openai.OpenAI(**kwargs, api_key="DUMMY")
        self.creds, self.project = google.auth.default(
            scopes=["https://www.googleapis.com/auth/cloud-platform"]
        )

    def __getattr__(self, name: str) -> Any:
        if not self.creds.valid:
            auth_req = google.auth.transport.requests.Request()
            self.creds.refresh(auth_req)

            if not self.creds.valid:
                raise RuntimeError("Unable to refresh auth")

            self.client.api_key = self.creds.token
        return getattr(self.client, name)


# TODO(developer): Update and un-comment below line
# PROJECT_ID = "your-project-id"
location = "us-central1"

client = OpenAICredentialsRefresher(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT_ID}/locations/{location}/endpoints/openapi",
)

response = client.chat.completions.create(
    model="google/gemini-1.5-flash-002",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)

print(response.choices[0].message.content)
# Example response:
# The sky is blue due to a phenomenon called **Rayleigh scattering**.
# Sunlight is made up of all the colors of the rainbow.
# When sunlight enters the Earth's atmosphere, it collides with ...

What's next