The Chat Completions API works as an OpenAI-compatible endpoint, designed to make it easier to interface with Gemini on Vertex AI by using the OpenAI libraries for Python and REST. If you're already using the OpenAI libraries, you can use this API as a low-cost way to switch between calling OpenAI models and Vertex AI-hosted models to compare output, cost, and scalability without changing your existing code. This helps ensure compatibility across providers and consistency with community standards. If you aren't already using the OpenAI libraries, we recommend that you use the Google Gen AI SDK.
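As a sketch of what such a request looks like on the wire, the following assembles a Chat Completions payload by hand using only the standard library. The endpoint path, the project and location placeholders, and the `google/gemini-2.0-flash-001` model identifier are illustrative assumptions; verify the exact values against the current Vertex AI documentation.

```python
import json

# Illustrative placeholders -- substitute your own project and region.
PROJECT_ID = "your-project-id"
LOCATION = "us-central1"

# Assumed OpenAI-compatible endpoint path for Vertex AI (verify against the docs).
base_url = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1beta1/"
    f"projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/openapi"
)

# The request body follows the OpenAI Chat Completions shape.
payload = {
    "model": "google/gemini-2.0-flash-001",  # assumed model identifier format
    "messages": [
        {"role": "user", "content": "Say hello in one sentence."},
    ],
}

# POST this JSON to f"{base_url}/chat/completions" with a Bearer access token.
body = json.dumps(payload)
```

Because the request and response shapes match OpenAI's, the same payload works whether you send it with `curl`, the OpenAI client libraries, or plain HTTP.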
Supported models
The Chat Completions API supports both Gemini models and select self-deployed models from Model Garden.
Gemini models
The following models provide support for the Chat Completions API:
Self-deployed models from Model Garden
The Hugging Face Text Generation Inference (HF TGI) and Vertex AI Model Garden prebuilt vLLM containers support the Chat Completions API. However, not every model deployed to these containers supports the Chat Completions API. The following table includes the most popular supported models by container:
| HF TGI | vLLM |
|---|---|
Supported parameters
For Google models, the Chat Completions API supports the following OpenAI parameters. For a description of each parameter, see OpenAI's documentation on Creating chat completions. Parameter support for third-party models varies by model. To see which parameters are supported, consult the model's documentation.
| Parameter | Notes |
|---|---|
| `messages` | |
| `model` | |
| `max_tokens` | |
| `n` | |
| `frequency_penalty` | |
| `presence_penalty` | |
| `response_format` | |
| `stop` | |
| `stream` | |
| `temperature` | |
| `top_p` | |
| `tools` | |
| `tool_choice` | |
| `function_call` | This field is deprecated, but supported for backwards compatibility. |
| `functions` | This field is deprecated, but supported for backwards compatibility. |
If you pass any unsupported parameter, it is ignored.
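For instance, a request body that exercises several of the supported parameters might be assembled like this (the model identifier is an illustrative assumption):

```python
import json

payload = {
    "model": "google/gemini-2.0-flash-001",  # assumed model identifier
    "messages": [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Summarize TCP in one sentence."},
    ],
    "max_tokens": 64,       # cap on generated tokens
    "temperature": 0.2,     # lower values give more deterministic output
    "top_p": 0.95,
    "stop": ["\n\n"],       # stop generation at a blank line
    "stream": False,
    # Any OpenAI parameters not in the supported list above are silently ignored.
}

body = json.dumps(payload)
```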
Gemini-specific parameters
Gemini supports several features that aren't available in OpenAI models. These features can still be passed in as parameters, but they must be contained within an `extra_content` or `extra_body` field, or they are ignored.
`extra_body` features
| Field | Description |
|---|---|
| `safety_settings` | Corresponds to Gemini's `SafetySetting`. |
| `cached_content` | Corresponds to Gemini's `GenerateContentRequest.cached_content`. |
| `thought_tag_marker` | Used to separate a model's thoughts from its responses for models with Thinking available. If not specified, no tags are returned around the model's thoughts. If present, subsequent queries strip the thought tags and mark the thoughts appropriately, which preserves the appropriate context for those queries. |
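As a sketch, these Gemini-specific fields are nested under `extra_body` in the request payload. The nesting under a `google` key, the safety category, and the threshold value shown here are assumptions for illustration; check the Gemini `SafetySetting` reference for the valid values and the exact structure.

```python
import json

payload = {
    "model": "google/gemini-2.0-flash-001",  # assumed model identifier
    "messages": [{"role": "user", "content": "Tell me a story."}],
    "extra_body": {
        # Assumed nesting under a "google" key; Gemini-specific fields
        # passed outside extra_body (or extra_content) are ignored.
        "google": {
            "safety_settings": [
                {
                    "category": "HARM_CATEGORY_HARASSMENT",  # illustrative value
                    "threshold": "BLOCK_LOW_AND_ABOVE",      # illustrative value
                }
            ],
            "thought_tag_marker": "think",  # wraps thoughts in <think>...</think>-style tags
        }
    },
}

body = json.dumps(payload)
```

With the OpenAI Python client, the same structure is typically passed via the `extra_body` keyword argument rather than embedded in the payload by hand.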
What's next
- Learn more about authentication and credentialing with the OpenAI-compatible syntax.
- See examples of calling the Chat Completions API with the OpenAI-compatible syntax.
- See examples of calling the Inference API with the OpenAI-compatible syntax.
- See examples of calling the Function Calling API with OpenAI-compatible syntax.
- Learn more about the Gemini API.
- Learn more about migrating from Azure OpenAI to the Gemini API.